• PostgreSQL Source Code Analysis 35 -- Speeding Up Visibility Checks in heapgetpage


    1 Introduction

      Whatever the type of database, accessing data requires checking the visibility of the records (tuples) involved, a step that every transaction (select, update, delete) performs. Since this is a high-frequency hot-path operation, optimizing it can bring a qualitative performance improvement under high concurrency.

    2 Visibility checks

    In postgres, the header of every tuple contains the following fields:
    xmin: ID of the transaction that inserted the tuple
    xmax: ID of the transaction that deleted or updated the tuple
    t_cid: command ID of the operation; it distinguishes successive commands within the same transaction and can be used for visibility checks in certain scenarios

    typedef struct HeapTupleFields
    {
    	TransactionId t_xmin;		/* inserting xact ID */
    	TransactionId t_xmax;		/* deleting or locking xact ID */
    
    	union
    	{
    		CommandId	t_cid;		/* inserting or deleting command ID, or both */
    		TransactionId t_xvac;	/* old-style VACUUM FULL xact ID */
    	}			t_field3;
    } HeapTupleFields;
    

    Postgres also combines MVCC with snapshots to further improve transaction concurrency, so different transactions may read different versions of the same record; in that case the visibility check must be made against the snapshot. For details see: postgres source code analysis 32, snapshot visibility check HeapTupleSatisfiesMVCC.
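
    A minimal two-session sketch of snapshot-dependent visibility (the table t and its columns are hypothetical, not from the original post):

    -- session 1
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT v FROM t WHERE id = 1;          -- returns the current version

    -- session 2, running concurrently
    UPDATE t SET v = v + 1 WHERE id = 1;   -- creates a new tuple version
    COMMIT;

    -- session 1 again
    SELECT v FROM t WHERE id = 1;          -- still the old version: the
                                           -- snapshot taken at the first
                                           -- query cannot see the new xmin
    COMMIT;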

    Example:
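
    The output below uses the pageinspect extension (get_raw_page, heap_page_items) plus a user-defined infomask() helper the author uses to decode the flag bits. The original post omits the setup; judging from t_data (a little-endian int followed by the short-varlena string 'postgres'), it was presumably something like:

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    CREATE TABLE test (id int, name text);
    INSERT INTO test SELECT i, 'postgres' FROM generate_series(1, 10) AS i;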

    postgres=# SELECT lp,lp_off, lp_flags, t_xmin, t_xmax, t_field3 as t_cid, t_ctid,t_infomask, infomask(t_infomask,1) as infomask,t_infomask2,infomask(t_infomask2,2) as infomask2, t_data  FROM heap_page_items(get_raw_page('test', 0));
     lp | lp_off | lp_flags | t_xmin | t_xmax | t_cid | t_ctid | t_infomask |                infomask                 | t_infomask2 | infomask2 |            t_data            
    ----+--------+----------+--------+--------+-------+--------+------------+-----------------------------------------+-------------+-----------+------------------------------
      1 |   8152 |        1 |    737 |      0 |     0 | (0,1)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0100000013706f737467726573
      2 |   8112 |        1 |    737 |      0 |     0 | (0,2)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0200000013706f737467726573
      3 |   8072 |        1 |    737 |      0 |     0 | (0,3)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0300000013706f737467726573
      4 |   8032 |        1 |    737 |      0 |     0 | (0,4)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0400000013706f737467726573
      5 |   7992 |        1 |    737 |      0 |     0 | (0,5)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0500000013706f737467726573
      6 |   7952 |        1 |    737 |      0 |     0 | (0,6)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0600000013706f737467726573
      7 |   7912 |        1 |    737 |      0 |     0 | (0,7)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0700000013706f737467726573
      8 |   7872 |        1 |    737 |      0 |     0 | (0,8)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0800000013706f737467726573
      9 |   7832 |        1 |    737 |      0 |     0 | (0,9)  |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0900000013706f737467726573
     10 |   7792 |        1 |    737 |      0 |     0 | (0,10) |       2306 | XMAX_INVALID|XMIN_COMMITTED|HASVARWIDTH |           2 |           | \x0a00000013706f737467726573
    (10 rows)
    
    

    In postgres, the CLOG module records the final status (committed or aborted) of every transaction and is consulted when deciding whether a tuple is visible. Because shared memory is limited, CLOG pages must be loaded from disk on demand (evicting other pages to make room), and that access cost can throttle transaction execution. To mitigate this, each tuple header carries the hint-bit field t_infomask: the first lookup that finds the inserting transaction committed sets the HEAP_XMIN_COMMITTED bit, and every later check can trust that bit directly instead of touching the CLOG again. For background see: postgres source code analysis 2, tuple visibility and the t_infomask flag bits.
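
    A condensed sketch of that fast path, paraphrased from the xmin handling in HeapTupleSatisfiesMVCC (heapam_visibility.c); this is an illustration, not the full function:

    /* Fast path: an earlier check already stamped the hint bit */
    if (HeapTupleHeaderXminCommitted(tuple))
    {
        /* xmin known committed, no CLOG lookup needed */
    }
    else if (TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tuple)))
    {
        /* Slow path: consult the CLOG, then cache the verdict as a hint bit */
        SetHintBits(tuple, buffer, HEAP_XMIN_COMMITTED,
                    HeapTupleHeaderGetRawXmin(tuple));
    }
    else
    {
        /* inserter aborted or crashed: tuple is dead to everyone */
        SetHintBits(tuple, buffer, HEAP_XMIN_INVALID, InvalidTransactionId);
    }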

    The visibility-check logic in heapgetpage

    PG14 source code
    heapgetpage: this function's main job is to scan the specified page and collect the tuples visible to the scan

    /*
     * heapgetpage - subroutine for heapgettup()
     *
     * This routine reads and pins the specified page of the relation.
     * In page-at-a-time mode it performs additional work, namely determining
     * which tuples on the page are visible.
     */
    void
    heapgetpage(TableScanDesc sscan, BlockNumber page)
    {
    	HeapScanDesc scan = (HeapScanDesc) sscan;
    	Buffer		buffer;
    	Snapshot	snapshot;
    	Page		dp;
    	int			lines;
    	int			ntup;
    	OffsetNumber lineoff;
    	ItemId		lpp;
    	bool		all_visible;
    
    	Assert(page < scan->rs_nblocks);
    
    	/* release previous scan buffer, if any */
    	if (BufferIsValid(scan->rs_cbuf))
    	{
    		ReleaseBuffer(scan->rs_cbuf);
    		scan->rs_cbuf = InvalidBuffer;
    	}
    
    	/*
    	 * Be sure to check for interrupts at least once per page.  Checks at
    	 * higher code levels won't be able to stop a seqscan that encounters many
    	 * pages' worth of consecutive dead tuples.
    	 */
    	CHECK_FOR_INTERRUPTS();
    
    	/* read page using selected strategy */
    	// Read the target page into a shared-buffer block using the selected strategy
    	scan->rs_cbuf = ReadBufferExtended(scan->rs_base.rs_rd, MAIN_FORKNUM, page,
    									   RBM_NORMAL, scan->rs_strategy);
    	scan->rs_cblock = page;
    
    	if (!(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE))
    		return;
    
    	buffer = scan->rs_cbuf;
    	snapshot = scan->rs_base.rs_snapshot;
    
    	/*
    	 * Prune and repair fragmentation for the whole page, if possible.
    	 */
    	 // Opportunistically prune and defragment the page if needed
    	heap_page_prune_opt(scan->rs_base.rs_rd, buffer);
    
    	/*
    	 * We must hold share lock on the buffer content while examining tuple
    	 * visibility.  Afterwards, however, the tuples we have found to be
    	 * visible are guaranteed good as long as we hold the buffer pin.
    	 */
    	 // BUFFER_LOCK_SHARE must be held while checking tuple visibility
    	LockBuffer(buffer, BUFFER_LOCK_SHARE);
    
    	dp = BufferGetPage(buffer);
    	TestForOldSnapshot(snapshot, scan->rs_base.rs_rd, dp);
    	lines = PageGetMaxOffsetNumber(dp);
    	ntup = 0;
    
    	/*
    	 * If the all-visible flag indicates that all tuples on the page are
    	 * visible to everyone, we can skip the per-tuple visibility tests.
    	 *
    	 * Note: In hot standby, a tuple that's already visible to all
    	 * transactions on the primary might still be invisible to a read-only
    	 * transaction in the standby. We partly handle this problem by tracking
    	 * the minimum xmin of visible tuples as the cut-off XID while marking a
    	 * page all-visible on the primary and WAL log that along with the
    	 * visibility map SET operation. In hot standby, we wait for (or abort)
    	 * all transactions that can potentially may not see one or more tuples on
    	 * the page. That's how index-only scans work fine in hot standby. A
    	 * crucial difference between index-only scans and heap scans is that the
    	 * index-only scan completely relies on the visibility map where as heap
    	 * scan looks at the page-level PD_ALL_VISIBLE flag. We are not sure if
    	 * the page-level flag can be trusted in the same way, because it might
    	 * get propagated somehow without being explicitly WAL-logged, e.g. via a
    	 * full page write. Until we can prove that beyond doubt, let's check each
    	 * tuple for visibility the hard way.
    	 */
    	 // First-pass visibility: if the page's all-visible flag is set (and the
    	 // snapshot was not taken during recovery), every tuple is visible and
    	 // the expensive per-tuple checks below can be skipped
    	all_visible = PageIsAllVisible(dp) && !snapshot->takenDuringRecovery;
        
        // Walk every line pointer on the page
    	for (lineoff = FirstOffsetNumber, lpp = PageGetItemId(dp, lineoff);
    		 lineoff <= lines;
    		 lineoff++, lpp++)
    	{
    		if (ItemIdIsNormal(lpp))
    		{
    			HeapTupleData loctup;
    			bool		valid;
    
    			loctup.t_tableOid = RelationGetRelid(scan->rs_base.rs_rd);
    			loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
    			loctup.t_len = ItemIdGetLength(lpp);
    			ItemPointerSet(&(loctup.t_self), page, lineoff);
    
    			if (all_visible)
    				valid = true;
    			else
    			valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer);  // per-tuple visibility check
    
    			HeapCheckForSerializableConflictOut(valid, scan->rs_base.rs_rd,
    												&loctup, buffer, snapshot);
    
    			if (valid)
    				scan->rs_vistuples[ntup++] = lineoff;
    		}
    	}
    	// Release the content lock (the buffer pin is kept)
    	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    
    	Assert(ntup <= MaxHeapTuplesPerPage);
    	scan->rs_ntuples = ntup;
    }
    

    As the logic above shows, HeapTupleSatisfiesVisibility is called for every single tuple. Analytical (AP) workloads scan large volumes of data and need most of the tuples on each page, so a large share of the scan time is spent inside HeapTupleSatisfiesVisibility, which is clearly unsatisfactory in that scenario.

    Optimization

    The improved heapgetpage (the modified regions are delimited by /* >>> modification ... <<< */ comments)

    void
    heapgetpage(TableScanDesc sscan, BlockNumber page)
    {
    	HeapScanDesc scan = (HeapScanDesc) sscan;
    	Buffer		buffer;
    	Snapshot	snapshot;
    	Page		dp;
    	int			lines;
    	int			ntup;
    	OffsetNumber lineoff;
    	ItemId		lpp;
    	bool		all_visible;
    	TransactionId t_xmin;
    	CommandId	t_cid;
    
    	Assert(page < scan->rs_nblocks);
    
    	/* release previous scan buffer, if any */
    	if (BufferIsValid(scan->rs_cbuf))
    	{
    		ReleaseBuffer(scan->rs_cbuf);
    		scan->rs_cbuf = InvalidBuffer;
    	}
    
    	/*
    	 * Be sure to check for interrupts at least once per page.  Checks at
    	 * higher code levels won't be able to stop a seqscan that encounters many
    	 * pages' worth of consecutive dead tuples.
    	 */
    	CHECK_FOR_INTERRUPTS();
    
    	/* read page using selected strategy */
    	scan->rs_cbuf = ReadBufferExtended(scan->rs_base.rs_rd, MAIN_FORKNUM, page,
    									   RBM_NORMAL, scan->rs_strategy);
    	scan->rs_cblock = page;
    
    	if (!(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE))
    		return;
    
    	buffer = scan->rs_cbuf;
    	snapshot = scan->rs_base.rs_snapshot;
    
    	/*
    	 * Prune and repair fragmentation for the whole page, if possible.
    	 */
    	heap_page_prune_opt(scan->rs_base.rs_rd, buffer);
    
    	/*
    	 * We must hold share lock on the buffer content while examining tuple
    	 * visibility.  Afterwards, however, the tuples we have found to be
    	 * visible are guaranteed good as long as we hold the buffer pin.
    	 */
    	LockBuffer(buffer, BUFFER_LOCK_SHARE);
    
    	dp = BufferGetPage(buffer);
    	TestForOldSnapshot(snapshot, scan->rs_base.rs_rd, dp);
    	lines = PageGetMaxOffsetNumber(dp);
    	ntup = 0;
    
    	/* >>> modification: one-item visibility cache, initially empty >>> */
    	t_xmin = 0;
    	t_cid = 0;
    	/* <<< end modification <<< */
    	/*
    	 * If the all-visible flag indicates that all tuples on the page are
    	 * visible to everyone, we can skip the per-tuple visibility tests.
    	 *
    	 * Note: In hot standby, a tuple that's already visible to all
    	 * transactions in the master might still be invisible to a read-only
    	 * transaction in the standby. We partly handle this problem by tracking
    	 * the minimum xmin of visible tuples as the cut-off XID while marking a
    	 * page all-visible on master and WAL log that along with the visibility
    	 * map SET operation. In hot standby, we wait for (or abort) all
    	 * transactions that can potentially may not see one or more tuples on the
    	 * page. That's how index-only scans work fine in hot standby. A crucial
    	 * difference between index-only scans and heap scans is that the
    	 * index-only scan completely relies on the visibility map where as heap
    	 * scan looks at the page-level PD_ALL_VISIBLE flag. We are not sure if
    	 * the page-level flag can be trusted in the same way, because it might
    	 * get propagated somehow without being explicitly WAL-logged, e.g. via a
    	 * full page write. Until we can prove that beyond doubt, let's check each
    	 * tuple for visibility the hard way.
    	 */
    	all_visible = PageIsAllVisible(dp) && !snapshot->takenDuringRecovery;
    
    	for (lineoff = FirstOffsetNumber, lpp = PageGetItemId(dp, lineoff);
    		 lineoff <= lines;
    		 lineoff++, lpp++)
    	{
    		if (ItemIdIsNormal(lpp))
    		{
    			HeapTupleData loctup;
    			bool		valid;
    			HeapTupleHeader theader = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
    
    			loctup.t_tableOid = RelationGetRelid(scan->rs_base.rs_rd);
    			loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
    			loctup.t_len = ItemIdGetLength(lpp);
    			ItemPointerSet(&(loctup.t_self), page, lineoff);
    
    			if (all_visible)
    			{
    				valid = true;
    			}
    			/* >>> modification: try the one-item cache before the full visibility check >>> */
    			else
    			{
    				/*
    				 * We have a one-item cache for the common case that a
    				 * lot of tuples have the same visibility info. Don't use the
    				 * cache, if the tuple was ever deleted, though (i.e. if xmax
    				 * is valid, and not just for tuple-locking). We could cache
    				 * the xmax too, but the visibility rules get more complicated
    				 * with locked-only tuples and multi-XIDs, so it seems better
    				 * to just give up early.
    				 */
    				bool		use_cache;
    				
    				// Tuple was never deleted (xmax invalid or lock-only), so the cache is safe to use
    				if ((theader->t_infomask & HEAP_XMAX_INVALID) != 0 ||
    					HEAP_XMAX_IS_LOCKED_ONLY(theader->t_infomask))
    					use_cache = true;
    				else
    					use_cache = false;
    
    				if (use_cache &&
    					t_xmin == HeapTupleHeaderGetXmin(theader) &&
    					t_cid == HeapTupleHeaderGetRawCommandId(theader))
    				{
    					valid = true;
    				}
    				else
    				{
    					valid = HeapTupleSatisfiesVisibility(&loctup, snapshot, buffer);
    
    					if (valid && use_cache)
    					{
    						t_xmin = HeapTupleHeaderGetXmin(loctup.t_data);
    						t_cid = HeapTupleHeaderGetRawCommandId(loctup.t_data);
    					}
    				}
    			}
    			/* <<< end modification <<< */
    			HeapCheckForSerializableConflictOut(valid, scan->rs_base.rs_rd,
    											&loctup, buffer, snapshot);
    
    			if (valid)
    				scan->rs_vistuples[ntup++] = lineoff;
    		}
    	}
    
    	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    
    	Assert(ntup <= MaxHeapTuplesPerPage);
    	scan->rs_ntuples = ntup;
    }
    
    

    The idea: introduce a one-item cache so that tuple visibility can often be established by a simple equality comparison on (xmin, cid), avoiding repeated calls to HeapTupleSatisfiesVisibility and further speeding up tuple access. The optimization pays off chiefly when many tuples on a page share the same visibility information. A strategy well worth learning from.
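
    A sketch of the workload shape that benefits most (table name and row count are illustrative): a bulk load done in a single command stamps every tuple with the same xmin and cid, so a later page-mode sequential scan pays for a full HeapTupleSatisfiesVisibility call only for the first tuple on each page.

    -- every row gets identical (xmin, cid): the ideal case for the cache
    CREATE TABLE big (id int, payload text);
    INSERT INTO big SELECT i, 'postgres' FROM generate_series(1, 1000000) AS i;

    -- a page-at-a-time sequential scan now hits the one-item cache for
    -- nearly every tuple on every page
    SELECT count(*) FROM big;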

    Test results
    pgbench benchmark
    Conclusion: the modified version improves TPS over the baseline; in this run the gain is a bit over 7%.

    pgbench -i -s 500 -F 90 -h 10.229.89.212 -p 5678 -U pg14 -d postgres
    pgbench -c 256 -j 10 -M prepared -n -T 600 -r -h 10.229.89.212 -p 5678 -U pg14 -d postgres

    Version       Transactions   TPS        Latency average   Connections   Threads   Time
    pg14          513083         855.0/s    299.408 ms        256           10        10 min
    pg14_modify   549722         915.5/s    279.619 ms        256           10        10 min

    Test logs:

    pg14
    number of transactions actually processed: 513083
    latency average = 299.408 ms
    initial connection time = 94.252 ms
    tps = 855.020331 (without initial connection time)
    statement latencies in milliseconds:
    0.406 \set aid random(1, 100000 * :scale)
    0.446 \set bid random(1, 1 * :scale)
    0.417 \set tid random(1, 10 * :scale)
    0.418 \set delta random(-5000, 5000)
    14.504 BEGIN;
    41.115 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
    31.457 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
    40.618 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    49.002 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    30.345 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    57.918 END;

    pg14_modify
    number of transactions actually processed: 549722
    latency average = 279.619 ms
    initial connection time = 97.837 ms
    tps = 915.532786 (without initial connection time)
    statement latencies in milliseconds:
    0.355 \set aid random(1, 100000 * :scale)
    0.438 \set bid random(1, 1 * :scale)
    0.365 \set tid random(1, 10 * :scale)
    0.393 \set delta random(-5000, 5000)
    13.389 BEGIN;
    38.190 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
    28.927 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
    37.723 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    45.889 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    27.738 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    55.228 END;

  • Original article: https://blog.csdn.net/qq_52668274/article/details/127891273