| author | Andres Freund <andres@anarazel.de> | 2025-04-02 14:25:17 -0400 |
|---|---|---|
| committer | Andres Freund <andres@anarazel.de> | 2025-04-02 14:54:20 -0400 |
| commit | 459e7bf8e2f8ab894dc613fa8555b74c4eef6969 (patch) | |
| tree | d89ead863ddc22c0615d244c97ce26d3cf9cda32 /src/backend/access/heap/heapam_handler.c | |
| parent | 0dca5d68d7bebf2c1036fd84875533afef6df992 (diff) | |
Remove HeapBitmapScan's skip_fetch optimization
The optimization does not take the removal of TIDs by a concurrent vacuum into
account. Such a vacuum can remove dead TIDs and mark pages ALL_VISIBLE while
those dead TIDs are still referenced in the bitmap. This can lead to a
skip_fetch scan returning too many tuples.
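For context, a minimal C sketch of the check the removed optimization performed, paraphrased from the pre-removal bitmap heap scan code rather than quoted verbatim; identifiers such as `tbmres`, `bscan`, `SO_NEED_TUPLES`, and `VM_ALL_VISIBLE` follow the surrounding PostgreSQL source, but the exact shape of the call site may differ:

```c
/*
 * Sketch of the removed skip_fetch decision: if the executor does not need
 * tuple contents, the bitmap page needs no recheck, and the visibility map
 * says the heap page is all-visible, skip fetching the page and emit one
 * all-NULL tuple per TID in the bitmap instead.
 */
bool        skip_fetch;

skip_fetch = (!(scan->rs_flags & SO_NEED_TUPLES) &&
              !tbmres->recheck &&
              VM_ALL_VISIBLE(scan->rs_rd, tbmres->blockno,
                             &bscan->rs_vmbuffer));

if (skip_fetch)
{
    /*
     * The race: between building the bitmap and reaching this check, a
     * concurrent VACUUM may have removed dead TIDs from the page and then
     * marked it ALL_VISIBLE.  The bitmap still lists those TIDs, so they
     * are wrongly counted here as visible rows.
     */
    bscan->rs_empty_tuples_pending += tbmres->ntuples;
}
```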
It likely would be possible to implement this optimization safely, but we
don't have the necessary infrastructure in place. Nor is it clear that it's
worth building that infrastructure, given how limited the skip_fetch
optimization is.
In the back branches we just disable the optimization by always passing
need_tuples=true to table_beginscan_bm(). We can't perform API/ABI changes in
the back branches, and we want to keep the change as minimal as possible.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reported-By: Konstantin Knizhnik <knizhnik@garret.ru>
Discussion: https://postgr.es/m/CAEze2Wg3gXXZTr6_rwC+s4-o2ZVFB5F985uUSgJTsECx6AmGcQ@mail.gmail.com
Backpatch-through: 13
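In code terms, the back-branch fix described above amounts to a one-line change where the bitmap heap scan starts its table scan. The following is a sketch assuming the PG17-era table_beginscan_bm() signature and a call site resembling nodeBitmapHeapscan.c; the surrounding variable names are illustrative:

```c
/*
 * Back-branch sketch: pass need_tuples=true unconditionally instead of the
 * computed value (which is false when the plan needs no tuple contents, the
 * case that previously enabled skip_fetch).  The heap pages are then always
 * fetched, disabling the unsafe optimization without any API/ABI change.
 */
scan = table_beginscan_bm(node->ss.ss_currentRelation,
                          node->ss.ps.state->es_snapshot,
                          0,      /* nkeys */
                          NULL,   /* scan keys */
                          true);  /* need_tuples: always fetch */
```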
Diffstat (limited to 'src/backend/access/heap/heapam_handler.c')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | src/backend/access/heap/heapam_handler.c | 46 |
1 file changed, 2 insertions(+), 44 deletions(-)
```diff
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 24d3765aa20..ac082fefa77 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -2138,32 +2138,6 @@ heapam_scan_bitmap_next_tuple(TableScanDesc scan,
     while (hscan->rs_cindex >= hscan->rs_ntuples)
     {
         /*
-         * Emit empty tuples before advancing to the next block
-         */
-        if (bscan->rs_empty_tuples_pending > 0)
-        {
-            /*
-             * If we don't have to fetch the tuple, just return nulls.
-             */
-            ExecStoreAllNullTuple(slot);
-            bscan->rs_empty_tuples_pending--;
-
-            /*
-             * We do not recheck all NULL tuples. Because the streaming read
-             * API only yields TBMIterateResults for blocks actually fetched
-             * from the heap, we must unset `recheck` ourselves here to ensure
-             * correct results.
-             *
-             * Our read stream callback accrues a count of empty tuples to
-             * emit and then emits them after emitting tuples from the next
-             * fetched block. If no blocks need fetching, we'll emit the
-             * accrued count at the end of the scan.
-             */
-            *recheck = false;
-            return true;
-        }
-
-        /*
          * Returns false if the bitmap is exhausted and there are no further
          * blocks we need to scan.
          */
@@ -2516,24 +2490,8 @@ BitmapHeapScanNextBlock(TableScanDesc scan,
 
     if (BufferIsInvalid(hscan->rs_cbuf))
     {
-        if (BufferIsValid(bscan->rs_vmbuffer))
-        {
-            ReleaseBuffer(bscan->rs_vmbuffer);
-            bscan->rs_vmbuffer = InvalidBuffer;
-        }
-
-        /*
-         * The bitmap is exhausted. Now emit any remaining empty tuples. The
-         * read stream API only returns TBMIterateResults for blocks actually
-         * fetched from the heap. Our callback will accrue a count of empty
-         * tuples to emit for all blocks we skipped fetching. So, if we skip
-         * fetching heap blocks at the end of the relation (or no heap blocks
-         * are fetched) we need to ensure we emit empty tuples before ending
-         * the scan. We don't recheck empty tuples so ensure `recheck` is
-         * unset.
-         */
-        *recheck = false;
-        return bscan->rs_empty_tuples_pending > 0;
+        /* the bitmap is exhausted */
+        return false;
     }
 
     Assert(per_buffer_data);
```