| author | Noah Misch <noah@leadboat.com> | 2024-10-29 09:39:55 -0700 |
|---|---|---|
| committer | Noah Misch <noah@leadboat.com> | 2024-10-29 09:40:00 -0700 |
| commit | 2a912bc1abdbaa2f73555cf2c71bb4c401aa515b (patch) | |
| tree | 46e47b22d67843562cb11410cd675dcd6a9f064f /src/backend/access/index | |
| parent | 8a8486175042477c3ce17976ce384d430cd1530f (diff) | |
Unpin buffer before inplace update waits for an XID to end.
Commit a07e03fd8fa7daf4d1356f7cb501ffe784ea6257 changed inplace updates
to wait for heap_update() commands like GRANT TABLE and GRANT DATABASE.
By keeping the pin during that wait, a sequence of autovacuum workers
and an uncommitted GRANT starved one foreground LockBufferForCleanup()
for six minutes, on buildfarm member sarus. Prevent that, at the cost of
a bit of complexity. Back-patch to v12, like the earlier commit. That
commit and heap_inplace_lock() have not yet appeared in any release.
Discussion: https://postgr.es/m/20241026184936.ae.nmisch@google.com
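
To make the change concrete, below is a minimal, self-contained C sketch of the retry pattern the fix adopts: the lock attempt releases everything through a caller-supplied callback before any wait and then reports failure, so the caller re-acquires from scratch instead of sitting on a buffer pin while a foreign transaction finishes. All names here (scan_state, start_scan, try_lock, and so on) are hypothetical stand-ins for illustration, not PostgreSQL APIs.

```c
#include <stdbool.h>
#include <stdio.h>

struct scan_state
{
	int			attempts;		/* how many times the scan was (re)started */
};

static void
start_scan(struct scan_state *scan)
{
	scan->attempts++;
	printf("scan started (attempt %d)\n", scan->attempts);
}

/*
 * Release callback: lets the lock routine drop the caller's resources
 * (in PostgreSQL, the scan and with it the buffer pin) before it waits.
 */
static void
end_scan(void *arg)
{
	struct scan_state *scan = (struct scan_state *) arg;

	printf("scan %d released before waiting\n", scan->attempts);
}

/*
 * Try to lock the target tuple.  On conflict, call release(arg) first so
 * the caller holds nothing while we wait, then return false to request a
 * retry.  Success returns true with the caller's resources still held.
 */
static bool
try_lock(struct scan_state *scan, void (*release) (void *), void *arg)
{
	if (scan->attempts < 3)		/* simulate two conflicts, then success */
	{
		release(arg);
		/* ... the real code would wait here for the conflicting XID to end ... */
		return false;
	}
	return true;
}

int
main(void)
{
	struct scan_state scan = {0};

	/* Same shape as the do/while loop in systable_inplace_update_begin(). */
	do
	{
		start_scan(&scan);
	} while (!try_lock(&scan, end_scan, &scan));

	printf("locked; no pin was held across a wait\n");
	return 0;
}
```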
Diffstat (limited to 'src/backend/access/index')
| -rw-r--r-- | src/backend/access/index/genam.c | 12 |
1 file changed, 5 insertions, 7 deletions
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 085f783a0bc..bd63bf617ad 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -708,6 +708,7 @@ systable_inplace_update_begin(Relation relation,
 	int			retries = 0;
 	SysScanDesc scan;
 	HeapTuple	oldtup;
+	BufferHeapTupleTableSlot *bslot;
 
 	/*
 	 * For now, we don't allow parallel updates.  Unlike a regular update,
@@ -729,10 +730,9 @@ systable_inplace_update_begin(Relation relation,
 	Assert(IsInplaceUpdateRelation(relation) || !IsSystemRelation(relation));
 
 	/* Loop for an exclusive-locked buffer of a non-updated tuple. */
-	for (;;)
+	do
 	{
 		TupleTableSlot *slot;
-		BufferHeapTupleTableSlot *bslot;
 
 		CHECK_FOR_INTERRUPTS();
 
@@ -758,11 +758,9 @@ systable_inplace_update_begin(Relation relation,
 		slot = scan->slot;
 		Assert(TTS_IS_BUFFERTUPLE(slot));
 		bslot = (BufferHeapTupleTableSlot *) slot;
-		if (heap_inplace_lock(scan->heap_rel,
-							  bslot->base.tuple, bslot->buffer))
-			break;
-		systable_endscan(scan);
-	};
+	} while (!heap_inplace_lock(scan->heap_rel,
+								bslot->base.tuple, bslot->buffer,
+								(void (*) (void *)) systable_endscan, scan));
 
 	*oldtupcopy = heap_copytuple(oldtup);
 	*state = scan;
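The call shape is the interesting part of the diff: heap_inplace_lock() now receives a release callback and its argument (here systable_endscan, cast to void (*) (void *), with the scan as the argument), so when it must wait for a conflicting XID it can first end the scan, and with it the buffer pin, before waiting, and then return false. The do/while condition restarts the scan on that false return, replacing the old break/systable_endscan sequence inside the infinite for loop.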