author	Peter Geoghegan <pg@bowt.ie>	2021-12-08 17:24:45 -0800
committer	Peter Geoghegan <pg@bowt.ie>	2021-12-08 17:24:45 -0800
commit	bcf60585e6e0e95f0b9e5d64c7a6edca99ec6e86 (patch)
tree	b9791886d37b9fe9712874a4affbb8141f266424 /src/backend/storage/page
parent	6f0e6ab04de5f52e4e0872d3ace2bb6a35e8b0b1 (diff)
Standardize cleanup lock terminology.
The term "super-exclusive lock" is a synonym for "buffer cleanup lock" that first appeared in nbtree many years ago. Standardize things by consistently using the term cleanup lock. This finishes work started by commit 276db875. There is no good reason to have two terms. But there is a good reason to only have one: to avoid confusion around why VACUUM acquires a full cleanup lock (not just an ordinary exclusive lock) in index AMs, during ambulkdelete calls. This has nothing to do with protecting the physical index data structure itself. It is needed to implement a locking protocol that ensures that TIDs pointing to the heap/table structure cannot get marked for recycling by VACUUM before it is safe (which is somewhat similar to how VACUUM uses cleanup locks during its first heap pass). Note that it isn't strictly necessary for index AMs to implement this locking protocol -- several index AMs use an MVCC snapshot as their sole interlock to prevent unsafe TID recycling. In passing, update the nbtree README. Cleanly separate discussion of the aforementioned index vacuuming locking protocol from discussion of the "drop leaf page pin" optimization added by commit 2ed5b87f. We now structure discussion of the latter by describing how individual index scans may safely opt out of applying the standard locking protocol (and so can avoid blocking progress by VACUUM). Also document why the optimization is not safe to apply during nbtree index-only scans. Author: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAH2-WzngHgQa92tz6NQihf4nxJwRzCV36yMJO_i8dS+2mgEVKw@mail.gmail.com Discussion: https://postgr.es/m/CAH2-WzkHPgsBBvGWjz=8PjNhDefy7XRkDKiT5NxMs-n5ZCf2dA@mail.gmail.com
Diffstat (limited to 'src/backend/storage/page')
-rw-r--r--	src/backend/storage/page/bufpage.c	8
1 file changed, 4 insertions, 4 deletions
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 82ca91f5977..a5c94b0a7ee 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -701,7 +701,7 @@ compactify_tuples(itemIdCompact itemidbase, int nitems, Page page, bool presorte
* there is, in general, a good chance that even large groups of unused line
* pointers that we see here will be recycled quickly.
*
- * Caller had better have a super-exclusive lock on page's buffer. As a side
+ * Caller had better have a full cleanup lock on page's buffer. As a side
* effect the page's PD_HAS_FREE_LINES hint bit will be set or unset as
* needed.
*/
@@ -820,9 +820,9 @@ PageRepairFragmentation(Page page)
* arbitrary, but it seems like a good idea to avoid leaving a PageIsEmpty()
* page behind.
*
- * Caller can have either an exclusive lock or a super-exclusive lock on
- * page's buffer. The page's PD_HAS_FREE_LINES hint bit will be set or unset
- * based on whether or not we leave behind any remaining LP_UNUSED items.
+ * Caller can have either an exclusive lock or a full cleanup lock on page's
+ * buffer. The page's PD_HAS_FREE_LINES hint bit will be set or unset based
+ * on whether or not we leave behind any remaining LP_UNUSED items.
*/
void
PageTruncateLinePointerArray(Page page)
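(Again a sketch rather than code from the tree: a hypothetical caller satisfying each routine's locking contract as documented in the comments above. WAL-logging, critical sections, and error handling are omitted; "rel" and "blkno" are placeholders.)

	#include "postgres.h"
	#include "storage/bufmgr.h"
	#include "storage/bufpage.h"
	#include "utils/rel.h"

	/*
	 * Sketch only: hypothetical caller exercising both routines with
	 * the lock strength each one documents.
	 */
	static void
	line_pointer_cleanup_sketch(Relation rel, BlockNumber blkno)
	{
		Buffer		buf = ReadBuffer(rel, blkno);

		/* PageRepairFragmentation() requires a full cleanup lock. */
		LockBufferForCleanup(buf);
		PageRepairFragmentation(BufferGetPage(buf));
		LockBuffer(buf, BUFFER_LOCK_UNLOCK);	/* drop lock, keep pin */

		/*
		 * PageTruncateLinePointerArray() accepts an ordinary exclusive
		 * lock (a full cleanup lock also qualifies).
		 */
		LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
		PageTruncateLinePointerArray(BufferGetPage(buf));
		UnlockReleaseBuffer(buf);				/* drop lock and pin */
	}

The asymmetry mirrors the commit message: only the operation that can invalidate line pointers that concurrent pin-holders might still reference demands the stronger lock.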