author	Andres Freund <andres@anarazel.de>	2025-11-24 17:37:09 -0500
committer	Andres Freund <andres@anarazel.de>	2025-11-24 18:10:48 -0500
commit	81f773895321ac69d3d71fe9d203e09d072f9c36 (patch)
tree	3e8e98efe0d141533bf884e67feb473e8b6b8b31 /src
parent	f81bf78ce12b9fd3e50eb00dd875440007262ec4 (diff)
lwlock: Fix, currently harmless, bug in LWLockWakeup()
Accidentally the code in LWLockWakeup() checked the list of to-be-woken up
processes to see if LW_FLAG_HAS_WAITERS should be unset. That means that
HAS_WAITERS would not get unset immediately, but only during the next,
unnecessary, call to LWLockWakeup().

Luckily, as the code stands, this is just a small efficiency issue. However,
if there were (as in a patch of mine) a case in which LWLockWakeup() would not
find any backend to wake, despite the wait list not being empty, we'd wrongly
unset LW_FLAG_HAS_WAITERS, leading to potential hangs.

While the consequences in the backbranches are limited, the code as-is is
confusing, and it is possible that there are workloads where the additional
wait list lock acquisitions hurt, therefore backpatch.

Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Backpatch-through: 14
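For context, below is a condensed sketch of the flag-update loop in
LWLockWakeup(); identifiers follow src/backend/storage/lmgr/lwlock.c, but the
surrounding setup (building the local "wakeup" list, reading old_state) is
omitted, so treat this as an illustrative fragment rather than the verbatim
function.

	/*
	 * Illustrative fragment (not verbatim): clear/keep flags and release the
	 * wait-list lock in one atomic update of lock->state.
	 */
	while (true)
	{
		uint32		desired_state = old_state;

		if (new_release_ok)
			desired_state |= LW_FLAG_RELEASE_OK;
		else
			desired_state &= ~LW_FLAG_RELEASE_OK;

		/*
		 * HAS_WAITERS must track the lock's actual wait list.  The bug was
		 * testing the local "wakeup" list (only the backends picked to be
		 * woken by this call) instead of lock->waiters.
		 */
		if (proclist_is_empty(&lock->waiters))	/* previously: &wakeup */
			desired_state &= ~LW_FLAG_HAS_WAITERS;

		desired_state &= ~LW_FLAG_LOCKED;	/* release wait-list lock */

		if (pg_atomic_compare_exchange_u32(&lock->state, &old_state,
										   desired_state))
			break;
	}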
Diffstat (limited to 'src')
-rw-r--r--	src/backend/storage/lmgr/lwlock.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index b017880f5e4..255cfa8fa95 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -998,7 +998,7 @@ LWLockWakeup(LWLock *lock)
 			else
 				desired_state &= ~LW_FLAG_RELEASE_OK;
 
-			if (proclist_is_empty(&wakeup))
+			if (proclist_is_empty(&lock->waiters))
 				desired_state &= ~LW_FLAG_HAS_WAITERS;
 
 			desired_state &= ~LW_FLAG_LOCKED;	/* release lock */