author     Alexander Korotkov <akorotkov@postgresql.org>  2025-06-14 03:33:15 +0300
committer  Alexander Korotkov <akorotkov@postgresql.org>  2025-06-14 04:15:04 +0300
commit     dd9bc1a17d0324448c45c6fc0a2d258b9134bfc3 (patch)
tree       e5261792f17c11894934581e20d14c3d3b9acc4c  /src/backend/replication/logical/logical.c
parent     d2ec671092a1144fcaa6b465b4672e937a68b65f (diff)
Keep WAL segments by the flushed value of the slot's restart LSN
The patch fixes the issue with the unexpected removal of old WAL segments after checkpoint, followed by an immediate restart. The issue occurs when a slot is advanced after the start of the checkpoint and before old WAL segments are removed at the end of the checkpoint.

The idea of the patch is to get the minimal restart_lsn at the beginning of checkpoint (or restart point) creation and use this value when calculating the oldest LSN for WAL segments removal at the end of checkpoint. This idea was proposed by Tomas Vondra in the discussion.

Unlike 291221c46575, this fix doesn't affect ABI and is intended for back branches.

Discussion: https://postgr.es/m/flat/1d12d2-67235980-35-19a406a0%4063439497
Author: Vitaly Davydov <v.davydov@postgrespro.ru>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 13
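To make the ordering problem concrete, here is a minimal standalone sketch (not PostgreSQL code) of the approach described above. The helpers get_min_flushed_restart_lsn(), flush_buffers_and_slots(), and remove_wal_older_than() are hypothetical stand-ins for the real slot and WAL machinery in CreateCheckPoint() and CreateRestartPoint().

/*
 * Hedged sketch: the checkpointer captures the slots' minimum restart_lsn
 * once, at the start of the checkpoint, and reuses that captured value when
 * deciding which WAL segments may be removed at the end.  Re-reading shared
 * memory at the end could pick up a restart_lsn that was advanced after the
 * checkpoint started but not yet flushed to disk, so a crash right after
 * the checkpoint would leave the on-disk slot pointing at removed WAL.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;	/* stand-in for PostgreSQL's LSN type */

/* Hypothetical helpers, standing in for the real slot/WAL machinery. */
static XLogRecPtr
get_min_flushed_restart_lsn(void)
{
	return UINT64_C(0x1000000);	/* pretend minimum across all slots */
}

static void
flush_buffers_and_slots(void)
{
	/* the bulk of the checkpoint work happens here */
}

static void
remove_wal_older_than(XLogRecPtr lsn)
{
	printf("removing WAL segments older than %llX\n",
		   (unsigned long long) lsn);
}

static void
sketch_create_checkpoint(void)
{
	/* Capture the minimum restart_lsn before any checkpoint work. */
	XLogRecPtr	slot_min_lsn = get_min_flushed_restart_lsn();

	flush_buffers_and_slots();

	/*
	 * Use the value captured at the start, not a fresh read of shared
	 * memory: a slot advanced concurrently may not have flushed its new
	 * restart_lsn yet, so trusting a later read here could remove WAL
	 * that is still needed after a crash.
	 */
	remove_wal_older_than(slot_min_lsn);
}

int
main(void)
{
	sketch_create_checkpoint();
	return 0;
}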
Diffstat (limited to 'src/backend/replication/logical/logical.c')
-rw-r--r--  src/backend/replication/logical/logical.c  10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 74e22fff78d..e9105edaef5 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -1803,7 +1803,15 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)
 		SpinLockRelease(&MyReplicationSlot->mutex);

-		/* first write new xmin to disk, so we know what's up after a crash */
+		/*
+		 * First, write new xmin and restart_lsn to disk so we know what's up
+		 * after a crash.  Even when we do this, the checkpointer can see the
+		 * updated restart_lsn value in the shared memory; then, a crash can
+		 * happen before we manage to write that value to the disk.  Thus,
+		 * checkpointer still needs to make special efforts to keep WAL
+		 * segments required by the restart_lsn written to the disk.  See
+		 * CreateCheckPoint() and CreateRestartPoint() for details.
+		 */
 		if (updated_xmin || updated_restart)
 		{
 			ReplicationSlotMarkDirty();