path: root/src/backend/replication/logical/worker.c
author     Amit Kapila <akapila@postgresql.org>    2025-09-08 06:10:15 +0000
committer  Amit Kapila <akapila@postgresql.org>    2025-09-08 06:10:15 +0000
commit     1f7e9ba3ac4eff13041abcc4c9c517ad835fa449 (patch)
tree       008c16a2d39134ebacc6ad9b2ff99b7a3a4cb7e8 /src/backend/replication/logical/worker.c
parent     43eb2c541941479714c11de9cfb7c67b54f1810d (diff)
Post-commit review fixes for 228c370868.
This commit fixes three issues:

1) When a disabled subscription is created with retain_dead_tuples set to
true, the launcher is not woken up immediately, which may lead to delays
in creating the conflict detection slot.

Creating the conflict detection slot is essential even when the
subscription is not enabled. This ensures that dead tuples are retained,
which is necessary for accurately identifying the type of conflict during
replication.

2) Conflict-related data was unnecessarily retained when the subscription
does not have a table.

3) Conflict-relevant data could be prematurely removed before applying
prepared transactions on the publisher that are in the commit critical
section.

This issue occurred because the backend executing COMMIT PREPARED was not
accounted for during the computation of oldestXid in the commit phase on
the publisher. As a result, the subscriber could advance the conflict
slot's xmin without waiting for such COMMIT PREPARED transactions to
complete.

We fixed this issue by identifying prepared transactions that are in the
commit critical section during computation of oldestXid in commit phase.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS9PR01MB16913DACB64E5721872AA5C02943BA@OS9PR01MB16913.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/OS9PR01MB16913F67856B0DA2A909788129400A@OS9PR01MB16913.jpnprd01.prod.outlook.com
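To make fix (3) concrete, the following is a small, self-contained C sketch of the idea only; it is not PostgreSQL code, and xact_entry, oldest_active_xid(), and the sample XIDs are hypothetical stand-ins for the server's shared transaction state and its oldestXid computation. The point it illustrates: a prepared transaction that has already entered the commit critical section must still be counted when computing the oldest active XID, otherwise the subscriber can advance the conflict slot's xmin past it.

/*
 * Illustrative sketch (not PostgreSQL code) of why prepared transactions
 * in the commit critical section must be included when computing the
 * oldest active transaction ID used for conflict detection.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t xid_t;

typedef struct
{
    xid_t   xid;        /* transaction ID, 0 if slot unused */
    bool    prepared;   /* prepared (two-phase) transaction? */
    bool    in_commit;  /* already in the commit critical section? */
} xact_entry;

/*
 * Return the oldest XID that must still be treated as active.  Before the
 * fix, a prepared transaction was skipped once COMMIT PREPARED entered the
 * commit critical section; after the fix it is included until the commit
 * actually completes.
 */
static xid_t
oldest_active_xid(const xact_entry *xacts, int n,
                  bool include_committing_prepared)
{
    xid_t   oldest = UINT32_MAX;

    for (int i = 0; i < n; i++)
    {
        if (xacts[i].xid == 0)
            continue;

        /* The buggy behaviour ignored prepared xacts already committing. */
        if (xacts[i].prepared && xacts[i].in_commit &&
            !include_committing_prepared)
            continue;

        if (xacts[i].xid < oldest)
            oldest = xacts[i].xid;
    }

    return oldest;
}

int
main(void)
{
    xact_entry xacts[] = {
        {.xid = 750, .prepared = false, .in_commit = false},
        {.xid = 742, .prepared = true, .in_commit = true},  /* COMMIT PREPARED running */
    };

    /* Before the fix the result skips xid 742 and is too new (750). */
    printf("before fix: %u\n", (unsigned) oldest_active_xid(xacts, 2, false));
    /* After the fix the committing prepared xact holds it back (742). */
    printf("after fix:  %u\n", (unsigned) oldest_active_xid(xacts, 2, true));
    return 0;
}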
Diffstat (limited to 'src/backend/replication/logical/worker.c')
-rw-r--r--  src/backend/replication/logical/worker.c  |  25
1 file changed, 21 insertions, 4 deletions
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f1ebd63e792..c0f6bef5c28 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -4595,11 +4595,28 @@ wait_for_local_flush(RetainDeadTuplesData *rdt_data)
* workers is complex and not worth the effort, so we simply return if not
* all tables are in the READY state.
*
- * It is safe to add new tables with initial states to the subscription
- * after this check because any changes applied to these tables should
- * have a WAL position greater than the rdt_data->remote_lsn.
+ * Advancing the transaction ID is necessary even when no tables are
+ * currently subscribed, to avoid retaining dead tuples unnecessarily.
+ * While it might seem safe to skip all phases and directly assign
+ * candidate_xid to oldest_nonremovable_xid during the
+ * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
+ * concurrently add tables to the subscription, the apply worker may not
+ * process invalidations in time. Consequently,
+ * HasSubscriptionRelationsCached() might miss the new tables, leading to
+ * premature advancement of oldest_nonremovable_xid.
+ *
+ * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
+ * invalidations are guaranteed to be processed before applying changes
+ * from newly added tables while waiting for the local flush to reach
+ * remote_lsn.
+ *
+ * Additionally, even if we check for subscription tables during
+ * RDT_GET_CANDIDATE_XID, they might be dropped before reaching
+ * RDT_WAIT_FOR_LOCAL_FLUSH. Therefore, it's still necessary to verify
+ * subscription tables at this stage to prevent unnecessary tuple
+ * retention.
*/
- if (!AllTablesyncsReady())
+ if (HasSubscriptionRelationsCached() && !AllTablesyncsReady())
{
TimestampTz now;
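The new guard reads subtly: the apply worker must keep advancing oldest_nonremovable_xid when the subscription has no tables at all (issue 2), but must back off while tables exist and any of them is still syncing. Below is a minimal standalone sketch of that decision; the predicate arguments are hypothetical stand-ins for HasSubscriptionRelationsCached() and AllTablesyncsReady(), and this is not PostgreSQL code.

/*
 * Illustrative sketch (not PostgreSQL code) of the guard added above.
 */
#include <stdbool.h>
#include <stdio.h>

static bool
should_defer_xid_advance(bool has_cached_relations, bool all_tablesyncs_ready)
{
    /*
     * Defer only when the subscription actually has tables and at least one
     * of them is not yet in the READY state.  A subscription without tables
     * falls through and advances oldest_nonremovable_xid, so dead tuples are
     * not retained needlessly.
     */
    return has_cached_relations && !all_tablesyncs_ready;
}

int
main(void)
{
    printf("no tables             -> defer=%d\n", should_defer_xid_advance(false, true));
    printf("tables, still syncing -> defer=%d\n", should_defer_xid_advance(true, false));
    printf("tables, all READY     -> defer=%d\n", should_defer_xid_advance(true, true));
    return 0;
}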