author     Tom Lane <tgl@sss.pgh.pa.us>  2012-05-22 19:42:18 -0400
committer  Tom Lane <tgl@sss.pgh.pa.us>  2012-05-22 19:42:18 -0400
commit     c676f835b544d73b3e75d994000d586f878fcb21 (patch)
tree       248747228fbfea945bcda87bbad5f8735bf67232 /src
parent     26d73ddac43667f80cec530ac8644beeecfd666f (diff)
Ensure that seqscans check for interrupts at least once per page.
If a seqscan encounters many consecutive pages containing only dead tuples, it can remain in the loop in heapgettup for a long time, and there was no CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were real-world situations where a query was effectively uncancelable for long stretches. Add a check that runs once per page, which should be enough to give reasonable response time without adding any measurable overhead.

Report and patch by Merlin Moncure (though I tweaked it a bit). Back-patch to all supported branches.
Diffstat (limited to 'src')
-rw-r--r--  src/backend/access/heap/heapam.c | 7
1 file changed, 7 insertions, 0 deletions
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 1c934007b2e..1ec1efcd112 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -218,6 +218,13 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
scan->rs_cbuf = InvalidBuffer;
}
+ /*
+ * Be sure to check for interrupts at least once per page. Checks at
+ * higher code levels won't be able to stop a seqscan that encounters
+ * many pages' worth of consecutive dead tuples.
+ */
+ CHECK_FOR_INTERRUPTS();
+
/* read page using selected strategy */
scan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM, page,
RBM_NORMAL, scan->rs_strategy);
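
The hunk above is the entire fix: one CHECK_FOR_INTERRUPTS() per heapgetpage() call, i.e. once per heap page rather than once per tuple. For readers outside the PostgreSQL tree, here is a minimal, self-contained C analog of that placement; it is not PostgreSQL code. It scans a file in 8 kB "pages" and polls a SIGINT-set flag once per page, standing in for CHECK_FOR_INTERRUPTS()/ProcessInterrupts(); the file argument and page size are illustrative assumptions.

/*
 * Standalone analog of the per-page interrupt check (not PostgreSQL code):
 * scan a file one 8 kB "page" at a time and honor a cancel request once per
 * page, so cancellation latency is bounded even if every page turns out to
 * be uninteresting.
 */
#include <signal.h>
#include <stdio.h>

#define PAGE_SIZE 8192

static volatile sig_atomic_t cancel_pending = 0;

static void
handle_sigint(int signo)
{
	(void) signo;
	cancel_pending = 1;			/* analogous to setting InterruptPending */
}

int
main(int argc, char **argv)
{
	static char page[PAGE_SIZE];
	FILE	   *fp;
	size_t		nread;
	size_t		i;
	long long	nonzero = 0;

	if (argc != 2)
	{
		fprintf(stderr, "usage: %s datafile\n", argv[0]);
		return 1;
	}

	signal(SIGINT, handle_sigint);

	fp = fopen(argv[1], "rb");
	if (fp == NULL)
	{
		perror("fopen");
		return 1;
	}

	while ((nread = fread(page, 1, sizeof(page), fp)) > 0)
	{
		/*
		 * Check for cancellation once per page, mirroring the placement of
		 * CHECK_FOR_INTERRUPTS() in heapgetpage(): frequent enough for a
		 * prompt response, cheap enough to be unmeasurable.
		 */
		if (cancel_pending)
		{
			fprintf(stderr, "scan canceled\n");
			break;
		}

		/* per-tuple work would go here; count nonzero bytes as a stand-in */
		for (i = 0; i < nread; i++)
			if (page[i] != 0)
				nonzero++;
	}

	fclose(fp);
	printf("nonzero bytes seen: %lld\n", nonzero);
	return 0;
}

Polling once per page rather than once per byte (or per tuple) keeps the cost of the check negligible while bounding cancel latency to at most one page's worth of work, which is the same trade-off the commit message describes.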