author    Tom Lane <tgl@sss.pgh.pa.us>    2012-05-22 19:42:23 -0400
committer Tom Lane <tgl@sss.pgh.pa.us>    2012-05-22 19:42:23 -0400
commit    3ce6fa5568a3d554551cfec4167a9f55510a9468 (patch)
tree      bc6c39fe0ad0209851055e1a7b21e55f48269e5c /src
parent    6e088424e2d41c01324f4bb730fab8ff44b68b81 (diff)
Ensure that seqscans check for interrupts at least once per page.
If a seqscan encounters many consecutive pages containing only dead tuples, it can remain in the loop in heapgettup for a long time, and there was no CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were real-world situations where a query was effectively uncancelable for long stretches. Add a check, placed so that it occurs once per page, which should be enough to provide reasonable response time without adding any measurable overhead.

Report and patch by Merlin Moncure (though I tweaked it a bit). Back-patch to all supported branches.
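For context, the following is a minimal, self-contained C sketch, not PostgreSQL code: the interrupt_pending flag and process_page() are hypothetical stand-ins for the backend's interrupt machinery and per-page work. It illustrates why a long page-at-a-time loop needs an explicit cancellation check inside the loop: the flag is only ever set asynchronously, so unless each iteration looks at it, a cancel request goes unnoticed until the loop finishes.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for PostgreSQL's interrupt-pending flag. */
static volatile sig_atomic_t interrupt_pending = 0;

static void
handle_sigint(int signo)
{
    (void) signo;
    interrupt_pending = 1;      /* just record the request; act on it at a safe point */
}

/* Simulated per-page work: pretend every tuple on the page is dead. */
static void
process_page(long pageno)
{
    (void) pageno;
}

int
main(void)
{
    signal(SIGINT, handle_sigint);

    for (long page = 0; page < 100000000L; page++)
    {
        /*
         * Check for a pending cancel once per page, mirroring the placement
         * of CHECK_FOR_INTERRUPTS() in the patch: frequent enough for prompt
         * cancellation, rare enough to add no measurable overhead.
         */
        if (interrupt_pending)
        {
            fprintf(stderr, "canceled at page %ld\n", page);
            exit(1);
        }

        process_page(page);
    }
    return 0;
}

Checking once per page keeps the response time to a cancel request bounded by the cost of processing a single page, at the cost of only one flag test per page of work.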
Diffstat (limited to 'src')
-rw-r--r--  src/backend/access/heap/heapam.c  7
1 file changed, 7 insertions, 0 deletions
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5294f30016f..89cf249447a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -217,6 +217,13 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
 		scan->rs_cbuf = InvalidBuffer;
 	}
 
+	/*
+	 * Be sure to check for interrupts at least once per page.  Checks at
+	 * higher code levels won't be able to stop a seqscan that encounters
+	 * many pages' worth of consecutive dead tuples.
+	 */
+	CHECK_FOR_INTERRUPTS();
+
 	/* read page using selected strategy */
 	scan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM, page,
 									   RBM_NORMAL, scan->rs_strategy);