author     Tom Lane <tgl@sss.pgh.pa.us>    2012-05-22 19:42:28 -0400
committer  Tom Lane <tgl@sss.pgh.pa.us>    2012-05-22 19:42:28 -0400
commit     c994b9211fd0cf7a5b680ae115330117604b9f7c (patch)
tree       a067e2fa987d01ab84c5c274bea9881b81e0f623 /src
parent     57615562504a3a10784d10a5205ed4bab41dba6e (diff)
Ensure that seqscans check for interrupts at least once per page.
If a seqscan encounters many consecutive pages containing only dead tuples, it can remain in the loop in heapgettup for a long time, and there was no CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were real-world situations where a query was effectively uncancelable for long stretches. Add a check, placed so as to occur once per page, which should be enough to provide reasonable response time without adding any measurable overhead.

Report and patch by Merlin Moncure (though I tweaked it a bit).

Back-patch to all supported branches.
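The pattern here generalizes to any long-running scan loop: poll for cancellation at a granularity coarse enough to cost nothing measurable, yet fine enough to stay responsive. The standalone sketch below illustrates that per-page check using a plain SIGINT flag in place of CHECK_FOR_INTERRUPTS(); it is not PostgreSQL code, and the names in it (scan_pages, TUPLES_PER_PAGE, interrupt_pending) are hypothetical.

/* Minimal sketch, assuming a single-threaded scan and a SIGINT handler
 * standing in for PostgreSQL's interrupt machinery. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static volatile sig_atomic_t interrupt_pending = 0;

static void
handle_sigint(int signo)
{
	(void) signo;
	interrupt_pending = 1;
}

#define TUPLES_PER_PAGE 100

static long
scan_pages(long npages)
{
	long	dead_tuples = 0;

	for (long page = 0; page < npages; page++)
	{
		/*
		 * One check per page: cheap enough to be unmeasurable, yet it
		 * bounds how long a run of all-dead pages can keep the scan
		 * uncancelable.
		 */
		if (interrupt_pending)
		{
			fprintf(stderr, "scan cancelled at page %ld\n", page);
			exit(1);
		}

		for (int tup = 0; tup < TUPLES_PER_PAGE; tup++)
		{
			/*
			 * Per-tuple work.  Here every tuple is "dead", so nothing is
			 * ever returned to the caller, but the per-page check above
			 * still keeps the loop responsive to cancellation.
			 */
			dead_tuples++;
		}
	}
	return dead_tuples;
}

int
main(void)
{
	signal(SIGINT, handle_sigint);
	printf("scanned, skipping %ld dead tuples\n", scan_pages(1000000L));
	return 0;
}

Checking once per page rather than once per tuple keeps the test off the per-tuple hot path while still limiting how long a stretch of dead tuples can delay cancellation, which is the trade-off the commit message describes.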
Diffstat (limited to 'src')
-rw-r--r--  src/backend/access/heap/heapam.c   7
1 file changed, 7 insertions, 0 deletions
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 10644c7d8f1..0dc6611d8dc 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -206,6 +206,13 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
 		scan->rs_cbuf = InvalidBuffer;
 	}
 
+	/*
+	 * Be sure to check for interrupts at least once per page.  Checks at
+	 * higher code levels won't be able to stop a seqscan that encounters
+	 * many pages' worth of consecutive dead tuples.
+	 */
+	CHECK_FOR_INTERRUPTS();
+
 	/* read page using selected strategy */
 	scan->rs_cbuf = ReadBufferWithStrategy(scan->rs_rd,
 										   page,