From 691c5ebf79bb011648fad0e6b234b94a28177e3c Mon Sep 17 00:00:00 2001
From: Tom Lane
Date: Tue, 11 Dec 2012 22:09:05 -0500
Subject: Add defenses against integer overflow in dynahash numbuckets
 calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against
that in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only
one call path.  Moreover, it worked by limiting the request size to
work_mem, but on a 64-bit machine it's possible to set work_mem high
enough that the problem appears anyway.  So let's fix the problem at the
root by installing limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.
---
 src/backend/executor/nodeHash.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index c90fe40b3c9..5d0fc77c301 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -500,7 +500,9 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
 	 * Both nbuckets and nbatch must be powers of 2 to make
 	 * ExecHashGetBucketAndBatch fast.  We already fixed nbatch; now inflate
 	 * nbuckets to the next larger power of 2.  We also force nbuckets to not
-	 * be real small, by starting the search at 2^10.
+	 * be real small, by starting the search at 2^10.  (Note: above we made
+	 * sure that nbuckets is not more than INT_MAX / 2, so this loop cannot
+	 * overflow, nor can the final shift to recalculate nbuckets.)
 	 */
 	i = 10;
 	while ((1 << i) < nbuckets)
--
cgit v1.2.3
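
For illustration, here is a minimal standalone C sketch of the hazard the
new comment documents.  It is not the actual PostgreSQL code: the helper
name next_pow2_clamped and its signature are invented, and the explicit
clamp stands in for the check that, per the comment, happens earlier in
ExecChooseHashTableSize.  Only the power-of-2 search loop is taken
verbatim from the patch context.

	#include <limits.h>

	/*
	 * Illustrative sketch only -- not the PostgreSQL source.  Without
	 * the clamp, an nbuckets request above INT_MAX / 2 would drive the
	 * loop to evaluate 1 << 31, overflowing a signed int (undefined
	 * behavior; on typical hardware the loop then never terminates,
	 * matching the "infinite loops" the commit message describes).
	 */
	static int
	next_pow2_clamped(long nbuckets)
	{
		int		i;

		/* Enforce the invariant the new comment relies on. */
		if (nbuckets > INT_MAX / 2)
			nbuckets = INT_MAX / 2;

		/* Same search as the patched code: start at 2^10, double up. */
		i = 10;
		while ((1 << i) < nbuckets)
			i++;

		return 1 << i;	/* at most 2^30, so this shift cannot overflow */
	}

Clamping the request before the search, rather than widening the loop
variable, mirrors the commit's approach: keep every intermediate shift
representable in an int, since dynahash requires the bucket count to fit
in one anyway.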