author     Peter Geoghegan <pg@bowt.ie>  2020-07-29 14:14:58 -0700
committer  Peter Geoghegan <pg@bowt.ie>  2020-07-29 14:14:58 -0700
commit     d6c08e29e7bc8bc3bf49764192c4a9c71fc0b097 (patch)
tree       8d0d2cdb7d18504b50a49433f9181130f74186c4 /src/backend/optimizer/plan/subselect.c
parent     6023b7ea717ca04cf1bd53709d9c862db07eaefb (diff)
Add hash_mem_multiplier GUC.
Add a GUC that acts as a multiplier on work_mem. It gets applied when sizing executor node hash tables that were previously size constrained using work_mem alone.

The new GUC can be used to preferentially give hash-based nodes more memory than the generic work_mem limit. It is intended to enable admin tuning of the executor's memory usage. Overall system throughput and system responsiveness can be improved by giving hash-based executor nodes more memory (especially over sort-based alternatives, which are often much less sensitive to being memory constrained).

The default value for hash_mem_multiplier is 1.0, which is also the minimum valid value. This means that hash-based nodes continue to apply work_mem in the traditional way by default.

hash_mem_multiplier is generally useful. However, it is being added now due to concerns about hash aggregate performance stability for users that upgrade to Postgres 13 (which added disk-based hash aggregation in commit 1f39bce0). While the old hash aggregate behavior risked out-of-memory errors, it is nevertheless likely that many users actually benefited. Hash agg's previous indifference to work_mem during query execution was not just faster; it also accidentally made aggregation resilient to grouping estimate problems (at least in cases where this didn't create destabilizing memory pressure).

hash_mem_multiplier can provide a certain kind of continuity with the behavior of Postgres 12 hash aggregates in cases where the planner incorrectly estimates that all groups (plus related allocations) will fit in work_mem/hash_mem. This seems necessary because hash-based aggregation is usually much slower when only a small fraction of all groups can fit. Even when it isn't possible to totally avoid hash aggregates that spill, giving hash aggregation more memory will reliably improve performance (the same cannot be said for external sort operations, which appear to be almost unaffected by memory availability provided it's at least possible to get a single merge pass).

The PostgreSQL 13 release notes should advise users that increasing hash_mem_multiplier can help with performance regressions associated with hash aggregation. That can be taken care of by a later commit.

Author: Peter Geoghegan
Reviewed-By: Álvaro Herrera, Jeff Davis
Discussion: https://postgr.es/m/20200625203629.7m6yvut7eqblgmfo@alap3.anarazel.de
Discussion: https://postgr.es/m/CAH2-WzmD%2Bi1pG6rc1%2BCjc4V6EaFJ_qSuKCCHVnH%3DoruqD-zqow%40mail.gmail.com
Backpatch: 13-, where disk-based hash aggregation was introduced.
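The hunks below call a new helper, get_hash_mem(), which is not defined in this file. The following is a minimal, self-contained sketch of how such a helper can derive the effective hash memory limit (in KB) from work_mem and the new GUC; the stubbed globals and the MAX_KILOBYTES stand-in are illustrative assumptions rather than PostgreSQL's actual declarations.

/*
 * Sketch only: derive an effective hash memory limit (in KB) from
 * work_mem and hash_mem_multiplier.  The globals and MAX_KILOBYTES
 * below are stubs so the example compiles on its own; the real
 * declarations live in PostgreSQL headers.
 */
#include <assert.h>
#include <limits.h>

static int    work_mem = 4096;            /* KB; stub for the real GUC */
static double hash_mem_multiplier = 1.0;  /* new GUC; 1.0 is the minimum */

#define MAX_KILOBYTES (INT_MAX / 1024)    /* stand-in clamp so "limit * 1024L" stays safe */

static int
get_hash_mem(void)
{
	double		hash_mem;

	assert(hash_mem_multiplier >= 1.0);

	hash_mem = (double) work_mem * hash_mem_multiplier;

	/* Clamp rather than risk integer overflow in callers */
	if (hash_mem < MAX_KILOBYTES)
		return (int) hash_mem;
	return MAX_KILOBYTES;
}

Callers such as subplan_is_hashable(), patched in the diff below, then compare estimated sizes against this limit times 1024L instead of against work_mem * 1024L.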
Diffstat (limited to 'src/backend/optimizer/plan/subselect.c')
-rw-r--r--   src/backend/optimizer/plan/subselect.c   9
1 file changed, 5 insertions, 4 deletions
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index b02fcb9bfe7..9a8f738c9d0 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -200,7 +200,7 @@ make_subplan(PlannerInfo *root, Query *orig_subquery,
* XXX If an ANY subplan is uncorrelated, build_subplan may decide to hash
* its output. In that case it would've been better to specify full
* retrieval. At present, however, we can only check hashability after
- * we've made the subplan :-(. (Determining whether it'll fit in work_mem
+ * we've made the subplan :-(. (Determining whether it'll fit in hash_mem
* is the really hard part.) Therefore, we don't want to be too
* optimistic about the percentage of tuples retrieved, for fear of
* selecting a plan that's bad for the materialization case.
@@ -278,7 +278,7 @@ make_subplan(PlannerInfo *root, Query *orig_subquery,
plan = create_plan(subroot, best_path);
- /* Now we can check if it'll fit in work_mem */
+ /* Now we can check if it'll fit in hash_mem */
/* XXX can we check this at the Path stage? */
if (subplan_is_hashable(plan))
{
@@ -716,16 +716,17 @@ static bool
subplan_is_hashable(Plan *plan)
{
double subquery_size;
+ int hash_mem = get_hash_mem();
/*
- * The estimated size of the subquery result must fit in work_mem. (Note:
+ * The estimated size of the subquery result must fit in hash_mem. (Note:
* we use heap tuple overhead here even though the tuples will actually be
* stored as MinimalTuples; this provides some fudge factor for hashtable
* overhead.)
*/
subquery_size = plan->plan_rows *
(MAXALIGN(plan->plan_width) + MAXALIGN(SizeofHeapTupleHeader));
- if (subquery_size > work_mem * 1024L)
+ if (subquery_size > hash_mem * 1024L)
return false;
return true;
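To make the practical effect of this hunk concrete, here is a worked example (not part of the commit) showing how the hashability threshold in subplan_is_hashable() moves once hash_mem_multiplier exceeds 1.0. All settings and sizes below are made-up illustrative values.

/*
 * Illustrative only: with work_mem = 4096 KB and hash_mem_multiplier = 2.0,
 * the effective limit doubles to 8192 KB, so a subquery result estimated
 * at roughly 6 MB passes the new hash_mem test even though it fails the
 * old work_mem test.
 */
#include <stdio.h>

int
main(void)
{
	int			work_mem = 4096;				/* KB, example setting */
	double		hash_mem_multiplier = 2.0;		/* example setting */
	int			hash_mem = (int) (work_mem * hash_mem_multiplier);
	double		subquery_size = 6.0 * 1024 * 1024;	/* ~6 MB estimated result */

	printf("fits under work_mem: %s\n",
		   subquery_size > work_mem * 1024L ? "no" : "yes");
	printf("fits under hash_mem: %s\n",
		   subquery_size > hash_mem * 1024L ? "no" : "yes");
	return 0;
}

With these example numbers the subquery result fails the old 4 MB work_mem test but passes the new 8 MB hash_mem test, so the planner would now consider hashing the subplan's output.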