path: root/src/backend/executor
2025-02-27  Generalize hash and ordering support in amapi  (Peter Eisentraut)
Stop comparing access method OID values against HASH_AM_OID and BTREE_AM_OID, and instead check the IndexAmRoutine for an index to see if it advertises its ability to perform the necessary ordering, hashing, or cross-type comparing functionality. The field amcanorder already existed; this patch uses it more widely. Fields amcanhash and amcancrosscompare are added for the other purposes.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
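A minimal sketch of the new style of check (illustrative only, not the committed code; the lookup via GetIndexAmRoutineByAmId() is one way to obtain the IndexAmRoutine):

    IndexAmRoutine *routine = GetIndexAmRoutineByAmId(amoid, false);

    /* ask the AM what it can do, instead of testing amoid against
     * HASH_AM_OID / BTREE_AM_OID */
    bool        can_order = routine->amcanorder;    /* sort ordering */
    bool        can_hash = routine->amcanhash;      /* hashing */
    bool        can_crosstype = routine->amcancrosscompare;    /* cross-type */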
2025-02-25  Fix bug in cbc127917 to handle nested Append correctly  (Amit Langote)
A non-leaf partition with a subplan that is an Append node was omitted from PlannedStmt.unprunableRelids because it was mistakenly included in PlannerGlobal.prunableRelids due to the way PartitionedRelPruneInfo.leafpart_rti_map[] is constructed. This happened when a non-leaf partition used an unflattened Append or MergeAppend. As a result, ExecGetRangeTableRelation() reported an error when called from CreatePartitionPruneState() to process the partition's own PartitionPruneInfo, since it was treated as prunable when it should not have been. Reported-by: Alexander Lakhin <exclusion@gmail.com> (via sqlsmith) Diagnosed-by: Tender Wang <tndrwang@gmail.com> Reviewed-by: Tender Wang <tndrwang@gmail.com> Discussion: https://postgr.es/m/74839af6-aadc-4f60-ae77-ae65f94bf607@gmail.com
2025-02-21  Drop opcintype from index AM strategy translation API  (Peter Eisentraut)
The type argument wasn't actually necessary. It was a remnant of converting the API of the gist strategy translation from using opclass to using opfamily+opcintype (commits c09e5a6a016, 622f678c102). For looking up the gist translation function, we used the convention "amproclefttype = amprocrighttype = opclass's opcintype" (see pg_amproc.h). But each operator family should only have one translation function, and getting the right type for the lookup is sometimes cumbersome and fragile, so this is all unnecessarily complicated.

To simplify this, change the gist strategy support procedure to take "any", "any" as arguments. (This is arbitrary but seems intuitive. The alternative of using InvalidOid as argument(s) upsets various DDL commands, so it's not practical.) Then we don't need opcintype for the lookup, and we can remove it from all the API layers introduced by commit c09e5a6a016.

This also adds some more documentation about the correct signature of the gist support function and adds more checks in gistvalidate(). This was previously underspecified. (It relied implicitly on the convention mentioned above.)

Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
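A hedged sketch of the shape such a translate support procedure might take; the function name is illustrative and the exact committed signature may differ:

    Datum
    my_gist_translate_cmptype(PG_FUNCTION_ARGS)
    {
        CompareType cmptype = (CompareType) PG_GETARG_INT32(0);
        StrategyNumber strat = InvalidStrategy;

        /* map the AM-independent semantics onto this opfamily's numbers */
        switch (cmptype)
        {
            case COMPARE_EQ:
                strat = RTEqualStrategyNumber;
                break;
            default:
                break;
        }
        PG_RETURN_UINT16(strat);
    }

Under the convention described above, such a procedure would be registered in pg_amproc with amproclefttype = amprocrighttype = "any".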
2025-02-20  Don't lock partitions pruned by initial pruning  (Amit Langote)
Before executing a cached generic plan, AcquireExecutorLocks() in plancache.c locks all relations in a plan's range table to ensure the plan is safe for execution. However, this locks runtime-prunable relations that will later be pruned during "initial" runtime pruning, introducing unnecessary overhead. This commit defers locking for such relations to executor startup and ensures that if the CachedPlan is invalidated due to concurrent DDL during this window, replanning is triggered. Deferring these locks avoids unnecessary locking overhead for pruned partitions, resulting in significant speedup, particularly when many partitions are pruned during initial runtime pruning.

* Changes to locking when executing generic plans:

AcquireExecutorLocks() now locks only unprunable relations, that is, those found in PlannedStmt.unprunableRelids (introduced in commit cbc127917e), to avoid locking runtime-prunable partitions unnecessarily. The remaining locks are taken by ExecDoInitialPruning(), which acquires them only for partitions that survive pruning.

This deferral does not affect the locks required for permission checking in InitPlan(), which takes place before initial pruning. ExecCheckPermissions() now includes an Assert to verify that all relations undergoing permission checks, none of which can be in the set of runtime-prunable relations, are properly locked.

* Plan invalidation handling:

Deferring locks introduces a window where prunable relations may be altered by concurrent DDL, invalidating the plan. A new function, ExecutorStartCachedPlan(), wraps ExecutorStart() to detect and handle invalidation caused by deferred locking. If invalidation occurs, ExecutorStartCachedPlan() updates CachedPlan using the new UpdateCachedPlan() function and retries execution with the updated plan. To ensure that all code paths that may be affected by this handle invalidation properly, all callers of ExecutorStart that may execute a PlannedStmt from a CachedPlan have been updated to use ExecutorStartCachedPlan() instead.

UpdateCachedPlan() replaces stale plans in CachedPlan.stmt_list. A new CachedPlan.stmt_context, created as a child of CachedPlan.context, allows freeing old PlannedStmts while preserving the CachedPlan structure and its statement list. This ensures that loops over statements in upstream callers of ExecutorStartCachedPlan() remain intact.

ExecutorStart() and ExecutorStart_hook implementations now return a boolean value indicating whether plan initialization succeeded with a valid PlanState tree in QueryDesc.planstate, or false otherwise, in which case QueryDesc.planstate is NULL. Hook implementations are required to call standard_ExecutorStart() at the beginning, and if it returns false, they should do the same without proceeding.

* Testing:

To verify these changes, the delay_execution module tests scenarios where cached plans become invalid due to changes in prunable relations after deferred locks.

* Note to extension authors:

ExecutorStart_hook implementations must verify plan validity after calling standard_ExecutorStart(), as explained earlier. For example:

    if (prev_ExecutorStart)
        plan_valid = prev_ExecutorStart(queryDesc, eflags);
    else
        plan_valid = standard_ExecutorStart(queryDesc, eflags);

    if (!plan_valid)
        return false;

    <extension-code>

    return true;

Extensions accessing child relations, especially prunable partitions, via ExecGetRangeTableRelation() must now ensure their RT indexes are present in es_unpruned_relids (introduced in commit cbc127917e), or they will encounter an error. This is a strict requirement after this change, as only relations in that set are locked.

The idea of deferring some locks to executor startup, allowing locks for prunable partitions to be skipped, was first proposed by Tom Lane.

Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: David Rowley <dgrowleyml@gmail.com> (earlier versions)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (earlier versions)
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
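Put together, an extension's hook under the new contract might look like the following sketch (the my_/prev_ names are the usual hook boilerplate, not part of the commit):

    static ExecutorStart_hook_type prev_ExecutorStart = NULL;

    static bool
    my_ExecutorStart(QueryDesc *queryDesc, int eflags)
    {
        bool        plan_valid;

        if (prev_ExecutorStart)
            plan_valid = prev_ExecutorStart(queryDesc, eflags);
        else
            plan_valid = standard_ExecutorStart(queryDesc, eflags);

        /* deferred locking detected concurrent DDL; the caller replans */
        if (!plan_valid)
            return false;

        /* queryDesc->planstate is now a valid PlanState tree */
        /* ... extension-specific setup here ... */

        return true;
    }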
2025-02-19  Assert that ExecOpenIndices and ExecCloseIndices are not repeated.  (Tom Lane)
These functions should be called at most once per ResultRelInfo; it's wasteful to do otherwise, and certainly the pattern of opening twice and then closing twice is a bad idea. Moreover, aminsertcleanup functions might not be prepared to be called twice, as the just-hardened code in BRIN demonstrates. This amounts to an API change, since such coding patterns were safe even if wasteful before v17. Hence, apply to HEAD only. (Extension code violating this new rule faces some risk in v17, but we just fixed brininsertcleanup and there are probably few other aminsertcleanup functions as yet. So the odds of breaking usable code seem higher than the odds of doing something useful with a back-patch.) Bug: #18815 Reported-by: Sergey Belyashov <sergey.belyashov@gmail.com> Discussion: https://postgr.es/m/18815-2a0407cc7f40b327@postgresql.org
2025-02-19  Consider BufFiles when adjusting hashjoin parameters  (Tomas Vondra)
Until now ExecChooseHashTableSize() considered only the size of the in-memory hash table, and ignored the memory needed for the batch files. Which can be a significant amount, because each batch needs two BufFiles (each with a BLCKSZ buffer). The same issue applies to increasing the number of batches during execution.

It's also possible to trigger a "batch explosion", e.g. due to duplicate values or skew. We've seen reports of joins with hundreds of thousands (or even millions) of batches, consuming gigabytes of memory, triggering OOM errors. These cases may be fairly rare, but it's clearly possible to hit them.

These issues can't be prevented during planning. Even if we improve that, it does not help with execution-time batch explosion. We can however reduce the impact and use as little memory as possible.

This patch improves the behavior by adjusting how the memory is divided between the hash table and batch files. It may be better to use fewer batch files, even if it means the hash table will exceed the limit.

The capacity of the hash node may be increased either by doubling the number of batches, or by doubling the size of the in-memory hash table. The outcome is the same, but the memory usage may be very different. For low nbatch values it's better to add batches, for high nbatch values it's better to allow a larger hash table.

The patch considers both options, both during the initial sizing and then during execution, to minimize how much the limit gets exceeded.

It might seem this patch is relaxing the memory limit - allowing it to be exceeded. But that's not really the case. It has always been like that, except the memory used by batches was ignored.

Allowing the hash table to grow may also prevent the batch explosion. If there's a large batch that can't be split (due to hash collisions or duplicate values), at some point the memory limit will increase enough for the batch to fit into the hash table.

This patch was in the works for a long time. The early versions were posted in 2019, and revived every year or two when we happened to get the next report of OOM due to a hashjoin batch explosion. Each of those patch versions were reviewed by a couple people. I'm mentioning only Melanie Plageman and Robert Haas, because they reviewed the last version, and the older patches are very different.

Reviewed-by: Melanie Plageman, Robert Haas
Discussion: https://postgr.es/m/7bed6c08-72a0-4ab9-a79c-e01fcdd0940f@vondra.me
Discussion: https://postgr.es/m/20190504003414.bulcbnge3rhwhcsh%40development
Discussion: https://postgr.es/m/20190428141901.5dsbge2ka3rxmpk6%40development
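To see why the batch files matter, a back-of-envelope illustration (standalone arithmetic only, assuming the default BLCKSZ of 8192 and a batch count on the scale of the reports above):

    #include <stdio.h>

    int
    main(void)
    {
        unsigned long long blcksz = 8192;           /* default BLCKSZ */
        unsigned long long nbatch = 256ULL * 1024;  /* a batch explosion */

        /* two BufFiles per batch (inner + outer), one buffer each */
        unsigned long long batch_mem = nbatch * 2 * blcksz;

        /* prints: 262144 batches -> ~4096 MB of BufFile buffers alone */
        printf("%llu batches -> ~%llu MB of BufFile buffers alone\n",
               nbatch, batch_mem / (1024 * 1024));
        return 0;
    }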
2025-02-18  Update outdated comments in nodeAgg.c.  (Jeff Davis)
Author: Zhang Mingli
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/198a8d1e-0792-4e7f-828e-902aa342f36e@Spark
2025-02-17  Fix an oversight in cbc127917 to handle MERGE correctly  (Amit Langote)
ExecInitModifyTable() forgot to trim MERGE-related lists to exclude entries for result relations pruned during initial pruning, so fix that. While at it, make the function's use of the pruned resultRelations list, rather than ModifyTable.resultRelations, more consistent. Reported-by: Alexander Lakhin <exclusion@gmail.com> (via sqlsmith) Reviewed-by: Junwang Zhao <zhjwpku@gmail.com> Discussion: https://postgr.es/m/e72c94d9-e5f9-4753-9bc1-69d72bd54b8a@gmail.com
2025-02-17  Move wal_buffers_full from PgStat_PendingWalStats to WalUsage  (Michael Paquier)
wal_buffers_full was introduced in pg_stat_wal in 8d9a935965f to provide metrics for tuning the GUC wal_buffers. WalUsage was introduced before that in df3b181499. Moving this field is beneficial for several reasons:

- This information can now be made available in more layers, providing more granularity than just pg_stat_wal, on a per-query basis: EXPLAIN, pgss and VACUUM/ANALYZE logs.
- A patch is under discussion to provide statistics for WAL at backend level, and this move simplifies a bit the handling of pending statistics.

The remaining data in PgStat_PendingWalStats now relates to write/sync counters and times, with equivalents present in pg_stat_io, which backend statistics are already able to track. So this should cut all the dependencies between PgStat_PendingWalStats and WAL stats at backend level. As of this change, wal_buffers_full only shows in pg_stat_wal.

Author: Bertrand Drouvot
Reviewed-by: Ilia Evdokimov
Discussion: https://postgr.es/m/Z6SOha5YFFgvpwQY@ip-10-97-1-34.eu-west-3.compute.internal
2025-02-14  Fix assertion on dereferenced object  (Daniel Gustafsson)
Commit 27cc7cd2bc8a accidentally placed the assertion ensuring that the pointer isn't NULL after it had already been accessed. Fix by moving the pointer dereferencing to after the assertion. Backpatch to all supported branches. Author: Dmitry Koval <d.koval@postgrespro.ru> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/1618848d-cdc7-414b-9c03-08cf4bef4408@postgrespro.ru Backpatch-through: 13
2025-02-11  Injection points for hash aggregation.  (Jeff Davis)
Requires adding a guard against shift-by-32. Previously, that was impossible because the number of partitions was always greater than 1, but a new injection point can force the number of partitions to 1. Discussion: https://postgr.es/m/ff4e59305e5d689e03cd256a736348d3e7958f8f.camel@j-davis.com
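A sketch of the kind of guard meant here (illustrative, not the committed code): shifting a 32-bit value by 32 is undefined behavior in C, and the shift amount reaches 32 exactly when the partition count is forced to 1:

    static inline uint32
    choose_spill_partition(uint32 hash, int partition_bits)
    {
        /* npartitions == 1 => partition_bits == 0 => "hash >> 32" is UB */
        if (partition_bits == 0)
            return 0;
        return hash >> (32 - partition_bits);
    }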
2025-02-07  Virtual generated columns  (Peter Eisentraut)
This adds a new variant of generated columns that are computed on read (like a view, unlike the existing stored generated columns, which are computed on write, like a materialized view). The syntax for the column definition is

    ... GENERATED ALWAYS AS (...) VIRTUAL

and VIRTUAL is also optional. VIRTUAL is the default rather than STORED to match various other SQL products. (The SQL standard makes no specification about this, but it also doesn't know about VIRTUAL or STORED.) (Also, virtual views are the default, rather than materialized views.)

Virtual generated columns are stored in tuples as null values. (A very early version of this patch had the ambition to not store them at all. But so much stuff breaks or gets confused if you have tuples where a column in the middle is completely missing. This is a compromise, and it still saves space over being forced to use stored generated columns. If we ever find a way to improve this, a bit of pg_upgrade cleverness could allow for upgrades to a newer scheme.)

The capabilities and restrictions of virtual generated columns are mostly the same as for stored generated columns. In some cases, this patch keeps virtual generated columns more restricted than they might technically need to be, to keep the two kinds consistent. Some of that could maybe be relaxed later after separate careful considerations.

Some functionality that is currently not supported, but could possibly be added as incremental features, some easier than others:

- index on or using a virtual column - hence also no unique constraints on virtual columns
- extended statistics on virtual columns
- foreign-key constraints on virtual columns
- not-null constraints on virtual columns (check constraints are supported)
- ALTER TABLE / DROP EXPRESSION
- virtual column cannot have domain type
- virtual columns are not supported in logical replication

The tests in generated_virtual.sql have been copied over from generated_stored.sql with the keyword replaced. This way we can make sure the behavior is mostly aligned, and the differences can be visible. Some tests for currently not supported features are currently commented out.

Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Tested-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org
2025-02-07  Track unpruned relids to avoid processing pruned relations  (Amit Langote)
This commit introduces changes to track unpruned relations explicitly, making it possible for top-level plan nodes, such as ModifyTable and LockRows, to avoid processing partitions pruned during initial pruning. Scan-level nodes, such as Append and MergeAppend, already avoid the unnecessary processing by accessing partition pruning results directly via part_prune_index. In contrast, top-level nodes cannot access pruning results directly and need to determine which partitions remain unpruned.

To address this, this commit introduces a new bitmapset field, es_unpruned_relids, which the executor uses to track the set of unpruned relations. This field is referenced during plan initialization to skip initializing certain nodes for pruned partitions. It is initialized with PlannedStmt.unprunableRelids, a new field that the planner populates with RT indexes of relations that cannot be pruned during runtime pruning. These include relations not subject to partition pruning and those required for execution regardless of pruning. PlannedStmt.unprunableRelids is computed during set_plan_refs() by removing the RT indexes of runtime-prunable relations, identified from PartitionPruneInfos, from the full set of relation RT indexes. ExecDoInitialPruning() then updates es_unpruned_relids by adding partitions that survive initial pruning.

To support this, PartitionedRelPruneInfo and PartitionedRelPruningData now include a leafpart_rti_map[] array that maps partition indexes to their corresponding RT indexes. The former is used in set_plan_refs() when constructing unprunableRelids, while the latter is used in ExecDoInitialPruning() to convert partition indexes returned by get_matching_partitions() into RT indexes, which are then added to es_unpruned_relids.

These changes make it possible for ModifyTable and LockRows nodes to process only relations that remain unpruned after initial pruning. ExecInitModifyTable() trims lists, such as resultRelations, withCheckOptionLists, returningLists, and updateColnosLists, to consider only unpruned partitions. It also creates ResultRelInfo structs only for these partitions. Similarly, child RowMarks for pruned relations are skipped. By avoiding unnecessary initialization of structures for pruned partitions, these changes improve the performance of updates and deletes on partitioned tables during initial runtime pruning.

Due to the ExecInitModifyTable() changes described above, EXPLAIN on a plan for UPDATE and DELETE that uses runtime initial pruning no longer lists partitions pruned during initial pruning.

Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
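For code outside the executor core, the practical consequence looks roughly like this sketch (the two-argument ExecGetRangeTableRelation() form is shown; the exact signature may differ):

    /* only RT indexes surviving initial pruning are safe to touch */
    if (bms_is_member(rti, estate->es_unpruned_relids))
    {
        Relation    rel = ExecGetRangeTableRelation(estate, rti);

        /* ... the relation is locked and initialized ... */
    }
    else
    {
        /* pruned: no lock was taken, so do not open it */
    }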
2025-02-03  Integrate GistTranslateCompareType() into IndexAmTranslateCompareType()  (Peter Eisentraut)
This turns GistTranslateCompareType() into a callback function of the gist index AM instead of a standalone function. The existing callers are changed to use IndexAmTranslateCompareType(). This then makes that code not hardcoded toward gist. This means in particular that the temporal keys code is now independent of gist. Also, this generalizes commit 74edabce7a3, so other index access methods other than the previously hardcoded ones could now work as REPLICA IDENTITY in a logical replication subscriber. Author: Mark Dilger <mark.dilger@enterprisedb.com> Co-authored-by: Peter Eisentraut <peter@eisentraut.org> Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
2025-02-01  Rename GistTranslateStratnum() to GistTranslateCompareType()  (Peter Eisentraut)
Follow up to commit 630f9a43cec. The previous name had become confusing, because it doesn't actually translate a strategy number but a CompareType into a strategy number. We might add the inverse at some point, which would then probably be called something like GistTranslateStratnum. Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
2025-01-31  Get rid of our dependency on type "long" for memory size calculations.  (Tom Lane)
Consistently use "Size" (or size_t, or in some places int64 or double) as the type for variables holding memory allocation sizes.

In most places variables' data types were fine already, but we had an ancient habit of computing bytes from kilobytes-units GUCs with code like "work_mem * 1024L". That risks overflow on Win64, where "long" is not as wide as "size_t". We worked around that by restricting such GUCs' ranges, so you couldn't set work_mem et al higher than 2GB on Win64. This patch removes that restriction, after replacing such calculations with "work_mem * (Size) 1024" or variants of that.

It should be noted that this patch was constructed by searching outwards from the GUCs that have MAX_KILOBYTES as upper limit. So I can't positively guarantee there are no other places doing memory-size arithmetic in int or long variables. I do however feel pretty confident that increasing MAX_KILOBYTES on Win64 is safe now. Also, nothing in our code should be dealing in multiple-gigabyte allocations without authorization from a relevant GUC, so it seems pretty likely that this search caught everything that could be at risk of overflow.

Author: Vladlen Popolitov <v.popolitov@postgrespro.ru>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
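The class of bug being removed, in miniature (illustrative values; work_mem is a kilobytes-unit GUC):

    int         work_mem = 3 * 1024 * 1024;     /* 3GB expressed in kB */

    long        old_way = work_mem * 1024L;     /* overflows where "long"
                                                 * is 32 bits (Win64) */
    Size        new_way = work_mem * (Size) 1024;   /* 64-bit arithmetic */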
2025-01-31  Fix bad indentation introduced in commit d47cbf474  (Amit Langote)
Per buildfarm member koel
2025-01-31  Perform runtime initial pruning outside ExecInitNode()  (Amit Langote)
This commit builds on the prior change that moved PartitionPruneInfos out of individual plan nodes into a list in PlannedStmt, making it possible to initialize PartitionPruneStates without traversing the plan tree and to perform runtime initial pruning before ExecInitNode() initializes the plan trees.

These tasks are now handled in a new routine, ExecDoInitialPruning(), which is called by InitPlan() before calling ExecInitNode() on various plan trees. ExecDoInitialPruning() performs the initial pruning and saves the result -- a Bitmapset of indexes for surviving child subnodes -- in es_part_prune_results, a list in EState. PartitionPruneStates created for initial pruning are stored in es_part_prune_states, another list in EState, for later use during exec pruning. Both lists are parallel to es_part_prune_infos, which holds the PartitionPruneInfos from PlannedStmt, enabling shared indexing.

PartitionPruneStates initialized in ExecDoInitialPruning() now include only the PartitionPruneContexts for initial pruning steps. Exec pruning contexts are initialized later in ExecInitPartitionExecPruning() when the parent plan node is initialized, as the exec pruning step expressions depend on the parent node's PlanState. The existing function PartitionPruneFixSubPlanMap() has been repurposed for this initialization, to avoid duplicating a similar loop structure for finding the PartitionedRelPruningData whose exec pruning contexts need initializing. It has been renamed to InitExecPruningContexts() to reflect its new primary responsibility. The original logic to "fix subplan maps" remains intact but is now encapsulated within the renamed function.

This commit removes two obsolete Asserts in partkey_datum_from_expr(). The ExprContext used for pruning expression evaluation is now independent of the parent PlanState, making these Asserts unnecessary.

By centralizing pruning logic and decoupling it from the plan initialization step (ExecInitNode()), this change sets the stage for future patches that will use the result of initial pruning to save the overhead of redundant processing for pruned partitions.

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
2025-01-30  Move BitmapTableScan per-scan setup into a helper  (Melanie Plageman)
Add BitmapTableScanSetup(), a helper which contains all of the code that must be done on every scan of the table in a bitmap table scan. This includes scanning the index, building the bitmap, and setting up the scan descriptors. Pushing this setup into a helper function makes BitmapHeapNext() more readable. Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com> Discussion: https://postgr.es/m/CAN55FZ1vXu%2BZdT0_MM-i1vbTdfHHf0KR3cK6R5gs6dNNNpyrJw%40mail.gmail.com
2025-01-30  Simplify executor's handling of CaseTestExpr & CoerceToDomainValue.  (Tom Lane)
Instead of deciding at runtime whether to read from casetest.value or caseValue_datum, split EEOP_CASE_TESTVAL into two opcodes and make the decision during expression compilation. Similarly for EEOP_DOMAIN_TESTVAL. This actually results in net less code, mainly because llvmjit_expr.c's code for handling these opcodes gets shorter. The performance gain is doubtless negligible, but this seems worth changing anyway on grounds of simplicity and understandability. Author: Andreas Karlsson <andreas@proxel.se> Co-authored-by: Xing Guo <higuoxing@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CACpMh+AiBYAWn+D1aU7Rsy-V1tox06Cbc0H3qA7rwL5zdJ=anQ@mail.gmail.com
2025-01-30  Move PartitionPruneInfo out of plan nodes into PlannedStmt  (Amit Langote)
This moves PartitionPruneInfo from plan nodes to PlannedStmt, simplifying traversal by centralizing all PartitionPruneInfo structures in a single list in it, which holds all instances for the main query and its subqueries. Instead of plan nodes (Append or MergeAppend) storing PartitionPruneInfo pointers, they now reference an index in this list.

A bitmapset field is added to PartitionPruneInfo to store the RT indexes corresponding to the apprelids field in Append or MergeAppend. This allows execution pruning logic to verify that it operates on the correct plan node, mainly to facilitate debugging.

Duplicated code in set_append_references() and set_mergeappend_references() is refactored into a new function, register_pruneinfo(). This updates RT indexes by applying rtoffset and adds PartitionPruneInfo to the global list in PlannerGlobal.

By allowing pruning to be performed without traversing the plan tree, this change lays the groundwork for runtime initial pruning to occur independently of plan tree initialization.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> (earlier version)
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
2025-01-21  Refactor ExecScan() to allow inlining of its core logic  (Amit Langote)
This commit refactors ExecScan() by moving its tuple-fetching, filtering, and projection logic into an inline-able function, ExecScanExtended(), defined in src/include/executor/execScan.h. ExecScanExtended() accepts parameters for EvalPlanQual state, qualifiers (ExprState), and projection (ProjectionInfo). Specialized variants of the execution function of a given Scan node (for example, ExecSeqScan() for SeqScan) can then pass const-NULL for unused parameters. This allows the compiler to inline the logic and eliminate unnecessary branches or checks. Each variant function thus contains only the necessary code, optimizing execution for scans where these features are not needed.

The variant function to be used is determined in the ExecInit*() function of the node and assigned to the ExecProcNode function pointer in the node's PlanState, effectively turning runtime checks and conditional branches on the NULLness of epqstate, qual, and projInfo into static ones, provided the compiler successfully eliminates unnecessary checks from the inlined code of ExecScanExtended().

Currently, only ExecSeqScan() is modified to take advantage of this inline-ability. Other Scan nodes might benefit from such specialized variant functions but that is left as future work.

Benchmarks performed by Junwang Zhao, David Rowley and myself show up to a 5% reduction in execution time for queries that rely heavily on Seq Scans. The most significant improvements were observed in scenarios where EvalPlanQual, qualifiers, and projection were not required, but other cases also benefit from reduced runtime overhead due to the inlining and removal of unnecessary code paths.

The idea for this patch first came from Andres Freund in an off-list discussion. The refactoring approach implemented here is based on a proposal by David Rowley, significantly improving upon the patch I (amitlan) initially proposed.

Suggested-by: Andres Freund <andres@anarazel.de>
Co-authored-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Tested-by: Junwang Zhao <zhjwpku@gmail.com>
Tested-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGaH-otvqW_ce-paL=96JvU4j+Xbuk+14esJNDwefdkOg@mail.gmail.com
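A simplified sketch of the pattern (EvalPlanQual handling omitted, and the variant name below is illustrative rather than the committed one):

    static pg_attribute_always_inline TupleTableSlot *
    ExecScanExtended(ScanState *node,
                     ExecScanAccessMtd accessMtd,
                     ExecScanRecheckMtd recheckMtd,
                     EPQState *epqstate,
                     ExprState *qual,
                     ProjectionInfo *projInfo)
    {
        ExprContext *econtext = node->ps.ps_ExprContext;

        /* recheckMtd and epqstate are used only for EPQ, omitted here */

        for (;;)
        {
            TupleTableSlot *slot = accessMtd(node);

            if (TupIsNull(slot))
                return slot;
            econtext->ecxt_scantuple = slot;

            /* with a compile-time NULL qual, this test disappears */
            if (qual == NULL || ExecQual(qual, econtext))
                return projInfo ? ExecProject(projInfo) : slot;

            ResetExprContext(econtext);
        }
    }

    /* SeqScan variant for plans needing no EPQ, qual, or projection */
    static TupleTableSlot *
    ExecSeqScanNoQualNoProj(PlanState *pstate)
    {
        SeqScanState *node = castNode(SeqScanState, pstate);

        return ExecScanExtended(&node->ss, SeqNext, SeqRecheck,
                                NULL, NULL, NULL);
    }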
2025-01-16  Add OLD/NEW support to RETURNING in DML queries.  (Dean Rasheed)
This allows the RETURNING list of INSERT/UPDATE/DELETE/MERGE queries to explicitly return old and new values by using the special aliases "old" and "new", which are automatically added to the query (if not already defined) while parsing its RETURNING list, allowing things like:

    RETURNING old.colname, new.colname, ...
    RETURNING old.*, new.*

Additionally, a new syntax is supported, allowing the names "old" and "new" to be changed to user-supplied alias names, e.g.:

    RETURNING WITH (OLD AS o, NEW AS n) o.colname, n.colname, ...

This is useful when the names "old" and "new" are already defined, such as inside trigger functions, allowing backwards compatibility to be maintained -- the interpretation of any existing queries that happen to already refer to relations called "old" or "new", or use those as aliases for other relations, is not changed.

For an INSERT, old values will generally be NULL, and for a DELETE, new values will generally be NULL, but that may change for an INSERT with an ON CONFLICT ... DO UPDATE clause, or if a query rewrite rule changes the command type. Therefore, we put no restrictions on the use of old and new in any DML queries.

Dean Rasheed, reviewed by Jian He and Jeff Davis.

Discussion: https://postgr.es/m/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com
2025-01-15  Change gist stratnum function to use CompareType  (Peter Eisentraut)
This changes commit 7406ab623fe in that the gist strategy number mapping support function is changed to use the CompareType enum as input, instead of the "well-known" RT*StrategyNumber strategy numbers. This is a bit cleaner, since you are not dealing with two sets of strategy numbers. Also, this will enable us to subsume this system into a more general system of using CompareType to define operator semantics across index methods. Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
2025-01-15  Rename RowCompareType to CompareType  (Peter Eisentraut)
RowCompareType served as a way to describe the fundamental meaning of an operator, notionally independent of an operator class (although so far this was only really supported for btrees). Its original purpose was for use inside RowCompareExpr, and it has also found some small use outside, such as for get_op_btree_interpretation(). We want to expand this now, as a more general way to describe operator semantics for other index access methods, including gist (to improve GistTranslateStratnum()) and others not written yet. To avoid future confusion, we rename the type to CompareType and the symbols from ROWCOMPARE_XXX to COMPARE_XXX to reflect their more general purpose. Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
2025-01-13  Revert "TupleHashTable: store additional data along with tuple."  (Jeff Davis)
This reverts commit e0ece2a981ee9068f50c4423e303836c2585eb02 due to performance regressions. Reported-by: David Rowley
2025-01-13  Fix pgindent damage  (Richard Guo)
Oversight in commit e0ece2a98.
2025-01-11  Add support for NOT ENFORCED in CHECK constraints  (Peter Eisentraut)
This adds support for the NOT ENFORCED/ENFORCED flag for constraints, with support for check constraints. The plan is to eventually support this for foreign key constraints, where it is typically more useful. Note that CHECK constraints do not currently support ALTER operations, so changing the enforceability of an existing constraint isn't possible without dropping and recreating it. This could be added later. Author: Amul Sul <amul.sul@enterprisedb.com> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: jian he <jian.universality@gmail.com> Tested-by: Triveni N <triveni.n@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b962c5AcYW9KUt_R_ER5qs3fUGbe4az-SP-vuwPS-w-AGA@mail.gmail.com
2025-01-10  Fix redefinition of type in commit e0ece2a981.  (Jeff Davis)
2025-01-10  TupleHashTable: store additional data along with tuple.  (Jeff Davis)
Previously, the caller needed to allocate the memory and the TupleHashTable would store a pointer to it. That wastes space for the palloc overhead as well as the size of the pointer itself. Now, the TupleHashTable relies on the caller to correctly specify the additionalsize, and allocates that amount of space. The caller can then request a pointer into that space. Discussion: https://postgr.es/m/b9cbf0219a9859dc8d240311643ff4362fd9602c.camel@j-davis.com Reviewed-by: Heikki Linnakangas
2025-01-07  ExecInitAgg: update aggstate->numaggs and ->numtrans earlier.  (Jeff Davis)
Functions hash_agg_entry_size() and build_hash_tables() make use of those values for memory size estimates. Because this change only affects memory estimates, don't backpatch. Discussion: https://postgr.es/m/7530bd8783b1a78d53a3c70383e38d8da0a5ffe5.camel%40j-davis.com
2025-01-07  nodeSetOp.c: missing additionalsize for BuildTupleHashTable().  (Jeff Davis)
Provide additionalsize argument, which can affect the calculations for 'nbuckets'. Also, future work for Hash Aggregation will rely on the correct additionalsize. Discussion: https://postgr.es/m/7530bd8783b1a78d53a3c70383e38d8da0a5ffe5.camel%40j-davis.com
2025-01-07  Remove unused TupleHashTableData->entrysize.  (Jeff Davis)
Discussion: https://postgr.es/m/7530bd8783b1a78d53a3c70383e38d8da0a5ffe5.camel%40j-davis.com
2025-01-02  Fix outdated CHUNKHDRSZ value in nodeAgg.c  (David Rowley)
CHUNKHDRSZ was defined as 16 bytes, which was true when that code went in, but since c6e0fe1f2, 8 is a more accurate value. Here we adjust it to use sizeof(MemoryChunk), which is normally 8, or 16 for cassert builds. c6e0fe1f2 first appeared in v16, so this is technically wrong in v16 up to master, but let's apply this only to master as adjusting this does influence the estimated number of batches in the aggregate costing code and we don't want to cause plan instability in released versions. Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAApHDvpMpRQvsTqZo3FinXkgytwxwF8sCyZm83xDj-1s_hLe+w@mail.gmail.com
2025-01-01  Update copyright for 2025  (Bruce Momjian)
Backpatch-through: 13
2024-12-28  Speedup tuple deformation with additional function inlining  (David Rowley)
This adjusts slot_deform_heap_tuple() to add special-case loops to eliminate much of the branching that was done within the body of the main deform loop.

Previously, while looping over each attribute to deform, slot_deform_heap_tuple() would always recheck if the given attribute was NULL by looking at HeapTupleHasNulls() and if so, went on to check the tuple's NULL bitmap. Since many tuples won't contain any NULLs, we can just check HeapTupleHasNulls() once and when there are no NULLs, use a more compact version of the deforming loop which contains no NULL checking code at all.

The same is possible for the "slow" mode checking part of the loop. That variable was checked several times for each attribute, once to determine if the offset to the attribute value could be taken from the attcacheoff, and again to check if the offset could be cached for next time. These "slow" checks can mostly be eliminated by instead having multiple loops. Initially, we can start in the non-slow loop and break out of that loop if and only if we must stop caching the offset. This eliminates branching for both slow and non-slow deforming methods. The amount of code required for the no nulls / non-slow version is very small.

It's possible to have separate loops like this due to the fact that once we move into slow mode, we never need to switch back into non-slow mode for a given tuple.

We have the compiler take care of writing out the multiple required loops by having a pg_attribute_always_inline function which gets called various times passing in constant values for the "slow" and "hasnulls" parameters. This allows the compiler to eliminate const-false branches and remove comparisons for const-true ones.

This commit has shown overall query performance increases of around 5-20% in deform-heavy OLAP-type workloads.

Author: David Rowley
Reviewed-by: Victor Yegorov
Discussion: https://postgr.es/m/CAGnEbog92Og2CpC2S8=g_HozGsWtt_3kRS1sXjLz0jKSoCNfLw@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvo9e0XG71WrefYaRv5n4xNPLK4k8LjD0mSR3c9KR2vi2Q@mail.gmail.com
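The compiler trick is easier to see in a standalone miniature (not the PostgreSQL code; GCC/Clang attribute shown): one always-inline worker takes constant flags, and each thin wrapper gets its own specialized loop with the dead branches deleted.

    #include <stddef.h>
    #include <stdint.h>

    static inline __attribute__((always_inline)) int64_t
    sum_attrs(const int64_t *vals, const uint8_t *nullbits, int natts,
              int hasnulls)
    {
        int64_t     sum = 0;

        for (int i = 0; i < natts; i++)
        {
            /* with hasnulls == 0 this whole test folds away */
            if (hasnulls && !(nullbits[i >> 3] & (1 << (i & 7))))
                continue;       /* attribute is NULL (bit clear) */
            sum += vals[i];
        }
        return sum;
    }

    int64_t
    sum_no_nulls(const int64_t *vals, int natts)
    {
        return sum_attrs(vals, NULL, natts, 0);     /* compact loop */
    }

    int64_t
    sum_with_nulls(const int64_t *vals, const uint8_t *nullbits, int natts)
    {
        return sum_attrs(vals, nullbits, natts, 1);
    }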
2024-12-21  Optimize alignment calculations in tuple form/deform  (David Rowley)
Here we convert CompactAttribute.attalign from a char (directly derived from pg_attribute.attalign) into a uint8, which stores the number of bytes to align the column's value by in the tuple. This allows tuple deformation and tuple size calculations to move away from using the inefficient att_align_nominal() macro, which manually checks each TYPALIGN_* char to translate that into the alignment bytes for the given type. Effectively, this commit changes those to TYPEALIGN calls, which are branchless and only perform some simple arithmetic with some bit-twiddling. The removed branches were often mispredicted by CPUs, especially so in real-world tables which often contain a mishmash of different types with different alignment requirements.

Author: David Rowley
Reviewed-by: Andres Freund, Victor Yegorov
Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
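For reference, the branchless arithmetic in question is c.h's TYPEALIGN macro, which rounds an offset up to a power-of-two alignment with bit-twiddling instead of switching on a TYPALIGN_* character:

    #define TYPEALIGN(ALIGNVAL,LEN)  \
        (((uintptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((uintptr_t) ((ALIGNVAL) - 1)))

    /* e.g. aligning offset 13 for an 8-byte column: */
    uintptr_t   off = TYPEALIGN(8, (uintptr_t) 13);     /* == 16, branch-free */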
2024-12-20  Introduce CompactAttribute array in TupleDesc, take 2  (David Rowley)
The new compact_attrs array stores a few select fields from FormData_pg_attribute in a more compact way, using only 16 bytes per column instead of the 104 bytes that FormData_pg_attribute uses. Using CompactAttribute allows performance-critical operations such as tuple deformation to be performed without looking at the FormData_pg_attribute element in TupleDesc which means fewer cacheline accesses. For some workloads, tuple deformation can be the most CPU intensive part of processing the query. Some testing with 16 columns on a table where the first column is variable length showed around a 10% increase in transactions per second for an OLAP type query performing aggregation on the 16th column. However, in certain cases, the increases were much higher, up to ~25% on one AMD Zen4 machine. This also makes pg_attribute.attcacheoff redundant. A follow-on commit will remove it, thus shrinking the FormData_pg_attribute struct by 4 bytes. Author: David Rowley Reviewed-by: Andres Freund, Victor Yegorov Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
2024-12-19  Get rid of old version of BuildTupleHashTable().  (Tom Lane)
It was reasonable to preserve the old API of BuildTupleHashTable() in the back branches, but in HEAD we should actively discourage use of that version. There are no remaining callers in core, so just get rid of it. Then rename BuildTupleHashTableExt() back to BuildTupleHashTable(). While at it, fix up the miserably-poorly-maintained header comment for BuildTupleHashTable[Ext]. It looks like more than one patch in this area has had the opinion that updating comments is beneath them. Discussion: https://postgr.es/m/538343.1734646986@sss.pgh.pa.us
2024-12-19  Use ExecGetCommonSlotOps infrastructure in more places.  (Tom Lane)
Append, MergeAppend, and RecursiveUnion can all use the support functions added in commit 276279295. The first two can report a fixed result slot type if all their children return the same fixed slot type. That does nothing for the append step itself, but might allow optimizations in the parent plan node. RecursiveUnion can optimize tuple hash table operations in the same way as SetOp now does. Patch by me; thanks to Richard Guo and David Rowley for review. Discussion: https://postgr.es/m/1850138.1731549611@sss.pgh.pa.us
2024-12-19  Convert SetOp to read its inputs as outerPlan and innerPlan.  (Tom Lane)
The original design for set operations involved appending the two input relations into one and adding a flag column that allows distinguishing which side each row came from. Then the SetOp node pries them apart again based on the flag. This is bizarre. The only apparent reason to do it is that when sorting, we'd only need one Sort node not two. But since sorting is at least O(N log N), sorting all the data is actually worse than sorting each side separately --- plus, we have no chance of taking advantage of presorted input. On top of that, adding the flag column frequently requires an additional projection step that adds cycles, and then the Append node isn't free either.

Let's get rid of all of that and make the SetOp node have two separate children, using the existing outerPlan/innerPlan infrastructure.

This initial patch re-implements nodeSetop.c and does a bare minimum of work on the planner side to generate correctly-shaped plans. In particular, I've tried not to change the cost estimates here, so that the visible changes in the regression test results will only involve removal of useless projection steps and not any changes in whether to use sorted vs hashed mode.

For SORTED mode, we combine successive identical tuples from each input into groups, and then merge-join the groups. The tuple comparisons now use SortSupport instead of simple equality, but the group-formation part should involve roughly the same number of tuple comparisons as before. The cross-comparisons between left and right groups probably add to that, but I'm not sure how to quantify how many more comparisons we might need.

For HASHED mode, nodeSetop's logic is almost the same as before, just refactored into two separate loops instead of one loop that has an assumption that it will see all the left-hand inputs first.

In both modes, I added early-exit logic to not bother reading the right-hand relation if the left-hand input is empty, since neither INTERSECT nor EXCEPT modes can produce any output if the left input is empty. This could have been done before in the hashed mode, but not in sorted mode. Sorted mode can also stop as soon as it exhausts the left input; any remaining right-hand tuples cannot have matches.

Also, this patch adds some infrastructure for detecting whether child plan nodes all output the same type of tuple table slot. If they do, the hash table logic can use slightly more efficient code based on assuming that that's the input slot type it will see. We'll make use of that infrastructure in other plan node types later.

Patch by me; thanks to Richard Guo and David Rowley for review.

Discussion: https://postgr.es/m/1850138.1731549611@sss.pgh.pa.us
2024-12-19  Remove extra prefetch iterator setup for Bitmap Table Scan  (Melanie Plageman)
1a0da347a7ac98db replaced Bitmap Table Scan's separate private and shared bitmap iterators with a unified iterator. It accidentally set up the prefetch iterator twice for non-parallel bitmap table scans. Remove the extra setup call to tbm_begin_iterate().
2024-12-19  Fix bitmap table scan crash on iterator release  (Melanie Plageman)
1a0da347a7ac98db replaced Bitmap Table Scan's individual private and shared iterators with a unified iterator. It neglected, however, to check if the iterator had already been cleaned up before doing so on rescan. Add this check both on rescan and end scan to be safe. Reported-by: Richard Guo Author: Richard Guo Discussion: https://postgr.es/m/CAMbWs48nrhcLY1kcd-u9oD%2B6yiS631F_8Fx8ZGsO-BYDwH%2Bbyw%40mail.gmail.com
2024-12-19  Optimize grouping equality checks with virtual slots  (David Rowley)
8f4ee9626 fixed an old Assert failure that could happen when the slot type used to look up the hash table for BuildTupleHashTableExt() users wasn't a TTSOpsMinimalTuple slot. The fix for that in the back branches had to be to pass the TupleTableSlotOps as NULL, however in master, since we have the inputOps parameter as was added by d96d1d515, we can pass that down instead. At least one caller uses a fixed slot that's always TTSOpsVirtual, so passing down inputOps for these cases allows ExecBuildGroupingEqual() to skip adding the EEOP_INNER_FETCHSOME ExprEvalStep. This should increase the performance of hashed subplans very slightly. Author: Tom Lane, David Rowley Discussion: https://postgr.es/m/2543667.1734483723@sss.pgh.pa.us
2024-12-19  Fix Assert failure in WITH RECURSIVE UNION queries  (David Rowley)
If the non-recursive part of a recursive CTE ended up using TTSOpsBufferHeapTuple as the table slot type, then a duplicate value could cause an Assert failure in CheckOpSlotCompatibility() when checking the hash table for the duplicate value. The expected slot type for the deform step was TTSOpsMinimalTuple so the Assert failed when the TTSOpsBufferHeapTuple slot was used. This is a long-standing bug which we likely didn't notice because it seems much more likely that the non-recursive term would have required projection and used a TTSOpsVirtual slot, which CheckOpSlotCompatibility is ok with. There doesn't seem to be any harm done here other than the Assert failure. Both TTSOpsMinimalTuple and TTSOpsBufferHeapTuple slot types require tuple deformation, so the EEOP_*_FETCHSOME ExprState step would have properly existed in the ExprState. The solution is to pass NULL for the ExecBuildGroupingEqual's 'lops' parameter. This means the ExprState's EEOP_*_FETCHSOME step won't expect a fixed slot type. This makes CheckOpSlotCompatibility() happy as no checking is performed when the ExprEvalStep is not expecting a fixed slot type. Reported-by: Richard Guo Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAMbWs4-8U9q2LAtf8+ghV11zeUReA3AmrYkxzBEv0vKnDxwkKA@mail.gmail.com Backpatch-through: 13, all supported versions
2024-12-18  Bitmap Table Scans use unified TBMIterator  (Melanie Plageman)
With the repurposing of TBMIterator as an interface for both parallel and serial iteration through TIDBitmaps in commit 7f9d4187e7bab10329cc, bitmap table scans may now use it. Modify bitmap table scan code to use the TBMIterator. This requires moving around a bit of code, so a few variables are initialized elsewhere. Author: Melanie Plageman Reviewed-by: Tomas Vondra Discussion: https://postgr.es/m/c736f6aa-8b35-4e20-9621-62c7c82e2168%40vondra.me
2024-12-18  Add common interface for TBMIterators  (Melanie Plageman)
Add and use TBMPrivateIterator, which replaces the current TBMIterator for serial use cases, and repurpose TBMIterator to be a unified interface for both the serial ("private") and parallel ("shared") TID Bitmap iterator interfaces. This encapsulation simplifies call sites for callers supporting both parallel and serial TID Bitmap access. TBMIterator is not yet used in this commit. Author: Melanie Plageman Reviewed-by: Tomas Vondra, Heikki Linnakangas Discussion: https://postgr.es/m/063e4eb4-32d9-439e-a0b1-75565a9835a8%40iki.fi
2024-12-18  Fix incorrect slot type in BuildTupleHashTableExt  (David Rowley)
0f5738202 adjusted the execGrouping.c code so it made use of ExprStates to generate hash values. That commit made a wrong assumption that the slot type to pass to ExecBuildHash32FromAttrs() is always &TTSOpsMinimalTuple. That's not the case as the slot type depends on the slot type passed to LookupTupleHashEntry(), which for nodeRecursiveunion.c, could be any of the current slot types. Here we fix this by adding a new parameter to BuildTupleHashTableExt() to allow the slot type to be passed in. In the case of nodeSubplan.c and nodeAgg.c the slot type is always &TTSOpsVirtual, so for both of those cases, it's beneficial to pass the known slot type as that allows ExecBuildHash32FromAttrs() to skip adding the tuple deform step to the resulting ExprState. Another possible fix would have been to have ExecBuildHash32FromAttrs() set "fetch.kind" to NULL so that ExecComputeSlotInfo() always determines the EEOP_INNER_FETCHSOME is required, however, that option isn't favorable as slows down aggregation and hashed subplan evaluation due to the extra (needless) deform step. Thanks to Nathan Bossart for bisecting to find the offending commit based on Paul's report. Reported-by: Paul Ramsey <pramsey@cleverelephant.ca> Discussion: https://postgr.es/m/99F064C1-B3EB-4BE7-97D2-D2A0AA487A71@cleverelephant.ca
2024-12-11  Use ExprStates for hashing in GROUP BY and SubPlans  (David Rowley)
This speeds up obtaining hash values for GROUP BY and hashed SubPlans by using the ExprState support for hashing, thus allowing JIT compilation for obtaining hash values for these operations. This, even without JIT compilation, has been shown to improve Hash Aggregate performance in some cases by around 15% and hashed NOT IN queries in one case by over 30%, however, real-world cases are likely to see smaller gains as the test cases used were purposefully designed to have high hashing overheads by keeping the hash table small to prevent additional memory overheads that would be a factor when working with large hash tables. In passing, fix a hypothetical bug in ExecBuildHash32Expr() so that the initial value is stored directly in the ExprState's result field if there are no expressions to hash. None of the current users of this function use an initial value, so the bug is only hypothetical. Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Discussion: https://postgr.es/m/CAApHDvpYSO3kc9UryMevWqthTBrxgfd9djiAjKHMPUSQeX9vdQ@mail.gmail.com
2024-12-11  Speedup Hash Joins with dedicated functions for ExprState hashing  (David Rowley)
Hashing of a single Var is a very common operation for ExprState to perform. Here we add dedicated ExecJust* functions which helps speed up Hash Joins by removing the interpretation overhead in ExecInterpExpr(). This change currently only affects Hash Joins on a single column. Hash Joins with multiple join keys or an expression still run through ExecInterpExpr(). Some testing has shown up to 10% query performance increases on recent AMD hardware and nearly 7% increase on an Apple M2 for a query performing a hash join with a large number of lookups on a small hash table. This change was extracted from a larger patch which adjusts GROUP BY / hashed subplans / hashed set operations to use ExprState hashing. Discussion: https://postgr.es/m/CAApHDvr8Zc0ZgzVoCZLdHGOFNhiJeQ6vrUcS9V7N23zMWQb-eA@mail.gmail.com