path: root/src
Age | Commit message | Author
2025-10-09Add "ALL SEQUENCES" support to publications.Amit Kapila
This patch adds support for the ALL SEQUENCES clause in publications, enabling synchronization/replication of all sequences, which is useful for upgrades. Publications can now include all sequences via FOR ALL SEQUENCES. psql enhancements: \d shows publications for a given sequence. \dRp indicates if a publication includes all sequences. ALL SEQUENCES can be combined with ALL TABLES, but not with other options like TABLE or TABLES IN SCHEMA. We can extend support for more granular clauses in the future. The view pg_publication_sequences provides information about the mapping between publications and sequences. This patch enables publishing of sequences; subscriber-side support will be added in upcoming patches. Author: vignesh C <vignesh21@gmail.com> Author: Tomas Vondra <tomas@vondra.me> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Reviewed-by: Peter Smith <smithpb2250@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Nisha Moond <nisha.moond412@gmail.com> Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
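A minimal usage sketch (publication names are placeholders; the combined-clause form is inferred from the description above):
CREATE PUBLICATION pub_seqs FOR ALL SEQUENCES;
CREATE PUBLICATION pub_all FOR ALL TABLES, ALL SEQUENCES;  -- combination allowed per above
SELECT * FROM pg_publication_sequences;                    -- publication-to-sequence mapping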
2025-10-09Fix internal error from CollateExpr in SQL/JSON DEFAULT expressionsAmit Langote
SQL/JSON functions such as JSON_VALUE could fail with "unrecognized node type" errors when a DEFAULT clause contained an explicit COLLATE expression. That happened because assign_collations_walker() could invoke exprSetCollation() on a JsonBehavior expression whose DEFAULT still contained a CollateExpr, which exprSetCollation() does not handle. For example: SELECT JSON_VALUE('{"a":1}', '$.c' RETURNING text DEFAULT 'A' COLLATE "C" ON EMPTY); Fix by validating in transformJsonBehavior() that the DEFAULT expression's collation matches the enclosing JSON expression’s collation. In exprSetCollation(), replace the recursive call on the JsonBehavior expression with an assertion that its collation already matches the target, since the parser now enforces that condition. Reported-by: Jian He <jian.universality@gmail.com> Author: Jian He <jian.universality@gmail.com> Reviewed-by: Amit Langote <amitlangote09@gmail.com> Discussion: https://postgr.es/m/CACJufxHVwYYSyiVQ6o+PsRX6zQ7rAFinh_fv1kCfTsT1xG4Zeg@mail.gmail.com Backpatch-through: 17
2025-10-09Make truncate_useless_pathkeys() consider WindowFuncsDavid Rowley
truncate_useless_pathkeys() seems to have neglected to account for PathKeys that might be useful for WindowClause evaluation. Modify it so that it properly accounts for that. Making this work required adjusting two things: 1. Change from checking query_pathkeys to checking sort_pathkeys instead. 2. Add an explicit check for window_pathkeys. For #1, query_pathkeys gets set in standard_qp_callback() according to the sort order requirements for the first operation to be applied after the join planner is finished, so this changes depending on which upper planner operations a particular query needs. If the query has window functions and no GROUP BY, then query_pathkeys gets set to window_pathkeys. Before this change, this meant PathKeys useful for the ORDER BY were not accounted for in queries with window functions. Because of #1, #2 is now required so that we explicitly check to ensure we don't truncate away PathKeys useful for window functions. Author: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/CAApHDvrj3HTKmXoLMbUjTO=_MNMxM=cnuCSyBKidAVibmYPnrg@mail.gmail.com
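A hedged illustration of an affected query shape, assuming a table t(a, b, c) with an index on (a, b):
SELECT a, b, sum(c) OVER (PARTITION BY a) FROM t ORDER BY a, b;
With this change, pathkeys useful for the window clause or the final ORDER BY are retained rather than truncated away.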
2025-10-08bufmgr: Don't lock buffer header in StrategyGetBuffer()Andres Freund
Previously StrategyGetBuffer() acquired the buffer header spinlock for every buffer, whether it was reusable or not. If reusable, it'd be returned, with the lock held, to GetVictimBuffer(), which then would pin the buffer with PinBuffer_Locked(). That somewhat violates the spirit of the guidelines for holding spinlocks (i.e. that they are only held for a few lines of consecutive code) and necessitates using PinBuffer_Locked(), which scales worse than PinBuffer() due to holding the spinlock. This alone makes it worth changing the code. However, the main reason to change this is that a future commit will make PinBuffer_Locked() slower (due to making UnlockBufHdr() slower), to gain scalability for the much more common case of pinning a pre-existing buffer. By pinning the buffer with a single atomic operation, iff the buffer is reusable, we avoid any potential regression for miss-heavy workloads. There are strictly fewer atomic operations for each potential buffer after this change. The price for this improvement is that freelist.c needs two CAS loops and needs to be able to set up the resource accounting for pinned buffers. The latter is achieved by exposing a new function for that purpose from bufmgr.c, which seems better than exposing the entire private refcount infrastructure. The improvement seems worth the complexity. Reviewed-by: Robert Haas <robertmhaas@gmail.com> Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
2025-10-08bufmgr: fewer calls to BufferDescriptorGetContentLockAndres Freund
We're planning to merge buffer content locks into BufferDesc.state. To reduce the size of that patch, centralize calls to BufferDescriptorGetContentLock(). The biggest part of the change is in assertions, by introducing BufferIsLockedByMe[InMode]() (and removing BufferIsExclusiveLocked()). This seems like an improvement even without aforementioned plans. Additionally replace some direct calls to LWLockAcquire() with calls to LockBuffer(). Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
2025-10-08bufmgr: Fix signedness of mask variable in BufferSync()Andres Freund
BM_PERMANENT is defined as 1U<<31, which is a negative number when interpreted as a signed integer. Unfortunately the mask variable in BufferSync() was signed. This has been wrong for a long time, but failed to fail, due to integer conversion rules. However, in an upcoming patch the width of the state variable will be increased, with the wrong signedness leading to never flushing permanent buffers - luckily caught in a test. It seems better to fix this separately, instead of doing so as part of a large, otherwise mechanical, patch. Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
2025-10-08bufmgr: Introduce FlushUnlockedBufferAndres Freund
There were several copies of code locking a buffer, flushing its contents, and unlocking the buffer. It seems worth centralizing that into a helper function. Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
2025-10-08Improve ReadRecentBuffer() scalabilityAndres Freund
While testing a new potential use for ReadRecentBuffer(), Andres reported that it scales badly when called concurrently for the same buffer by many backends. Instead of a naive (but wrong) coding with PinBuffer(), it used the spinlock, so that it could be careful to pin only if the buffer was valid and holding the expected block, to avoid breaking invariants in eg GetVictimBuffer(). Unfortunately that made it less scalable than PinBuffer(), which uses compare-exchange instead. We can fix that by giving PinBuffer() a new skip_if_not_valid mode that doesn't pin invalid buffers. It might occasionally skip when it shouldn't due to the unlocked read of the header flags, but that's unlikely and perfectly acceptable for an opportunistic optimisation routine, and it can only succeed when it really should due to the compare-exchange loop. Note that this fixes ReadRecentBuffer()'s failure to bump the usage count. While this could be seen as a bug, there currently aren't cases affected by this in core, so it doesn't seem worth backpatching that portion. Author: Thomas Munro <thomas.munro@gmail.com> Reported-by: Andres Freund <andres@anarazel.de> Reviewed-by: Andres Freund <andres@anarazel.de> Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Discussion: https://postgr.es/m/20230627020546.t6z4tntmj7wmjrfh%40awork3.anarazel.de Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
2025-10-08Add mem_exceeded_count column to pg_stat_replication_slots.Masahiko Sawada
This commit introduces a new column mem_exceeded_count to the pg_stat_replication_slots view. This counter tracks how often the memory used by logical decoding exceeds the logical_decoding_work_mem limit. The new statistic helps users determine whether exceeding the logical_decoding_work_mem limit is a rare occurrence or a frequent issue, information that wasn't available through existing statistics. Bumps catversion. Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://postgr.es/m/978D21E8-9D3B-40EA-A4B1-F87BABE7868C@yesql.se
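For example, to see how often each slot's decoding exceeded the limit alongside its spill activity:
SELECT slot_name, mem_exceeded_count, spill_count FROM pg_stat_replication_slots;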
2025-10-08Cleanup NAN code in float.h, too.Tom Lane
In the same spirit as 3bf905692, assume that all compilers we still support provide the NAN macro, and get rid of workarounds for that. The C standard allows implementations to omit NAN if the underlying float arithmetic lacks quiet (non-signaling) NaNs. However, we've required that feature for years: the workarounds only supported lack of the macro, not lack of the functionality. I put in a compile-time #error if there's no macro, just for clarity. Also fix up the copies of these functions in ecpglib, and leave a breadcrumb for the next hacker who touches them. History of the hacks being removed here can be found in commits 1bc2d544b, 4d17a2146, cec8394b5. Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/1952095.1759764279@sss.pgh.pa.us
2025-10-08Add extension_state member to PlannedStmt.Robert Haas
Extensions can stash data computed at plan time into this list using planner_shutdown_hook (or perhaps other mechanisms) and then access it from any code that has access to the PlannedStmt (such as explain hooks), allowing for extensible debugging and instrumentation of plans. Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
2025-10-08Add planner_setup_hook and planner_shutdown_hook.Robert Haas
These hooks allow plugins to get control at the earliest point at which the PlannerGlobal object is fully initialized, and then just before it gets destroyed. This is useful in combination with the extendable plan state facilities (see extendplan.h) and perhaps for other purposes as well. Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
2025-10-08Add ExplainState argument to pg_plan_query() and planner().Robert Haas
This allows extensions to have access to any data they've stored in the ExplainState during planning. Unfortunately, it won't help when EXPLAIN EXECUTE is used, but since that case is less common, this still seems like an improvement. Since planner() has quite a few arguments now, also add some documentation of those arguments and the return value. Author: Robert Haas <rhaas@postgresql.org> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
2025-10-08Implement Eager AggregationRichard Guo
Eager aggregation is a query optimization technique that partially pushes aggregation past a join, and finalizes it once all the relations are joined. Eager aggregation may reduce the number of input rows to the join and thus could result in a better overall plan. In the current planner architecture, the separation between the scan/join planning phase and the post-scan/join phase means that aggregation steps are not visible when constructing the join tree, limiting the planner's ability to exploit aggregation-aware optimizations. To implement eager aggregation, we collect information about aggregate functions in the targetlist and HAVING clause, along with grouping expressions from the GROUP BY clause, and store it in the PlannerInfo node. During the scan/join planning phase, this information is used to evaluate each base or join relation to determine whether eager aggregation can be applied. If applicable, we create a separate RelOptInfo, referred to as a grouped relation, to represent the partially-aggregated version of the relation and generate grouped paths for it. Grouped relation paths can be generated in two ways. The first method involves adding sorted and hashed partial aggregation paths on top of the non-grouped paths. To limit planning time, we only consider the cheapest or suitably-sorted non-grouped paths in this step. Alternatively, grouped paths can be generated by joining a grouped relation with a non-grouped relation. Joining two grouped relations is currently not supported. To further limit planning time, we currently adopt a strategy where partial aggregation is pushed only to the lowest feasible level in the join tree where it provides a significant reduction in row count. This strategy also helps ensure that all grouped paths for the same grouped relation produce the same set of rows, which is important to support a fundamental assumption of the planner. For the partial aggregation that is pushed down to a non-aggregated relation, we need to consider all expressions from this relation that are involved in upper join clauses and include them in the grouping keys, using compatible operators. This is essential to ensure that an aggregated row from the partial aggregation matches the other side of the join if and only if each row in the partial group does. This ensures that all rows within the same partial group share the same "destiny", which is crucial for maintaining correctness. One restriction is that we cannot push partial aggregation down to a relation that is in the nullable side of an outer join, because the NULL-extended rows produced by the outer join would not be available when we perform the partial aggregation, while with a non-eager-aggregation plan these rows are available for the top-level aggregation. Pushing partial aggregation in this case may result in the rows being grouped differently than expected, or produce incorrect values from the aggregate functions. If we have generated a grouped relation for the topmost join relation, we finalize its paths at the end. The final paths will compete in the usual way with paths built from regular planning. The patch was originally proposed by Antonin Houska in 2017. This commit reworks various important aspects and rewrites most of the current code. However, the original patch and reviews were very useful. 
Author: Richard Guo <guofenglinux@gmail.com> Author: Antonin Houska <ah@cybertec.at> (in an older version) Reviewed-by: Robert Haas <robertmhaas@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Tender Wang <tndrwang@gmail.com> Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: David Rowley <dgrowleyml@gmail.com> Reviewed-by: Tomas Vondra <tomas@vondra.me> (in an older version) Reviewed-by: Andy Fan <zhihuifan1213@163.com> (in an older version) Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> (in an older version) Discussion: https://postgr.es/m/CAMbWs48jzLrPt1J_00ZcPZXWUQKawQOFE8ROc-ADiYqsqrpBNw@mail.gmail.com
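A hedged sketch of the query shape this targets (table and column names are illustrative only):
SELECT c.name, sum(o.amount) FROM customers c JOIN orders o ON o.customer_id = c.id GROUP BY c.name;
With eager aggregation, the planner may partially aggregate orders by customer_id below the join and finalize the aggregate above it, reducing the number of rows fed into the join.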
2025-10-08Allow negative aggtransspace to indicate unbounded state sizeRichard Guo
This patch reuses the existing aggtransspace in pg_aggregate to signal that an aggregate's transition state can grow unboundedly. If aggtransspace is set to a negative value, it now indicates that the transition state may consume unpredictable or large amounts of memory, such as in aggregates like array_agg or string_agg that accumulate input rows. This information can be used by the planner to avoid applying memory-sensitive optimizations (e.g., eager aggregation) when there is a risk of excessive memory usage during partial aggregation. Bump catalog version. Per idea from Robert Haas, though applied differently than originally suggested. Discussion: https://postgr.es/m/CA+TgmoYbkvYwLa+1vOP7RDY7kO2=A7rppoPusoRXe44VDOGBPg@mail.gmail.com
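For illustration, aggregates flagged this way can be inspected in the catalog (which built-in aggregates, if any, are so marked is not specified here):
SELECT aggfnoid, aggtransspace FROM pg_aggregate WHERE aggtransspace < 0;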
2025-10-08Improve description of some WAL records for GINMichael Paquier
The following information is added to the description of some GIN records: - In INSERT_LISTPAGE, the number of tuples and the right link block. - In UPDATE_META_PAGE, the number of tuples, the previous tail block, and the right link block. - In SPLIT, the left and right child blocks. Author: Kirill Reshke <reshkekirill@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Discussion: https://postgr.es/m/CALdSSPgnAt5L=D_xGXRXLYO5FK1H31_eYEESxdU1n-r4g+6GqA@mail.gmail.com
2025-10-08Add stats_reset to pg_stat_user_functionsMichael Paquier
It is possible to call pg_stat_reset_single_function_counters() for a single function, but the reset time was missing from the system view showing its statistics. Like all the fields of pg_stat_user_functions, the GUC track_functions needs to be enabled to show the statistics about function executions. Bump catalog version. Bump PGSTAT_FILE_FORMAT_ID, as a result of the new field added to PgStat_StatFuncEntry. Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Discussion: https://postgr.es/m/aONjnsaJSx-nEdfU@paquier.xyz
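For example (the function name is a placeholder; track_functions must be enabled as noted above):
SET track_functions = 'all';
SELECT funcname, calls, stats_reset FROM pg_stat_user_functions;
SELECT pg_stat_reset_single_function_counters('myfunc'::regproc);  -- stats_reset then reflects this reset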
2025-10-08Fix typo in function header comment.Amit Kapila
Reported-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/CA+TgmoZYh_nw-2j_Fi9y6ZAvrpN+W1aSOFNM7Rus2Q-zTkCsQw@mail.gmail.com
2025-10-08Fix Coverity issues reported in commit 25a30bbd423.Tatsuo Ishii
Fix several issues pointed out by Coverity (reported by Tom Lane). - In row_is_in_frame(), the return value of window_gettupleslot() was not checked. - WinGetFuncArgInPartition() tried to dereference the "isout" pointer in some places even though it could be NULL. Besides those issues, I also fixed a compiler warning reported by Álvaro Herrera. Moreover, refactor the do...while loop in WinGetFuncArgInPartition() so that the code inside the loop is simpler. Also simplify the case when abs_pos < 0. Author: Tatsuo Ishii <ishii@postgresql.org> Reviewed-by: Paul Ramsey <pramsey@cleverelephant.ca> Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Reported-by: Álvaro Herrera <alvherre@kurilemu.de> Discussion: https://postgr.es/m/1686755.1759679957%40sss.pgh.pa.us Discussion: https://postgr.es/m/202510051612.gw67jlc2iqpw%40alvherre.pgsql
2025-10-08Cleanup INFINITY code in float.hDavid Rowley
The INFINITY macro is always defined per C99 standard, so this should mean we can now get rid of the workaround code for when that macro isn't defined. Also, delete the (now unneeded) #pragma code which was disabling a compiler warning in MSVC. There was a comment explaining why the #pragma was placed outside the function body to work around a MSVC compiler bug, but the link explaining that was dead, as reported by jian he. Author: David Rowley <dgrowleyml@gmail.com> Reported-by: jian he <jian.universality@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CACJufxGARYETnNwtCK7QC0zE_7gq-tfN0mME=gT5rTNtC=VSHQ@mail.gmail.com
2025-10-07Remove PlannerInfo's join_search_private method.Robert Haas
Instead, use the new mechanism that allows planner extensions to store private state inside a PlannerInfo, treating GEQO as an in-core planner extension. This is a useful test of the new facility, and also buys back a few bytes of storage. To make this work, we must remove innerrel_is_unique_ext's hack of testing whether join_search_private is set as a proxy for whether the join search might be retried. Add a flag that extensions can use to explicitly signal their intentions instead. Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Reviewed-by: Melanie Plageman <melanieplageman@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
2025-10-07Allow private state in certain planner data structures.Robert Haas
Extensions that make extensive use of planner hooks may want to coordinate their efforts, for example to avoid duplicate computation, but that's currently difficult because there's no really good way to pass data between different hooks. To make that easier, allow for storage of extension-managed private state in PlannerGlobal, PlannerInfo, and RelOptInfo, along very similar lines to what we have permitted for ExplainState since commit c65bc2e1d14a2d4daed7c1921ac518f2c5ac3d17. Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Reviewed-by: Melanie Plageman <melanieplageman@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
2025-10-07Adjust new TAP test to work on macOS.Tom Lane
Seems Apple's version of "wc -l" puts spaces before the number. (I wonder why the cfbot didn't find this.) While here, make the failure case log what it got, to aid debugging future issues. Per buildfarm.
2025-10-07Improve psql's ability to select pager mode accurately.Tom Lane
We try to use the pager only when more than a screenful's worth of data is to be printed. However, the code in print.c that's concerned with counting the number of lines that will be needed missed a lot of edge cases: * While plain aligned mode accounted for embedded newlines in column headers and table cells, unaligned and vertical output modes did not. * In particular, since vertical mode repeats the headers for each record, we need to account for embedded newlines in the headers for each record. * Multi-line table titles were not accounted for. * tuples_only mode (where headers aren't printed) wasn't accounted for. * Footers were accounted for as one line per footer, again missing the possibility of multi-line footers. (In some cases such as "\d+" on a view, there can be many lines in a footer.) Also, we failed to account for the default footer. To fix, move the entire responsibility for counting lines into IsPagerNeeded (or actually, into a new subroutine count_table_lines), and then expand the logic as appropriate. Also restructure to make it perhaps a bit easier to follow. It's still only completely accurate for ALIGNED/WRAPPED/UNALIGNED formats, but the other formats are not typically used with interactive output. Arrange to not run count_table_lines at all unless we will use its result, and teach it to quit early as soon as it's proven that the output is long enough to require use of the pager. When dealing with large tables this should save a noticeable amount of time, since pg_wcssize() isn't exactly cheap. In passing, move the "flog" output step to the bottom of printTable(), rather than running it when we've already opened the pager in some modes. In principle it shouldn't interfere with the pager because flog should always point to a non-interactive file; but it seems silly to risk any interference, especially when the existing positioning seems to have been chosen with the aid of a dartboard. Also add a TAP test to exercise pager mode. Up to now, we have had zero test coverage of these code paths, because they aren't reached unless isatty(stdout). We do have the test infrastructure to improve that situation, though. Following the lead of 010_tab_completion.pl, set up an interactive psql and feed it some test cases. To detect whether it really did invoke the pager, point PSQL_PAGER to "wc -l". The test is skipped if that utility isn't available. Author: Erik Wienhold <ewie@ewie.name> Test-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/2dd2430f-dd20-4c89-97fd-242616a3d768@ewie.name
2025-10-07Assign each subquery a unique name prior to planning it.Robert Haas
Previously, subqueries were given names only after they were planned, which makes it difficult to use information from a previous execution of the query to guide future planning. If, for example, you knew something about how you want "InitPlan 2" to be planned, you won't know whether the subquery you're currently planning will end up being "InitPlan 2" until after you've finished planning it, by which point it's too late to use the information that you had. To fix this, assign each subplan a unique name before we begin planning it. To improve consistency, use textual names for all subplans, rather than, as we did previously, a mix of numbers (such as "InitPlan 1") and names (such as "CTE foo"), and make sure that the same name is never assigned more than once. We adopt the somewhat arbitrary convention of using the type of sublink to set the plan name; for example, a query that previously had two expression sublinks shown as InitPlan 2 and InitPlan 1 will now end up named expr_1 and expr_2. Because names are assigned before rather than after planning, some of the regression test outputs show the numerical part of the name switching positions: what was previously SubPlan 2 was actually the first one encountered, but we finished planning it later. We assign names even to subqueries that aren't shown as such within the EXPLAIN output. These include subqueries that are a FROM clause item or a branch of a set operation, rather than something that will be turned into an InitPlan or SubPlan. The purpose of this is to make sure that, below the topmost query level, there's always a name for each subquery that is stable from one planning cycle to the next (assuming no changes to the query or the database schema). Author: Robert Haas <rhaas@postgresql.org> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com> Reviewed-by: Richard Guo <guofenglinux@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Junwang Zhao <zhjwpku@gmail.com> Discussion: http://postgr.es/m/3641043.1758751399@sss.pgh.pa.us
2025-10-07Fix compile of src/tutorial/funcs.cÁlvaro Herrera
I broke this with recent #include removals. Fix by adding an explicit Reported-by: Devrim Gündüz <devrim@gunduz.org> Discussion: https://postgr.es/m/5e2c2d7c44434f3f0af7523864b27fe4fb590902.camel@gunduz.org
2025-10-07Teach planner to short-circuit EXCEPT/INTERSECT with dummy inputsDavid Rowley
When either input of an INTERSECT [ALL] operator is proven not to return any results (a dummy rel), mark the entire INTERSECT operation as dummy. Likewise, if an EXCEPT [ALL] operation's left input is proven empty, then mark the entire operation as dummy. With EXCEPT ALL, we can easily handle the right input being dummy as we can return the left input without any processing. That can lead to significant performance gains during query execution. We can't easily handle dummy right inputs for EXCEPT (without ALL), as that would require deduplication of the left input. Wiring up those Paths is likely more complex than it's worth as the gains during execution aren't that great, so let's leave that one to be handled by the normal Path generation code. Author: David Rowley <dgrowleyml@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAApHDvri53PPF76c3M94_QNWbJfXjyCnjXuj_2=LYM-0m8WZtw@mail.gmail.com
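A hedged example of inputs the planner can now prove empty, assuming a table t(a):
SELECT a FROM t WHERE false INTERSECT SELECT a FROM t;   -- whole INTERSECT is marked dummy
SELECT a FROM t EXCEPT ALL SELECT a FROM t WHERE false;  -- left input is returned without processing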
2025-10-07Fix incorrect targetlist in dummy UNIONsDavid Rowley
The prior code, added in 03d40e4b5 attempted to use the targetlist of the first UNION child when all UNION children were proven as dummy rels. That's not going to work when some operation atop of the Result node must find target entries within the Result's targetlist. This could have been something as simple as trying to sort the results of the UNION operation, which would lead to: ERROR: could not find pathkey item to sort Instead, use the top-level UNION's targetlist and fix the varnos in setrefs.c. Because set operation targetlists always use varno==0, we can rewrite those to become varno==1, i.e. use the Vars from the first UNION child. This does result in showing Vars from relations that are not present in the final plan, but that's no different to what we see when normal base relations are proven dummy. Without this fix it would be possible to see the following error in EXPLAIN VERBOSE when all UNION inputs were proven empty. ERROR: bogus varno: 0 Author: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/CAApHDvrUASy9sfULMEsM2udvZJP6AoBRCZvHYXYxZTy2tX9FYw@mail.gmail.com
2025-10-06Avoid unnecessary GinFormTuple() calls for incompressible posting lists.Masahiko Sawada
Previously, we attempted to form a posting list tuple even when ginCompressPostingList() failed to compress the posting list due to its size. While there was no functional failure, it always wasted one GinFormTuple() call when item pointers didn't fit in a posting list tuple. This commit ensures that a GIN index tuple is formed only when all item pointers in the posting list are successfully compressed. Author: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://postgr.es/m/CAE7r3M+C=jcpTD93f_RBHrQp3C+=TAXFs+k4tTuZuuxboK8AvA@mail.gmail.com
2025-10-06Optimize hex_encode() and hex_decode() using SIMD.Nathan Bossart
The hex_encode() and hex_decode() functions serve as the workhorses for hexadecimal data for bytea's text format conversion functions, and some workloads are sensitive to their performance. This commit adds new implementations that use routines from port/simd.h, which testing indicates are much faster for larger inputs. For small or invalid inputs, we fall back on the existing scalar versions. Since we are using port/simd.h, these optimizations apply to both x86-64 and AArch64. Author: Nathan Bossart <nathandbossart@gmail.com> Co-authored-by: Chiranmoy Bhattacharya <chiranmoy.bhattacharya@fujitsu.com> Co-authored-by: Susmitha Devanga <devanga.susmitha@fujitsu.com> Reviewed-by: John Naylor <johncnaylorls@gmail.com> Discussion: https://postgr.es/m/aLhVWTRy0QPbW2tl%40nathan
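SQL-visible paths that exercise these routines include hex encoding and decoding of bytea, for example:
SELECT encode('\x0123456789abcdef'::bytea, 'hex');
SELECT decode(repeat('ab', 1000), 'hex');  -- larger inputs benefit most, per above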
2025-10-06Expose sequence page LSN via pg_get_sequence_data.Amit Kapila
This patch enhances the pg_get_sequence_data function to include the page-level LSN (Log Sequence Number) of the sequence. This additional metadata will be used by upcoming patches to support synchronization of sequences during logical replication. By exposing the LSN, we enable more accurate tracking of sequence changes, which is essential for maintaining consistency across replicated nodes. Author: vignesh C <vignesh21@gmail.com> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://www.postgresql.org/message-id/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
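A hedged example, assuming the function keeps its existing single-regclass signature with the page LSN added as an extra output column:
CREATE SEQUENCE s;
SELECT * FROM pg_get_sequence_data('s'::regclass);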
2025-10-06Add comment in ginxlog.h about block used with ginxlogInsertListPageMichael Paquier
All the other structures describe the list of blocks used, and in the case of a GIN_INSERT_LISTPAGE record block 0 refers to a list page with the items added to it. Author: Kirill Reshke <reshkekirill@gmail.com> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Discussion: https://postgr.es/m/CALdSSPgk=9WRoXhZy5fdk+T1hiau7qbL_vn94w_L1N=gtEdbsg@mail.gmail.com
2025-10-06Remove block information from description of some WAL records for GINMichael Paquier
The WAL records XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE included some information about the blocks added to the record. This information is already provided by XLogRecGetBlockRefInfo() with much more details about the blocks included in each record, like the compression information, for example. This commit removes the block information that existed in the record descriptions specific to GIN. Author: Kirill Reshke <reshkekirill@gmail.com> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Discussion: https://postgr.es/m/CALdSSPgk=9WRoXhZy5fdk+T1hiau7qbL_vn94w_L1N=gtEdbsg@mail.gmail.com
2025-10-06Add stats_reset to pg_stat_all_{tables,indexes} and related viewsMichael Paquier
It is possible to call pg_stat_reset_single_table_counters() on a relation (index or table) but the reset time was missing from the system views showing their statistics. This commit adds the reset time as an attribute of pg_stat_all_tables, pg_stat_all_indexes, and the other views related to them. Bump catalog version. Bump PGSTAT_FILE_FORMAT_ID, as a result of the new field added to PgStat_StatTabEntry. Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Reviewed-by: Sami Imseih <samimseih@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/aN8l182jKxEq1h9f@paquier.xyz
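For example (the index name is a placeholder):
SELECT pg_stat_reset_single_table_counters('myindex'::regclass);
SELECT indexrelname, idx_scan, stats_reset FROM pg_stat_all_indexes WHERE indexrelname = 'myindex';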
2025-10-06Add test for pg_stat_reset_single_table_counters() on indexMichael Paquier
stats.sql already provides some test coverage of index statistics, by retrieving, for example, idx_scan and friends from pg_stat_all_tables. pg_stat_reset_single_table_counters() has been supported for an index for a long time, but the case was never covered. This commit closes the gap by using this reset function on an index and cross-checking the contents of pg_stat_all_indexes. Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Discussion: https://postgr.es/m/aN8l182jKxEq1h9f@paquier.xyz
2025-10-06Fix two comments in numeric.cMichael Paquier
The comments at the top of numeric_int4_safe() and numeric_int8_safe() mentioned respectively int4_numeric() and int8_numeric(). The intention is to refer to numeric_int4() and numeric_int8(). Oversights in 4246a977bad6. Reported-by: jian he <jian.universality@gmail.com> Discussion: https://postgr.es/m/CACJufxFfVt7Jx9_j=juxXyP-6tznN8OcvS9E-QSgp0BrD8KUgA@mail.gmail.com
2025-10-05Use SOCK_ERRNO[_SET] in fe-secure-gssapi.c.Tom Lane
On Windows, this code did not handle error conditions correctly at all, since it looked at "errno" which is not used for socket-related errors on that platform. This resulted, for example, in failure to connect to a PostgreSQL server with GSSAPI enabled. We have a convention for dealing with this within libpq, which is to use SOCK_ERRNO and SOCK_ERRNO_SET rather than touching errno directly; but the GSSAPI code is a relative latecomer and did not get that memo. (The equivalent backend code continues to use errno, because the backend does this differently. Maybe libpq's approach should be rethought someday.) Apparently nobody tries to build libpq with GSSAPI support on Windows, or we'd have heard about this before, because it's been broken all along. Back-patch to all supported branches. Author: Ning Wu <ning94803@gmail.com> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAFGqpvg-pRw=cdsUpKYfwY6D3d-m9tw8WMcAEE7HHWfm-oYWvw@mail.gmail.com Backpatch-through: 13
2025-10-05Don't include access/htup_details.h in executor/tuptable.hÁlvaro Herrera
This is not at all needed; I suspect it was a simple mistake in commit 5408e233f066. It causes htup_details.h to bleed into a huge number of places via execnodes.h. Remove it and fix fallout. Discussion: https://postgr.es/m/202510021240.ptc2zl5cvwen@alvherre.pgsql
2025-10-05Don't include execnodes.h in brin.h or gin.hÁlvaro Herrera
These headers don't need execnodes.h for anything. I think they never have. Discussion: https://postgr.es/m/202510021240.ptc2zl5cvwen@alvherre.pgsql
2025-10-04Teach UNION planner to remove dummy inputsDavid Rowley
This adjusts UNION planning so that the planner produces more optimal plans when one or more of the UNION's subqueries have been proven to be empty (a dummy rel). If any of the inputs are empty, then that input can be removed from the Append / MergeAppend. Previously, a const-false "Result" node would appear to represent this. Removing empty inputs has a few extra benefits when only 1 union child remains as it means the Append or MergeAppend can be removed in setrefs.c, making the plan slightly faster to execute. Also, we can provide better n_distinct estimates by looking at the sole remaining input rel's statistics. Author: David Rowley <dgrowleyml@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAApHDvri53PPF76c3M94_QNWbJfXjyCnjXuj_2=LYM-0m8WZtw@mail.gmail.com
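A hedged example, assuming tables t1 and t2 with a compatible column a:
SELECT a FROM t1 UNION ALL SELECT a FROM t2 WHERE false;
The empty branch is removed from the Append; with a single child left, the Append itself can be dropped in setrefs.c, as described above.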
2025-10-04Use bms_add_members() instead of bms_union() when possibleDavid Rowley
bms_union() causes a new set to be allocated. What this caller needs is members added to an existing set. bms_add_members() is the tool for that job. This is just a matter of fixing an inefficiency due to surplus memory allocations. No bugs being fixed. The only other place I found that might be valid to apply this change is in markNullableIfNeeded(), but I opted not to do that due to the risk to reward ratio not looking favorable. The risk being that there *could* be another pointer pointing to the Bitmapset. Author: David Rowley <dgrowleyml@gmail.com> Reviewed-by: Greg Burd <greg@burd.me> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAApHDvoCcoS-p5tZNJLTxFOKTYNjqVh7Dwf+5ikDUBwnvWftRw@mail.gmail.com
2025-10-03Optimize vector8_has_le() on AArch64.Nathan Bossart
Presently, the SIMD implementation of this function uses unsigned saturating subtraction to find bytes less than or equal to the given value, which is a workaround for the lack of unsigned comparison instructions on some architectures. However, Neon offers vminvq_u8(), which returns the minimum (unsigned) value in the vector. This commit adds a Neon-specific implementation that uses vminvq_u8() to optimize vector8_has_le() on AArch64. In passing, adjust the SSE2 implementation to use vector8_min() and vector8_eq() to find values less than or equal to the given value. This was the only use of vector8_ssub(), so it has been removed. Reviewed-by: John Naylor <johncnaylorls@gmail.com> Discussion: https://postgr.es/m/aNHDNDSHleq0ogC_%40nathan
2025-10-03Make some use of anonymous unions [DSM registry].Nathan Bossart
Make some use of anonymous unions, which are allowed as of C11, as examples and encouragement for future code, and to test compilers. This commit changes the DSMRegistryEntry struct. Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Discussion: https://postgr.es/m/aNKsDg0fJwqhZdXX%40nathan
2025-10-03Tidy-up unneeded NULL parameter checks from SQL functionDavid Rowley
This function is marked as strict, so we can safely remove the checks for NULL input parameters. Author: David Rowley <dgrowleyml@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/CAApHDvqiN0+mbooUOSCDALc=GoM8DmTbCdvwnCwak6Wb2O1ZJA@mail.gmail.com
2025-10-03Fix reuse-after-free hazard in dead_items_resetJohn Naylor
In similar vein to commit ccc8194e427, a reset instance of a shared memory TID store happened to occupy the same private memory as the old one for the entry point, since the chunk freed after the last round of index vacuuming was put on the context's freelist. The failure to update the vacrel->dead_items pointer was evident by nudging the system to allocate memory in a different area. This was not discovered at the time of the earlier commit since our regression tests didn't cover multiple index passes with parallel vacuum. Backpatch to v17, when TidStore came in. Author: Kevin Oommen Anish <kevin.o@zohocorp.com> Reviewed-by: Richard Guo <guofenglinux@gmail.com> Tested-by: Richard Guo <guofenglinux@gmail.com> Discussion: https://postgr.es/m/199a07cbdfc.7a1c4aac25838.1675074408277594551%40zohocorp.com Backpatch-through: 17
2025-10-03Fix incorrect function reference in commentRichard Guo
The comment incorrectly references the defunct function BufFileOpenShared(), which was replaced in commit dcac5e7ac. This patch updates the comment to refer to the current function BufFileOpenFileSet(). Author: Zhang Mingli <zmlpostgres@gmail.com> Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com> Reviewed-by: Richard Guo <guofenglinux@gmail.com> Discussion: https://postgr.es/m/1cb48b4c-54ab-40cc-b355-0b3c2af6d3f7@Spark
2025-10-03pgbench: Fail cleanly when finding a COPY result stateMichael Paquier
Currently, pgbench aborts when a COPY response is received in readCommandResponse(). However, as PQgetResult() returns an empty result when there is no asynchronous result, through getCopyResult(), the logic done at the end of readCommandResponse() for the error path leads to an infinite loop. This commit forcefully exits the COPY state with PQendcopy() before moving to the error handler when finding a COPY state, avoiding the infinite loop. The COPY protocol is not supported by pgbench anyway, as an error is assumed in this case, so giving up is better than having the tool be stuck forever. pgbench was interruptible in this state. A TAP test is added to check that an error happens when trying to use COPY. Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Discussion: https://postgr.es/m/CAO6_XqpHyF2m73ifV5a=5jhXxH2chk=XrgefY+eWWPe2Eft3=A@mail.gmail.com Backpatch-through: 13
2025-10-03Add IGNORE NULLS/RESPECT NULLS option to Window functions.Tatsuo Ishii
Add the IGNORE NULLS/RESPECT NULLS option (null treatment clause) to the lead, lag, first_value, last_value and nth_value window functions. If unspecified, the default is RESPECT NULLS, which includes NULL values in any result calculation; IGNORE NULLS ignores NULL values. Built-in window functions are modified to call the new API WinCheckAndInitializeNullTreatment() to indicate whether they accept the IGNORE NULLS/RESPECT NULLS option (the API can be called by user-defined window functions as well). If WinGetFuncArgInPartition's allowNullTreatment argument is true and the IGNORE NULLS option is given, WinGetFuncArgInPartition() or WinGetFuncArgInFrame() will return the function's argument expression evaluated on the specified non-NULL row (if it exists) in the partition or the frame. When the IGNORE NULLS option is given, window functions need to visit and evaluate the same rows over and over again to look for non-NULL rows. To mitigate this, a 2-bit not-null information array is created while executing window functions to remember whether a row has already been evaluated to NULL or NOT NULL. If it has, we can skip the evaluation work, which gives better performance. Author: Oliver Ford <ojford@gmail.com> Co-authored-by: Tatsuo Ishii <ishii@postgresql.org> Reviewed-by: Krasiyan Andreev <krasiyan@gmail.com> Reviewed-by: Andrew Gierth <andrew@tao11.riddles.org.uk> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: David Fetter <david@fetter.org> Reviewed-by: Vik Fearing <vik@postgresfriends.org> Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com> Reviewed-by: Chao Li <lic@highgo.com> Discussion: https://postgr.es/m/flat/CAGMVOdsbtRwE_4+v8zjH1d9xfovDeQAGLkP_B6k69_VoFEgX-A@mail.gmail.com
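For example (table t and its columns are placeholders; the null treatment clause follows the function's argument list, per the description above):
SELECT id, x, lag(x) IGNORE NULLS OVER (ORDER BY id) AS prev_nonnull FROM t;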
2025-10-02Remove check for NULL in STRICT functionDaniel Gustafsson
test_bms_make_singleton is defined as STRICT and only takes a single parameter, so there is no need to check that parameter for NULL as a NULL input won't ever reach there. Author: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/BC483901-9587-4076-B20F-9A414C66AB78@yesql.se
2025-10-02Fixes for comments in test_bitmapsetDaniel Gustafsson
This fixes a typo in the sql/expected test files and removes a leftover comment from test_bitmapset.c from when the functions invoked bms_free. Author: Daniel Gustafsson <daniel@yesql.se> Reported-by: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/978D21E8-9D3B-40EA-A4B1-F87BABE7868C@yesql.se