path: root/src/test
Age  Commit message  Author
2025-11-26ssl: Add connection and reload tests for key passphrasesDaniel Gustafsson
ssl_passphrase_command_supports_reload was not covered by the SSL test suite, and connection tests after unlocking secrets with the passphrase were also missing. This adds test coverage for reloads of passphrase commands as well as connection attempts, which test the different codepaths for Windows and non-EXEC_BACKEND builds. Author: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Discussion: https://postgr.es/m/5F301096-921A-427D-8EC1-EBAEC2A35082@yesql.se
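As a rough illustration of the setup these tests exercise (the unlock script path below is hypothetical), the passphrase command and its reload behavior are controlled by two GUCs:

    ALTER SYSTEM SET ssl_passphrase_command = '/usr/local/bin/unlock-key.sh %p';  -- hypothetical script
    ALTER SYSTEM SET ssl_passphrase_command_supports_reload = on;
    SELECT pg_reload_conf();  -- with the reload flag on, the command can also be invoked during a reload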
2025-11-25oauth_validator: Shorten JSON responses in test logsJacob Champion
Response padding from the oauth_validator abuse tests was adding a couple megabytes to the test logs. We don't need the buildfarm to hold onto that, and we don't need to read it when debugging; truncate it. Reported-by: Álvaro Herrera <alvherre@kurilemu.de> Discussion: https://postgr.es/m/202511251218.zfs4nu2qnh2m%40alvherre.pgsql Backpatch-through: 18
2025-11-26Fix test failure caused by commit 76b78721ca.Amit Kapila
The test failed because it assumed that a newly created logical replication slot could be synced to the standby by the slotsync worker. However, the presence of an existing physical slot caused the new logical slot to use a non-latest xmin. On the standby, the DDL had already been replayed, advancing xmin, which led to the slotsync worker failing to sync the lagging logical slot. To resolve this, we moved the slot sync statistics tests to run after the tests that do not require the newly created slot to be sync-ready. As per buildfarm. Author: Hou Zhijie <houzj.fnst@fujitsu.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/OSCPR01MB14966FE0BFB6C212298BFFEDEF5D1A@OSCPR01MB14966.jpnprd01.prod.outlook.com
2025-11-26Add input function for data type pg_dependenciesMichael Paquier
pg_dependencies is used as the data type for the contents of dependencies extended statistics. This new input function consumes the format that has been established by e76defbcf09e for the output function of pg_dependencies, enforcing some sanity checks for: - Checks for the input object, which should be a one-dimensional array with correct attributes and values. - The key names: "attributes", "dependency", "degree". All are required, other key names are blocked. - Value types for each key: "attributes" requires an array of integers, "dependency" an attribute number, "degree" a float. - List of attributes. In this case, it is possible that some dependencies are not listed in the statistics data, as items with a degree of 0 are discarded when building the statistics. This commit includes checks for simple scenarios, like duplicated attributes, or overlapping values between the list of "attributes" and the "dependency" value. Even if the input function considers the input as valid, a value still needs to be cross-checked with the attributes defined in a statistics object at import. - Based on the discussion, the checks on the values are loose, as there is also an argument for potentially allowing stats injection. For example, "degree" should be defined in [0.0,1.0], but a check is not enforced. This is required for a follow-up patch that aims to implement the import of extended statistics. Some tests are added to check the code paths of the JSON parser checking the shape of the pg_dependencies inputs, with 91% of code coverage reached. The tests are located in their own new test file, for clarity. Author: Corey Huinker <corey.huinker@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Yuefei Shi <shiyuefei1004@gmail.com> Discussion: https://postgr.es/m/CADkLM=dpz3KFnqP-dgJ-zvRvtjsa8UZv8wDAQdqho=qN3kX0Zg@mail.gmail.com
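A minimal sketch of the format this input function accepts, with made-up attribute numbers and degrees (the shape comes from the output-format commit further down):

    SELECT '[{"attributes": [1], "dependency": 2, "degree": 1.000000},
             {"attributes": [2], "dependency": 1, "degree": 0.423130}]'::pg_dependencies;
    -- duplicated attributes, unknown keys, or an attribute that also appears as
    -- "dependency" would be rejected by the new sanity checks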
2025-11-26Add input function for data type pg_ndistinctMichael Paquier
pg_ndistinct is used as the data type for the contents of ndistinct extended statistics. This new input function consumes the format that has been established by 1f927cce4498 for the output function of pg_ndistinct, enforcing some sanity checks for: - Checks for the input object, which should be a one-dimensional array with correct attributes and values. - The key names: "attributes", "ndistinct". Both are required, other key names are blocked. - Value types for each key: "attributes" requires an array of integers, and "ndistinct" an integer. - List of attributes. Note that this enforces a check so that an attribute list has to be a subset of the longest attribute list found. This does not enforce that a full group of attribute sets exists, based on how the groups are generated when the ndistinct objects are generated, making the list of ndistinct items a bit loose. Note that a check would still be required at import to see if the attributes listed match with the attribute numbers set in the definition of a statistics object. - Based on the discussion, the checks on the values are loose, as there is also an argument for potentially allowing stats injection. The relation and attribute level stats follow the same line of argument for the values. This is required for a follow-up patch that aims to implement the import of extended statistics. Some tests are added to check the code paths of the JSON parser checking the shape of the pg_ndistinct inputs, with 90% of code coverage reached. The tests are located in their own new test file, for clarity. Author: Corey Huinker <corey.huinker@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Yuefei Shi <shiyuefei1004@gmail.com> Discussion: https://postgr.es/m/CADkLM=dpz3KFnqP-dgJ-zvRvtjsa8UZv8wDAQdqho=qN3kX0Zg@mail.gmail.com
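A minimal sketch of the accepted input, using arbitrary values; unknown keys or a non-integer "ndistinct" would be rejected:

    SELECT '[{"attributes": [3,4], "ndistinct": 11},
             {"attributes": [3,4,6], "ndistinct": 11}]'::pg_ndistinct;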
2025-11-25Improve test case stabilityÁlvaro Herrera
Given unlucky timing, some of the new tests added by commit bc32a12e0db2 can fail spuriously. We haven't seen such failures yet in buildfarm, but allegedly we can prevent them with this tweak. While at it, remove an unused injection point I (Álvaro) added. Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com> Discussion: https://postgr.es/m/CADzfLwUc=jtSUEaQCtyt8zTeOJ-gHZ8=w_KJsVjDOYSLqaY9Lg@mail.gmail.com Discussion: https://postgr.es/m/CADzfLwV5oQq-Vg_VmG_o4SdL6yHjDoNO4T4pMtgJLzYGmYf74g@mail.gmail.com
2025-11-25Add slotsync skip statistics.Amit Kapila
This patch adds two new columns to the pg_stat_replication_slots view: slotsync_skip_count - the total number of times a slotsync operation was skipped. slotsync_skip_at - the timestamp of the most recent skip. These additions provide better visibility into replication slot synchronization behavior. A future patch will introduce the slotsync_skip_reason column in pg_replication_slots to capture the reason for skip. Author: Shlok Kyal <shlok.kyal.oss@gmail.com> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/CAE9k0PkhfKrTEAsGz4DjOhEj1nQ+hbQVfvWUxNacD38ibW3a1g@mail.gmail.com
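A simple way to read the new counters, assuming a standby where slot synchronization is enabled:

    SELECT slot_name, slotsync_skip_count, slotsync_skip_at
      FROM pg_stat_replication_slots;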
2025-11-24Improve detection of implicitly-temporary views.Tom Lane
We've long had a practice of making views temporary by default if they reference any temporary tables. However the implementation was pretty incomplete, in that it only searched for RangeTblEntry references to temp relations. Uses of temporary types, regclass constants, etc were not detected even though the dependency mechanism considers them grounds for dropping the view. Thus a view not believed to be temp could silently go away at session exit anyhow. To improve matters, replace the ad-hoc isQueryUsingTempRelation() logic with use of the dependency-based infrastructure introduced by commit 572c40ba9. This is complete by definition, and it's less code overall. While we're at it, we can also extend the warning NOTICE (or ERROR in the case of a materialized view) to mention one of the temp objects motivating the classification of the view as temp, as was done for functions in 572c40ba9. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Jim Jones <jim.jones@uni-muenster.de> Discussion: https://postgr.es/m/19cf6ae1-04cd-422c-a760-d7e75fe6cba9@uni-muenster.de
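A small sketch of a case the old RangeTblEntry-only check missed; the exact NOTICE wording is assumed:

    CREATE TEMP TABLE scratch (id int);
    CREATE VIEW v AS SELECT 'scratch'::regclass AS rel;
    -- the regclass constant references a temp table, so the view is now classified
    -- as temporary, with a NOTICE naming the temp object motivating that decision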
2025-11-24Fix infer_arbiter_index during concurrent index operationsÁlvaro Herrera
Previously, we would only consider indexes marked indisvalid as usable for INSERT ON CONFLICT. But that's problematic during CREATE INDEX CONCURRENTLY and REINDEX CONCURRENTLY, because concurrent transactions would end up with inconsistent lists of inferred indexes, leading to deadlocks and spurious errors about unique key violations (because two transactions are operating on different indexes for the speculative insertion tokens). Change this function to return indexes even if invalid. This fixes the spurious errors and deadlocks. Because such indexes might not be complete, we still need uniqueness to be verified in a different way. We do that by requiring that at least one index marked valid is part of the set of indexes returned. It is that index that is going to help ensure that the inserted tuple is indeed unique. This does not fix similar problems occurring with partitioned tables or with named constraints. These problems will be fixed in follow-up commits. We have no user report of this problem, even though it exists in all branches. Because of that and given that the fix is somewhat tricky, I decided not to backpatch for now. Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de> Discussion: https://postgr.es/m/CANtu0ogv+6wqRzPK241jik4U95s1pW3MCZ3rX5ZqbFdUysz7Qw@mail.gmail.com
2025-11-24Move isolation test index-killtuples to src/test/modules/index/Michael Paquier
index-killtuples test depends on the contrib modules btree_gin and btree_gist, which would not be installed in a temporary installation with an execution of the main isolation test suite like this one: make -C src/test/isolation/ check src/test/isolation/ should not depend on contrib/, and EXTRA_INSTALL has no effect in this case as this test suite uses its own Makefile rules. This commit moves index-killtuples into its new module, called "index", whose name looks like the best fit there can be as it depends on more than one index AM. btree_gin and btree_gist are now pulled in the temporary installation with EXTRA_INSTALL. The test is renamed to "killtuples", for simplicity. Author: Nazir Bilal Yavuz <byavuz81@gmail.com> Suggested-by: Andres Freund <andres@anarazel.de> Suggested-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/aKJsWedftW7UX1WM@paquier.xyz
2025-11-23Issue a NOTICE if a created function depends on any temp objects.Tom Lane
We don't have an official concept of temporary functions. (You can make one explicitly in pg_temp, but then you have to explicitly schema-qualify it on every call.) However, until now we were quite laissez-faire about whether a non-temporary function could depend on a temporary object, such as a temp table or view. If one does, it will silently go away at end of session, due to the automatic DROP ... CASCADE on the session's temporary objects. People have complained that that's surprising; however, we can't really forbid it because other people (including our own regression tests) rely on being able to do it. Let's compromise by emitting a NOTICE at CREATE FUNCTION time. This is somewhat comparable to our ancient practice of emitting a NOTICE when forcing a view to become temp because it depends on temp tables. Along the way, refactor recordDependencyOnExpr() so that the dependencies of an expression can be combined with other dependencies, instead of being emitted separately and perhaps duplicatively. We should probably make the implementation of temp-by-default views use the same infrastructure used here, but that's for another patch. It's unclear whether there are any other object classes that deserve similar treatment. Author: Jim Jones <jim.jones@uni-muenster.de> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/19cf6ae1-04cd-422c-a760-d7e75fe6cba9@uni-muenster.de
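A minimal sketch of the new behavior, with hypothetical object names and assumed NOTICE wording; the new-style SQL body is used so that the dependency on the temp table is recorded:

    CREATE TEMP TABLE session_data (id int);
    CREATE FUNCTION count_session_data() RETURNS bigint
      LANGUAGE sql
      BEGIN ATOMIC
        SELECT count(*) FROM session_data;
      END;
    -- now emits a NOTICE that the function depends on a temporary object and
    -- will therefore disappear with the session's temporary objects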
2025-11-22Add SupportRequestInlineInFrom planner support request.Tom Lane
This request allows a support function to replace a function call appearing in FROM (typically a set-returning function) with an equivalent SELECT subquery. The subquery will then be subject to the planner's usual optimizations, potentially allowing a much better plan to be generated. While the planner has long done this automatically for simple SQL-language functions, it's now possible for extensions to do it for functions outside that group. Notably, this could be useful for functions that are presently implemented in PL/pgSQL and work by generating and then EXECUTE'ing a SQL query. Author: Paul A Jungwirth <pj@illuminatedcomputing.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/09de6afa-c33d-4d94-a5cb-afc6cea0d2bb@illuminatedcomputing.com
2025-11-22Add range_minus_multi and multirange_minus_multi functionsPeter Eisentraut
The existing range_minus function raises an exception when the range is "split", because then the result can't be represented by a single range. For example '[0,10)'::int4range - '[4,5)' would be '[0,4)' and '[5,10)'. This commit adds new set-returning functions so that callers can get results even in the case of splits. There is no risk of an exception for multiranges, but a set-returning function lets us handle them the same way we handle ranges. Both functions return zero results if the subtraction would give an empty range/multirange. The main use-case for these functions is to implement UPDATE/DELETE FOR PORTION OF, which must compute the application-time of "temporal leftovers": the part of history in an updated/deleted row that was not changed. To preserve the untouched history, we will implicitly insert one record for each result returned by range/multirange_minus_multi. Using a set-returning function will also let us support user-defined types for application-time update/delete in the future. Author: Paul A. Jungwirth <pj@illuminatedcomputing.com> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/ec498c3d-5f2b-48ec-b989-5561c8aa2024%40illuminatedcomputing.com
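A sketch of the split case from the message above, assuming the two-argument range form can be called directly as a set-returning function:

    SELECT * FROM range_minus_multi('[0,10)'::int4range, '[4,5)'::int4range);
    -- expected to return two rows, [0,4) and [5,10), where range_minus() would
    -- raise an error because the result cannot be a single range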
2025-11-20Add HINT listing valid encodings to encode() and decode() errors.Fujii Masao
This commit updates encode() and decode() so that when an invalid encoding is specified, their error message includes a HINT listing all valid encodings. This helps users quickly see which encodings are supported without needing to consult the documentation. Author: Shinya Sugamoto <shinya34892@gmail.com> Reviewed-by: Chao Li <lic@highgo.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/CAAe3y+99sfPv8UDF1VM-rC1i5HBdqxUh=2HrbJJFm2+i=1OwOw@mail.gmail.com
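A sketch of the improved error, with approximate wording:

    SELECT encode('\x1234'::bytea, 'base65');
    -- ERROR:  unrecognized encoding: "base65"        (wording approximate)
    -- HINT:   now lists the valid encodings, e.g. base64, escape, hex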
2025-11-18Don't allow CTEs to determine semantic levels of aggregates.Tom Lane
The fix for bug #19055 (commit b0cc0a71e) allowed CTE references in sub-selects within aggregate functions to affect the semantic levels assigned to such aggregates. It turns out this broke some related cases, leading to assertion failures or strange planner errors such as "unexpected outer reference in CTE query". After experimenting with some alternative rules for assigning the semantic level in such cases, we've come to the conclusion that changing the level is more likely to break things than be helpful. Therefore, this patch undoes what b0cc0a71e changed, and instead installs logic to throw an error if there is any reference to a CTE that's below the semantic level that standard SQL rules would assign to the aggregate based on its contained Var and Aggref nodes. (The SQL standard disallows sub-selects within aggregate functions, so it can't reach the troublesome case and hence has no rule for what to do.) Perhaps someone will come along with a legitimate query that this logic rejects, and if so probably the example will help us craft a level-adjustment rule that works better than what b0cc0a71e did. I'm not holding my breath for that though, because the previous logic had been there for a very long time before bug #19055 without complaints, and that bug report sure looks to have originated from fuzzing not from real usage. Like b0cc0a71e, back-patch to all supported branches, though sadly that no longer includes v13. Bug: #19106 Reported-by: Kamil Monicz <kamil@monicz.dev> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/19106-9dd3668a0734cd72@postgresql.org Backpatch-through: 14
2025-11-18Check for tabs in postgresql.conf.sample.Nathan Bossart
The previous commit updated this file to use spaces instead of tabs. This commit adds a test to ensure that no new tabs are added. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://postgr.es/m/aReNUKdMgKxLqmq7%40nathan
2025-11-18Rename two columns in pg_stat_subscription_stats.Amit Kapila
This patch renames the sync_error_count column to sync_table_error_count in the pg_stat_subscription_stats view. The new name makes the purpose explicit now that a separate column exists to track sequence synchronization errors. Additionally, the column seq_sync_error_count is renamed to sync_seq_error_count to maintain a consistent naming pattern, making it easier for users to group and query synchronization-related counters. Author: Vignesh C <vignesh21@gmail.com> Reviewed-by: Peter Smith <smithpb2250@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://postgr.es/m/CALDaNm3WwJmz=-4ybTkhniB-Nf3qmFG9Zx1uKjyLLoPF5NYYXA@mail.gmail.com
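With the renames, a monitoring query would look roughly like this:

    SELECT subname, apply_error_count, sync_table_error_count, sync_seq_error_count
      FROM pg_stat_subscription_stats;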
2025-11-17Rework output format of pg_dependenciesMichael Paquier
The existing format of pg_dependencies uses a single-object JSON structure, with each key value embedding all the knowledge about the set of attributes tracked, like: {"1 => 5": 1.000000, "5 => 1": 0.423130} While this is a very compact format, it is confusing to read and it is difficult to manipulate the values within the object, particularly when tracking multiple attributes. The new output format introduced in this commit is a JSON array of objects, with: - A key named "degree", with a float value. - A key named "attributes", with an array of attribute numbers. - A key named "dependency", with an attribute number. The values use the same underlying type as previously when printed, with a new output format that now shows as follows: [{"degree": 1.000000, "attributes": [1], "dependency": 5}, {"degree": 0.423130, "attributes": [5], "dependency": 1}] This new format will come in handy for a follow-up set of changes, as it makes it possible to inject extended statistics rather than require an ANALYZE, for example in a dump/restore sequence or after pg_upgrade on a new cluster. This format has been suggested by Tomas Vondra. The key names are defined in the header introduced by 1f927cce4498, to ease the integration of frontend-specific changes that are still under discussion. (Again a personal note: if anybody comes up with better names for the keys, of course feel free.) The bulk of the changes come from the regression tests, where jsonb_pretty() is now used to make the outputs generated easier to parse. Author: Corey Huinker <corey.huinker@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CADkLM=dpz3KFnqP-dgJ-zvRvtjsa8UZv8wDAQdqho=qN3kX0Zg@mail.gmail.com
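One way to look at the new format for an existing statistics object, similar to what the updated regression tests do with jsonb_pretty(); the double cast through text is an assumption about how to obtain a jsonb value here:

    SELECT statistics_name, jsonb_pretty(dependencies::text::jsonb)
      FROM pg_stats_ext
     WHERE dependencies IS NOT NULL;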
2025-11-17Rework output format of pg_ndistinctMichael Paquier
The existing format of pg_ndistinct uses a single-object JSON structure where each key is itself a comma-separated list of attnums, like: {"3, 4": 11, "3, 6": 11, "4, 6": 11, "3, 4, 6": 11} While this is a very compact format, it is confusing to read and it is difficult to manipulate the values within the object. The new output format introduced in this commit is an array of objects, with: - A key named "attributes", that contains an array of attribute numbers. - A key named "ndistinct", represented as an integer. The values use the same underlying type as previously when printed, with a new output format that now shows as follows: [{"ndistinct": 11, "attributes": [3,4]}, {"ndistinct": 11, "attributes": [3,6]}, {"ndistinct": 11, "attributes": [4,6]}, {"ndistinct": 11, "attributes": [3,4,6]}] This new format will come in handy for a follow-up set of changes, as it makes it possible to inject extended statistics rather than require an ANALYZE, for example in a dump/restore sequence or after pg_upgrade on a new cluster. This format has been suggested by Tomas Vondra. The key names are defined in a new header, to ease the integration of frontend-specific changes that are still under discussion. (Personal note: I am not specifically wedded to these key names, but if there are better name suggestions for this release, feel free.) The bulk of the changes come from the regression tests, where jsonb_pretty() is now used to make the outputs generated easier to parse. Author: Corey Huinker <corey.huinker@gmail.com> Reviewed-by: Jian He <jian.universality@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CADkLM=dpz3KFnqP-dgJ-zvRvtjsa8UZv8wDAQdqho=qN3kX0Zg@mail.gmail.com
2025-11-17Add test for temporary file removal and WITH HOLD cursorMichael Paquier
This new test, added in 009_log_temp_files, checks that the temporary files created by a WITH HOLD cursor are dropped at the end of the transaction in which the cursor has been created. The portal's executor is shut down in PersistHoldablePortal(), after for example some forced detoasting, so that the cursor data can be accessed without requiring a snapshot. Author: Mircea Cadariu <cadariu.mircea@gmail.com> Discussion: https://postgr.es/m/0a666d28-9080-4239-90d6-f6345bb43468@gmail.com
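A rough sketch of the scenario the test covers, assuming log_temp_files is set low enough to log the files involved:

    SET log_temp_files = 0;    -- log every temporary file
    BEGIN;
    DECLARE c CURSOR WITH HOLD FOR
      SELECT g FROM generate_series(1, 1000000) g ORDER BY g DESC;
    COMMIT;   -- PersistHoldablePortal() materializes the cursor; the execution's
              -- temporary files are expected to be logged and released here
    CLOSE c;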
2025-11-16Fix Assert failure in EXPLAIN ANALYZE MERGE with a concurrent update.Dean Rasheed
When instrumenting a MERGE command containing both WHEN NOT MATCHED BY SOURCE and WHEN NOT MATCHED BY TARGET actions using EXPLAIN ANALYZE, a concurrent update of the target relation could lead to an Assert failure in show_modifytable_info(). In a non-assert build, this would lead to an incorrect value for "skipped" tuples in the EXPLAIN output, rather than a crash. This could happen if the concurrent update caused a matched row to no longer match, in which case ExecMerge() treats the single originally matched row as a pair of not matched rows, and potentially executes 2 not-matched actions for the single source row. This could then lead to a state where the number of rows processed by the ModifyTable node exceeds the number of rows produced by its source node, causing "skipped_path" in show_modifytable_info() to be negative, triggering the Assert. Fix this in ExecMergeMatched() by incrementing the instrumentation tuple count on the source node whenever a concurrent update of this kind is detected, if both kinds of merge actions exist, so that the number of source rows matches the number of actions potentially executed, and the "skipped" tuple count is correct. Back-patch to v17, where support for WHEN NOT MATCHED BY SOURCE actions was introduced. Bug: #19111 Reported-by: Dilip Kumar <dilipbalaut@gmail.com> Author: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Discussion: https://postgr.es/m/19111-5b06624513d301b3@postgresql.org Backpatch-through: 17
2025-11-14Add test for postgresql.conf.sample line syntaxDaniel Gustafsson
All GUCs in postgresql.conf.sample should be set to the default value and be commented out. This syntax was however not tested for, making omissions easy to miss. Add a test which checks all lines for syntax. Author: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Andrew Dunstan <andrew@dunslane.net> Discussion: https://postgr.es/m/19727040-3EE4-4719-AF4F-2548544113D7@yesql.se
2025-11-14Revert "Drop unnamed portal immediately after execution to completion"Michael Paquier
This reverts commit 1fd981f05369, based on concerns that the logging improvements do not justify the protocol breakage of dropping an unnamed portal once its execution has completed. It seems unlikely that one would try to send an execute or describe message after the portal has been used, but if they do, such post-completion messages could not be processed as in previous versions. Let's revert this change for now so that we keep compatibility and consider a different solution. The tests added by 76bba033128a track the pre-1fd981f05369 behavior, and are still valid. Discussion: https://postgr.es/m/CA+TgmoYFJyJNQw3RT7veO3M2BWRE9Aw4hprC5rOcawHZti-f8g@mail.gmail.com
2025-11-12test_dsa: Avoid leaking LWLock tranches.Nathan Bossart
Since this is a test module, leaking a couple of LWLock tranches is fine, but we want to discourage that pattern in third-party code. This commit teaches the module to create only one tranche and to store its ID in shared memory for use by other backends. Reported-by: Alexander Lakhin <exclusion@gmail.com> Reviewed-by: Sami Imseih <samimseih@gmail.com> Discussion: https://postgr.es/m/dd36d384-55df-4fc2-825c-5bc56c950fa9%40gmail.com
2025-11-12Fix bug where we truncated CLOG that was still needed by LISTEN/NOTIFYHeikki Linnakangas
The async notification queue contains the XID of the sender, and when processing notifications we call TransactionIdDidCommit() on the XID. But we had no safeguards to prevent the CLOG segments containing those XIDs from being truncated away. As a result, if a backend didn't for some reason process its notifications for a long time, or when a new backend issued LISTEN, you could get an error like: test=# listen c21; ERROR: 58P01: could not access status of transaction 14279685 DETAIL: Could not open file "pg_xact/000D": No such file or directory. LOCATION: SlruReportIOError, slru.c:1087 To fix, make VACUUM "freeze" the XIDs in the async notification queue before truncating the CLOG. Old XIDs are replaced with FrozenTransactionId or InvalidTransactionId. Note: This commit is not a full fix. A race condition remains, where a backend is executing asyncQueueReadAllNotifications() and has just made a local copy of an async SLRU page which contains old XIDs, while vacuum concurrently truncates the CLOG covering those XIDs. When the backend then calls TransactionIdDidCommit() on those XIDs from the local copy, you still get the error. The next commit will fix that remaining race condition. This was first reported by Sergey Zhuravlev in 2021, with many other people hitting the same issue later. Thanks to: - Alexandra Wang, Daniil Davydov, Andrei Varashen and Jacques Combrink for investigating and providing reproducible test cases, - Matheus Alcantara and Arseniy Mukhin for review and earlier proposed patches to fix this, - Álvaro Herrera and Masahiko Sawada for reviews, - Yura Sokolov aka funny-falcon for the idea of marking transactions as committed in the notification queue, and - Joel Jacobson for the final patch version. I hope I didn't forget anyone. Backpatch to all supported versions. I believe the bug goes back all the way to commit d1e027221d, which introduced the SLRU-based async notification queue. Discussion: https://www.postgresql.org/message-id/16961-25f29f95b3604a8a@postgresql.org Discussion: https://www.postgresql.org/message-id/18804-bccbbde5e77a68c2@postgresql.org Discussion: https://www.postgresql.org/message-id/CAK98qZ3wZLE-RZJN_Y%2BTFjiTRPPFPBwNBpBi5K5CU8hUHkzDpw@mail.gmail.com Backpatch-through: 14
2025-11-12injection_points: Add tests for name limitsMichael Paquier
The maximum limits for point name, library name, function name and private area size were not covered by the tests. The new function introduced in 16a2f706951e gives a way to trigger them. This is not critical but cheap to cover. While at it, this commit cleans up some of the tests introduced by 16a2f706951e for NULL inputs by using more consistent argument values. The coverage does not change, but it makes the whole less confusing with argument values that are correct based on their position in the SQL function called. Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com> Discussion: https://postgr.es/m/aRE7zhu6wOA29gFf@paquier.xyz
2025-11-10Check for CREATE privilege on the schema in CREATE STATISTICS.Nathan Bossart
This omission allowed table owners to create statistics in any schema, potentially leading to unexpected naming conflicts. For ALTER TABLE commands that require re-creating statistics objects, skip this check in case the user has since lost CREATE on the schema. The addition of a second parameter to CreateStatistics() breaks ABI compatibility, but we are unaware of any impacted third-party code. Reported-by: Jelte Fennema-Nio <postgres@jeltef.nl> Author: Jelte Fennema-Nio <postgres@jeltef.nl> Co-authored-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de> Security: CVE-2025-12817 Backpatch-through: 13
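A sketch of the case that is now rejected; role, schema, and table names are hypothetical and the error wording is assumed:

    CREATE SCHEMA restricted;
    CREATE ROLE bob LOGIN;
    GRANT USAGE ON SCHEMA restricted TO bob;   -- USAGE only, no CREATE
    CREATE TABLE public.t (a int, b int);
    ALTER TABLE public.t OWNER TO bob;
    SET ROLE bob;
    CREATE STATISTICS restricted.t_stats ON a, b FROM public.t;
    -- now fails with a permission denied error for schema "restricted"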
2025-11-10Add more tests for relation statistics with rewritesMichael Paquier
While there are many tests related to relation rewrites, nothing existed to check how the cumulative statistics behave in such cases for relations. A different patch is under discussion to move the relation statistics to be tracked on a per-relfilenode basis, so that they could be rebuilt during crash recovery. This commit gives us a way to check (and perhaps change) the existing behaviors for several rewrite scenarios, mixing transactions, sub-transactions, two-phase commit and VACUUM. Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/aQ3X20hbqoThQXgp@ip-10-97-1-34.eu-west-3.compute.internal
2025-11-10injection_points: Add variant for injection_point_attach()Michael Paquier
This new function is able to take more data as input than the existing injection_point_attach(): - A library name. - A function name. - Some private data. This gives more flexibility for tests so that they would not need to reinvent a wrapper for InjectionPointAttach() when attaching a callback from a library other than "injection_points". injection_point_detach() can be used with both versions of injection_point_attach(). Author: Rahila Syed <rahilasyed.90@gmail.com> Reviewed-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CAH2L28sOG2b_TKkZU51dy+pWJtny1mqDmeFiFoUASGa0X0iiKQ@mail.gmail.com
2025-11-07Fix "inconsistent DLL linkage" warning on Windows MSVCPeter Eisentraut
This warning was disabled in meson.build (warning 4273). If you enable it, it looks like this: ../src/backend/utils/misc/ps_status.c(27): warning C4273: '__p__environ': inconsistent dll linkage C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt\stdlib.h(1158): note: see previous definition of '__p__environ' The declaration in ps_status.c was: #if !defined(WIN32) || defined(_MSC_VER) extern char **environ; #endif The declaration in the OS header file is: _DCRTIMP char*** __cdecl __p__environ (void); #define _environ (*__p__environ()) So it is evident that this could be problematic. The old declaration was required by the old MSVCRT library, but we don't support that anymore with MSVC. To fix, disable the re-declaration in ps_status.c, and also in some other places that use the same code pattern but didn't trigger the warning. Then we can also re-enable the warning (delete the disablement in meson.build). Reviewed-by: Bryan Green <dbryan.green@gmail.com> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://www.postgresql.org/message-id/flat/bf060644-47ff-441b-97cf-c685d0827757@eisentraut.org
2025-11-07Add seq_sync_error_count to subscription statistics.Amit Kapila
This commit adds a new column, seq_sync_error_count, to the pg_stat_subscription_stats view. This counter tracks the number of errors encountered by the sequence synchronization worker during operation. Since a single worker handles the synchronization of all sequences, this value may reflect errors from multiple sequences. This addition improves observability of sequence synchronization behavior and helps monitor potential issues during replication. Author: Vignesh C <vignesh21@gmail.com> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Peter Smith <smithpb2250@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
2025-11-06bufmgr: Allow some buffer state modifications while holding header lockAndres Freund
Until now BufferDesc.state was not allowed to be modified while the buffer header spinlock was held. This meant that operations like unpinning buffers needed to use a CAS loop, waiting for the buffer header spinlock to be released before updating. The benefit of that restriction is that it allowed us to unlock the buffer header spinlock with just a write barrier and an unlocked write (instead of a full atomic operation). That was important to avoid regressions in 48354581a49c. However, since then the hottest buffer header spinlock uses have been replaced with atomic operations (in particular, the most common use of PinBuffer_Locked(), in GetVictimBuffer() (formerly in BufferAlloc()), has been removed in 5e899859287). This change will allow, in a subsequent commit, releasing buffer pins with a single atomic-sub operation. This was previously not possible as long as such operations were not allowed while the buffer header spinlock was held, because an atomic-sub would not have allowed a race-free check for the buffer header lock being held. Using atomic-sub to unpin buffers is a nice scalability win; however, it is not the primary motivation for this change (although it would be sufficient). The primary motivation is that we would like to merge the buffer content lock into BufferDesc.state, which will result in more frequent changes of the state variable, which in some situations can cause a performance regression, due to an increased CAS failure rate when unpinning buffers. The regression entirely vanishes when using atomic-sub. Naively implementing this would require putting CAS loops in every place modifying the buffer state while holding the buffer header lock. To avoid that, introduce UnlockBufHdrExt(), which can set/add flags as well as the refcount, together with releasing the lock. Reviewed-by: Robert Haas <robertmhaas@gmail.com> Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com> Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
2025-11-06Disallow generated columns in COPY WHERE clausePeter Eisentraut
Stored generated columns are not yet computed when the filtering happens, so we need to prohibit them to avoid incorrect behavior. Virtual generated columns currently error out ("unexpected virtual generated column reference"). They could probably work if we expand them in the right place, but for now let's keep them consistent with the stored variant. This doesn't change the behavior, it only gives a nicer error message. Co-authored-by: jian he <jian.universality@gmail.com> Reviewed-by: Kirill Reshke <reshkekirill@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CACJufxHb8YPQ095R_pYDr77W9XKNaXg5Rzy-WP525mkq+hRM3g@mail.gmail.com
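A minimal sketch of the rejected case; the exact error wording is assumed:

    CREATE TABLE orders (
      qty   int,
      price numeric,
      total numeric GENERATED ALWAYS AS (qty * price) STORED
    );
    COPY orders (qty, price) FROM STDIN WHERE total > 100;
    -- now fails cleanly, since "total" is not yet computed at the time the
    -- WHERE clause is evaluated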
2025-11-06Fix few issues in commit 5509055d69.Amit Kapila
Test failure on buildfarm member prion: The test failed due to an unexpected LOCATION: line appearing between the WARNING and ERROR messages. This occurred because the prion machine uses log_error_verbosity = verbose, which includes additional context in error messages. The test was originally checking for both WARNING and ERROR messages in sequence sync, but the extra LOCATION: line disrupted this pattern. To make the test robust across different verbosity settings, it now only checks for the presence of the WARNING message after the test, which is sufficient to validate the intended behavior. Failure to sync sequences with quoted names: The previous implementation did not correctly quote sequence names when querying remote information, leading to failures when quoted sequence names were used. This fix ensures that sequence names are properly quoted during remote queries, allowing sequences with quoted identifiers to be synced correctly. Author: Vignesh C <vignesh21@gmail.com> Author: Shinya Kato <shinya11.kato@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/CALDaNm0WcdSCoNPiE-5ek4J2dMJ5o111GPTzKCYj9G5i=ONYtQ@mail.gmail.com Discussion: https://postgr.es/m/CAOzEurQOSN=Zcp9uVnatNbAy=2WgMTJn_DYszYjv0KUeQX_e_A@mail.gmail.com
2025-11-06Fix UNION planner estimate_num_groups with varno==0David Rowley
03d40e4b5 added code to provide better row estimates for when a UNION query ended up only with a single child due to other children being found to be dummy rels. In that case, ordinarily it would be ok to call estimate_num_groups() on the targetlist of the only child path, however that's not safe to do if the UNION child is the result of some other set operation as we generate targetlists containing Vars with varno==0 for those, which estimate_num_groups() can't handle. This could lead to: ERROR: XX000: no relation entry for relid 0 Fix this by avoiding doing this when the only child is the result of another set operation. In that case we'll fall back on the assume-all-rows-are-unique method. Reported-by: Alexander Lakhin <exclusion@gmail.com> Author: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/cfbc99e5-9d44-4806-ba3c-f36b57a85e21@gmail.com
2025-11-05Implement WAIT FOR commandAlexander Korotkov
WAIT FOR is to be used on a standby and specifies waiting for a specific WAL location to be replayed. This option is useful when the user makes some data changes on the primary and needs a guarantee that these changes are visible on the standby. WAIT FOR needs to wait without any snapshot held. Otherwise, the snapshot could prevent the replay of WAL records, implying a kind of self-deadlock. This is why a separate utility command appears to be the most robust way to implement this functionality. It's not possible to implement this as a function. Previous experience shows that stored procedures also have limitations in this aspect. Discussion: https://www.postgresql.org/message-id/flat/CAPpHfdsjtZLVzxjGT8rJHCYbM0D5dwkO+BBjcirozJ6nYbOW8Q@mail.gmail.com Discussion: https://www.postgresql.org/message-id/flat/CABPTF7UNft368x-RgOXkfj475OwEbp%2BVVO-wEXz7StgjD_%3D6sw%40mail.gmail.com Author: Kartyshov Ivan <i.kartyshov@postgrespro.ru> Author: Alexander Korotkov <aekorotkov@gmail.com> Author: Xuneng Zhou <xunengzhou@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Reviewed-by: Alexander Lakhin <exclusion@gmail.com> Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Reviewed-by: Euler Taveira <euler@eulerto.com> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Reviewed-by: jian he <jian.universality@gmail.com> Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de> Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
2025-11-05Fix assertion failure in generate_orderedappend_paths()Richard Guo
In generate_orderedappend_paths(), there is an assumption that a child relation's row estimate is always greater than zero. There is an Assert verifying this assumption, and the estimate is also used to convert an absolute tuple count into a fraction. However, this assumption is not always valid -- for example, upper relations can have their row estimates unset, resulting in a value of zero. This can cause an assertion failure in debug builds or lead to the tuple fraction being computed as infinity in production builds. To fix, use the row estimate from the cheapest_total path to compute the tuple fraction. The row estimate in this path should already have been forced to a valid value. In passing, update the comment for generate_orderedappend_paths() to note that the function also considers the cheapest-fractional case when not all tuples need to be retrieved. That is, it collects all the cheapest fractional paths and builds an ordered append path for each interesting ordering. Backpatch to v18, where this issue was introduced. Bug: #19102 Reported-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com> Author: Richard Guo <guofenglinux@gmail.com> Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com> Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> Discussion: https://postgr.es/m/19102-93480667e1200169@postgresql.org Backpatch-through: 18
2025-11-05Fix timing-dependent failure in recovery test 004_timeline_switchMichael Paquier
The test introduced by 17b2d5ec759c verifies that a WAL receiver survives across a timeline jump by searching the server logs for termination messages. However, it called restart() before the timeline switch, which kills the WAL receiver and may log the exact message being checked, hence failing the test. As TAP tests reuse the same log file across restarts, a rotate_logfile() is used before the restart so that the log matching check is not impacted by log entries generated by a previous shutdown. Recent changes to file handle inheritance altered I/O timing enough to make this fail consistently while testing another patch. While at it, this adds an extra check based on a PID comparison. This test may lead to false positives as it could be possible that the WAL receiver has processed a timeline jump before the initial PID is grabbed, but it should be good enough in most cases. Like 17b2d5ec759c, backpatch down to v13. Author: Bryan Green <dbryan.green@gmail.com> Co-authored-by: Xuneng Zhou <xunengzhou@gmail.com> Discussion: https://postgr.es/m/9d00b597-d64a-4f1e-802e-90f9dc394c70@gmail.com Backpatch-through: 13
2025-11-05Add sequence synchronization for logical replication.Amit Kapila
This patch introduces sequence synchronization. Sequences that are synced will have 2 states: - INIT (needs [re]synchronizing) - READY (is already synchronized) A new sequencesync worker is launched as needed to synchronize sequences. A single sequencesync worker is responsible for synchronizing all sequences. It begins by retrieving the list of sequences that are flagged for synchronization, i.e., those in the INIT state. These sequences are then processed in batches, allowing multiple entries to be synchronized within a single transaction. The worker fetches the current sequence values and page LSNs from the remote publisher, updates the corresponding sequences on the local subscriber, and finally marks each sequence as READY upon successful synchronization. Sequence synchronization occurs in 3 places: 1) CREATE SUBSCRIPTION - The command syntax remains unchanged. - The subscriber retrieves sequences associated with publications. - Published sequences are added to pg_subscription_rel with INIT state. - Initiate the sequencesync worker to synchronize all sequences. 2) ALTER SUBSCRIPTION ... REFRESH PUBLICATION - The command syntax remains unchanged. - Dropped published sequences are removed from pg_subscription_rel. - Newly published sequences are added to pg_subscription_rel with INIT state. - Initiate the sequencesync worker to synchronize only newly added sequences. 3) ALTER SUBSCRIPTION ... REFRESH SEQUENCES - A new command introduced for PG19 by f0b3573c3a. - All sequences in pg_subscription_rel are reset to INIT state. - Initiate the sequencesync worker to synchronize all sequences. - Unlike "ALTER SUBSCRIPTION ... REFRESH PUBLICATION" command, addition and removal of missing sequences will not be done in this case. Author: Vignesh C <vignesh21@gmail.com> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Hou Zhijie <houzj.fnst@fujitsu.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Reviewed-by: Peter Smith <smithpb2250@gmail.com> Reviewed-by: Nisha Moond <nisha.moond412@gmail.com> Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
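The user-facing commands below are quoted from the list above (the subscription name mysub is hypothetical); the catalog query is only an assumed way to watch the INIT ('i') to READY ('r') transition:

    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;  -- adds newly published sequences in INIT state
    ALTER SUBSCRIPTION mysub REFRESH SEQUENCES;    -- resets all sequences to INIT for a full resync
    SELECT sr.srrelid::regclass AS sequence, sr.srsubstate
      FROM pg_subscription_rel sr
      JOIN pg_class c ON c.oid = sr.srrelid
     WHERE c.relkind = 'S';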
2025-11-05Drop unnamed portal immediately after execution to completionMichael Paquier
Previously, unnamed portals were kept until the next Bind message or the end of the transaction. This could cause temporary files to persist longer than expected and make logging not reflect the actual SQL responsible for the temporary file. This patch changes exec_execute_message() to drop unnamed portals immediately after execution to completion at the end of an Execute message, making their removal more aggressive. This forces temporary file cleanups to happen at the same time as the completion of the portal execution, with statement logging correctly reflecting which statements these temporary files were attached to (see the diffs in the TAP test updated by this commit for an idea). The documentation is updated to describe the lifetime of unnamed portals, and test cases are updated to verify temporary file removal and proper statement logging after unnamed portal execution. This changes how unnamed portals are handled in the protocol, hence no backpatch is done. Author: Frédéric Yhuel <frederic.yhuel@dalibo.com> Co-Authored-by: Sami Imseih <samimseih@gmail.com> Co-Authored-by: Mircea Cadariu <cadariu.mircea@gmail.com> Discussion: https://postgr.es/m/CAA5RZ0tTrTUoEr3kDXCuKsvqYGq8OOHiBwoD-dyJocq95uEOTQ%40mail.gmail.com
2025-11-05Fix possible usage of incorrect UPPERREL_SETOP RelOptInfoDavid Rowley
03d40e4b5 allowed dummy UNION [ALL] children to be removed from the plan by checking for is_dummy_rel(). That commit neglected to still account for the relids from the dummy rel so that the correct UPPERREL_SETOP RelOptInfo could be found and used for adding the Paths to. Not doing this could result in processing of subsequent UNIONs using the same RelOptInfo as a previously processed UNION, which could result in add_path() freeing old Paths that are needed by the previous UNION. The same fix was independently submitted (2 mins later) by Richard Guo. Reported-by: Alexander Lakhin <exclusion@gmail.com> Author: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/bee34aec-659c-46f1-9ab7-7bbae0b7616c@gmail.com
2025-11-04Allow "SET list_guc TO NULL" to specify setting the GUC to empty.Tom Lane
We have never had a SET syntax that allows setting a GUC_LIST_INPUT parameter to be an empty list. A locution such as SET search_path = ''; doesn't mean that; it means setting the GUC to contain a single item that is an empty string. (For search_path the net effect is much the same, because search_path ignores invalid schema names and '' must be invalid.) This is confusing, not least because configuration-file entries and the set_config() function can easily produce empty-list values. We considered making the empty-string syntax do this, but that would foreclose ever allowing empty-string items to be valid in list GUCs. While there isn't any obvious use-case for that today, it feels like the kind of restriction that might hurt someday. Instead, let's accept the forbidden-up-to-now value NULL and treat that as meaning an empty list. (An objection to this could be "what if we someday want to allow NULL as a GUC value?". That seems unlikely though, and even if we did allow it for scalar GUCs, we could continue to treat it as meaning an empty list for list GUCs.) Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Andrei Klychkov <andrew.a.klychkov@gmail.com> Reviewed-by: Jim Jones <jim.jones@uni-muenster.de> Discussion: https://postgr.es/m/CA+mfrmwsBmYsJayWjc8bJmicxc3phZcHHY=yW5aYe=P-1d_4bg@mail.gmail.com
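A short sketch of the new behavior next to the old, confusing one:

    SET search_path TO NULL;   -- new: the list GUC becomes an empty list
    SHOW search_path;          -- expected to display an empty setting
    SET search_path = '';      -- still means a single item that is an empty string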
2025-11-04Tighten check for generated column in partition key expressionPeter Eisentraut
A generated column may end up being part of the partition key expression, if it's specified as an expression, e.g. "(<generated column name>)", or if the partition key expression contains a whole-row reference, even though we do not allow a generated column to be part of a partition key expression. Fix this hole. Co-authored-by: jian he <jian.universality@gmail.com> Co-authored-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> Discussion: https://www.postgresql.org/message-id/flat/CACJufxF%3DWDGthXSAQr9thYUsfx_1_t9E6N8tE3B8EqXcVoVfQw%40mail.gmail.com
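A sketch of one shape that used to slip through and is now rejected; the error wording is assumed:

    CREATE TABLE measurements (
      raw     int,
      doubled int GENERATED ALWAYS AS (raw * 2) STORED
    ) PARTITION BY RANGE ((doubled));
    -- the parenthesized expression around the generated column is now caught,
    -- as is a whole-row reference in the partition key expression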
2025-11-04BRIN autosummarization may need a snapshotÁlvaro Herrera
It's possible to define BRIN indexes on functions that require a snapshot to run, but the autosummarization feature introduced by commit 7526e10224f0 fails to provide one. This causes autovacuum to leave a BRIN placeholder tuple behind after a failed work-item execution, making such indexes less efficient. Repair by obtaining a snapshot prior to running the task, and add a test to verify this behavior. Author: Álvaro Herrera <alvherre@kurilemu.de> Reported-by: Giovanni Fabris <giovanni.fabris@icon.it> Reported-by: Arthur Nascimento <tureba@gmail.com> Backpatch-through: 13 Discussion: https://postgr.es/m/202511031106.h4fwyuyui6fz@alvherre.pgsql
2025-11-04Error message stylistic correctionPeter Eisentraut
Fixup for commit ef5e60a9d35: The inconsistent use of articles was a bit awkward.
2025-11-04Fix unconditional WAL receiver shutdown during stream-archive transitionMichael Paquier
Commit b4f584f9d2a1 (affecting v15~, later backpatched down to 13 as of 3635a0a35aaf) introduced an unconditional WAL receiver shutdown when switching from streaming to archive WAL sources. This causes problems during a timeline switch, when a WAL receiver enters WALRCV_WAITING state but remains alive, waiting for instructions. The unconditional shutdown can break some monitoring scenarios as the WAL receiver gets repeatedly terminated and re-spawned, causing pg_stat_wal_receiver.status to show a "streaming" instead of "waiting" status, masking the fact that the WAL receiver is waiting for a new TLI and a new LSN to be able to continue streaming. This commit changes the WAL receiver behavior so that the shutdown becomes conditional, with InstallXLogFileSegmentActive always being reset to prevent the regression fixed by b4f584f9d2a1: only terminate the WAL receiver when it is actively streaming (WALRCV_STREAMING, WALRCV_STARTING, or WALRCV_RESTARTING). When in WALRCV_WAITING state, just reset the InstallXLogFileSegmentActive flag to allow archive restoration without killing the process. WALRCV_STOPPED and WALRCV_STOPPING are not reachable states in this code path. For the latter, the startup process is the one in charge of setting WALRCV_STOPPING via ShutdownWalRcv(), waiting for the WAL receiver to reach a WALRCV_STOPPED state after switching walRcvState, so WaitForWALToBecomeAvailable() cannot be reached while a WAL receiver is in a WALRCV_STOPPING state. A regression test is added to check that a WAL receiver is not stopped on timeline jump, which fails when the fix of this commit is reverted. Reported-by: Ryan Bird <ryanzxg@gmail.com> Author: Xuneng Zhou <xunengzhou@gmail.com> Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/19093-c4fff49a608f82a0@postgresql.org Backpatch-through: 13
2025-11-03Add \pset options for boolean value displayÁlvaro Herrera
New \pset variables display_true and display_false allow the user to change how true and false values are displayed. Author: David G. Johnston <David.G.Johnston@gmail.com> Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de> Discussion: https://postgr.es/m/CAKFQuwYts3vnfQ5AoKhEaKMTNMfJ443MW2kFswKwzn7fiofkrw@mail.gmail.com Discussion: https://postgr.es/m/56308F56.8060908@joh.to
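A quick psql sketch; the exact value quoting accepted by \pset is assumed:

    \pset display_true yes
    \pset display_false no
    SELECT 1 = 1 AS matched, 1 = 2 AS mismatched;
    -- boolean output is expected to show "yes" and "no" instead of t and f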
2025-11-03Prevent setting a column as identity if its not-null constraint is invalidÁlvaro Herrera
We don't allow null values to appear in identity-generated columns in other ways, so we shouldn't let unvalidated not-null constraints do it either. Oversight in commit a379061a22a8. Author: jian he <jian.universality@gmail.com> Backpatch-through: 18 Discussion: https://postgr.es/m/CACJufxGQM_+vZoYJMaRoZfNyV=L2jxosjv_0TLAScbuLJXWRfQ@mail.gmail.com
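A sketch of the now-rejected sequence, assuming the NOT VALID not-null syntax introduced by commit a379061a22a8:

    CREATE TABLE t (a int);
    ALTER TABLE t ADD CONSTRAINT a_nn NOT NULL a NOT VALID;        -- unvalidated not-null
    ALTER TABLE t ALTER COLUMN a ADD GENERATED ALWAYS AS IDENTITY;
    -- now rejected until the not-null constraint has been validated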
2025-11-01Remove unused variable in recovery/t/006_logical_decoding.pl.Tom Lane
Author: Daniil Davydov <3danissimo@gmail.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/CAJDiXggmZWew8+SY_9o0atpmaJmPTL25wdz07MrDoqCkp4D1ug@mail.gmail.com
2025-10-31Mark function arguments of type "Datum *" as "const Datum *" where possiblePeter Eisentraut
Several functions in the codebase accept "Datum *" parameters but do not modify the pointed-to data. These have been updated to take "const Datum *" instead, improving type safety and making the interfaces clearer about their intent. This change helps the compiler catch accidental modifications and better documents the immutability of arguments. Most "Datum *" parameters have a paired "bool *isnull" parameter; those are constified as well. No functional behavior is changed by this patch. Author: Chao Li <lic@highgo.com> Discussion: https://www.postgresql.org/message-id/flat/CAEoWx2msfT0knvzUa72ZBwu9LR_RLY4on85w2a9YpE-o2By5HQ@mail.gmail.com