path: root/doc
Age | Commit message | Author
35 hours | doc: Remove trailing whitespace in xref (HEAD, origin/master, origin/HEAD, master) | Daniel Gustafsson
Remove stray whitespace in xref tag. This was found due to a regression in xmllint 2.15.0 which flagged this as an error, and at the time of this commit no fix for xmllint has shipped. Author: Erik Wienhold <ewie@ewie.name> Discussion: https://postgr.es/m/f4c4661b-4e60-4c10-9336-768b7b55c084@ewie.name Backpatch-through: 17
3 days | Add support for base64url encoding and decoding | Daniel Gustafsson
This adds support for base64url encoding and decoding, a base64 variant which is safe to use in filenames and URLs. base64url replaces '+' in the base64 alphabet with '-' and '/' with '_', thus making it safe for URL addresses and file systems. Support for base64url was originally suggested by Przemysław Sztoch. Author: Florents Tselai <florents.tselai@gmail.com> Reviewed-by: Aleksander Alekseev <aleksander@timescale.com> Reviewed-by: David E. Wheeler <david@justatheory.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Reviewed-by: Chao Li (Evan) <li.evan.chao@gmail.com> Discussion: https://postgr.es/m/70f2b6a8-486a-4fdb-a951-84cef35e22ab@sztoch.pl
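
A minimal sketch of exercising the new variant, assuming it is exposed as an additional format name for the existing encode() and decode() functions (the format string 'base64url' below is an assumption based on the description above; expected outputs are illustrative):

    -- Bytes whose standard base64 form contains '+': \x3e3f3e encodes to 'Pj8+'.
    -- With the URL-safe alphabet this should come out as 'Pj8-' instead.
    SELECT encode('\x3e3f3e'::bytea, 'base64url');

    -- Round-trip back to the original bytes.
    SELECT decode('Pj8-', 'base64url');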
5 days | Add optional pid parameter to pg_replication_origin_session_setup. | Amit Kapila
Commit 216a784829c introduced parallel apply workers, allowing multiple processes to share a replication origin. To support this, replorigin_session_setup() was extended to accept a pid argument identifying the process using the origin. This commit exposes that capability through the SQL interface function pg_replication_origin_session_setup() by adding an optional pid parameter. This enables multiple processes to coordinate replication using the same origin when using SQL-level replication functions. This change allows the non-builtin logical replication solutions to implement parallel apply for large transactions. Additionally, an existing internal error was made user-facing, as it can now be triggered via the exposed SQL API. Author: Doruk Yilmaz <doruk@mixrank.com> Author: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Reviewed-by: Euler Taveira <euler@eulerto.com> Discussion: https://postgr.es/m/CAMPB6wfe4zLjJL8jiZV5kjjpwBM2=rTRme0UCL7Ra4L8MTVdOg@mail.gmail.com Discussion: https://postgr.es/m/CAE2gYzyTSNvHY1+iWUwykaLETSuAZsCWyryokjP6rG46ZvRgQA@mail.gmail.com
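
A hedged sketch of how cooperating processes might use the extended function; the two-argument form and the example PID are assumptions based on the description above, not a confirmed signature:

    -- Leader process: create the origin and attach its session to it.
    SELECT pg_replication_origin_create('my_origin');
    SELECT pg_replication_origin_session_setup('my_origin');

    -- A cooperating worker process (running in a different backend) attaches to
    -- the same origin by passing the PID of the process that already holds it.
    SELECT pg_replication_origin_session_setup('my_origin', 12345);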
7 days | Revert "Avoid race condition between "GRANT role" and "DROP ROLE"". | Tom Lane
This reverts commit 98fc31d6499163a0a781aa6f13582a07f09cd7c6. That change allowed DROP OWNED BY to drop grants of the target role to other roles, arguing that nobody would need those privileges anymore. But that's not so: if you're not superuser, you still need admin privilege on the target role so you can drop it. It's not clear whether or how the dependency-based approach to solving the original problem can be adapted to keep these grants. Since v18 release is fast approaching, the sanest thing to do seems to be to revert this patch for now. The race-condition problem is low severity and not worth taking risks for. I didn't force a catversion bump in 98fc31d64, so I won't do so here either. Reported-by: Dipesh Dhameliya <dipeshdhameliya125@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CABgZEgczOFicCJoqtrH9gbYMe_BV3Hq8zzCBRcMgmU6LRsihUA@mail.gmail.com Backpatch-through: 18
7 days | Provide more-specific error details/hints for function lookup failures. | Tom Lane
Up to now we've contented ourselves with a one-size-fits-all error hint when we fail to find any match to a function or procedure call. That was mostly okay in the beginning, but it was never great, and since the introduction of named arguments it's really not adequate. We at least ought to distinguish "function name doesn't exist" from "function name exists, but not with those argument names". And the rules for named-argument matching are arcane enough that some more detail seems warranted if we match the argument names but the call still doesn't work. This patch creates a framework for dealing with these problems: FuncnameGetCandidates and related code will now pass back a bitmask of flags showing how far the match succeeded. This allows a considerable amount of granularity in the reports. The set-bits-in-a-bitmask approach means that when there are multiple candidate functions, the report will reflect the match(es) that got the furthest, which seems correct. Also, we can avoid mentioning "maybe add casts" unless failure to match argument types is actually the issue. Extend the same return-a-bitmask approach to OpernameGetCandidates. The issues around argument names don't apply to operator syntax, but it still seems worth distinguishing between "there is no operator of that name" and "we couldn't match the argument types". While at it, adjust these messages and related ones to more strictly separate "detail" from "hint", following our message style guidelines' distinction between those. Reported-by: Dominique Devienne <ddevienne@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/1756041.1754616558@sss.pgh.pa.us
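
An illustrative case of the kind of lookup failure that now gets a more specific report; the function and argument names are purely for demonstration, and the exact message text is not quoted here:

    CREATE FUNCTION add_tax(amount numeric, rate numeric DEFAULT 0.2)
    RETURNS numeric LANGUAGE sql AS 'SELECT amount * (1 + rate)';

    -- The function name exists, but no parameter is named "pct".  Previously
    -- this got the one-size-fits-all hint; with this change the report can
    -- distinguish "no such function" from "argument names do not match".
    SELECT add_tax(amount => 100, pct => 0.1);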
8 days | Improve ExplainState type handling in header files | Peter Eisentraut
Now that we can have repeat typedefs with C11, we don't need to use "struct ExplainState" anymore but can instead make a typedef where necessary. This doesn't change anything but makes it look nicer. (There are more opportunities for similar changes, but this is broken out because there was a separate discussion about it, and it's somewhat bulky on its own.) Reviewed-by: Chao Li <li.evan.chao@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/f36c0a45-98cd-40b2-a7cc-f2bf02b12890%40eisentraut.org#a12fb1a2c1089d6d03010f6268871b00 Discussion: https://www.postgresql.org/message-id/flat/10d32190-f31b-40a5-b177-11db55597355@eisentraut.org
8 days | Resume conflict-relevant data retention automatically. | Amit Kapila
This commit resumes automatic retention of conflict-relevant data for a subscription. Previously, retention would stop if the apply process failed to advance its xmin (oldest_nonremovable_xid) within the configured max_retention_duration, and the user had to manually re-enable the retain_dead_tuples option. With this change, retention will resume automatically once the apply worker catches up and begins advancing its xmin (oldest_nonremovable_xid) within the configured threshold.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
9 days | Hide duplicate names from extension views | Peter Eisentraut
If extensions of equal names were installed in different directories in the path, the views pg_available_extensions and pg_available_extension_versions would show all of them, even though only the first one was actually reachable by CREATE EXTENSION. To fix, have those views skip extensions found later in the path if they have names already found earlier. Also add a bit of documentation that only the first extension in the path can be used. Reported-by: Pierrick <pierrick.chovelon@dalibo.com> Discussion: https://www.postgresql.org/message-id/flat/8f5a0517-1cb8-4085-ae89-77e7454e27ba%40dalibo.com
12 days | Default to log_lock_waits=on | Peter Eisentraut
If someone is stuck behind a lock for more than a second, that is almost always a problem that is worth a log entry. Author: Laurenz Albe <laurenz.albe@cybertec.at> Reviewed-By: Michael Banck <mbanck@gmx.net> Reviewed-By: Robert Haas <robertmhaas@gmail.com> Reviewed-By: Christoph Berg <myon@debian.org> Reviewed-By: Stephen Frost <sfrost@snowman.net> Discussion: https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at
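
For context, log_lock_waits reports a session that has waited longer than deadlock_timeout (one second by default), which is where the "more than a second" above comes from. A minimal way to check the setting or restore the previous default:

    -- Show the current settings.
    SHOW log_lock_waits;
    SHOW deadlock_timeout;

    -- Sites that prefer the old behavior can turn it back off.
    ALTER SYSTEM SET log_lock_waits = off;
    SELECT pg_reload_conf();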
12 days | Remove traces of support for Sun Studio compiler | Peter Eisentraut
Per discussion, this compiler suite is no longer maintained, and it has not been able to compile PostgreSQL since at least PostgreSQL 17. This removes all the remaining support code for this compiler. Note that the Solaris operating system continues to be supported, but using GCC as the compiler. Reviewed-by: Andres Freund <andres@anarazel.de> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/a0f817ee-fb86-483a-8a14-b6f7f5991b6e%40eisentraut.org
12 days | doc: Fix indentation in func-datetime.sgml. | Dean Rasheed
Incorrect indentation introduced by commit faf071b5538.
12 days | doc: Improve description of new random(min, max) functions. | Dean Rasheed
Mention that the new variants of random(min, max) are affected by setseed(), like the original functions. Reported-by: Marcos Pegoraro <marcos@f10.com.br> Discussion: https://postgr.es/m/CAB-JLwb1=drA3Le6uZXDBi_tCpeS1qm6XQU7dKwac_x91Z4qDg@mail.gmail.com
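
A small example of the documented interaction, using the integer form of random(min, max); the values produced are reproducible after setseed(), though the specific numbers are not shown here:

    SELECT setseed(0.42);
    SELECT random(1, 100);   -- some integer in [1, 100]
    SELECT random(1, 100);

    -- Re-seeding with the same value replays the same sequence.
    SELECT setseed(0.42);
    SELECT random(1, 100);   -- same value as the first call above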
14 days | Fix documentation for shmem_startup_hook. | Nathan Bossart
This section claims that each backend executes the shmem_startup_hook shortly after attaching to shared memory, which is true for EXEC_BACKEND builds, but not for others. This commit adds this important detail. Oversight in commit 964152c476. Reported-by: Sami Imseih <samimseih@gmail.com> Reviewed-by: Sami Imseih <samimseih@gmail.com> Discussion: https://postgr.es/m/CAA5RZ0vEGT1eigGbVt604LkXP6mUPMwPMxQoRCbFny44w%2B9EUQ%40mail.gmail.com Backpatch-through: 17
2025-09-09 | Add date and timestamp variants of random(min, max). | Dean Rasheed
This adds 3 new variants of the random() function:

    random(min date, max date) returns date
    random(min timestamp, max timestamp) returns timestamp
    random(min timestamptz, max timestamptz) returns timestamptz

Each returns a random value x in the range min <= x <= max.

Author: Damien Clochard <damien@dalibo.info>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/f524d8cab5914613d9e624d9ce177d3d@dalibo.info
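
A quick sketch of the three new variants listed above; the output is random by design, so no expected values are shown:

    -- A random date between the two bounds, inclusive.
    SELECT random(DATE '2024-01-01', DATE '2024-12-31');

    -- Random timestamp and timestamptz within a range.
    SELECT random(TIMESTAMP '2024-01-01 00:00:00', TIMESTAMP '2024-12-31 23:59:59');
    SELECT random(TIMESTAMPTZ '2024-01-01 00:00:00+00', TIMESTAMPTZ '2024-12-31 23:59:59+00');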
2025-09-06 | Allow to log raw parse tree. | Tatsuo Ishii
This commit allows the raw parse tree to be logged in the same way we currently log the parse tree, rewritten tree, and plan tree. To avoid unnecessary log noise for users not interested in this detail, a new GUC option, "debug_print_raw_parse", has been added. When starting the PostgreSQL process with "-d N", where N is 3 or higher, debug_print_raw_parse is enabled automatically, alongside debug_print_parse.

Author: Chao Li <lic@highgo.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2mcO0Gpo4vd8kPMAFWeJLSp0MeUUnaLdE1x0tSVd-VzUw%40mail.gmail.com
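
A minimal way to try the new GUC in a session; what actually shows up depends on the server's logging configuration:

    SET debug_print_raw_parse = on;
    SET debug_print_parse = on;

    -- The raw parse tree and the analyzed parse tree for this query are now
    -- emitted to the server log (subject to log_min_messages and friends).
    SELECT 1 + 1;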
2025-09-03 | Move dynamically-allocated LWLock tranche names to shared memory. | Nathan Bossart
There are two ways for shared libraries to allocate their own LWLock tranches. One way is to call RequestNamedLWLockTranche() in a shmem_request_hook, which requires the library to be loaded via shared_preload_libraries. The other way is to call LWLockNewTrancheId(), which is not subject to the same restrictions. However, LWLockNewTrancheId() does require each backend to store the tranche's name in backend-local memory via LWLockRegisterTranche(). This API is a little cumbersome and leads to things like unhelpful pg_stat_activity.wait_event values in backends that haven't loaded the library. This commit moves these LWLock tranche names to shared memory, thus eliminating the need for each backend to call LWLockRegisterTranche(). Instead, the tranche name must be provided to LWLockNewTrancheId(), which immediately makes the name available to all backends. Since the tranche name array is append-only, lookups can ordinarily avoid locking as long as their local copy of the LWLock counter is greater than the requested tranche ID. One downside of this approach is that we now have a hard limit on both the length of tranche names (NAMEDATALEN-1 bytes) and the number of dynamically-allocated tranches (256). Besides a limit of NAMEDATALEN-1 bytes for tranche names registered via RequestNamedLWLockTranche(), no such limits previously existed. We could avoid these new limits by using dynamic shared memory, but the complexity involved didn't seem worth it. We briefly considered making the tranche limit user-configurable but ultimately decided against that, too. Since there is still a lot of time left in the v19 development cycle, it's possible we will revisit this choice. Author: Sami Imseih <samimseih@gmail.com> Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Rahila Syed <rahilasyed90@gmail.com> Reviewed-by: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/CAA5RZ0vvED3naph8My8Szv6DL4AxOVK3eTPS0qXsaKi%3DbVdW2A%40mail.gmail.com
2025-09-02 | Add max_retention_duration option to subscriptions. | Amit Kapila
This commit introduces a new subscription parameter, max_retention_duration, aimed at mitigating excessive accumulation of dead tuples when retain_dead_tuples is enabled and the apply worker lags behind the publisher. When the time spent advancing a non-removable transaction ID exceeds the max_retention_duration threshold, the apply worker will stop retaining conflict detection information. In such cases, the conflict slot's xmin will be set to InvalidTransactionId, provided that all apply workers associated with the subscription (with retain_dead_tuples enabled) confirm the retention duration has been exceeded. To ensure retention status persists across server restarts, a new column subretentionactive has been added to the pg_subscription catalog. This prevents unnecessary reactivation of retention logic after a restart. The conflict detection slot will not be automatically re-initialized unless a new subscription is created with retain_dead_tuples = true, or the user manually re-enables retain_dead_tuples. A future patch will introduce support for automatic slot re-initialization once at least one apply worker confirms that the retention duration is within the configured max_retention_duration. Author: Zhijie Hou <houzj.fnst@fujitsu.com> Reviewed-by: shveta malik <shveta.malik@gmail.com> Reviewed-by: Nisha Moond <nisha.moond412@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
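
A hedged sketch of setting the new parameter; the connection string, publication name, and value are placeholders, and the unit is assumed to be milliseconds in line with other time-based settings (not confirmed by the text above):

    -- Retain conflict-detection information, but stop retaining it if the
    -- apply worker falls too far behind (value and unit assumed).
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=src'
        PUBLICATION mypub
        WITH (retain_dead_tuples = true, max_retention_duration = 60000);

    -- Or adjust it on an existing subscription.
    ALTER SUBSCRIPTION mysub SET (max_retention_duration = 60000);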
2025-08-28 | Glossary: improve definition of "relation" | Álvaro Herrera
Define the more general term first, then the Postgres-specific meaning. Wording from Tom Lane. Discussion: https://postgr.es/m/CACJufxEZ48toGH0Em_6vdsT57Y3L8pLF=DZCQ_gCii6=C3MeXw@mail.gmail.com
2025-08-26 | Document privileges required for vacuumdb --missing-stats-only. | Nathan Bossart
When vacuumdb's --missing-stats-only option is used, the catalog query for retrieving the list of relations to process must read pg_statistic and pg_statistic_ext_data. However, those catalogs can only be read by superusers by default, so --missing-stats-only is effectively superuser-only. This is unfortunate, but since the option is primarily intended for use by administrators after running pg_upgrade, let's just live with it for v18. This commit adds a note about the aforementioned privilege requirements to the documentation for --missing-stats-only. We first tried to improve matters by modifying the query to read the pg_stats and pg_stats_ext system views instead. While that is indeed more lenient from a privilege standpoint, it is also borderline incomprehensible. pg_stats shows rows for which the user has the SELECT privilege on the corresponding column, and pg_stats_ext shows rows for tables the user owns. Meanwhile, ANALYZE requires either MAINTAIN on the table or, for non-shared relations, ownership of the database. But even if the privilege discrepancies were tolerable, the performance impact was not. Ultimately, the modified query was substantially more expensive, so we abandoned the idea. For v19, perhaps we could introduce a simple, inexpensive way to discover which relations are missing statistics, such as a system function or view with similar privilege requirements to ANALYZE. Unfortunately, it is far too late for anything like that in v18. Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/CAHGQGwHh43suEfss1wvBsk7vqiou%3DUY0zcy8HGyE5hBp%2BHZ7SQ%40mail.gmail.com Backpatch-through: 18
2025-08-26 | Further clarify documentation for the initcap function | Alexander Korotkov
This is a follow-up for commit c2c2c7e225. It further clarifies the following in the initcap function documentation:

* Document that title case is used for digraphs in specific locales,
* Reference particular ICU function used,
* Add note about the purpose of the function.

Discussion: https://postgr.es/m/804cc10ef95d4d3b298e76b181fd9437%40postgrespro.ru
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
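
A quick reminder of what the function does, plus an illustration of the digraph case mentioned above. The second example assumes an ICU build with the usual initdb-created collations; the digraph behavior depends on the collation provider and locale:

    SELECT initcap('hello THOMAS');          -- 'Hello Thomas'

    -- In locales with digraph letters, ICU can produce title case rather than
    -- plain upper case for the leading character, e.g. 'dž' -> 'Dž'.
    SELECT initcap('džungla' COLLATE "hr-x-icu");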
2025-08-26 | Raise C requirement to C11 | Peter Eisentraut
This changes configure and meson.build to require at least C11, instead of the previous C99. The installation documentation is updated accordingly. configure.ac previously used AC_PROG_CC_C99 to activate C99. But there is no AC_PROG_CC_C11 in Autoconf 2.69, because it's too old. (Also, post-2.69, the AC_PROG_CC_Cnn macros were deprecated and AC_PROG_CC activates the last supported C mode.) We could update the required Autoconf version, but that might be a separate project that no one wants to undertake at the moment. Instead, we open-code the test for C11 using some inspiration from later Autoconf versions. But instead of writing an elaborate test program, we keep it simple and just check __STDC_VERSION__, which should be good enough in practice. In meson.build, we update the existing C99 test to C11, but again we just check for __STDC_VERSION__. This also removes the separate option for the conforming preprocessor on MSVC, added by commit 8fd9bb1d965, since that is activated automatically in C11 mode. Note, we don't use the "official" way to set the C standard in Meson using the c_std project option, because that is impossible to use correctly (see <https://github.com/mesonbuild/meson/issues/14717>). Reviewed-by: David Rowley <dgrowleyml@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/01a69441-af54-4822-891b-ca28e05b215a@eisentraut.org
2025-08-25 | Message wording improvements | Peter Eisentraut
Use "row" instead of "tuple" for user-facing information for logical replication conflicts.
2025-08-22 | libpq: Be strict about cancel key lengths | Heikki Linnakangas
The protocol documentation states that the maximum length of a cancel key is 256 bytes. This starts checking for that limit in libpq. Otherwise third party backend implementations will probably start using more bytes anyway. We also start requiring that a protocol 3.0 connection does not send a longer cancel key, to make sure that servers don't start breaking old 3.0-only clients by accident. Finally this also restricts the minimum key length to 4 bytes (both in the protocol spec and in the libpq implementation). Author: Jelte Fennema-Nio <postgres@jeltef.nl> Reviewed-by: Jacob Champion <jchampion@postgresql.org> Discussion: https://www.postgresql.org/message-id/df892f9f-5923-4046-9d6f-8c48d8980b50@iki.fi Backpatch-through: 18
2025-08-22 | Doc: Fix typo in logicaldecoding.sgml. | Amit Kapila
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com> Backpatch-through: 17, where it was introduced Discussion: https://postgr.es/m/OSCPR01MB149662EC5467B4135398E3731F532A@OSCPR01MB14966.jpnprd01.prod.outlook.com
2025-08-22 | Ignore temporary relations in RelidByRelfilenumber() | Michael Paquier
Temporary relations may share the same RelFileNumber with a permanent relation, or other temporary relations associated with other sessions. Being able to uniquely identify a temporary relation would require RelidByRelfilenumber() to know about the proc number of the temporary relation it wants to identify, something it is not designed for since its introduction in f01d1ae3a104.

There are currently three callers of RelidByRelfilenumber():
- autoprewarm.
- Logical decoding, reorder buffer.
- pg_filenode_relation(), that attempts to find a relation OID based on a tablespace OID and a RelFileNumber.

This makes the situation problematic particularly for the first two cases, leading to the possibility of random ERRORs due to inconsistencies that temporary relations can create in the cache maintained by RelidByRelfilenumber(). The third case should be less of an issue, as I suspect that there are few direct callers of pg_filenode_relation().

The window where the ERRORs happen is very narrow, requiring an OID wraparound to create a lookup conflict in RelidByRelfilenumber() with a temporary table reusing the same OID as another relation already cached. The problem is easier to reach in workloads with a high OID consumption rate, especially with a higher number of temporary relations created.

We could get pg_filenode_relation() and RelidByRelfilenumber() to work with temporary relations if provided the means to identify them with an optional proc number given in input, but the years have also shown that we do not have a use case for it, yet. Note that this could not be backpatched if pg_filenode_relation() needs changes. It is simpler to ignore temporary relations.

Reported-by: Shenhao Wang <wangsh.fnst@fujitsu.com>
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-By: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Takamichi Osumi <osumi.takamichi@fujitsu.com>
Reviewed-By: Michael Paquier <michael@paquier.xyz>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/bbaaf9f9-ebb2-645f-54bb-34d6efc7ac42@fujitsu.com
Backpatch-through: 13
2025-08-21 | PL/Python: Add event trigger support | Peter Eisentraut
Allow event triggers to be written in PL/Python. It provides a TD dictionary with some information about the event trigger. Author: Euler Taveira <euler@eulerto.com> Co-authored-by: Dimitri Fontaine <dimitri@2ndQuadrant.fr> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/03f03515-2068-4f5b-b357-8fb540883c38%40app.fastmail.com
2025-08-21 | doc: Improve description of wal_compression | Michael Paquier
The description of this GUC provides a list of the situations where full-page writes are generated. However, it is not completely exact, mentioning only the cases where full_page_writes=on or base backups. It is possible to generate full-page writes in more situations than these two, making the description confusing as it implies that no other cases exist. The description is slightly reworded to take into account that other cases are possible, without mentioning them directly to minimize the maintenance burden should FPWs be generated in more contexts in the future. Author: Jingtang Zhang <mrdrivingduck@gmail.com> Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru> Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com> Discussion: https://postgr.es/m/CAPsk3_CtAYa_fy4p6=x7qtoutrdKvg1kGk46D5fsE=sMt2546g@mail.gmail.com Backpatch-through: 13
2025-08-20 | vacuumdb: Make vacuumdb --analyze-only process partitioned tables. | Fujii Masao
vacuumdb should follow the behavior of the underlying VACUUM and ANALYZE commands. When --analyze-only is used, it ought to analyze regular tables, materialized views, and partitioned tables, just as ANALYZE (with no explicit target tables) does. Otherwise, it should only process regular tables and materialized views, since VACUUM skips partitioned tables when no targets are given. Previously, vacuumdb --analyze-only skipped partitioned tables. This was inconsistent, and also inconvenient after pg_upgrade, where --analyze-only is typically used to gather missing statistics. This commit fixes the behavior so that vacuumdb --analyze-only also processes partitioned tables. As a result, both vacuumdb --analyze-only and ANALYZE (with no explicit targets) now analyze regular tables, partitioned tables, and materialized views, but not foreign tables. Because this is a nontrivial behavior change, it is applied only to master. Reported-by: Zechman, Derek S <Derek.S.Zechman@snapon.com> Author: Laurenz Albe <laurenz.albe@cybertec.at> Co-authored-by: Mircea Cadariu <cadariu.mircea@gmail.com> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/CO1PR04MB8281387B9AD9DE30976966BBC045A%40CO1PR04MB8281.namprd04.prod.outlook.com
2025-08-19 | doc: Improve pgoutput documentation. | Fujii Masao
This commit updates the pgoutput documentation with the following changes:

- Specify the data type for each pgoutput option.
- Clarify the relationship between proto_version and options such as streaming and two_phase.
- Add a note on the use of pg_logical_slot_peek_changes and pg_logical_slot_get_changes with pgoutput.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/CAHGQGwFJTbygdhhjR_iP4Oem=Lo1xsptWWOq825uoW+hG_Lfnw@mail.gmail.com
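
For context, a hedged sketch of consuming a pgoutput slot from SQL. Since pgoutput produces binary protocol messages, the binary variants of the peek/get functions are typically what you want; the slot name, publication name, and protocol version below are placeholders:

    -- Create a slot that uses the pgoutput plugin.
    SELECT pg_create_logical_replication_slot('pgout_slot', 'pgoutput');

    -- Peek at pending changes.  proto_version and publication_names are
    -- required pgoutput options.
    SELECT * FROM pg_logical_slot_peek_binary_changes(
        'pgout_slot', NULL, NULL,
        'proto_version', '4',
        'publication_names', 'mypub');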
2025-08-19 | doc: Improve documentation discoverability for pgoutput. | Fujii Masao
Previously, the documentation for pgoutput was located in the section on the logical streaming replication protocol, and there was no index entry for it. As a result, users had difficulty finding information about pgoutput. This commit moves the pgoutput documentation under the logical decoding section and adds an index entry, making it easier for users to locate and access this information. Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Euler Taveira <euler@eulerto.com> Discussion: https://postgr.es/m/CAHGQGwFJTbygdhhjR_iP4Oem=Lo1xsptWWOq825uoW+hG_Lfnw@mail.gmail.com
2025-08-13 | Grab the low-hanging fruit from forcing sizeof(Datum) to 8. | Tom Lane
Remove conditionally-compiled code for smaller Datum widths, and simplify comments that describe cases no longer of interest. I also fixed up a few more places that were not using DatumGetIntXX where they should, and made some cosmetic adjustments such as using sizeof(int64) not sizeof(Datum) in places where that fit better with the surrounding code. One thing I remembered while preparing this part is that SP-GiST stores pass-by-value prefix keys as Datums, so that the on-disk representation depends on sizeof(Datum). That's even more unfortunate than the existing commentary makes it out to be, because now there is a hazard that the change of sizeof(Datum) will break SP-GiST indexes on 32-bit machines. It appears that there are no existing SP-GiST opclasses that are actually affected; and if there are some that I didn't find, the number of installations that are using them on 32-bit machines is doubtless tiny. So I'm proceeding on the assumption that we can get away with this, but it's something to worry about. (gininsert.c looks like it has a similar problem, but it's okay because the "tuples" it's constructing are just transient data within the tuplesort step. That's pretty poorly documented though, so I added some comments.) Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Discussion: https://postgr.es/m/1749799.1752797397@sss.pgh.pa.us
2025-08-13 | Adjust some table column widths in PDF | Peter Eisentraut
Make some column widths more pleasing. Note: Some of this relies on the reduced body indents introduced by commit 37e06ba6e82. Author: Noboru Saito <noborusai@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CAAM3qnLyMUD79XF+SqAVwWCwURCF3hyuFY9Ki9Csbqs-zMwwnw@mail.gmail.com
2025-08-13 | Improve PDF documentation margins | Peter Eisentraut
Set body indent to 0 to make use of the horizontal space better (and some reviewers thought it was also more readable). Add some left and right margin to the warning boxes, otherwise they drift too far off the page in combination with the above change. Author: Noboru Saito <noborusai@gmail.com> Reviewed-by: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> Reviewed-by: Florents Tselai <florents.tselai@gmail.com> Reviewed-by: Tatsuo Ishii <ishii@postgresql.org> Discussion: https://www.postgresql.org/message-id/flat/CAAM3qnLyMUD79XF+SqAVwWCwURCF3hyuFY9Ki9Csbqs-zMwwnw@mail.gmail.com
2025-08-13 | Clean up order in stylesheet-fo.xsl | Peter Eisentraut
Make a separate section for release notes customization. Commits f986882ffd6 and 8a6e85b46e0 put those into the middle of unrelated things.
2025-08-11 | Restrict psql meta-commands in plain-text dumps. | Nathan Bossart
A malicious server could inject psql meta-commands into plain-text dump output (i.e., scripts created with pg_dump --format=plain, pg_dumpall, or pg_restore --file) that are run at restore time on the machine running psql. To fix, introduce a new "restricted" mode in psql that blocks all meta-commands (except for \unrestrict to exit the mode), and teach pg_dump, pg_dumpall, and pg_restore to use this mode in plain-text dumps. While at it, encourage users to only restore dumps generated from trusted servers or to inspect it beforehand, since restoring causes the destination to execute arbitrary code of the source superusers' choice. However, the client running the dump and restore needn't trust the source or destination superusers. Reported-by: Martin Rakhmanov Reported-by: Matthieu Denais <litezeraw@gmail.com> Reported-by: RyotaK <ryotak.mail@gmail.com> Suggested-by: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Security: CVE-2025-8714 Backpatch-through: 13
2025-08-07 | doc: add float as an alias for double precision. | Tom Lane
Although the "Floating-Point Types" section says that "float" data type is taken to mean "double precision", this information was not reflected in the data type table that lists all data type aliases. Reported-by: alexander.kjall@hafslund.no Author: Euler Taveira <euler@eulerto.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/175456294638.800.12038559679827947313@wrigleys.postgresql.org Backpatch-through: 13
2025-08-07 | doc: Formatting improvements | Peter Eisentraut
Small touch-up on commits 25505082f0e and 50fd428b2b9. Fix the formatting of the example messages in the documentation and adjust the wording to match the code.
2025-08-06 | doc: Recommend ANALYZE after ALTER TABLE ... SET EXPRESSION AS. | Fujii Masao
ALTER TABLE ... SET EXPRESSION AS removes statistics for the target column, so running ANALYZE afterward is recommended. But this was previously not documented, even though a similar recommendation exists for ALTER TABLE ... SET DATA TYPE, which also clears the column's statistics. This commit updates the documentation to include the ANALYZE recommendation for SET EXPRESSION AS. Since v18, virtual generated columns are supported, and these columns never have statistics. Therefore, ANALYZE is not needed after SET DATA TYPE or SET EXPRESSION AS when used on virtual generated columns. This commit also updates the documentation to clarify that ANALYZE is unnecessary in such cases. Back-patch the ANALYZE recommendation for SET EXPRESSION AS to v17 where the feature was introduced, and the note about virtual generated columns to v18 where those columns were added. Author: Yugo Nagata <nagata@sraoss.co.jp> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/20250804151418.0cf365bd2855d606763443fe@sraoss.co.jp Backpatch-through: 17
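
A hedged sketch of the recommended sequence on a stored generated column; the table and expressions are illustrative:

    CREATE TABLE orders (
        qty        int,
        unit_price numeric,
        total      numeric GENERATED ALWAYS AS (qty * unit_price) STORED
    );

    -- Changing the generation expression removes the column's statistics ...
    ALTER TABLE orders ALTER COLUMN total
        SET EXPRESSION AS (qty * unit_price * 1.1);

    -- ... so re-collect them afterwards.  (Not needed for virtual generated
    -- columns, which never have statistics.)
    ANALYZE orders;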
2025-08-05 | Put PG_TEST_EXTRA doc items back in alphabetical order | Álvaro Herrera
A few items appear to have been added in random order over the years.
2025-08-05 | Hide expensive pg_upgrade test behind PG_TEST_EXTRA | Álvaro Herrera
This new test is very expensive. Make it opt-in. Discussion: https://postgr.es/m/202508051433.ebznuqrxt4b2@alvherre.pgsql
2025-08-05 | Add backup_type column to pg_stat_progress_basebackup. | Masahiko Sawada
This commit introduces a new column backup_type that indicates the type of backup being performed: either 'full' or 'incremental'. Bump catalog version. Author: Shinya Kato <shinya11.kato@gmail.com> Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp> Discussion: https://postgr.es/m/CAOzEurQuzbHwTj1ehk1a+eeQDidJPyrE5s6mYumkjwjZnurhkQ@mail.gmail.com
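
A simple monitoring query using the new column; backup_type comes from the commit message, the other columns are long-standing parts of the view:

    SELECT pid, phase, backup_type,
           backup_streamed, backup_total
    FROM pg_stat_progress_basebackup;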
2025-08-05 | Convert varatt.h access macros to static inline functions. | Peter Eisentraut
We've only bothered converting the external interfaces, not the endian-dependent internal macros (which should not be used by any callers other than the interface functions in this header, anyway). The VARTAG_1B_E() changes are required for C++ compatibility. Author: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/928ea48f-77c6-417b-897c-621ef16685a6@eisentraut.org
2025-08-04 | Split func.sgml into more manageable pieces | Andrew Dunstan
func.sgml has grown over the years to the point where it is very difficult to manage. This commit splits out each sect1 piece into its own file, which is then included in the main file, so that the built documentation should be identical to the pre-split documentation. All these new files are placed in a new "func" subdirectory, and the previous func.sgml is removed. Done using scripts developed by: Author: jian he <jian.universality@gmail.com> Discussion: https://postgr.es/m/CACJufxFgAh1--EMwOjMuANe=VTmjkNaZjH+AzSe04-8ZCGiESA@mail.gmail.com
2025-08-04 | Rename XLogData protocol message to WALData | Álvaro Herrera
This name is only used as documentation, and using this name is consistent with its byte being a 'w'. Renaming it would also make the use of a symbolic name based on the word "WAL" rather than the obsolete "XLog" term more consistent, per future commits along the lines of 37c7a7eeb6d1, 4a68d5008869, f4b54e1ed985. Discussion: https://postgr.es/m/aIECfYfevCUpenBT@nathan
2025-08-04 | doc: mention unusability of dropped CHECK to verify NOT NULL | Álvaro Herrera
It's possible to use a CHECK (col IS NOT NULL) constraint to skip scanning a table for nulls when adding a NOT NULL constraint on the same column. However, if the CHECK constraint is dropped on the same command that the NOT NULL is added, this fails, i.e., makes the NOT NULL addition slow. The best we can do about it at this stage is to document this so that users aren't taken by surprise. (In Postgres 18 you can directly add the NOT NULL constraint as NOT VALID instead, so there's no longer much use for the CHECK constraint, therefore no point in building mechanism to support the case better.) Reported-by: Andrew <psy2000usa@yahoo.com> Reviewed-by: David G. Johnston <david.g.johnston@gmail.com> Discussion: https://postgr.es/m/175385113607.786.16774570234342968908@wrigleys.postgresql.org
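
A hedged sketch of the pattern being documented; the table and column names are placeholders. The point is that the CHECK constraint must still exist when the NOT NULL constraint is added, and should only be dropped in a later command:

    -- Typical pattern: validate the CHECK first (VALIDATE CONSTRAINT needs a
    -- weaker lock), so the subsequent SET NOT NULL can skip scanning the table.
    ALTER TABLE t ADD CONSTRAINT t_col_not_null CHECK (col IS NOT NULL) NOT VALID;
    ALTER TABLE t VALIDATE CONSTRAINT t_col_not_null;
    ALTER TABLE t ALTER COLUMN col SET NOT NULL;

    -- Drop the now-redundant CHECK afterwards, in a separate command; dropping
    -- it in the same command as the NOT NULL addition forces a full scan.
    ALTER TABLE t DROP CONSTRAINT t_col_not_null;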
2025-08-04 | Detect and report update_deleted conflicts. | Amit Kapila
This enhancement builds upon the infrastructure introduced in commit 228c370868, which enables the preservation of deleted tuples and their origin information on the subscriber. This capability is crucial for handling concurrent transactions replicated from remote nodes.

The update introduces support for detecting update_deleted conflicts during the application of update operations on the subscriber. When an update operation fails to locate the target row, typically because it has been concurrently deleted, we perform an additional table scan. This scan uses the SnapshotAny mechanism, and we do this additional scan only when the retain_dead_tuples option is enabled for the relevant subscription.

The goal of this scan is to locate the most recently deleted tuple, matching the old column values from the remote update, that has not yet been removed by VACUUM and is still visible according to our slot (i.e., its deletion is not older than the conflict-detection slot's xmin). If such a tuple is found, the system reports an update_deleted conflict, including the origin and transaction details responsible for the deletion.

This provides the groundwork for a more robust and accurate conflict resolution process, preventing unexpected behavior by correctly identifying cases where a remote update clashes with a deletion from another origin.

Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
2025-08-02 | Simplify options in pg_dump and pg_restore. | Jeff Davis
Remove redundant options --with-data and --with-schema, and rename --with-statistics to just --statistics. Reviewed-by: Nathan Bossart <nathandbossart@gmail.com> Reviewed-by: Fujii Masao <masao.fujii@gmail.com> Discussion: https://postgr.es/m/f379d0aeefe8effe13302a436bc28f549f09e924.camel@j-davis.com Backpatch-through: 18
2025-08-02 | Doc: clarify the restrictions of AFTER triggers with transition tables. | Etsuro Fujita
It was not very clear that the triggers are only allowed on plain tables (not foreign tables). Also, rephrase the documentation for better readability. Follow up to commit 9e6104c66. Reported-by: Etsuro Fujita <etsuro.fujita@gmail.com> Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com> Discussion: https://postgr.es/m/CAPmGK16XBs9ptNr8Lk4f-tJZogf6y-Prz%3D8yhvJbb_4dpsc3mQ%40mail.gmail.com Backpatch-through: 13
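
For reference, a minimal transition-table trigger of the kind these restrictions apply to; the table, function, and trigger names are illustrative, and the target must be a plain table:

    CREATE TABLE audit_src (id int, val text);   -- plain table only

    CREATE FUNCTION log_changes() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'rows inserted: %', (SELECT count(*) FROM new_rows);
        RETURN NULL;
    END;
    $$;

    CREATE TRIGGER audit_src_ins
        AFTER INSERT ON audit_src
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE FUNCTION log_changes();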
2025-08-01 | libpq: Complain about missing BackendKeyData later with PQgetCancel() | Heikki Linnakangas
PostgreSQL always sends the BackendKeyData message at connection startup, but there are some third party backend implementations out there that don't support cancellation, and don't send the message [1]. While the protocol docs left it up for interpretation if that is valid behavior, libpq in PostgreSQL 17 and below accepted it. It does not seem like the libpq behavior was intentional though, since it did so by sending CancelRequest messages with all zeros to such servers (instead of returning an error or making the cancel a no-op).

In version 18 the behavior was changed to return an error when trying to create the cancel object with PQgetCancel() or PQcancelCreate(). This was done without any discussion, as part of supporting different lengths of cancel packets for the new 3.2 version of the protocol.

This commit changes the behavior of PQgetCancel() / PQcancel() once more to only return an error when the cancel object is actually used to send a cancellation, instead of when merely creating the object. The reason to do so is that some clients [2] create a cancel object as part of their connection creation logic (thus having the cancel object ready for later when they need it), so if creating the cancel object returns an error, the whole connection attempt fails. By delaying the error, such clients will still be able to connect to the third party backend implementations in question, but when actually trying to cancel a query, the user will be notified that that is not possible for the server that they are connected to.

This commit only changes the behavior of the older PQgetCancel() / PQcancel() functions, not the more modern PQcancelCreate() family of functions. I.e. PQcancelCreate() returns a failed connection object (CONNECTION_BAD) if the server didn't send a cancellation key. Unlike the old PQgetCancel() function, we're not aware of any clients in the field that use PQcancelCreate() during connection startup in a way that would prevent connecting to such servers.

[1] AWS RDS Proxy is definitely one of them, and CockroachDB might be another.
[2] psycopg2 (but not psycopg3).

Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Backpatch-through: 18
Discussion: https://www.postgresql.org/message-id/20250617.101056.1437027795118961504.ishii%40postgresql.org
2025-07-31 | pg_stat_statements: Add counters for generic and custom plans | Michael Paquier
This patch adds two new counters to pg_stat_statements:
- generic_plan_calls
- custom_plan_calls

These counters track how many times a prepared statement was executed using a generic or custom plan, respectively, providing a global equivalent at query level, for top and non-top levels, of pg_prepared_statements, whose data is restricted to a single session.

This commit builds upon e125e360020a. The module is bumped to version 1.13. PGSS_FILE_HEADER is bumped as well, something that the latest patches touching the on-disk format of the PGSS file did not actually bother with since 2022.

Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Nikolay Samokhvalov <nik@postgres.ai>
Discussion: https://postgr.es/m/CAA5RZ0uFw8Y9GCFvafhC=OA8NnMqVZyzXPfv_EePOt+iv1T-qQ@mail.gmail.com
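
A hedged example of reading the new counters after upgrading the extension; the column names and module version come from the commit message above:

    ALTER EXTENSION pg_stat_statements UPDATE TO '1.13';

    SELECT query, calls, generic_plan_calls, custom_plan_calls
    FROM pg_stat_statements
    WHERE calls > 0
    ORDER BY calls DESC
    LIMIT 10;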