path: root/src/include
Age  Commit message  Author
2025-08-01  Fix a deadlock during ALTER SUBSCRIPTION ... DROP PUBLICATION.  (Amit Kapila)
A deadlock can occur when the DDL command and the apply worker acquire catalog locks in different orders while dropping replication origins. The issue is rare in PG16 and higher branches because, in most cases, the tablesync worker performs the origin drop in those branches, and its locking sequence does not conflict with DDL operations. This patch ensures consistent lock acquisition to prevent such deadlocks. As per buildfarm. Reported-by: Alexander Lakhin <exclusion@gmail.com> Author: Ajin Cherian <itsajin@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: vignesh C <vignesh21@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Backpatch-through: 14, where it was introduced Discussion: https://postgr.es/m/bab95e12-6cc5-4ebb-80a8-3e41956aa297@gmail.com
2025-07-23  Fix build breakage on Solaris-alikes with late-model GCC.  (Tom Lane)
Solaris has never bothered to add "const" to the second argument of PAM conversation procs, as all other Unixen did decades ago. This resulted in an "incompatible pointer" compiler warning when building --with-pam, but had no more serious effect than that, so we never did anything about it. However, as of GCC 14 this case is an error, not a warning, by default. To complicate matters, recent OpenIndiana (and maybe illumos in general?) *does* supply the "const" by default, so we can't just assume that platforms using our solaris template need help. What we can do, short of building a configure-time probe, is to make solaris.h #define _PAM_LEGACY_NONCONST, which causes OpenIndiana's pam_appl.h to revert to the traditional definition, and hopefully will have no effect anywhere else. Then we can use that same symbol to control whether we include "const" in the declaration of pam_passwd_conv_proc(). Bug: #18995 Reported-by: Andrew Watkins <awatkins1966@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/18995-82058da9ab4337a7@postgresql.org Backpatch-through: 13
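For illustration, the declaration-side effect might look like the sketch below; _PAM_LEGACY_NONCONST comes from the commit, while the PG_PAM_CONST helper macro name is an assumption used here to show the mechanism.

    /* on Solaris, solaris.h has already #defined _PAM_LEGACY_NONCONST here */
    #include <security/pam_appl.h>

    #ifdef _PAM_LEGACY_NONCONST
    #define PG_PAM_CONST            /* legacy Solaris prototype: no const */
    #else
    #define PG_PAM_CONST const
    #endif

    static int pam_passwd_conv_proc(int num_msg,
                                    PG_PAM_CONST struct pam_message **msg,
                                    struct pam_response **resp,
                                    void *appdata_ptr);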
2025-07-01  Update comment for IndexInfo.ii_NullsNotDistinct  (Peter Eisentraut)
Commit 7a7b3e11e61 added the ii_NullsNotDistinct field, but the comment was not updated. Author: Japin Li <japinli@hotmail.com> Reviewed-by: Richard Guo <guofenglinux@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-06-16  Fix re-distributing previously distributed invalidation messages during logical decoding.  (Masahiko Sawada)
Commit 4909b38af0 introduced logic to distribute invalidation messages from catalog-modifying transactions to all concurrent in-progress transactions. However, since each transaction distributes not only its original invalidation messages but also previously distributed messages to other transactions, this leads to an exponential increase in allocation request size for invalidation messages, ultimately causing memory allocation failure. This commit fixes this issue by tracking distributed invalidation messages separately per decoded transaction and not redistributing these messages to other in-progress transactions. The maximum size of distributed invalidation messages that one transaction can store is limited to MAX_DISTR_INVAL_MSG_PER_TXN (8MB). Once the size of the distributed invalidation messages exceeds this threshold, we invalidate all caches in locations where distributed invalidation messages need to be executed. Back-patch to all supported versions that received the fix in commit 4909b38af0. Note that this commit adds two new fields to ReorderBufferTXN to store the distributed invalidation messages. This change breaks ABI compatibility in back branches, affecting third-party extensions that depend on the size of the ReorderBufferTXN struct, though this scenario seems unlikely. Additionally, it adds a new flag to the txn_flags field of ReorderBufferTXN to indicate distributed invalidation message overflow. This should not affect existing implementations, as it is unlikely that third-party extensions use unused bits in the txn_flags field. Bug: #18938 #18942 Author: vignesh C <vignesh21@gmail.com> Reported-by: Duncan Sands <duncan.sands@deepbluecap.com> Reported-by: John Hutchins <john.hutchins@wicourts.gov> Reported-by: Laurence Parry <greenreaper@hotmail.com> Reported-by: Max Madden <maxmmadden@gmail.com> Reported-by: Braulio Fdo Gonzalez <brauliofg@gmail.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Discussion: https://postgr.es/m/680bdaf6-f7d1-4536-b580-05c2760c67c6@deepbluecap.com Discussion: https://postgr.es/m/18942-0ab1e5ae156613ad@postgresql.org Discussion: https://postgr.es/m/18938-57c9a1c463b68ce0@postgresql.org Discussion: https://postgr.es/m/CAD1FGCT2sYrP_70RTuo56QTizyc+J3wJdtn2gtO3VttQFpdMZg@mail.gmail.com Discussion: https://postgr.es/m/CANO2=B=2BT1hSYCE=nuuTnVTnjidMg0+-FfnRnqM6kd23qoygg@mail.gmail.com Backpatch-through: 13
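A sketch of the overflow guard described above; MAX_DISTR_INVAL_MSG_PER_TXN is named in the commit message, while the field and flag names below (ninvalidations_distributed, RBTXN_DISTR_INVAL_OVERFLOWED) are assumptions for illustration only.

    /* 8MB worth of messages, expressed as a message count (assumed) */
    #define MAX_DISTR_INVAL_MSG_PER_TXN \
        ((8 * 1024 * 1024) / sizeof(SharedInvalidationMessage))

    static void
    accumulate_distributed_invals(ReorderBufferTXN *txn, int nmsgs,
                                  SharedInvalidationMessage *msgs)
    {
        if (txn->ninvalidations_distributed + nmsgs > MAX_DISTR_INVAL_MSG_PER_TXN)
        {
            /* overflow: stop queuing; invalidate all caches at exec time */
            txn->txn_flags |= RBTXN_DISTR_INVAL_OVERFLOWED;
            return;
        }
        /* otherwise append msgs to the transaction's distributed queue */
    }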
2025-05-18  Make our usage of memset_s() conform strictly to the C11 standard.  (Tom Lane)
Per the letter of the C11 standard, one must #define __STDC_WANT_LIB_EXT1__ as 1 before including <string.h> in order to have access to memset_s(). It appears that many platforms are lenient about this, because we weren't doing it and yet the code appeared to work anyway. But we now find that with -std=c11, macOS is strict and doesn't declare memset_s, leading to compile failures since we try to use it anyway. (Given the lack of prior reports, perhaps this is new behavior in the latest SDK? No matter, we're clearly in the wrong.) In addition to the immediate problem, which could be fixed merely by adding the needed #define to explicit_bzero.c, it seems possible that our configure-time probe for memset_s() could fail in case a platform implements the function in some odd way due to this spec requirement. This concern can be fixed in largely the same way that we dealt with strchrnul() in 6da2ba1d8: switch to using a declaration-based configure probe instead of a does-it-link probe. Back-patch to v13 where we started using memset_s(). Reported-by: Lakshmi Narayana Velayudam <dev.narayana.v@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CAA4pTnLcKGG78xeOjiBr5yS7ZeE-Rh=FaFQQGOO=nPzA1L8yEA@mail.gmail.com Backpatch-through: 13
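The Annex K rule is easy to demonstrate in a standalone translation unit; this minimal example is a sketch of the requirement the commit describes, not PostgreSQL code:

    #define __STDC_WANT_LIB_EXT1__ 1    /* must precede <string.h> */
    #include <string.h>

    int
    main(void)
    {
        char        secret[16] = "wipe me";

    #ifdef __STDC_LIB_EXT1__
        /* unlike plain memset(), memset_s() may not be optimized away */
        memset_s(secret, sizeof(secret), 0, sizeof(secret));
    #endif
        return 0;
    }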
2025-05-09  Ooops ... add required configure support.  (Tom Lane)
The previous commit assumed we have a configure probe for SSL_CTX_set_cert_cb. v15 lacks that, so add it now. (Details borrowed from 36f40ce2d.) Reported-by: Thomas Munro <thomas.munro@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> Discussion: https://postgr.es/m/CA+hUKG+fLqyweHqFSBcErueUVT0vDuSNWui-ySz3+d_APmq7dw@mail.gmail.com Backpatch-through: 15 only
2025-05-05  With GB18030, prevent SIGSEGV from reading past end of allocation.  (Noah Misch)
With GB18030 as source encoding, applications could crash the server via SQL functions convert() or convert_from(). Applications themselves could crash after passing unterminated GB18030 input to libpq functions PQescapeLiteral(), PQescapeIdentifier(), PQescapeStringConn(), or PQescapeString(). Extension code could crash by passing unterminated GB18030 input to jsonapi.h functions. All those functions have been intended to handle untrusted, unterminated input safely. A crash required allocating the input such that the last byte of the allocation was the last byte of a virtual memory page. Some malloc() implementations take measures against that, making the SIGSEGV hard to reach. Back-patch to v13 (all supported versions). Author: Noah Misch <noah@leadboat.com> Author: Andres Freund <andres@anarazel.de> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Backpatch-through: 13 Security: CVE-2025-4207
2025-04-11  Fix race with synchronous_standby_names at startup  (Michael Paquier)
synchronous_standby_names cannot be reloaded safely by backends, and the checkpointer is in charge of updating a state in shared memory (in WalSndCtl) if the GUC is enabled, to let the backends know if they should wait or not for a given LSN. This provides strict control over the timing of the waiting queues if the GUC is enabled or disabled, then reloaded. The checkpointer is also in charge of waking up the backends that could be waiting for an LSN when the GUC is disabled. This logic had a race condition at startup, where it would be possible for backends to not wait for an LSN even if synchronous_standby_names is enabled. This could cause visibility issues with transactions that should have been waited for but were not. The problem lasts until the checkpointer does its initial update of the shared memory state when it loads synchronous_standby_names. In order to take care of this problem, the shared memory state in WalSndCtl is extended to detect if it has been initialized by the checkpointer, and not only check if synchronous_standby_names is defined. In WalSndCtlData, sync_standbys_defined is renamed to sync_standbys_status, a bits8 tracking two states:
- Whether the shared memory state has been initialized. This flag is set by the checkpointer at startup once, and never removed.
- Whether synchronous_standby_names is known as defined in the shared memory state. This is the same as the previous sync_standbys_defined in WalSndCtl.
This method gives backends a way to decide what they should do until the shared memory area is initialized: they now ultimately fall back to a check on the GUC value in this case, which is the best that can be done. Fortunately, SyncRepUpdateSyncStandbysDefined() is called immediately by the checkpointer when this process starts, so the window is very narrow. It is possible to enlarge the problematic window by making the checkpointer wait at the beginning of SyncRepUpdateSyncStandbysDefined() with a hardcoded sleep, for example; doing so has shown that a 2PC visibility test does indeed fail. On machines slow enough, this bug would cause spurious failures. In 17~, we have looked at the possibility of adding an injection point to have a reproducible test, but as the problematic window happens at early startup, we would need to invent a way to make an injection point optionally persistent across restarts when attached, something that would be fine for this case as it would involve the checkpointer. This issue is quite old, and can be reproduced on all the stable branches. Author: Melnikov Maksim <m.melnikov@postgrespro.ru> Co-authored-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/163fcbec-900b-4b07-beaa-d2ead8634bec@postgrespro.ru Backpatch-through: 13
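An illustrative layout of the two states (the flag meanings follow the commit's description; the exact names, values, and the helper function are assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint8_t bits8;              /* as in PostgreSQL's c.h */

    #define SYNC_STANDBY_INIT       ((bits8) (1 << 0))  /* set once by checkpointer */
    #define SYNC_STANDBY_DEFINED    ((bits8) (1 << 1))  /* GUC known as defined */

    /* what a backend about to wait for an LSN can now decide */
    static bool
    backend_should_wait(bits8 status, bool guc_defined)
    {
        if ((status & SYNC_STANDBY_INIT) == 0)
            return guc_defined;         /* fall back to the GUC value */
        return (status & SYNC_STANDBY_DEFINED) != 0;
    }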
2025-04-10  Fix data loss in logical replication.  (Amit Kapila)
Data loss can happen when DDLs like ALTER PUBLICATION ... ADD TABLE ... or ALTER TYPE ..., which don't take a strong lock on the table, happen concurrently with DMLs on the tables involved in the DDL. This happens because logical decoding doesn't distribute invalidations to concurrent transactions, and those transactions use stale cache data to decode the changes. The problem is made worse because we keep using the stale cache even after those in-progress transactions have finished, skipping changes that should be sent to the client. This commit fixes the issue by distributing invalidation messages from catalog-modifying transactions to all concurrent in-progress transactions. This allows the necessary rebuild of the catalog cache when decoding new changes after concurrent DDL. We observed performance regression primarily during frequent execution of *publication DDL* statements that modify the published tables. The regression is minor or nearly nonexistent for DDLs that do not affect the published tables or occur infrequently, making this a worthwhile cost to resolve a longstanding data loss issue. An alternative approach considered was to take a strong lock on each affected table during publication modification. However, this would only address issues related to publication DDLs (but not ALTER TYPE ...) and would require locking every relation in the database for publications created as FOR ALL TABLES, which is impractical. The bug exists in all supported branches, but we are back-patching only down to 14. The fix for 13 requires somewhat bigger changes than this fix, so the fix for that branch is still under discussion. Reported-by: hubert depesz lubaczewski <depesz@depesz.com> Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com> Author: Shlok Kyal <shlok.kyal.oss@gmail.com> Author: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Tested-by: Benoit Lobréau <benoit.lobreau@dalibo.com> Backpatch-through: 14 Discussion: https://postgr.es/m/de52b282-1166-1180-45a2-8d8917ca74c6@enterprisedb.com Discussion: https://postgr.es/m/CAD21AoAenVqiMjpN-PvGHL1N9DWnHSq673bfgr6phmBUzx=kLQ@mail.gmail.com
2025-04-03  Make dblink interruptible, via new libpqsrv APIs.  (Noah Misch)
This replaces dblink's blocking libpq calls, allowing cancellation and allowing DROP DATABASE (of a database not involved in the query). Apart from explicit dblink_cancel_query() calls, dblink still doesn't cancel the remote side. The replacement for the blocking calls consists of new, general-purpose query execution wrappers in the libpqsrv facility. Out-of-tree extensions should adopt these. The original commit d3c5f37dd543498cc7c678815d3921823beec9e9 did not back-patch. Back-patch now to v16-v13, bringing coverage to all supported versions. This back-patch omits the original's refactoring in postgres_fdw. Discussion: https://postgr.es/m/20231122012945.74@rfd.leadboat.com
2025-04-03  Add helper library for use of libpq inside the server environment  (Andres Freund)
Currently dblink and postgres_fdw don't process interrupts during connection establishment. Besides preventing query cancellations etc., this can lead to undetected deadlocks, as global barriers are not processed. Libpqwalreceiver, in contrast, processes interrupts during connection establishment. The required code is not trivial, so duplicating it into additional places does not seem like a good option. These aforementioned undetected deadlocks are the reason for the spate of CI test failures in the FreeBSD 'test_running' step. For now the helper library is just a header, as it needs to be linked into each extension using libpq, and it seems too small to be worth adding a dedicated static library for. The conversions to the helper are done in subsequent commits. Reviewed-by: Thomas Munro <thomas.munro@gmail.com> Discussion: https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2@awork3.anarazel.de
2025-04-01  Fix detection and handling of strchrnul() for macOS 15.4.  (Tom Lane)
As of 15.4, macOS has strchrnul(), but access to it is blocked behind a check for MACOSX_DEPLOYMENT_TARGET >= 15.4. But our does-it-link configure check finds it, so we try to use it, and fail with the present default deployment target (namely 15.0). This accounts for today's buildfarm failures on indri and sifaka. This is the identical problem that we faced some years ago when Apple introduced preadv and pwritev in the same way. We solved that in commit f014b1b9b by using AC_CHECK_DECLS instead of AC_CHECK_FUNCS to check the functions' availability. So do the same now for strchrnul(). Interestingly, we already had a workaround for "the link check doesn't agree with <string.h>" cases with glibc, which we no longer need since only the header declaration is being checked. Testing this revealed that the meson version of this check has never worked, because it failed to use "-Werror=unguarded-availability-new". (Apparently nobody's tried to build with meson on macOS versions that lack preadv/pwritev as standard.) Adjust that while at it. Also, we had never put support for "-Werror=unguarded-availability-new" into v13, but we need that now. Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Co-authored-by: Peter Eisentraut <peter@eisentraut.org> Discussion: https://postgr.es/m/385134.1743523038@sss.pgh.pa.us Backpatch-through: 13
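With a declaration probe, HAVE_DECL_STRCHRNUL is always defined as 1 or 0, so a fallback can be compiled in exactly when the header lacks the declaration. A sketch of that pattern (the pg_strchrnul name is illustrative):

    #include <string.h>

    #if !HAVE_DECL_STRCHRNUL
    static inline const char *
    pg_strchrnul(const char *s, int c)
    {
        while (*s != '\0' && *s != (char) c)
            s++;
        return s;
    }
    #define strchrnul(s, c) pg_strchrnul((s), (c))
    #endif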
2025-03-13  Repair commits 317aba70e et al for -DWRITE_READ_PARSE_PLAN_TREES.  (Tom Lane)
Letting the rewriter keep RangeTblEntry.relid when expanding a view RTE, without making the outfuncs/readfuncs changes that went along with that originally, is more problematic than I realized. It causes WRITE_READ_PARSE_PLAN_TREES testing to fail because outfuncs/readfuncs don't think relid need be saved in an RTE_SUBQUERY RTE. There doesn't seem to be any other good route to fixing the whole-row Var problem solved at f4e7756ef, so we just have to deal with the consequences. We can make the eventually-produced plan tree safe for WRITE_READ_PARSE_PLAN_TREES by clearing the relid field at the end of planning, as was already being done for the functions field. (The functions field is not problematic here because our abuse of it is strictly local to the planner.) However, there is no nice fix for the post-rewrite WRITE_READ_PARSE_PLAN_TREES test. The solution adopted here is to remove the post-rewrite test in the affected branches. That's surely less than ideal, but a couple of arguments can be made why it's not unacceptable. First, the behavior of outfuncs/readfuncs for parsetrees in these branches is frozen no matter what, because of catalog stability requirements. So we're not testing anything that is going to change. Second, testing WRITE_READ_PARSE_PLAN_TREES at this particular time doesn't correspond to any direct system functionality requirement, neither rule storage nor plan transmission. Reported-by: Andres Freund <andres@anarazel.de> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com> Discussion: https://postgr.es/m/3518c50a-ab18-482f-b916-a37263622501@deepbluecap.com Backpatch-through: 13-15
2025-03-12  Preserve RangeTblEntry.relid when expanding a view RTE.  (Tom Lane)
When the rewriter converts an RTE_RELATION RTE for a view into an RTE_SUBQUERY RTE containing the view's defining query, leave the relid field alone instead of zeroing it. This allows the planner to tell that the subquery came from a view rather than having been written in-line, which is needed to support an upcoming planner bug fix. We cannot change the behavior of the outfuncs/readfuncs code in released branches, so the relid field will not survive in plans passed to parallel workers; therefore this info should not be relied on in the executor. This back-patches a portion of the data structure definitional changes made in v16 and up by commit 47bb9db75. It's being committed separately for visibility in the commit log, but with luck it will not actually matter to anyone. While it's not inconceivable that this change will break code expecting relid to be zero in a subquery RTE, we can hope that such code has already been adjusted to cope with v16 and up. Reported-by: Duncan Sands <duncan.sands@deepbluecap.com> Diagnosed-by: Tender Wang <tndrwang@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com> Discussion: https://postgr.es/m/3518c50a-ab18-482f-b916-a37263622501@deepbluecap.com Backpatch-through: 13-15
2025-03-03  Fix broken handling of domains in atthasmissing logic.  (Tom Lane)
If a domain type has a default, adding a column of that type (without any explicit DEFAULT clause) failed to install the domain's default value in existing rows, instead leaving the new column null. This is unexpected, and it used to work correctly before v11. The cause is confusion in the atthasmissing mechanism about which default value to install: we'd only consider installing an explicitly-specified default, and then we'd decide that no table rewrite is needed. To fix, take the responsibility for filling attmissingval out of StoreAttrDefault, and instead put it into ATExecAddColumn's existing logic that derives the correct value to fill the new column with. Also, centralize the logic that determines the need for default-related table rewriting there, instead of spreading it over four or five places. In the back branches, we'll leave the attmissingval-filling code in StoreAttrDefault even though it's now dead, for fear that some extension may be depending on that functionality to exist there. A separate HEAD-only patch will clean up the now-useless code. Reported-by: jian he <jian.universality@gmail.com> Author: jian he <jian.universality@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/CACJufxHFssPvkP1we7WMhPD_1kwgbG52o=kQgL+TnVoX5LOyCQ@mail.gmail.com Backpatch-through: 13
2025-02-10  Specify the encoding of input to fmtId()  (Andres Freund)
This commit adds fmtIdEnc() and fmtQualifiedIdEnc(), which allow the encoding to be specified as an explicit argument. Additionally setFmtEncoding() is provided, which defines the encoding to use when no explicit encoding is provided, to avoid breaking all code using fmtId(). All users of fmtId()/fmtQualifiedId() are either converted to the explicit version or a call to setFmtEncoding() has been added. This commit does not yet utilize the now well-defined encoding; that will happen in a subsequent commit. Reviewed-by: Noah Misch <noah@leadboat.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Backpatch-through: 13 Security: CVE-2025-1094
2025-02-10  Add pg_encoding_set_invalid()  (Andres Freund)
There are cases where we cannot / do not want to error out for invalidly encoded input. In such cases it can be useful to replace e.g. an incomplete multi-byte character with bytes that will trigger an error when getting validated as part of a larger string. Unfortunately, until now, for some encodings no such sequence existed. For those encodings this commit removes one previously accepted input combination; we consider that to be OK, as the chosen bytes are outside of the valid ranges for those encodings, and we just previously failed to detect that. As we cannot add a new field to pg_wchar_table without breaking ABI, this is implemented "in-line" in the newly added function. Author: Noah Misch <noah@leadboat.com> Reviewed-by: Andres Freund <andres@anarazel.de> Backpatch-through: 13 Security: CVE-2025-1094
2025-01-29  Handle default NULL insertion a little better.  (Tom Lane)
If a column is omitted in an INSERT, and there's no column default, the code in preptlist.c generates a NULL Const to be inserted. Furthermore, if the column is of a domain type, we wrap the Const in CoerceToDomain, so as to throw a run-time error if the domain has a NOT NULL constraint. That's fine as far as it goes, but there are two problems:
1. We're being sloppy about the type/typmod that the Const is labeled with. It really should have the domain's base type/typmod, since it's the input to CoerceToDomain not the output. This can result in coerce_to_domain inserting a useless length-coercion function (useless because it's being applied to a null). The coercion would typically get const-folded away later, but it'd be better not to create it in the first place.
2. We're not applying expression preprocessing (specifically, eval_const_expressions) to the resulting expression tree. The planner's primary expression-preprocessing pass already happened, so that means the length coercion step and CoerceToDomain node miss preprocessing altogether. This is at the least inefficient, since it means the length coercion and CoerceToDomain will actually be executed for each inserted row, though they could be const-folded away in most cases. Worse, it seems possible that missing preprocessing for the length coercion could result in an invalid plan (for example, due to failing to perform default-function-argument insertion). I'm not aware of any live bug of that sort with core datatypes, and it might be unreachable for extension types as well because of restrictions of CREATE CAST, but I'm not entirely convinced that it's unreachable. Hence, it seems worth back-patching the fix (although I only went back to v14, as the patch doesn't apply cleanly at all in v13).
There are several places in the rewriter that are building null domain constants the same way as preptlist.c. While those are before the planner and hence don't have any reachable bug, they're still applying a length coercion that will be const-folded away later, uselessly wasting cycles. Hence, make a utility routine that all of these places can call to do it right. Making this code more careful about the typmod assigned to the generated NULL constant has visible but cosmetic effects on some of the plans shown in contrib/postgres_fdw's regression tests. Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us Backpatch-through: 14
2025-01-29  Avoid breaking SJIS encoding while de-backslashing Windows paths.  (Tom Lane)
When running on Windows, canonicalize_path() converts '\' to '/' to prevent confusing the Windows command processor. It was doing that in a non-encoding-aware fashion; but in SJIS there are valid two-byte characters whose second byte matches '\'. So encoding corruption ensues if such a character is used in the path. We can fairly easily fix this if we know which encoding is in use, but a lot of our utilities don't have much of a clue about that. After some discussion we decided we'd settle for fixing this only in psql, and assuming that its value of client_encoding matches what the user is typing. It seems hopeless to get the server to deal with the problematic characters in database path names, so we'll just declare that case to be unsupported. That means nothing need be done in the server, nor in utility programs whose only contact with file path names is for database paths. But psql frequently deals with client-side file paths, so it'd be good if it didn't mess those up. Bug: #18735 Reported-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com> Discussion: https://postgr.es/m/18735-4acdb3998bb9f2b1@postgresql.org Backpatch-through: 13
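A sketch of an encoding-aware scan for this job (pg_encoding_mblen() is the existing multibyte-length helper; the wrapper function itself is illustrative, not the committed code):

    #include "mb/pg_wchar.h"            /* pg_encoding_mblen() */

    static void
    debackslash_enc(char *path, int encoding)
    {
        char       *p = path;

        while (*p)
        {
            int         len = pg_encoding_mblen(encoding, p);

            /* only a true single-byte backslash is a separator */
            if (len == 1 && *p == '\\')
                *p = '/';
            p += len;               /* skip whole characters, e.g. SJIS pairs */
        }
    }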
2025-01-25  At update of non-LP_NORMAL TID, fail instead of corrupting page header.  (Noah Misch)
The right mix of DDL and VACUUM could corrupt a catalog page header such that PageIsVerified() durably fails, requiring a restore from backup. This affects only catalogs that both have a syscache and have DDL code that uses syscache tuples to construct updates. One of the test permutations shows a variant not yet fixed. This makes !TransactionIdIsValid(TM_FailureData.xmax) possible with TM_Deleted. I think core and PGXN are indifferent to that. Per bug #17821 from Alexander Lakhin. Back-patch to v13 (all supported versions). The test case is v17+, since it uses INJECTION_POINT. Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org
2025-01-20  Avoid using timezone Asia/Manila in regression tests.  (Tom Lane)
The freshly-released 2025a version of tzdata has a refined estimate for the longitude of Manila, changing their value for LMT in pre-standardized-timezone days. This changes the output of one of our test cases. Since we need to be able to run with system tzdata files that may or may not contain this update, we'd better stop making that specific test. I switched it to use Asia/Singapore, which has a roughly similar UTC offset. That LMT value hasn't changed in tzdb since 2003, so we can hope that it's well established. I also noticed that this set of make_timestamptz tests only exercises zones east of Greenwich, which seems rather sad, and was not the original intent AFAICS. (We've already changed these tests once to stabilize their results across tzdata updates, cf 66b737cd9; it looks like I failed to consider the UTC-offset-sign aspect then.) To improve that, add a test with Pacific/Honolulu. That LMT offset is also quite old in tzdb, so we'll cross our fingers that it doesn't get improved. Reported-by: Christoph Berg <cb@df7cb.de> Discussion: https://postgr.es/m/Z46inkznCxesvDEb@msg.df7cb.de Backpatch-through: 13
2025-01-14  Avoid symbol collisions between pqsignal.c and legacy-pqsignal.c.  (Tom Lane)
In the name of ABI stability (that is, to avoid a library major version bump for libpq), libpq still exports a version of pqsignal() that we no longer want to use ourselves. However, since that has the same link name as the function exported by src/port/pqsignal.c, there is a link ordering dependency determining which version will actually get used by code that uses libpq as well as libpgport.a. It now emerges that the wrong version has been used by pgbench and psql since commit 06843df4a rearranged their link commands. This can result in odd failures in pgbench with the -T switch, since its SIGALRM handler will now not be marked SA_RESTART. psql may have some edge-case problems in \watch, too. Since we don't want to depend on link ordering effects anymore, let's fix this in the same spirit as b6c7cfac8: use macros to change the actual link names of the competing functions. We cannot change legacy-pqsignal.c's exported name of course, so the victim has to be src/port/pqsignal.c. In master, rename its exported name to be pqsignal_fe in frontend or pqsignal_be in backend. (We could perhaps have gotten away with using the same symbol in both cases, but since the FE and BE versions now work a little differently, it seems advisable to use different names.) In back branches, rename to pqsignal_fe in frontend but keep it as pqsignal in backend. The frontend change could affect third-party code that is calling pqsignal from libpgport.a or libpgport_shlib.a, but only if the code is compiled against port.h from a different minor release than libpgport. Since we don't support using libpgport as a shared library, it seems unlikely that there will be such a problem. I left the backend symbol unchanged to avoid an ABI break for extensions. This means that the link ordering hazard still exists for any extension that links against libpq. However, none of our own extensions use both pqsignal() and libpq, and we're not making things any worse for third-party extensions that do. Report from Andy Fan, diagnosis by Fujii Masao, patch by me. Back-patch to all supported branches, as 06843df4a was. Discussion: https://postgr.es/m/87msfz5qv2.fsf@163.com
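The shape of the macro-based rename, sketched from the commit message (back-branch variant; the exact port.h wording is an assumption):

    typedef void (*pqsigfunc) (int signo);

    #ifdef FRONTEND
    /* frontend code binds to src/port's implementation under a new name... */
    #define pqsignal pqsignal_fe
    #endif
    /* ...while the backend keeps the name "pqsignal" to avoid an extension
     * ABI break, and libpq's legacy-pqsignal.c keeps its export untouched */

    extern pqsigfunc pqsignal(int signo, pqsigfunc func);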
2025-01-14  Fix catcache invalidation of a list entry that's being built  (Heikki Linnakangas)
If a new catalog tuple is inserted that belongs to a catcache list entry, and cache invalidation happens while the list entry is being built, the list entry might miss the newly inserted tuple. To fix, change the way we detect concurrent invalidations while a catcache entry is being built. Keep a stack of entries that are being built, and apply cache invalidation to those entries in addition to the real catcache entries. This is similar to the in-progress list in relcache.c. Back-patch to all supported versions. Reviewed-by: Noah Misch Discussion: https://www.postgresql.org/message-id/2234dc98-06fe-42ed-b5db-ac17384dc880@iki.fi
2025-01-09  Provide 64-bit ftruncate() and lseek() on Windows.  (Thomas Munro)
Change our ftruncate() macro to use the 64-bit variant of chsize(), and add a new macro to redirect lseek() to _lseeki64(). Back-patch to all supported releases, in preparation for a bug fix. Tested-by: Davinder Singh <davinder.singh@enterprisedb.com> Discussion: https://postgr.es/m/CAKZiRmyM4YnokK6Oenw5JKwAQ3rhP0YTz2T-tiw5dAQjGRXE3Q%40mail.gmail.com
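A hedged sketch of the redirection using the MSVC 64-bit primitives (_chsize_s() and _lseeki64()); the exact macro bodies in win32_port.h may differ, e.g. in how _chsize_s()'s errno_t result is mapped to ftruncate()'s 0/-1 convention:

    #ifdef WIN32
    #include <io.h>

    #undef  ftruncate
    #define ftruncate(fd, len)      (_chsize_s((fd), (len)) == 0 ? 0 : -1)

    #undef  lseek
    #define lseek(fd, off, whence)  _lseeki64((fd), (off), (whence))
    #endif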
2025-01-08  Restore smgrtruncate() prototype in back-branches.  (Thomas Munro)
It's possible that external code is calling smgrtruncate(). Any external callers might like to consider the recent changes to RelationTruncate(), but commit 38c579b0 should not have changed the function prototype in the back-branches, per ABI stability policy. Restore smgrtruncate()'s traditional argument list in the back-branches, but make it a wrapper for a new function smgrtruncate2(). The three callers in core can use smgrtruncate2() directly. In master (18-to-be), smgrtruncate2() is effectively renamed to smgrtruncate(), so this wart is cleaned up. Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CA%2BhUKG%2BThae6x6%2BjmQiuALQBT2Ae1ChjMh1%3DkMvJ8y_SBJZrvA%40mail.gmail.com
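The compatibility shim plausibly looks like this sketch (the smgrtruncate2() signature is inferred from the description here and in the truncation-fix commit below, not copied from the tree):

    void
    smgrtruncate(SMgrRelation reln, ForkNumber *forknum, int nforks,
                 BlockNumber *nblocks)
    {
        BlockNumber old_nblocks[MAX_FORKNUM + 1];
        int         i;

        /* not in a critical section here, so we may look up sizes ourselves */
        for (i = 0; i < nforks; i++)
            old_nblocks[i] = smgrnblocks(reln, forknum[i]);

        smgrtruncate2(reln, forknum, nforks, old_nblocks, nblocks);
    }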
2024-12-28  Exclude parallel workers from connection privilege/limit checks.  (Tom Lane)
Cause parallel workers to not check datallowconn, rolcanlogin, and ACL_CONNECT privileges. The leader already checked these things (except for rolcanlogin which might have been checked for a different role). Re-checking can accomplish little except to induce unexpected failures in applications that might not even be aware that their query has been parallelized. We already had the principle that parallel workers rely on their leader to pass a valid set of authorization information, so this change just extends that a bit further. Also, modify the ReservedConnections, datconnlimit and rolconnlimit logic so that these limits are only enforced against regular backends, and only regular backends are counted while checking if the limits were already reached. Previously, background processes that had an assigned database or role were subject to these limits (with rather random exclusions for autovac workers and walsenders), and the set of existing processes that counted against each limit was quite haphazard as well. The point of these limits, AFAICS, is to ensure the availability of PGPROC slots for regular backends. Since all other types of processes have their own separate pools of PGPROC slots, it makes no sense either to enforce these limits against them or to count them while enforcing the limit. While edge-case failures of these sorts have been possible for a long time, the problem got a good deal worse with commit 5a2fed911 (CVE-2024-10978), which caused parallel workers to make some of these checks using the leader's current role where before we had used its AuthenticatedUserId, thus allowing parallel queries to fail after SET ROLE. The previous behavior was fairly accidental and I have no desire to return to it. This patch includes reverting 73c9f91a1, which was an emergency hack to suppress these same checks in some cases. It wasn't complete, as shown by a recent bug report from Laurenz Albe. We can also revert fd4d93d26 and 492217301, which hacked around the same problems in one regression test. In passing, remove the special case for autovac workers in CheckMyDatabase; it seems cleaner to have AutoVacWorkerMain pass the INIT_PG_OVERRIDE_ALLOW_CONNS flag, now that that does what's needed. Like 5a2fed911, back-patch to supported branches (which sadly no longer includes v12). Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us
2024-12-28  In REASSIGN OWNED of a database, lock the tuple as mandated.  (Noah Misch)
Commit aac2c9b4fde889d13f859c233c2523345e72d32b mandated such locking and attempted to fulfill that mandate, but it missed REASSIGN OWNED. Hence, it remained possible to lose VACUUM's inplace update of datfrozenxid if a REASSIGN OWNED processed that database at the same time. This didn't affect the other inplace-updated catalog, pg_class. For pg_class, REASSIGN OWNED calls ATExecChangeOwner() instead of the generic AlterObjectOwner_internal(), and ATExecChangeOwner() fulfills the locking mandate. Like in GRANT, implement this by following the locking protocol for any catalog subject to the generic AlterObjectOwner_internal(). It would suffice to do this for IsInplaceUpdateOid() catalogs only. Back-patch to v13 (all supported versions). Kirill Reshke. Reported by Alexander Kukushkin. Discussion: https://postgr.es/m/CAFh8B=mpKjAy4Cuun-HP-f_vRzh2HSvYFG3rhVfYbfEBUhBAGg@mail.gmail.com
2024-12-20  Fix corruption when relation truncation fails.  (Thomas Munro)
RelationTruncate() does three things, while holding an AccessExclusiveLock and preventing checkpoints:
1. Logs the truncation.
2. Drops buffers, even if they're dirty.
3. Truncates some number of files.
Step 2 could previously be canceled if it had to wait for I/O, and step 3 could and still can fail in file APIs. All orderings of these operations have data corruption hazards if interrupted, so we can't give up until the whole operation is done. When dirty pages were discarded but the corresponding blocks were left on disk due to ERROR, old page versions could come back from disk, reviving deleted data (see pgsql-bugs #18146 and several like it). When primary and standby were allowed to disagree on relation size, standbys could panic (see pgsql-bugs #18426) or revive data unknown to visibility management on the primary (theorized). Changes:
* WAL is now unconditionally flushed first
* smgrtruncate() is now called in a critical section, preventing interrupts and causing PANIC on file API failure
* smgrtruncate() has a new parameter for existing fork sizes, because it can't call smgrnblocks() itself inside a critical section
The changes apply to RelationTruncate(), smgr_redo() and pg_truncate_visibility_map(). That last is also brought up to date with other evolutions of the truncation protocol. The VACUUM FileTruncate() failure mode had been discussed in older reports than the ones referenced below, with independent analysis from many people, but earlier theories on how to fix it were too complicated to back-patch. The more recently invented cancellation bug was diagnosed by Alexander Lakhin. Other corruption scenarios were spotted by me while iterating on this patch and earlier commit 75818b3a. Back-patch to all supported releases. Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Robert Haas <robertmhaas@gmail.com> Reported-by: rootcause000@gmail.com Reported-by: Alexander Lakhin <exclusion@gmail.com> Discussion: https://postgr.es/m/18146-04e908c662113ad5%40postgresql.org Discussion: https://postgr.es/m/18426-2d18da6586f152d6%40postgresql.org
2024-12-09  Simplify executor's determination of whether to use parallelism.  (Tom Lane)
Our parallel-mode code only works when we are executing a query in full, so ExecutePlan must disable parallel mode when it is asked to do partial execution. The previous logic for this involved passing down a flag (variously named execute_once or run_once) from callers of ExecutorRun or PortalRun. This is overcomplicated, and unsurprisingly some of the callers didn't get it right, since it requires keeping state that not all of them have handy; not to mention that the requirements for it were undocumented. That led to assertion failures in some corner cases. The only state we really need for this is the existing QueryDesc.already_executed flag, so let's just put all the responsibility in ExecutePlan. (It could have been done in ExecutorRun too, leading to a slightly shorter patch -- but if there's ever more than one caller of ExecutePlan, it seems better to have this logic in the subroutine than the callers.) This makes those ExecutorRun/PortalRun parameters unnecessary. In master it seems okay to just remove them, returning the API for those functions to what it was before parallelism. Such an API break is clearly not okay in stable branches, but for them we can just leave the parameters in place after documenting that they do nothing. Per report from Yugo Nagata, who also reviewed and tested this patch. Back-patch to all supported branches. Discussion: https://postgr.es/m/20241206062549.710dc01cf91224809dd6c0e1@sraoss.co.jp
2024-12-07  Ensure that pg_amop/amproc entries depend on their lefttype/righttype.  (Tom Lane)
Usually an entry in pg_amop or pg_amproc does not need a dependency on its amoplefttype/amoprighttype/amproclefttype/amprocrighttype types, because there is an indirect dependency via the argument types of its referenced operator or procedure, or via the opclass it belongs to. However, for some support procedures in some index AMs, the argument types of the support procedure might not mention the column data type at all. Also, the amop/amproc entry might be treated as "loose" in the opfamily, in which case it lacks a dependency on any particular opclass; or it might be a cross-type entry having a reference to a datatype that is not its opclass' opcintype. The upshot of all this is that there are cases where a datatype can be dropped while leaving behind amop/amproc entries that mention it, because there is no path in pg_depend showing that those entries depend on that type. Such entries are harmless in normal activity, because they won't get used, but they cause problems for maintenance actions such as dropping the operator family. They also cause pg_dump to produce bogus output. The previous commit put a band-aid on the DROP OPERATOR FAMILY failure, but a real fix is needed. To fix, add pg_depend entries showing that a pg_amop/pg_amproc entry depends on its lefttype/righttype. To avoid bloating pg_depend too much, skip this if the referenced operator or function has that type as an input type. (I did not bother with considering the possible indirect dependency via the opclass' opcintype; at least in the reported case, that wouldn't help anyway.) Probably, the reason this has escaped notice for so long is that add-on datatypes and relevant opclasses/opfamilies are usually packaged as extensions nowadays, so that there's no way to drop a type without dropping the referencing opclasses/opfamilies too. Still, in the absence of pg_depend entries there's nothing that constrains DROP EXTENSION to drop the opfamily entries before the datatype, so it seems possible for a DROP failure to occur anyway. The specific case that was reported doesn't fail in v13, because v13 prefers to attach the support procedure to the opclass not the opfamily. But it's surely possible to construct other edge cases that do fail in v13, so patch that too. Per report from Yoran Heling. Back-patch to all supported branches. Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021
2024-11-25  Fix NULLIF()'s handling of read-write expanded objects.  (Tom Lane)
If passed a read-write expanded object pointer, the EEOP_NULLIF code would hand that same pointer to the equality function and then (unless equality was reported) also return the same pointer as its value. This is no good, because a function that receives a read-write expanded object pointer is fully entitled to scribble on or even delete the object, thus corrupting the NULLIF output. (This problem is likely unobservable with the equality functions provided in core Postgres, but it's easy to demonstrate with one coded in plpgsql.) To fix, make sure the pointer passed to the equality function is read-only. We can still return the original read-write pointer as the NULLIF result, allowing optimization of later operations. Per bug #18722 from Alexander Lakhin. This has been wrong since we invented expanded objects, so back-patch to all supported branches. Discussion: https://postgr.es/m/18722-fd9e645448cc78b4@postgresql.org
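MakeExpandedObjectReadOnly() (expandeddatum.h) is the existing helper for this pattern; a sketch of the safe call shape, with the wrapper name being illustrative:

    static Datum
    nullif_arg_for_compare(Datum arg, bool isnull, int16 typlen)
    {
        /* returns a read-only pointer to the same object for a read-write
         * expanded datum, and returns any other datum unchanged; the
         * original read-write pointer can still be NULLIF's result */
        return MakeExpandedObjectReadOnly(arg, isnull, typlen);
    }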
2024-11-25  Assume that <stdbool.h> conforms to the C standard.  (Thomas Munro)
Previously we checked "for <stdbool.h> that conforms to C99" using autoconf's AC_HEADER_STDBOOL macro. We've required C99 since PostgreSQL 12, so the test was redundant, and under C23 it was broken: autoconf 2.69's implementation doesn't understand C23's new empty header (the macros it's looking for went away, replaced by language keywords). Later autoconf versions fixed that, but let's just remove the anachronistic test. HAVE_STDBOOL_H and HAVE__BOOL will no longer be defined, but they weren't directly tested in core or likely extensions (except in 11, see below). PG_USE_STDBOOL (or USE_STDBOOL in 11 and 12) is still defined when sizeof(bool) is 1, which should be true on all modern systems. Otherwise we define our own bool type and values of size 1, which would fail to compile under C23 as revealed by the broken test. (We'll probably clean that dead code up in master, but here we want a minimal back-patchable change.) This came to our attention when GCC 15 recently started using C23 by default and failed to compile the replacement code, as reported by Sam James and build farm animal alligator. Back-patch to all supported releases, and then two older versions that also know about <stdbool.h>, per the recently-out-of-support policy[1]. 12 requires C99 so it's much like the supported releases, but 11 only assumes C89 so it now uses AC_CHECK_HEADERS instead of the overly picky AC_HEADER_STDBOOL. (I could find no discussion of which historical systems had <stdbool.h> but failed the conformance test; if they ever existed, they surely aren't relevant to that policy's goals.) [1] https://wiki.postgresql.org/wiki/Committing_checklist#Policies Reported-by: Sam James <sam@gentoo.org> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> (master version) Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (approach) Discussion: https://www.postgresql.org/message-id/flat/87o72eo9iu.fsf%40gentoo.org
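Simplified from c.h, the surviving arrangement looks roughly like this sketch (the fallback branch is the code that C23 rejects, since bool, true and false are keywords there):

    #ifdef PG_USE_STDBOOL
    #include <stdbool.h>
    #else
    #ifndef __cplusplus
    typedef unsigned char bool;
    #define true    ((bool) 1)
    #define false   ((bool) 0)
    #endif
    #endif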
2024-11-16  Undo unintentional ABI break in struct ResultRelInfo.  (Tom Lane)
Commits aac2c9b4f et al. added a bool field to struct ResultRelInfo. That's no problem in the master branch, but in released branches care must be taken when modifying publicly-visible structs to avoid an ABI break for extensions. Frequently we solve that by adding the new field at the end of the struct, and that's what was done here. But ResultRelInfo has stricter constraints than just about any other node type in Postgres. Some executor APIs require extensions to index into arrays of ResultRelInfo, which means that any change whatever in sizeof(ResultRelInfo) causes a fatal ABI break. Fortunately, this is easy to fix, because the new field can be squeezed into available padding space instead --- indeed, that's where it was put in master, so this fix also removes a cross-branch coding variation. Per report from Pavan Deolasee. Patch v14-v17 only; earlier versions did not gain the extra field, nor is there any problem in master. Discussion: https://postgr.es/m/CABOikdNmVBC1LL6pY26dyxAS2f+gLZvTsNt=2XbcyG7WxXVBBQ@mail.gmail.com
2024-11-15  Avoid assertion due to disconnected NFA sub-graphs in regex parsing.  (Tom Lane)
In commit 08c0d6ad6 which introduced "rainbow" arcs in regex NFAs, I didn't think terribly hard about what to do when creating the color complement of a rainbow arc. Clearly, the complement cannot match any characters, and I took the easy way out by just not building any arcs at all in the complement arc set. That mostly works, but Nikolay Shaplov found a case where it doesn't: if we decide to delete that sub-NFA later because it's inside a "{0}" quantifier, delsub() suffered an assertion failure. That's because delsub() relies on the target sub-NFA being fully connected. That was always true before, and the best fix seems to be to restore that property. Hence, invent a new arc type CANTMATCH that can be generated in place of an empty color complement, and drop it again later when we start NFA optimization. (At that point we don't need to do delsub() any more, and besides there are other cases where NFA optimization can lead to disconnected subgraphs.) It appears that this bug has no consequences in a non-assert-enabled build: there will be some transiently leaked NFA states/arcs, but they'll get cleaned up eventually. Still, we don't like assertion failures, so back-patch to v14 where rainbow arcs were introduced. Per bug #18708 from Nikolay Shaplov. Discussion: https://postgr.es/m/18708-f94f2599c9d2c005@postgresql.org
2024-11-15  Fix race conditions with drop of reused pgstats entries  (Michael Paquier)
This fixes a set of race conditions with cumulative statistics where a shared stats entry could be dropped while it should still be valid, in the event that it is reused: an entry may refer to a different object but require the same hash key. This can happen with various stats kinds, like:
- Replication slots that compute internally an index number, for different slot names.
- Stats kinds that use an OID in the object key, where a wraparound causes the same key to be used if an OID is used for the same object.
- As of PostgreSQL 18, custom pgstats kinds could also be an issue, depending on their implementation.
This issue is fixed by introducing a counter called "generation" in the shared entries via PgStatShared_HashEntry, initialized at 0 when an entry is created and incremented when the same entry is reused, to avoid concurrent issues on drop because of other backends still holding a reference to it. This "generation" is copied to the local copy that a backend holds when looking at an object, then cross-checked with the shared entry to make sure that the entry is not dropped when it has been reused, even if its "refcount" would otherwise justify the drop. This problem could show up when a backend shuts down and needs to discard any entries it still holds, causing statistics to be removed when they should not be, or even an assertion failure. Another report involved a failure in a standby after an OID wraparound, where the startup process would FATAL on a "can only drop stats once", stopping recovery abruptly. The buildfarm has been sporadically complaining about the problem as well, but the window is hard to reach with the in-core tests. Note that the issue can be reproduced easily by adding a sleep before dshash_find() in pgstat_release_entry_ref() to enlarge the problematic window while repeating test_decoding's isolation test oldest_xmin a couple of times, for example, as pointed out by Alexander Lakhin. Reported-by: Alexander Lakhin, Peter Smith Author: Kyotaro Horiguchi, Michael Paquier Reviewed-by: Bertrand Drouvot Discussion: https://postgr.es/m/CAA4eK1KxuMVyAryz_Vk5yq3ejgKYcL6F45Hj9ZnMNBS-g+PuZg@mail.gmail.com Discussion: https://postgr.es/m/17947-b9554521ad963c9c@postgresql.org Backpatch-through: 15
2024-11-11  Fix improper interactions between session_authorization and role.  (Tom Lane)
The SQL spec mandates that SET SESSION AUTHORIZATION implies SET ROLE NONE. We tried to implement that within the lowest-level functions that manipulate these settings, but that was a bad idea. In particular, guc.c assumes that it doesn't matter in what order it applies GUC variable updates, but that was not the case for these two variables. This problem, compounded by some hackish attempts to work around it, led to some security-grade issues:
* Rolling back a transaction that had done SET SESSION AUTHORIZATION would revert to SET ROLE NONE, even if that had not been the previous state, so that the effective user ID might now be different from what it had been.
* The same for SET SESSION AUTHORIZATION in a function SET clause.
* If a parallel worker inspected current_setting('role'), it saw "none" even when it should see something else. Also, although the parallel worker startup code intended to cope with the current role's pg_authid row having disappeared, its implementation of that was incomplete so it would still fail.
Fix by fully separating the miscinit.c functions that assign session_authorization from those that assign role. To implement the spec's requirement, teach set_config_option itself to perform "SET ROLE NONE" when it sets session_authorization. (This is undoubtedly ugly, but the alternatives seem worse. In particular, there's no way to do it within assign_session_authorization without incompatible changes in the API for GUC assign hooks.) Also, improve ParallelWorkerMain to directly set all the relevant user-ID variables instead of relying on some of them to get set indirectly. That allows us to survive not finding the pg_authid row during worker startup. In v16 and earlier, this includes back-patching 9987a7bf3 which fixed a violation of GUC coding rules: SetSessionAuthorization is not an appropriate place to be throwing errors from. Security: CVE-2024-10978
2024-11-08  Replace pgwin32_is_junction() with lstat().  (Thomas Munro)
Now that lstat() reports junction points with S_IFLNK/S_ISLINK(), and unlink() can unlink them, there is no need for conditional code for Windows in a few places. That was expressed by testing for WIN32 or S_ISLNK, which we can now constant-fold. The coding around pgwin32_is_junction() was a bit suspect anyway, as we never checked for errors, and we also know that errors can be spuriously reported because of transient sharing violations on this OS. The lstat()-based code has handling for that. This also reverts 4fc6b6ee on master only. That was done because lstat() didn't previously work for symlinks (junction points), but now it does. Tested-by: Andrew Dunstan <andrew@dunslane.net> Discussion: https://postgr.es/m/CA%2BhUKGLfOOeyZpm5ByVcAt7x5Pn-%3DxGRNCvgiUPVVzjFLtnY0w%40mail.gmail.com (cherry picked from commit 5fc88c5d53e43fa7dcea93499d230a0bf70f4f77) Author: Thomas Munro <tmunro@postgresql.org> Author: Alexandra Wang <alexandra.wang.oss@gmail.com>
2024-11-08  Provide lstat() for Windows.  (Thomas Munro)
Junction points will be reported with S_ISLNK(x.st_mode), simulating POSIX lstat(). stat() will follow pseudo-symlinks, like in POSIX (but only one level before giving up, unlike in POSIX). This completes a TODO left by commit bed90759fcb. Tested-by: Andrew Dunstan <andrew@dunslane.net> (earlier version) Discussion: https://postgr.es/m/CA%2BhUKGLfOOeyZpm5ByVcAt7x5Pn-%3DxGRNCvgiUPVVzjFLtnY0w%40mail.gmail.com (cherry picked from commit c5cb8f3b770c043509b61528664bcd805e1777e6) Author: Thomas Munro <tmunro@postgresql.org> Author: Alexandra Wang <alexandra.wang.oss@gmail.com>
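Portable callers can now use the POSIX idiom directly; a minimal usage sketch:

    #include <sys/stat.h>

    /* returns 1 for a symlink or NTFS junction, 0 otherwise, -1 on error */
    static int
    is_link(const char *path)
    {
        struct stat st;

        if (lstat(path, &st) != 0)
            return -1;              /* errno is set */
        return S_ISLNK(st.st_mode) ? 1 : 0;
    }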
2024-11-06  Monkey-patch LLVM code to fix ARM relocation bug.  (Thomas Munro)
Supply a new memory manager for RuntimeDyld, to avoid crashes in generated code caused by memory placement that can overflow a 32 bit data type. This is a drop-in replacement for the llvm::SectionMemoryManager class in the LLVM library, with Michael Smith's proposed fix from https://www.github.com/llvm/llvm-project/pull/71968. We hereby slurp it into our own source tree, after moving into a new namespace llvm::backport and making some minor adjustments so that it can be compiled with older LLVM versions as far back as 12. It's harder to make it work on even older LLVM versions, but it doesn't seem likely that people are really using them so that is not investigated for now. The problem could also be addressed by switching to JITLink instead of RuntimeDyld, and that is the LLVM project's recommended solution as the latter is about to be deprecated. We'll have to do that soon enough anyway, and then when the LLVM version support window advances far enough in a few years we'll be able to delete this code. Unfortunately that wouldn't be enough for PostgreSQL today: in most relevant versions of LLVM, JITLink is missing or incomplete. Several other projects have already back-ported this fix into their fork of LLVM, which is a vote of confidence despite the lack of commit into LLVM as of today. We don't have our own copy of LLVM so we can't do exactly what they've done; instead we have a copy of the whole patched class so we can pass an instance of it to RuntimeDyld. The LLVM project hasn't chosen to commit the fix yet, and even if it did, it wouldn't be back-ported into the releases of LLVM that most of our users care about, so there is not much point in waiting any longer for that. If they make further changes and commit it to LLVM 19 or 20, we'll still need this for older versions, but we may want to resynchronize our copy and update some comments. The changes that we've had to make to our copy can be seen by diffing our SectionMemoryManager.{h,cpp} files against the ones in the tree of the pull request. Per the LLVM project's license requirements, a copy is in SectionMemoryManager.LICENSE. This should fix the spate of crash reports we've been receiving lately from users on large memory ARM systems. Back-patch to all supported releases. Co-authored-by: Thomas Munro <thomas.munro@gmail.com> Co-authored-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Reviewed-by: Daniel Gustafsson <daniel@yesql.se> (license aspects) Reported-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Discussion: https://postgr.es/m/CAO6_Xqr63qj%3DSx7HY6ZiiQ6R_JbX%2B-p6sTPwDYwTWZjUmjsYBg%40mail.gmail.com
2024-11-02  Revert "For inplace update, send nontransactional invalidations."  (Noah Misch)
This reverts commit 95c5acb3fc261067ab65ddc0b2dca8e162f09442 (v17) and counterparts in each other non-master branch. If released, that commit would have caused a worst-in-years minor release regression, via undetected LWLock self-deadlock. This commit and its self-deadlock fix warrant more bake time in the master branch. Reported by Alexander Lakhin. Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
2024-10-29  Unpin buffer before inplace update waits for an XID to end.  (Noah Misch)
Commit a07e03fd8fa7daf4d1356f7cb501ffe784ea6257 changed inplace updates to wait for heap_update() commands like GRANT TABLE and GRANT DATABASE. By keeping the pin during that wait, a sequence of autovacuum workers and an uncommitted GRANT starved one foreground LockBufferForCleanup() for six minutes, on buildfarm member sarus. Prevent, at the cost of a bit of complexity. Back-patch to v12, like the earlier commit. That commit and heap_inplace_lock() have not yet appeared in any release. Discussion: https://postgr.es/m/20241026184936.ae.nmisch@google.com
2024-10-29doc: Add better description for rewrite functions in event triggersMichael Paquier
There are two functions that can be used in event triggers to get more details about a rewrite happening on a relation. Both had limited documentation:

- pg_event_trigger_table_rewrite_reason() and pg_event_trigger_table_rewrite_oid() were not mentioned in the main event trigger section, in the paragraph dedicated to the event table_rewrite.
- pg_event_trigger_table_rewrite_reason() returns an integer which is a bitmap of the reasons why a rewrite happens. There was no explanation of the meaning of these values, forcing the reader to look at the code to find out that they are defined in event_trigger.h.

While at it, let's add a comment in event_trigger.h, where the AT_REWRITE_* flags are defined, as a reminder to update the documentation when these values are changed.

Backpatch down to 13 as a consequence of 1ad23335f36b, where this area of the documentation was heavily reworked.

Author: Greg Sabino Mullane
Discussion: https://postgr.es/m/CAKAnmmL+Z6j-C8dAx1tVrnBmZJu+BSoc68WSg3sR+CVNjBCqbw@mail.gmail.com
Backpatch-through: 13
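For reference, a hedged sketch of the flag bits as they appear in src/include/commands/event_trigger.h in recent branches (check the header for the authoritative list and values; the trailing comments are paraphrases):

    /* Reasons reported by pg_event_trigger_table_rewrite_reason();
     * the function's result is a bitmap OR of these flags. */
    #define AT_REWRITE_ALTER_PERSISTENCE  0x01  /* SET LOGGED / UNLOGGED */
    #define AT_REWRITE_DEFAULT_VAL        0x02  /* adding a column default */
    #define AT_REWRITE_COLUMN_REWRITE     0x04  /* e.g. column type change */
    #define AT_REWRITE_ACCESS_METHOD      0x08  /* SET ACCESS METHOD */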
2024-10-25For inplace update, send nontransactional invalidations.Noah Misch
The inplace update survives ROLLBACK. The inval didn't, so another backend's DDL could then update the row without incorporating the inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER TABLE resulted in a table with an index, yet relhasindex=f. That is a source of index corruption.

Back-patch to v12 (all supported versions). The back branch versions don't change WAL, because those branches just added end-of-recovery SIResetAll(). All branches change the ABI of extern function PrepareToInvalidateCacheTuple(). No PGXN extension calls that, and there's no apparent use case in extensions.

Reviewed by Nitin Motiani and (in earlier versions) Andres Freund.

Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
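A conceptual sketch of the hazard, using the real function names but not the committed code; transactional invalidations are queued and discarded on ROLLBACK, while the inplace write itself is not:

    /* The heap write is immediate and survives ROLLBACK ... */
    heap_inplace_update(relation, tuple);
    /* ... but a transactional inval is only queued, and is thrown away
     * if the transaction aborts, leaving other backends' caches stale: */
    CacheInvalidateHeapTuple(relation, tuple, NULL);
    /* Fix: deliver the invalidation nontransactionally, at the moment of
     * the inplace write, so it is sent whether or not we commit. */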
2024-10-25At end of recovery, reset all sinval-managed caches.Noah Misch
An inplace update's invalidation messages are part of its transaction's commit record. However, the update survives even if its transaction aborts or we stop recovery before replaying its transaction commit. After recovery, a backend that started in recovery could update the row without incorporating the inplace update. That could result in a table with an index, yet relhasindex=f. That is a source of index corruption.

This bulk invalidation avoids the functional consequences. A future change can fix the !RecoveryInProgress() scenario without changing the WAL format. Back-patch to v12 through v17 (all supported versions). v18 will instead add invalidations to WAL.

Discussion: https://postgr.es/m/20240618152349.7f.nmisch@google.com
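A conceptual sketch only (the guard flag below is hypothetical, not the committed mechanism; InvalidateSystemCaches() is the real routine that discards all sinval-managed caches): at the end of recovery, behave as if the sinval queue had overflowed, so that every cache entry that might predate an unreplayed invalidation gets rebuilt.

    /* Hypothetical placement, for illustration: once recovery finishes,
     * discard all catcache/relcache/sinval-managed state rather than
     * trusting entries that may miss an inplace update's invals. */
    if (just_finished_recovery)         /* assumed flag, not real code */
        InvalidateSystemCaches();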
2024-09-30Bump MIN_WINNT for MINGW to clear a build errorAndrew Dunstan
Because we have been setting this too low, there has been a long-standing warning about a missing declaration for inet_pton(). Modern gcc now considers this an error, so we have been getting failures on the buildfarm animal fairywren. Fix suggested by Thomas Munro. This isn't needed in later branches, as they already set MIN_WINNT higher, nor in earlier branches, because they don't use inet_pton(). Discussion: https://postgr.es/m/574fae43-c993-4a25-b0e5-04c3e9c36d6d@dunslane.net
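A hedged sketch of the kind of change involved in src/include/port/win32.h; the exact preprocessor condition and the old value are assumptions from the description above, but the underlying fact is that <ws2tcpip.h> only declares inet_pton() when the Windows API version is at least 0x0600 (Vista):

    /* Treat modern MinGW like newer MSVC, so that the effective
     * _WIN32_WINNT is high enough for <ws2tcpip.h> to declare inet_pton(). */
    #if (defined(_MSC_VER) && _MSC_VER >= 1900) || defined(__MINGW64_VERSION_MAJOR)
    #define MIN_WINNT 0x0600        /* Windows Vista or later */
    #else
    #define MIN_WINNT 0x0501        /* Windows XP; inet_pton() undeclared */
    #endif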
2024-09-24For inplace update durability, make heap_update() callers wait.Noah Misch
The previous commit fixed some ways of losing an inplace update. It remained possible to lose one when a backend working toward a heap_update() copied a tuple into memory just before an inplace update of that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE to govern admission to the steps of copying an old tuple, modifying it, and issuing heap_update(). This includes MERGE commands. To avoid changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when holding a relation lock sufficient to exclude inplace updaters.

Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE pg_class" or "UPDATE pg_database" can still lose an inplace update. The v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35, and it wasn't worth reimplementing that fix without such infrastructure.

Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.

Discussion: https://postgr.es/m/20231027214946.79.nmisch@google.com
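A simplified sketch of the admission protocol described above, assuming a pg_class update; LockTuple(), UnlockTuple(), heap_copytuple(), and CatalogTupleUpdate() are real routines, InplaceUpdateTupleLock is the lockmode this change introduces for the purpose, and the surrounding error handling is elided:

    /* Take the tuple-level lock before copying the old row, so that an
     * inplace updater of the same row cannot run concurrently with us. */
    LockTuple(relation, &oldtup->t_self, InplaceUpdateTupleLock);
    newtup = heap_copytuple(oldtup);
    /* ... modify newtup ... */
    CatalogTupleUpdate(relation, &oldtup->t_self, newtup);
    UnlockTuple(relation, &oldtup->t_self, InplaceUpdateTupleLock);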
2024-09-24Fix data loss at inplace update after heap_update().Noah Misch
As previously-added tests demonstrated, heap_inplace_update() could instead update an unrelated tuple of the same catalog. It could lose the update. Losing relhasindex=t was a source of index corruption.

Inplace-updating commands like VACUUM will now wait for heap_update() commands like GRANT TABLE and GRANT DATABASE. That isn't ideal, but a long-running GRANT already hurts VACUUM progress more just by keeping an XID running. The VACUUM will behave like a DELETE or UPDATE waiting for the uncommitted change.

For implementation details, start at the systable_inplace_update_begin() header comment and README.tuplock.

Back-patch to v12 (all supported versions). In back branches, retain a deprecated heap_inplace_update(), for extensions.

Reported by Smolkin Grigory. Reviewed by Nitin Motiani, (in earlier versions) Heikki Linnakangas, and (in earlier versions) Alexander Lakhin.

Discussion: https://postgr.es/m/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com
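A hedged sketch of the new API pattern (argument lists paraphrased from the description; `needs_update` is a hypothetical predicate): the caller locates and locks the target row, then either performs the inplace write or backs out.

    void       *state;
    HeapTuple   oldtup;

    /* Find the pg_class row and take the locks needed for inplace update. */
    systable_inplace_update_begin(relation, ClassOidIndexId, true,
                                  NULL, 1, key, &oldtup, &state);
    if (oldtup != NULL && needs_update(oldtup))     /* hypothetical check */
        systable_inplace_update_finish(state, oldtup);  /* write + inval */
    else
        systable_inplace_update_cancel(state);          /* release locks */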
2024-08-10Allow adjusting session_authorization and role in parallel workers.Tom Lane
The code intends to allow GUCs to be set within parallel workers via function SET clauses, but not otherwise. However, doing so fails for "session_authorization" and "role", because the assign hooks for those attempt to set the subsidiary "is_superuser" GUC, and that call falls foul of the "not otherwise" prohibition. We can't switch to using GUC_ACTION_SAVE for this, so instead add a new GUC variable flag GUC_ALLOW_IN_PARALLEL to mark is_superuser as being safe to set anyway. (This is okay because is_superuser has context PGC_INTERNAL and thus only hard-wired calls can change it. We'd need more thought before applying the flag to other GUCs; but maybe there are other use cases.) This isn't the prettiest fix perhaps, but other alternatives we thought of would be much more invasive.

While here, correct a thinko in commit 059de3ca4: when rejecting a GUC setting within a parallel worker, we should return 0 not -1 if the ereport doesn't longjmp. (This seems to have no consequences right now because no caller cares, but it's inconsistent.) Improve the comments to try to forestall future confusion of the same kind.

Despite the lack of field complaints, this seems worth back-patching. Thanks to Nathan Bossart for the idea to invent a new flag, and for review.

Discussion: https://postgr.es/m/2833457.1723229039@sss.pgh.pa.us
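An abridged sketch of what the guc_tables.c entry for "is_superuser" plausibly looks like with the new flag OR'd in; the variable name and the neighboring flags are assumptions from memory of that table, and only GUC_ALLOW_IN_PARALLEL is the addition described above:

    {
        {"is_superuser", PGC_INTERNAL, UNGROUPED,
            gettext_noop("Shows whether the current user is a superuser."),
            NULL,
            GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE |
            GUC_DISALLOW_IN_FILE | GUC_ALLOW_IN_PARALLEL    /* new flag */
        },
        &current_role_is_superuser,     /* variable name may differ by branch */
        false,
        NULL, NULL, NULL
    },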
2024-08-05Restrict accesses to non-system views and foreign tables during pg_dump.Masahiko Sawada
When pg_dump retrieves the list of database objects and performs the data dump, there was a possibility that objects would be replaced with others of the same name, such as views, and that pg_dump would then access them. This vulnerability could result in code execution with superuser privileges during the pg_dump process.

This issue can arise when dumping the data of sequences, foreign tables (only 13 or later), or tables registered with a WHERE clause in the extension configuration table.

To address this, pg_dump now utilizes the newly introduced restrict_nonsystem_relation_kind GUC parameter to restrict accesses to non-system views and foreign tables during the dump process. This new GUC parameter is added to back branches too, but these changes do not require cluster recreation.

Back-patch to all supported branches.

Reviewed-by: Noah Misch
Security: CVE-2024-7348
Backpatch-through: 12
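A hedged sketch of the client-side use: the guard is an assumption (the GUC only exists on servers that have the corresponding minor release, so some capability check is needed), while ExecuteSqlStatement() is pg_dump's real helper for issuing a bare SQL command on the dump connection.

    /* On a server that has the GUC, fence off non-system views and
     * foreign tables for the life of the dump connection. */
    if (server_has_restrict_guc)        /* hypothetical capability check */
        ExecuteSqlStatement(fout,
                            "SET restrict_nonsystem_relation_kind = "
                            "'view, foreign-table'");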
2024-08-01Update comment in portal.h.Etsuro Fujita
We store tuples into the portal's tuple store for a PORTAL_ONE_MOD_WITH query as well. Back-patch to all supported branches. Reviewed by Andy Fan. Discussion: https://postgr.es/m/CAPmGK14HVYBZYZtHabjeCd-e31VT%3Dwx6rQNq8QfehywLcpZ2Hw%40mail.gmail.com
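For context, a sketch of the strategy list from src/include/utils/portal.h (member names as in recent branches; the comments are paraphrases, not the file's text). The comment fix concerns the note that PORTAL_ONE_RETURNING and PORTAL_ONE_MOD_WITH queries are run to completion with their results stored in the portal's tuplestore:

    typedef enum PortalStrategy
    {
        PORTAL_ONE_SELECT,      /* one SELECT, run incrementally on demand */
        PORTAL_ONE_RETURNING,   /* one DML with RETURNING; run to completion,
                                 * results kept in the portal's tuplestore */
        PORTAL_ONE_MOD_WITH,    /* one SELECT with modifying CTEs; handled
                                 * like PORTAL_ONE_RETURNING */
        PORTAL_UTIL_SELECT,     /* utility statement returning a result set */
        PORTAL_MULTI_QUERY      /* everything else */
    } PortalStrategy;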