path: root/src
2024-07-06  Cope with <regex.h> name clashes.  (Thomas Munro)
macOS 15's SDK pulls in headers related to <regex.h> when we include <xlocale.h>. This causes our own regex_t implementation to clash with the OS's regex_t implementation. Luckily our function names already had pg_ prefixes, but the macros and typenames did not.

Include <regex.h> explicitly on all POSIX systems, and fix everything that breaks. Then we can prove that we are capable of fully hiding and replacing the system regex API with our own.

1. Deal with standard-clobbering macros by undefining them all first. POSIX says they are "symbolic constants". If they are macros, this allows us to redefine them. If they are enums or variables, our macros will hide them.

2. Deal with standard-clobbering types by giving our types pg_ prefixes, and then using macros to redirect xxx_t -> pg_xxx_t.

After including our "regex/regex.h", the system <regex.h> is hidden, because we've replaced all the standard names. The PostgreSQL source tree and extensions can continue to use standard prefix-less type and macro names, but reach our implementation, if they included our "regex/regex.h" header.

Back-patch to all supported branches, so that macOS 15's tool chain can build them.

Reported-by: Stan Hu <stanhu@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAMBWrQnEwEJtgOv7EUNsXmFw2Ub4p5P%2B5QTBEgYwiyjy7rAsEQ%40mail.gmail.com
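A minimal sketch of the hiding technique described above (the constant values and the pg_-prefixed names follow the pattern described in the message, not the exact contents of regex/regex.h):

    /* Sketch only: undefine the POSIX "symbolic constants" so we can redefine them. */
    #undef REG_NOMATCH
    #undef REG_BADPAT

    /* Our implementation's names carry a pg_ prefix... */
    typedef struct pg_regex_t pg_regex_t;
    typedef long pg_regoff_t;

    /* ...and macros redirect the standard names to them. */
    #define regex_t   pg_regex_t
    #define regoff_t  pg_regoff_t
    #define REG_NOMATCH 1           /* illustrative values */
    #define REG_BADPAT  2

    /* Callers keep writing standard, prefix-less names, e.g. "regex_t re;",
     * but resolve to the pg_ implementation. */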
2024-07-01  Fix missing installation/uninstallation rules for BackgroundPsql.pm  (Heikki Linnakangas)
Commit d5fd7865 backported the BackgroundPsql perl module, with helper functions for tests running interactive or background psql tasks, to PG 12 to 15, but did not add installation/uninstallation rules to the build system, causing problems running TAP tests for extensions.

Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Discussion: https://www.postgresql.org/message-id/CABOikdPmRuZrcf_gtgXmQzZ5Tbg9yUJmqXDCAZ2aW%3DWi-PbDyQ%40mail.gmail.com
2024-07-01  Preserve CurrentMemoryContext across notify and sinval interrupts.  (Tom Lane)
ProcessIncomingNotify is called from the main processing loop that normally runs in MessageContext. That outer-loop code assumes that whatever it allocates will be cleaned up when we're done processing the current client message --- but if we service a notify interrupt, then whatever gets allocated before the next switch into MessageContext will be permanently leaked in TopMemoryContext, because CommitTransactionCommand sets CurrentMemoryContext to TopMemoryContext. There are observable leaks associated with (at least) encoding conversion of incoming queries and parameters attached to Bind messages. sinval catchup interrupts have a similar problem. There might be others, but I've not identified any other clear cases. To fix, take care to save and restore CurrentMemoryContext across the Start/CommitTransactionCommand calls in these functions. Per bug #18512 from wizardbrony. Commit to back branches only; in HEAD, this was dealt with by the riskier but more thoroughgoing approach in commit 1afe31f03. Discussion: https://postgr.es/m/3478884.1718656625@sss.pgh.pa.us
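A minimal sketch of the save/restore pattern described above, as it might look in a notify or sinval catchup handler (the surrounding function is illustrative, not the actual ProcessIncomingNotify code):

    #include "postgres.h"
    #include "access/xact.h"

    static void
    process_interrupt_sketch(void)
    {
        /* Remember the caller's context (e.g. MessageContext)... */
        MemoryContext oldcontext = CurrentMemoryContext;

        StartTransactionCommand();
        /* ... do the notify/sinval catchup work in a transaction ... */
        CommitTransactionCommand();

        /*
         * CommitTransactionCommand() leaves CurrentMemoryContext pointing at
         * TopMemoryContext; switch back so later allocations in the outer
         * loop aren't leaked permanently.
         */
        MemoryContextSwitchTo(oldcontext);
    }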
2024-06-28  Remove configuration-dependent output from new inplace-inval test.  (Noah Misch)
Per buildfarm members prion and trilobite. Back-patch to v12 (all supported versions), like commit 0844b3968985447ed0a6937cfc8639e379da2fe6. Strategy reviewed by Tom Lane. Discussion: https://postgr.es/m/20240628051353.a0.nmisch@google.com
2024-06-27  Remove comment about xl_heap_inplace "AT END OF STRUCT".  (Noah Misch)
Commit 2c03216d831160bedd72d45f712601b6f7d03f1c moved the tuple data from there to the buffer-0 data. Back-patch to v12 (all supported versions), the plan for the next change to this struct. Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
2024-06-27  Cope with inplace update making catcache stale during TOAST fetch.  (Noah Misch)
This extends ad98fb14226ae6456fbaed7990ee7591cbe5efd2 to invals of inplace updates. Trouble requires an inplace update of a catalog having a TOAST table, so only pg_database was at risk. (The other catalog on which core code performs inplace updates, pg_class, has no TOAST table.) Trouble would require something like the inplace-inval.spec test. Consider GRANT ... ON DATABASE fetching a stale row from cache and discarding a datfrozenxid update that vac_truncate_clog() has already relied upon. Back-patch to v12 (all supported versions). Reviewed (in an earlier version) by Robert Haas. Discussion: https://postgr.es/m/20240114201411.d0@rfd.leadboat.com Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
2024-06-27  AccessExclusiveLock new relations just after assigning the OID.  (Noah Misch)
This has no user-visible, important consequences, since other sessions' catalog scans can't find the relation until we commit. However, this unblocks introducing a rule about locks required to heap_update() a pg_class row. CREATE TABLE has been acquiring this lock eventually, but it can heap_update() pg_class.relchecks earlier. create_toast_table() has been acquiring only ShareLock. Back-patch to v12 (all supported versions), the plan for the commit relying on the new rule. Reviewed (in an earlier version) by Robert Haas. Discussion: https://postgr.es/m/20240611024525.9f.nmisch@google.com
2024-06-27  Lock before setting relhassubclass on RELKIND_PARTITIONED_INDEX.  (Noah Misch)
Commit 5b562644fec696977df4a82790064e8287927891 added a comment that SetRelationHasSubclass() callers must hold this lock. When commit 17f206fbc824d2b4b14480199ca9ff7dea417eda extended use of this column to partitioned indexes, it didn't take the lock. As the latter commit message mentioned, we currently never reset a partitioned index to relhassubclass=f. That largely avoids harm from the lock omission. The cause for fixing this now is to unblock introducing a rule about locks required to heap_update() a pg_class row. This might cause more deadlocks. It gives minor user-visible benefits:

- If an ALTER INDEX SET TABLESPACE runs concurrently with ALTER TABLE ATTACH PARTITION or CREATE PARTITION OF, one transaction blocks instead of failing with "tuple concurrently updated". (Many cases of DDL concurrency still fail that way.)

- Match ALTER INDEX ATTACH PARTITION in choosing to lock the index.

While not user-visible today, we'll need this if we ever make something set the flag to false for a partitioned index, like ANALYZE does today for tables.

Back-patch to v12 (all supported versions), the plan for the commit relying on the new rule. In back branches, add LockOrStrongerHeldByMe() instead of adding a LockHeldByMe() parameter.

Reviewed (in an earlier version) by Robert Haas.

Discussion: https://postgr.es/m/20240611024525.9f.nmisch@google.com
2024-06-27  Expand comments and add an assertion in nodeModifyTable.c.  (Noah Misch)
Most comments concern RELKIND_VIEW. One addresses the ExecUpdate() "tupleid" parameter. A later commit will rely on these facts, but they hold already. Back-patch to v12 (all supported versions), the plan for that commit. Reviewed (in an earlier version) by Robert Haas. Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
2024-06-27  Improve test coverage for changes to inplace-updated catalogs.  (Noah Misch)
This covers both regular and inplace changes, since bugs arise at their intersection. Where marked, these witness extant bugs. Back-patch to v12 (all supported versions). Reviewed (in an earlier version) by Robert Haas. Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
2024-06-27  Avoid crashing when a JIT-inlined backend function throws an error.  (Tom Lane)
errfinish() assumes that the __FUNC__ and __FILE__ arguments it's passed are compile-time constant strings that can just be pointed to rather than physically copied. However, it's possible for LLVM to generate code in which those pointers point into a dynamically loaded code segment. If that segment gets unloaded before we're done with the ErrorData struct, we have dangling pointers that will lead to SIGSEGV. In simple cases that won't happen, because we won't unload LLVM code before end of transaction. But it's possible to happen if the error is thrown within end-of-transaction code run by _SPI_commit or _SPI_rollback, because since commit 2e517818f those functions clean up by ending the transaction and starting a new one. Rather than fixing this by adding pstrdup() overhead to every elog/ereport sequence, let's fix it by copying the risky pointers in CopyErrorData(). That solves it for _SPI_commit/_SPI_rollback because they use that function to preserve the error data across the transaction end/restart sequence; and it seems likely that any other code doing something similar would need to do that too. I'm suspicious that this behavior amounts to an LLVM bug (or a bug in our use of it?), because it implies that string constant references that should be pointer-equal according to a naive understanding of C semantics will sometimes not be equal. However, even if it is a bug and someday gets fixed, we'll have to cope with the current behavior for a long time to come. Report and patch by me. Back-patch to all supported branches. Discussion: https://postgr.es/m/1565654.1719425368@sss.pgh.pa.us
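A sketch of the kind of copying described, assuming the filename/funcname fields that ErrorData exposes in elog.h; this is the shape of the idea, not the literal patch:

    #include "postgres.h"
    #include "utils/elog.h"

    /* Sketch: copy the risky pointers when saving an ErrorData. */
    static ErrorData *
    copy_error_data_sketch(const ErrorData *edata)
    {
        ErrorData  *newedata = (ErrorData *) palloc(sizeof(ErrorData));

        memcpy(newedata, edata, sizeof(ErrorData));

        /*
         * __FILE__/__func__ strings may live in a JIT-generated code segment
         * that can be unloaded before the ErrorData is used, so duplicate
         * them into palloc'd memory owned by the current context.
         */
        if (newedata->filename)
            newedata->filename = pstrdup(newedata->filename);
        if (newedata->funcname)
            newedata->funcname = pstrdup(newedata->funcname);

        return newedata;
    }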
2024-06-27  Fix MVCC bug with prepared xact with subxacts on standby  (Heikki Linnakangas)
We did not recover the subtransaction IDs of prepared transactions when starting a hot standby from a shutdown checkpoint. As a result, such subtransactions were considered as aborted, rather than in-progress. That would lead to hint bits being set incorrectly, and the subtransactions suddenly becoming visible to old snapshots when the prepared transaction was committed. To fix, update pg_subtrans with prepared transactions's subxids when starting hot standby from a shutdown checkpoint. The snapshots taken from that state need to be marked as "suboverflowed", so that we also check the pg_subtrans. Backport to all supported versions. Discussion: https://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi
2024-06-27  tests: Trim newline from result returned by BackgroundPsql->query  (Heikki Linnakangas)
This went unnoticed, because only a few existing callers of BackgroundPsql->query used the result, and the ones that did were not bothered by an extra newline. I noticed because I was about to add a new test that checks the result. Backport to all supported versions, since I just backported the BackgroundPsql facility to all supported versions too.
2024-06-27  Fix thinkos in comments  (Alvaro Herrera)
The first one was noticed by Tender Wang and introduced with 8aba9322511f; the other one was newly introduced with dbca3469ebf8.
2024-06-27  Backport BackgroundPsql perl test module  (Heikki Linnakangas)
Backport the new BackgroundPsql modules and the constructor functions, background_psql() and interactive_psql, to all supported branches. That makes it easier to backpatch tests that use it. BackgroundPsql was introduced in version 16. On version 16, this commit backports just the new timeout argument from master (commit 334f512f45). On older branches, the whole facility. This includes the change to `use warnings FATAL => 'all'`, which we haven't otherwise backported, but it seems good to keep the file identical across branches. Discussion: https://www.postgresql.org/message-id/b7c64f20-ea01-4f15-9088-0cd6832af149@iki.fi
2024-06-26  Fix bugs in MultiXact truncation  (Heikki Linnakangas)
1. TruncateMultiXact() performs the SLRU truncations in a critical section. Deleting the SLRU segments calls ForwardSyncRequest(), which will try to compact the request queue if it's full (CompactCheckpointerRequestQueue()). That in turn allocates memory, which is not allowed in a critical section. Backtrace:

TRAP: failed Assert("CritSectionCount == 0 || (context)->allowInCritSection"), File: "../src/backend/utils/mmgr/mcxt.c", Line: 1353, PID: 920981
postgres: autovacuum worker template0(ExceptionalCondition+0x6e)[0x560a501e866e]
postgres: autovacuum worker template0(+0x5dce3d)[0x560a50217e3d]
postgres: autovacuum worker template0(ForwardSyncRequest+0x8e)[0x560a4ffec95e]
postgres: autovacuum worker template0(RegisterSyncRequest+0x2b)[0x560a50091eeb]
postgres: autovacuum worker template0(+0x187b0a)[0x560a4fdc2b0a]
postgres: autovacuum worker template0(SlruDeleteSegment+0x101)[0x560a4fdc2ab1]
postgres: autovacuum worker template0(TruncateMultiXact+0x2fb)[0x560a4fdbde1b]
postgres: autovacuum worker template0(vac_update_datfrozenxid+0x4b3)[0x560a4febd2f3]
postgres: autovacuum worker template0(+0x3adf66)[0x560a4ffe8f66]
postgres: autovacuum worker template0(AutoVacWorkerMain+0x3ed)[0x560a4ffe7c2d]
postgres: autovacuum worker template0(+0x3b1ead)[0x560a4ffecead]
postgres: autovacuum worker template0(+0x3b620e)[0x560a4fff120e]
postgres: autovacuum worker template0(+0x3b3fbb)[0x560a4ffeefbb]
postgres: autovacuum worker template0(+0x2f724e)[0x560a4ff3224e]
/lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f62cc642c8a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f62cc642d45]
postgres: autovacuum worker template0(_start+0x21)[0x560a4fd16f31]

To fix, bail out in CompactCheckpointerRequestQueue() without doing anything, if it's called in a critical section. That covers the above call path, as well as any other similar cases where RegisterSyncRequest might be called in a critical section.

2. After fixing that, another problem became apparent: Autovacuum process doing that truncation can deadlock with the checkpointer process. TruncateMultiXact() sets "MyProc->delayChkptFlags |= DELAY_CHKPT_START". If the sync request queue is full and cannot be compacted, the process will repeatedly sleep and retry, until there is room in the queue. However, if the checkpointer is trying to start a checkpoint at the same time, and is waiting for the DELAY_CHKPT_START processes to finish, the queue will never shrink.
More concretely, the autovacuum process is stuck here:

#0  0x00007fc934926dc3 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x000056220b24348b in WaitEventSetWaitBlock (set=0x56220c2e4b50, occurred_events=0x7ffe7856d040, nevents=1, cur_timeout=<optimized out>) at ../src/backend/storage/ipc/latch.c:1570
#2  WaitEventSetWait (set=0x56220c2e4b50, timeout=timeout@entry=10, occurred_events=<optimized out>, occurred_events@entry=0x7ffe7856d040, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:1516
#3  0x000056220b243224 in WaitLatch (latch=<optimized out>, latch@entry=0x0, wakeEvents=wakeEvents@entry=40, timeout=timeout@entry=10, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:538
#4  0x000056220b26cf46 in RegisterSyncRequest (ftag=ftag@entry=0x7ffe7856d0a0, type=type@entry=SYNC_FORGET_REQUEST, retryOnError=true) at ../src/backend/storage/sync/sync.c:614
#5  0x000056220af9db0a in SlruInternalDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1495
#6  0x000056220af9dab1 in SlruDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1566
#7  0x000056220af98e1b in PerformMembersTruncation (oldestOffset=<optimized out>, newOldestOffset=<optimized out>) at ../src/backend/access/transam/multixact.c:3006
#8  TruncateMultiXact (newOldestMulti=newOldestMulti@entry=3221225472, newOldestMultiDB=newOldestMultiDB@entry=4) at ../src/backend/access/transam/multixact.c:3201
#9  0x000056220b098303 in vac_truncate_clog (frozenXID=749, minMulti=<optimized out>, lastSaneFrozenXid=749, lastSaneMinMulti=3221225472) at ../src/backend/commands/vacuum.c:1917
#10 vac_update_datfrozenxid () at ../src/backend/commands/vacuum.c:1760
#11 0x000056220b1c3f76 in do_autovacuum () at ../src/backend/postmaster/autovacuum.c:2550
#12 0x000056220b1c2c3d in AutoVacWorkerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/autovacuum.c:1569

and the checkpointer is stuck here:

#0  0x00007fc9348ebf93 in clock_nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fc9348fe353 in nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x000056220b40ecb4 in pg_usleep (microsec=microsec@entry=10000) at ../src/port/pgsleep.c:50
#3  0x000056220afb43c3 in CreateCheckPoint (flags=flags@entry=108) at ../src/backend/access/transam/xlog.c:7098
#4  0x000056220b1c6e86 in CheckpointerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/checkpointer.c:464

To fix, add AbsorbSyncRequests() to the loops where the checkpointer waits for DELAY_CHKPT_START or DELAY_CHKPT_COMPLETE operations to finish.

Backpatch to v14. Before that, SLRU deletion didn't call RegisterSyncRequest, which avoided this failure. I'm not sure if there are other similar scenarios on older versions, but we haven't had any such reports.

Discussion: https://www.postgresql.org/message-id/ccc66933-31c1-4f6a-bf4b-45fef0d4f22e@iki.fi
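A sketch of fix 1, assuming the function's existing bool return convention (the second fix, adding AbsorbSyncRequests() to the checkpointer's wait loops, is described in prose above and not shown):

    #include "postgres.h"
    #include "miscadmin.h"      /* CritSectionCount */

    /* Sketch of the early bail-out described above, not the actual function body. */
    static bool
    compact_request_queue_sketch(void)
    {
        /* Compaction palloc's memory, which is not allowed in a critical section. */
        if (CritSectionCount > 0)
            return false;

        /* ... existing compaction logic would run here ... */
        return true;
    }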
2024-06-26  Remove redundant perl version checks  (Andrew Dunstan)
Commit 4c1532763a removed some redundant uses of 'use 5.008001;' in perl scripts, including in plperl's plc_perlboot.pl. Because it made other changes it wasn't backpatched. However, now this is causing a failure on back branches when built with bleeding edge perl. Therefore, backpatch just that part of it which removed those uses, from 15 all the way down to 9.2, which is the earliest version currently built in the buildfarm. per report from Alexander Lakhin Discussion: https://postgr.es/m/4cc2ee93-e03c-8e13-61ed-412e7e6ff19d@gmail.com
2024-06-24  Fix partition pruning setup during DETACH CONCURRENTLY  (Alvaro Herrera)
When detaching a partition in concurrent mode, it's possible for partition descriptors to not match the set that was recently seen when the plan was made, causing an assertion failure or (in production builds) failure to construct a working plan. The case that was reported involves prepared statements, but I think it may be possible to hit this bug without that too.

The problem is that CreatePartitionPruneState is constructing a PartitionPruneState under the assumption that new partitions can be added, but never removed, but it turns out that this isn't true: a prepared statement gets replanned when the DETACH CONCURRENTLY session sends out its invalidation message, but if the invalidation message arrives after ExecInitAppend started, we would build a partition descriptor without the partition, and then CreatePartitionPruneState would refuse to work with it.

CreatePartitionPruneState already contains code to deal with the new descriptor having more partitions than before (and behaving for the extra partitions as if they had been pruned), but doesn't have code to deal with fewer partitions than before, and it is naïve about the case where the number of partitions is the same. We could simply add a new stanza for fewer partitions than before, and in simple testing it works to do that; but it's possible to press the test scripts even further and hit the case where one partition is added and a partition is removed quickly enough that we see the same number of partitions, but they don't actually match, causing hangs during execution.

To cope with both these problems, we now memcmp() the arrays of partition OIDs, and do a more elaborate mapping (relying on the fact that both OID arrays are in partition-bounds order) if they're not identical.

Backpatch to 14, where DETACH CONCURRENTLY appeared.

Reported-by: yajun Hu <1026592243@qq.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/18377-e0324601cfebdfe5@postgresql.org
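A minimal sketch of the memcmp() check described above, assuming the PartitionDesc fields from partitioning/partdesc.h; the helper name is hypothetical:

    #include "postgres.h"
    #include "partitioning/partdesc.h"

    /*
     * Sketch (hypothetical helper): do the partition descriptor seen at plan
     * time and the one seen at executor startup list the same partitions?
     * Both OID arrays are kept in partition-bounds order, so memcmp() suffices;
     * if they differ, the more elaborate mapping would be performed instead.
     */
    static bool
    partdescs_match_sketch(PartitionDesc planned, PartitionDesc current)
    {
        if (planned->nparts != current->nparts)
            return false;
        return memcmp(planned->oids, current->oids,
                      planned->nparts * sizeof(Oid)) == 0;
    }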
2024-06-20  Don't throw an error if a queued AFTER trigger no longer exists.  (Tom Lane)
afterTriggerInvokeEvents and AfterTriggerExecute have always treated it as an error if the trigger OID mentioned in a queued after-trigger event can't be found. However, that fails to account for the edge case where the trigger's been dropped in the current transaction since queueing the event. There seems no very good reason to disallow that case, so instead silently do nothing if the trigger OID can't be found. This does give up a little bit of bug-detection ability, but I don't recall that these error messages have ever actually revealed a bug, so it seems mostly theoretical. Alternatives such as marking pending events DONE at the time of dropping a trigger would be complicated and perhaps introduce bugs of their own. Per bug #18517 from Alexander Lakhin. Back-patch to all supported branches. Discussion: https://postgr.es/m/18517-af2d19882240902c@postgresql.org
2024-06-19  Fix possible Assert failure in cost_memoize_rescan  (David Rowley)
In cost_memoize_rescan(), when calculating the hit_ratio using the calls and ndistinct estimations, if the value that was set in MemoizePath.calls had not been processed through clamp_row_est(), then it was possible that it was set to some non-integer value which could result in ndistinct being 1 higher than calls due to estimate_num_groups() performing clamp_row_est() on its input_rows. This could result in hit_ratio values slightly below 0.0, which would cause an Assert failure.

The value of MemoizePath.calls comes from the final parameter in the create_memoize_path() function, of which we have only one true caller. That caller passes outer_path->rows. All the core code I looked at always seems to call clamp_row_est() on the Path.rows, so there might have been no issues with any core Paths causing troubles here. The bug report was about a CustomPath with a non-clamped row estimate.

The misbehavior as a result of this seems to be mostly limited to the Assert() failing. Aside from that, it seems the Memoize costs would just come out slightly higher than they should have, which is likely fairly harmless.

Reported-by: Kohei KaiGai <kaigai@heterodb.com>
Discussion: https://postgr.es/m/CAOP8fzZnTU+N64UYJYogb1hN-5hFP+PwTb3m_cnGAD7EsQwrKw@mail.gmail.com
Reviewed-by: Richard Guo
Backpatch-through: 14, where Memoize was introduced
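A minimal sketch of the clamping idea, assuming clamp_row_est() from optimizer/optimizer.h; the helper is illustrative, not the actual costsize.c change:

    #include "postgres.h"
    #include "nodes/pathnodes.h"
    #include "optimizer/optimizer.h"

    /*
     * Sketch: ensure the value used for MemoizePath.calls is an integral,
     * >= 1 row estimate, as clamp_row_est() guarantees elsewhere in the
     * planner.  With both calls and ndistinct clamped, the computed
     * hit_ratio can no longer dip below 0.0.
     */
    static double
    memoize_calls_sketch(Path *outer_path)
    {
        return clamp_row_est(outer_path->rows);
    }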
2024-06-17  Fix insertion of SP-GiST REDIRECT tuples during REINDEX CONCURRENTLY.  (Tom Lane)
Reconstruction of an SP-GiST index by REINDEX CONCURRENTLY may insert some REDIRECT tuples. This will typically happen in a transaction that lacks an XID, which leads either to assertion failure in spgFormDeadTuple or to insertion of a REDIRECT tuple with zero xid. The latter's not good either, since eventually VACUUM will apply GlobalVisTestIsRemovableXid() to the zero xid, resulting in either an assertion failure or a garbage answer. In practice, since REINDEX CONCURRENTLY locks out index scans till it's done, it doesn't matter whether it inserts REDIRECTs or PLACEHOLDERs; and likewise it doesn't matter how soon VACUUM reduces such a REDIRECT to a PLACEHOLDER. So in non-assert builds there's no observable problem here, other than perhaps a little index bloat. But it's not behaving as intended. To fix, remove the failing Assert in spgFormDeadTuple, acknowledging that we might sometimes insert a zero XID; and guard VACUUM's GlobalVisTestIsRemovableXid() call with a test for valid XID, ensuring that we'll reduce such a REDIRECT the first time VACUUM sees it. (Versions before v14 use TransactionIdPrecedes here, which won't fail on zero xid, so they really have no bug at all in non-assert builds.) Another solution could be to not create REDIRECTs at all during REINDEX CONCURRENTLY, making the relevant code paths treat that case like index build (which likewise knows that no concurrent index scans can be happening). That would allow restoring the Assert in spgFormDeadTuple, but we'd still need the VACUUM change because redirection tuples with zero xid may be out there already. But there doesn't seem to be a nice way for spginsert() to tell that it's being called in REINDEX CONCURRENTLY without some API changes, so we'll leave that as a possible future improvement. In HEAD, also rename the SpGistState.myXid field to redirectXid, which seems less misleading (since it might not in fact be our transaction's XID) and is certainly less uninformatively generic. Per bug #18499 from Alexander Lakhin. Back-patch to all supported branches. Discussion: https://postgr.es/m/18499-8a519c280f956480@postgresql.org
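A sketch of the VACUUM-side guard described above, assuming the GlobalVisTestIsRemovableXid()/TransactionIdIsValid() interfaces; variable names are illustrative, not the literal spgvacuum.c diff:

    #include "postgres.h"
    #include "access/transam.h"
    #include "utils/snapmgr.h"

    /*
     * Sketch: only consult GlobalVisTestIsRemovableXid() for a valid xid; a
     * zero-xid REDIRECT (possible after REINDEX CONCURRENTLY) is treated as
     * immediately reducible to a PLACEHOLDER.
     */
    static bool
    redirect_is_removable_sketch(GlobalVisState *vistest, TransactionId redirect_xid)
    {
        return !TransactionIdIsValid(redirect_xid) ||
            GlobalVisTestIsRemovableXid(vistest, redirect_xid);
    }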
2024-06-14  Clean out column-level pg_init_privs entries when dropping tables.  (Tom Lane)
DeleteInitPrivs did not get the memo about how, when dropping a whole object (with subid == 0), you should drop entries relating to its sub-objects too. This is visible in the test_pg_dump test case if one drops the extension at the end: the entry for GRANT SELECT(col1) ON regress_pg_dump_table TO public; was still present in pg_init_privs afterwards, although it was pointing to a dangling table OID. Noted while fooling with a fix for REASSIGN OWNED for pg_init_privs entries. This bug is aboriginal in the pg_init_privs feature though, and there seems no reason not to back-patch the fix.
2024-06-13  Fix parsing of ignored operators in websearch_to_tsquery().  (Tom Lane)
The manual says clearly that punctuation in the input of websearch_to_tsquery() is ignored, except for the special cases of dashes and quotes. However, this failed for cases like "(foo bar) or something", or in general an ISOPERATOR character in front of the "or". We'd switch back to WAITOPERAND state, then ignore the operator character while remaining in that state, and then reach the "or" in WAITOPERAND state which (intentionally) makes us treat it as data. The fix is simple enough: if we see an ISOPERATOR character while in WAITOPERATOR state, we have to skip it while staying in that state. (We don't need to worry about other punctuation characters: those will be consumed as though they were words, but then rejected by lexizing.) In v14 and up (since commit eb086056f) we can simplify the code a bit more too, because there is no longer a reason for the WAITOPERAND state to distinguish between quoted and unquoted operands. Per bug #18479 from Manos Emmanouilidis. Back-patch to all supported branches. Discussion: https://postgr.es/m/18479-d9b46e2fc242c33e@postgresql.org
2024-06-13  When replanning a plpgsql "simple expression", check it's still simple.  (Tom Lane)
The previous coding here assumed that we didn't need to recheck any of the querytree tests made in exec_simple_check_plan(). I think we supposed that those properties were fully determined by the syntax of the source text and hence couldn't change. That is true for most of them, but at least hasTargetSRFs and hasAggs can change by dint of forcibly dropping an originally-referenced function and recreating it with new properties. That leads to "unexpected plan node type" or similar failures. These tests are pretty cheap compared to the cost of replanning, so rather than sweat over exactly which properties need to be rechecked, let's just recheck them all. Hence, factor out those tests into a new function exec_is_simple_query(), and rearrange callers as needed. A second problem in the same area was that if we failed during replanning or during exec_save_simple_expr(), we'd potentially leave behind now-dangling pointers to the old simple expression, potentially resulting in crashes later. To fix, clear those pointers before replanning. The v12 code looks quite different in this area but still has the bug about needing to recheck query simplicity. I chose to back-patch all of the plpgsql_simple.sql test script, which formerly didn't exist in this branch. Per bug #18497 from Nikita Kalinin. Back-patch to all supported branches. Discussion: https://postgr.es/m/18497-fe93b6da82ce31d4@postgresql.org
2024-06-13  Clamp result of MultiXactMemberFreezeThreshold  (Heikki Linnakangas)
The purpose of the function is to reduce the effective autovacuum_multixact_freeze_max_age if the multixact members SLRU is approaching wraparound, to make multixid freezing more aggressive. The returned value should therefore never be greater than plain autovacuum_multixact_freeze_max_age. Reviewed-by: Robert Haas Discussion: https://www.postgresql.org/message-id/85fb354c-f89f-4d47-b3a2-3cbd461c90a3@iki.fi Backpatch-through: 12, all supported versions
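A minimal sketch of the clamp described above (illustrative shape of the fix, not the exact function body in multixact.c):

    #include "postgres.h"

    extern int autovacuum_multixact_freeze_max_age;   /* existing GUC */

    /*
     * Sketch: whatever the members-SLRU-based calculation produced, the
     * effective freeze age must never exceed the plain GUC value.
     */
    static int
    member_freeze_threshold_sketch(int members_based_value)
    {
        return Min(members_based_value, autovacuum_multixact_freeze_max_age);
    }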
2024-06-13  Skip some permissions checks on Cygwin  (Andrew Dunstan)
These are checks that are already skipped on other Windows systems. Backpatch to all live branches, as appropriate.
2024-06-11  Fix infer_arbiter_indexes() to not assume resultRelation is 1.  (Tom Lane)
infer_arbiter_indexes failed to renumber varnos in index expressions or predicates that it got from the catalogs. This escaped detection up to now because the stored varnos in such trees will be 1, and an INSERT's result relation is usually the first rangetable entry, so that that was fine. However, in cases such as inserting through an updatable view, it's not fine, leading to failure to match the expressions to the query with ensuing "there is no unique or exclusion constraint matching the ON CONFLICT specification" errors. Fix by copy-and-paste from get_relation_info(). Per bug #18502 from Michael Wang. Back-patch to all supported versions. Discussion: https://postgr.es/m/18502-545b53f5b81e54e0@postgresql.org
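A sketch of the renumbering step described above, using ChangeVarNodes() as get_relation_info() does; the helper and variable names are illustrative:

    #include "postgres.h"
    #include "rewrite/rewriteManip.h"

    /*
     * Sketch: index expressions/predicates read from the catalogs use varno 1;
     * renumber them to the query's actual result relation before trying to
     * match them against the ON CONFLICT specification.
     */
    static Node *
    fix_index_expr_varnos_sketch(Node *index_exprs, int result_relation)
    {
        if (index_exprs != NULL && result_relation != 1)
            ChangeVarNodes(index_exprs, 1, result_relation, 0);
        return index_exprs;
    }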
2024-06-11  Fix creation of partition descriptor during concurrent detach  (Alvaro Herrera)
When a partition is being detached in concurrent mode, it is possible for find_inheritance_children_extended() to return that partition in the list, and immediately after that receive an invalidation message that sets its relpartbound to NULL just before we read it. (This can happen because table_open() reads invalidation messages.) Currently we raise an error

    ERROR: missing relpartbound for relation %u

about the situation, but that's bogus because the table is no longer a partition, so we shouldn't be complaining about it. A better reaction is to retry the find_inheritance_children_extended call to get a new list, which will no longer have the partition being detached.

Noticed while investigating bug #18377. Backpatch to 14, where DETACH CONCURRENTLY appeared.

Discussion: https://postgr.es/m/202405201616.y4ht2qe5ihoy@alvherre.pgsql
2024-06-07  Tighten test_predtest's input checks, and improve error messages.  (Tom Lane)
test_predtest() neglected to consider the possibility that SPI_plan_get_cached_plan would return NULL. This led to a core dump if the input (incorrectly) contains more than one SQL command. While here, let's expend more than zero effort on the error message for this case and nearby ones. Per (half of) bug #18483 from Alexander Kozhemyakin. Back-patch to all supported branches, not because this is very significant (it's merely test scaffolding) but to make our world a bit safer for fuzz testing. Discussion: https://postgr.es/m/18483-30bfff42de238000@postgresql.org
2024-06-07  Reject modifying a temp table of another session with ALTER TABLE.  (Tom Lane)
Normally this case isn't even reachable by non-superusers, since permissions checks prevent naming such a table. However, it is possible to make it happen by altering a parent table whose child is another session's temp table. We definitely can't support any such ALTER that requires modifying the contents of such a table, since we lack access to the other session's temporary-buffer pool. But there seems no good reason to allow it even if it'd only require changing catalog contents. One reason not to allow it is that we'd rather not expose the implementation-dependent behavior of whether a specific ALTER requires touching the table contents. Another is that there may be (in future, even if not today) optimizations that assume that a session's own temp tables won't be modified by other sessions. Hence, add a RELATION_IS_OTHER_TEMP() check to all the places where ALTER TABLE currently does CheckTableNotInUse(). (I looked through all other callers of CheckTableNotInUse(), and they seem OK already.) Per bug #18492 from Alexander Lakhin. Back-patch to all supported branches. Discussion: https://postgr.es/m/18492-c7a2634bf4968763@postgresql.org
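A sketch of the added guard, using the RELATION_IS_OTHER_TEMP() macro from utils/rel.h; the helper name and error wording are illustrative, not the literal tablecmds.c change:

    #include "postgres.h"
    #include "utils/rel.h"

    /*
     * Sketch: refuse ALTER TABLE on another session's temporary table, checked
     * at the same points that already call CheckTableNotInUse().
     */
    static void
    check_not_other_session_temp_sketch(Relation rel)
    {
        if (RELATION_IS_OTHER_TEMP(rel))
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("cannot alter temporary tables of other sessions")));
    }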
2024-06-07  Fix behavior of stable functions called from a CALL's argument list.  (Tom Lane)
If the CALL is within an atomic context (e.g. there's an outer transaction block), _SPI_execute_plan should acquire a fresh snapshot to execute any such functions with. We failed to do that and instead passed them the Portal snapshot, which had been acquired at the start of the current SQL command. This'd lead to seeing stale values of rows modified since the start of the command. This is arguably a bug in 84f5c2908: I failed to see that "are we in non-atomic mode" needs to be defined the same way as it is further down in _SPI_execute_plan, i.e. check !_SPI_current->atomic not just options->allow_nonatomic. Alternatively the blame could be laid on plpgsql, which is unconditionally passing allow_nonatomic = true for CALL/DO even when it knows it's in an atomic context. However, fixing it in spi.c seems like a better idea since that will also fix the problem for any extensions that may have copied plpgsql's coding pattern. While here, update an obsolete comment about _SPI_execute_plan's snapshot management. Per report from Victor Yegorov. Back-patch to all supported versions. Discussion: https://postgr.es/m/CAGnEboiRe+fG2QxuBO2390F7P8e2MQ6UyBjZSL_w1Cej+E4=Vw@mail.gmail.com
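A condensed sketch of the corrected "are we in non-atomic mode" logic described above; the function is illustrative and the snapshot calls (PushActiveSnapshot/GetTransactionSnapshot) show the general pattern rather than the exact spi.c diff:

    #include "postgres.h"
    #include "utils/snapmgr.h"

    /*
     * Sketch: non-atomic execution requires both that the caller allows it
     * *and* that the SPI connection itself is non-atomic.  In the atomic case,
     * run with a fresh snapshot so stable functions in a CALL's argument list
     * see rows modified earlier in the same transaction.
     */
    static void
    run_with_proper_snapshot_sketch(bool caller_allows_nonatomic, bool spi_atomic)
    {
        bool        allow_nonatomic = caller_allows_nonatomic && !spi_atomic;

        if (!allow_nonatomic)
        {
            PushActiveSnapshot(GetTransactionSnapshot());
            /* ... execute the plan ... */
            PopActiveSnapshot();
        }
        else
        {
            /* ... execute the plan, managing snapshots per statement ... */
        }
    }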
2024-06-06  Fix failure with SQL-procedure polymorphic output arguments in v12.  (Tom Lane)
Before the v13-era commit 913bbd88d, check_sql_fn_retval fails to resolve polymorphic output types and then just throws up its hands and assumes the check will be made at runtime. I think that's true for ordinary functions returning RECORD, but it doesn't happen in CALL, potentially resulting in crashes if the actual output of the SQL procedure's SELECT doesn't match the type inferred from polymorphism. With a little bit of rearrangement, we can use get_call_result_type instead of get_func_result_type and thereby infer the correct types. I'm still unwilling to back-patch all of 913bbd88d, so if the types don't match you'll get an error rather than perhaps silently inserting a cast as v13 and later can. That's consistent with prior behavior though, so it seems fine. Prior to 70ffb27b2, you'd typically get other errors due to other shortcomings of CALL's management of polymorphism. Nonetheless, this is an independent bug. Although there is no bug in v13 and up, it seems prudent to add the test case for this to the newer branches too. It's clearly an under-tested area. Per report from Andrew Bille. Discussion: https://postgr.es/m/CAJnzarw9EeWHAQRm76dXd=7j+rgw6ERqC=nCay8jeFqTwKwhqQ@mail.gmail.com
2024-06-04  Fix pl/tcl's handling of errors from Tcl_ListObjGetElements().  (Tom Lane)
In a procedure or function returning tuple, we use that function to parse the Tcl script's result, which is supposed to be a Tcl list. If it isn't, you get an error. Commit 26abb50c4 incautiously supposed that we could use throw_tcl_error() to report such an error. That doesn't actually work, because low-level functions like Tcl_ListObjGetElements() don't fill Tcl's errorInfo variable. The result is either a null-pointer-dereference crash or emission of misleading context information describing the previous Tcl error. Back off to just reporting the interpreter's result string, and improve throw_tcl_error()'s comment to explain when to use it. Also, although the similar code in pltcl_trigger_handler() avoided this mistake, it was using a fairly confusing wording of the error message. Improve that while we're here. Per report from A. Kozhemyakin. Back-patch to all supported branches. Erik Wienhold and Tom Lane Discussion: https://postgr.es/m/6a2a1c40-2b2c-4a33-8b72-243c0766fcda@postgrespro.ru
2024-05-23  Remove race conditions between ECPGdebug() and ecpg_log().  (Tom Lane)
Coverity complains that ECPGdebug is accessing debugstream without holding debug_mutex, which is a fair complaint: we should take debug_mutex while changing the settings ecpg_log looks at. In some branches it also complains about unlocked use of simple_debug. I think it's intentional and safe to have a quick unlocked check of simple_debug at the start of ecpg_log, since that early exit will always be taken in non-debug cases. But we should recheck simple_debug after acquiring the mutex. In the worst case, calling ECPGdebug concurrently with ecpg_log in another thread could result in a null-pointer dereference due to debugstream transiently being NULL while simple_debug isn't 0. This is largely hypothetical, since it's unlikely anybody uses ECPGdebug() at all in the field, and our own regression tests don't seem to be hitting the theoretical race conditions either. Still, if we're going to the trouble of having mutexes here, we ought to be using them in a way that's actually safe not just almost safe. Hence, back-patch to all supported branches.
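A sketch of the locking pattern described above; the names mirror the commit message (simple_debug, debugstream, debug_mutex), but the bodies are illustrative rather than the exact ecpglib code:

    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t debug_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int   simple_debug = 0;
    static FILE *debugstream = NULL;

    void
    ECPGdebug_sketch(int n, FILE *dbgs)
    {
        pthread_mutex_lock(&debug_mutex);   /* change settings under the mutex */
        simple_debug = n;
        debugstream = dbgs;
        pthread_mutex_unlock(&debug_mutex);
    }

    void
    ecpg_log_sketch(const char *msg)
    {
        if (!simple_debug)                  /* cheap unlocked early exit */
            return;

        pthread_mutex_lock(&debug_mutex);
        if (simple_debug && debugstream != NULL)    /* recheck under the mutex */
            fprintf(debugstream, "%s", msg);
        pthread_mutex_unlock(&debug_mutex);
    }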
2024-05-22  Fix handling of extended expression statistics in CREATE TABLE LIKE.  (Tom Lane)
transformTableLikeClause believed that it could process extended statistics immediately because "the representation of CreateStatsStmt doesn't depend on column numbers". That was true when extended stats were first introduced, but it was falsified by the addition of extended stats on expressions: the parsed expression tree is fed forward by the LIKE option, and that will contain Vars. So if the new table doesn't have attnums identical to the old one's (typically because there are some dropped columns in the old one), that doesn't work. The CREATE goes through, but it emits invalid statistics objects that will cause problems later. Fortunately, we already have logic that can adapt expression trees to the possibly-new column numbering. To use it, we have to delay processing of CREATE_TABLE_LIKE_STATISTICS into expandTableLikeClause, just as for other LIKE options that involve expressions. Per bug #18468 from Alexander Lakhin. Back-patch to v14 where extended statistics on expressions were added. Discussion: https://postgr.es/m/18468-f5add190e3fa5902@postgresql.org
2024-05-18  Account for optimized MinMax aggregates during SS_finalize_plan.  (Tom Lane)
We are capable of optimizing MIN() and MAX() aggregates on indexed columns into subqueries that exploit the index, rather than the normal thing of scanning the whole table. When we do this, we replace the Aggref node(s) with Params referencing subquery outputs. Such Params really ought to be included in the per-plan-node extParam/allParam sets computed by SS_finalize_plan. However, we've never done so up to now because of an ancient implementation choice to perform that substitution during set_plan_references, which runs after SS_finalize_plan, so that SS_finalize_plan never sees these Params. The cleanest fix would be to perform a separate tree walk to do these substitutions before SS_finalize_plan runs. That seems unattractive, first because a whole-tree mutation pass is expensive, and second because we lack infrastructure for visiting expression subtrees in a Plan tree, so that we'd need a new function knowing as much as SS_finalize_plan knows about that. I also considered swapping the order of SS_finalize_plan and set_plan_references, but that fell foul of various assumptions that seem tricky to fix. So the approach adopted here is to teach SS_finalize_plan itself to check for such Aggrefs. I refactored things a bit in setrefs.c to avoid having three copies of the code that does that. Back-patch of v17 commits d0d44049d and 779ac2c74. When d0d44049d went in, there was no evidence that it was fixing a reachable bug, so I refrained from back-patching. Now we have such evidence. Per bug #18465 from Hal Takahara. Back-patch to all supported branches. Discussion: https://postgr.es/m/18465-2fae927718976b22@postgresql.org Discussion: https://postgr.es/m/2391880.1689025003@sss.pgh.pa.us
2024-05-17  Refuse upgrades from pre-9.0 clusters  (Daniel Gustafsson)
Commit 695b4a113ab added a dependency on retrieving oldestxid from pg_control, which only exists in 9.0 and onwards, but the check for 8.4 as the oldest version was retained. Since there have been few if any complaints about 8.4 upgrades not working, fix by setting 9.0 as the oldest version supported rather than resurrecting 8.4 support.

Backpatch to all supported versions.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1973418.1657040382@sss.pgh.pa.us
Backpatch-through: v12
2024-05-16  Fix documentation about DROP DATABASE FORCE process termination rights.  (Noah Misch)
Specifically, it terminates a background worker even if the caller couldn't terminate the background worker with pg_terminate_backend(). Commit 3a9b18b3095366cd0c4305441d426d04572d88c1 neglected to update this. Back-patch to v13, which introduced DROP DATABASE FORCE. Reviewed by Amit Kapila. Reported by Kirill Reshke. Discussion: https://postgr.es/m/20240429212756.60.nmisch@google.com
2024-05-14  Fix handling of polymorphic output arguments for procedures.  (Tom Lane)
Most of the infrastructure for procedure arguments was already okay with polymorphic output arguments, but it turns out that CallStmtResultDesc() was a few bricks shy of a load here. It thought all it needed to do was call build_function_result_tupdesc_t, but that function specifically disclaims responsibility for resolving polymorphic arguments. Failing to handle that doesn't seem to be a problem for CALL in plpgsql, but CALL from plain SQL would get errors like "cannot display a value of type anyelement", or even crash outright. In v14 and later we can simply examine the exposed types of the CallStmt.outargs nodes to get the right type OIDs. But it's a lot more complicated to fix in v12/v13, because those versions don't have CallStmt.outargs, nor do they do expand_function_arguments until ExecuteCallStmt runs. We have to duplicatively run expand_function_arguments, and then re-determine which elements of the args list are output arguments. Per bug #18463 from Drew Kimball. Back-patch to all supported versions, since it's busted in all of them. Discussion: https://postgr.es/m/18463-f8cd77e12564d8a2@postgresql.org
2024-05-13  Fix pg_sequence_last_value() for unlogged sequences on standbys.  (Nathan Bossart)
Presently, when this function is called for an unlogged sequence on a standby server, it will error out with a message like

    ERROR: could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(), it can error similarly. To fix, modify the function to return NULL for unlogged sequences on standby servers. Since this bug is present on all versions since v15, this approach is preferable to making the ERROR nicer because we need to repair the pg_sequences view without modifying its definition on released versions.

For consistency, this commit also modifies the function to return NULL for other sessions' temporary sequences. The pg_sequences view already appropriately filters out such sequences, so there's no bug there, but we might as well offer some defense in case someone invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary sequences are much older, so while the fix for unlogged sequences is only back-patched to v15, the temporary sequence portion is back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view definition in v18 if we modify this function to return NULL for sequences for which the current user lacks privileges, but that is left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12
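A sketch of the new early-exit conditions, expressed as a hypothetical helper rather than the literal sequence.c diff:

    #include "postgres.h"
    #include "access/xlog.h"        /* RecoveryInProgress() */
    #include "utils/rel.h"

    /*
     * Sketch (hypothetical helper): should pg_sequence_last_value() return
     * NULL instead of trying to read the sequence relation?
     */
    static bool
    sequence_value_unavailable_sketch(Relation seqrel)
    {
        /* another session's temporary sequence: its buffers aren't ours to read */
        if (RELATION_IS_OTHER_TEMP(seqrel))
            return true;

        /* unlogged sequence on a standby: its relation fork doesn't exist here */
        if (RecoveryInProgress() &&
            seqrel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)
            return true;

        return false;
    }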
2024-05-09  Fix recursive RECORD-returning plpython functions.  (Tom Lane)
If we recursed to a new call of the same function, with a different coldeflist (AS clause), it would fail because the inner call would overwrite the outer call's idea of what to return. This is vaguely like 1d2fe56e4 and c5bec5426, but it's not due to any API decisions: it's just that we computed the actual output rowtype at the start of the call, and saved it in the per-procedure data structure. We can fix it at basically zero cost by doing the computation at the end of each call instead of the start. It's not clear that there's any real-world use-case for such a function, but given that it doesn't cost anything to fix, it'd be silly not to. Per report from Andreas Karlsson. Back-patch to all supported branches. Discussion: https://postgr.es/m/1651a46d-3c15-4028-a8c1-d74937b54e19@proxel.se
2024-05-09  Fix overread in JSON parsing errors for incomplete byte sequences  (Michael Paquier)
json_lex_string() relies on pg_encoding_mblen_bounded() to point to the end of a JSON string when generating an error message, and the input it uses is not guaranteed to be null-terminated. It was possible to walk off the end of the input buffer by a few bytes when the last bytes consist of an incomplete multi-byte sequence, as token_terminator would point to a location defined by pg_encoding_mblen_bounded() rather than the end of the input. This commit switches token_terminator so that the error uses data up to the end of the JSON input.

More work should be done so that this code could rely on an equivalent of report_invalid_encoding(), so that incorrect byte sequences can show in error messages in a readable form. This requires work for at least two cases in the JSON parsing API: an incomplete token and an invalid escape sequence. A more complete solution may be too invasive for a backpatch, so this is left as a future improvement, taking care of the overread first.

A test is added on HEAD as test_json_parser makes this issue straight-forward to check.

Note that pg_encoding_mblen_bounded() no longer has any callers. This will be removed on HEAD with a separate commit, as this is proving to encourage unsafe coding.

Author: Jacob Champion
Discussion: https://postgr.es/m/CAOYmi+ncM7pwLS3AnKCSmoqqtpjvA8wmCdoBtKA3ZrB2hZG6zA@mail.gmail.com
Backpatch-through: 13
2024-05-07  Ensure that "pg_restore -l" reports dependent TOC entries correctly.  (Tom Lane)
If -l was specified together with selective-restore options such as -n or -N, dependent TOC entries such as comments would be omitted from the listing, even when an actual restore would have selected them. This happened because PrintTOCSummary neglected to update the te->reqs marking of the entry they depended on. Per report from Justin Pryzby. This has been wrong since 0d4e6ed30 taught _tocEntryRequired to sometimes look at the "reqs" marking of other TOC entries, so back-patch to all supported branches. Discussion: https://postgr.es/m/ZjoeirG7yxODdC4P@pryzbyj2023
2024-05-07  Don't corrupt plpython's "TD" dictionary in a recursive trigger call.  (Tom Lane)
If a plpython-language trigger caused another one to be invoked, the "TD" dictionary created for the inner one would overwrite the outer one's "TD" dictionary. This is more or less the same problem that 1d2fe56e4 fixed for ordinary functions in plpython, so fix it the same way, by saving and restoring "TD" during a recursive invocation. This fix makes an ABI-incompatible change in struct PLySavedArgs. I'm not too worried about that because it seems highly unlikely that any extension is messing with those structs. We could imagine doing something weird to preserve nominal ABI compatibility in the back branches, like keeping the saved TD object in an extra element of namedargs[]. However, that would only be very nominal compatibility: if anything *is* touching PLySavedArgs, it would likely do the wrong thing due to not knowing about the additional value. So I judge it not worth the ugliness to do something different there. (I also changed struct PLyProcedure, but its added field fits into formerly-padding space, so that should be safe.) Per bug #18456 from Jacques Combrink. This bug is very ancient, so back-patch to all supported branches. Discussion: https://postgr.es/m/3008982.1714853799@sss.pgh.pa.us
2024-05-06  Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.  (Nathan Bossart)
The catalog view pg_stats_ext fails to consider privileges for expression statistics. The catalog view pg_stats_ext_exprs fails to consider privileges and row-level security policies. To fix, restrict the data in these views to table owners or roles that inherit privileges of the table owner. It may be possible to apply less restrictive privilege checks in some cases, but that is left as a future exercise. Furthermore, for pg_stats_ext_exprs, do not return data for tables with row-level security enabled, as is already done for pg_stats_ext. On the back-branches, a fix-CVE-2024-4317.sql script is provided that will install into the "share" directory. This file can be used to apply the fix to existing clusters. Bumps catversion on 'master' branch only. Reported-by: Lukas Fittl Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane Security: CVE-2024-4317 Backpatch-through: 14
2024-05-06  Translation updates  (Peter Eisentraut)
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: c5f76beb79ef3e1424902905d99033b6c1e659b5
2024-05-01  Ensure we allocate NAMEDATALEN bytes for names in Index Only Scans  (David Rowley)
As an optimization, we store "name" columns as cstrings in btree indexes. Here we modify it so that Index Only Scans convert these cstrings back to names with NAMEDATALEN bytes rather than storing the cstring in the tuple slot, as was happening previously. Bug: #17855 Reported-by: Alexander Lakhin Reviewed-by: Alexander Lakhin, Tom Lane Discussion: https://postgr.es/m/17855-5f523e0f9769a566@postgresql.org Backpatch-through: 12, all supported versions
2024-04-30  Disallow converting a table to a view within an outer SQL command.  (Tom Lane)
We have long disallowed all forms of ALTER TABLE if the table is already opened by some outer SQL command in the same session. This has the same purpose as obtaining AccessExclusiveLock, but since a session's own locks don't conflict the lock only blocks use of the table by other sessions, not our own. Without this check, the ALTER might confuse the outer SQL command since any previous inspection of the table would potentially become invalid. However, the RelisBecomingView code path in DefineQueryRewrite never got that memo, and assumed that AccessExclusiveLock is sufficient for performing something morally equivalent to a rather invasive ALTER TABLE. Unsurprisingly, this can confuse an outer command that is trying to do something with the table. This was submitted as a security issue, but the security team has been unable to identify any consequence worse than a null pointer dereference (from trying to access rd_tableam methods that the relation no longer has). Therefore, in accordance with our usual policy, it's not security material and should just be fixed as a routine bug. Fix by disallowing the operation if the table is open locally, exactly as ALTER TABLE does it. Per an anonymous security researcher, via Bundesamt für Sicherheit in der Informationstechnik. Patch v12-v15 only. In v16 and later, we removed this code altogether (cf. commit b23cd185f), so that there's no issue.
2024-04-29  Close race condition between datfrozen and relfrozen updates.  (Noah Misch)
vac_update_datfrozenxid() did multiple loads of relfrozenxid and relminmxid from buffer memory, and it assumed each would get the same value. Not so if a concurrent vac_update_relstats() did an inplace update. Commit 2d2e40e3befd8b9e0d2757554537345b15fa6ea2 fixed the same kind of bug in vac_truncate_clog(). Today's bug could cause the rel-level field and XIDs in the rel's rows to precede the db-level field. A cluster having such values should VACUUM affected tables. Back-patch to v12 (all supported versions). Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com
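A sketch of the load-once pattern described above (the helper is illustrative, not the vacuum.c diff):

    #include "postgres.h"
    #include "catalog/pg_class.h"

    /*
     * Sketch: copy each inplace-updatable field out of buffer memory exactly
     * once, then use only the local copies, so a concurrent inplace update
     * can't make two reads of the same field disagree.
     */
    static void
    capture_frozen_ids_sketch(Form_pg_class classForm,
                              TransactionId *relfrozenxid,
                              MultiXactId *relminmxid)
    {
        *relfrozenxid = classForm->relfrozenxid;
        *relminmxid = classForm->relminmxid;
        /* callers must not re-read classForm for these fields afterwards */
    }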
2024-04-28  Throw a more on-point error for functions depending on columns.  (Tom Lane)
ALTER COLUMN TYPE wasn't expecting to find any pg_proc objects depending on the column whose type is to be altered. That indeed wasn't possible when this code was written, but it is possible since we introduced new-style SQL function bodies. It's about as difficult to fix this case as it is to fix dependent views, and we've been punting on those for years, so I don't feel too awful about punting for functions too. (I sure wouldn't risk back-patching such code.) So just throw a more user-facing error. Also, adjust some of the existing comments to reflect that these are all pretty much the same issue. (This patch also fixes it so we will tolerate finding such a dependency during ALTER COLUMN SET EXPRESSION; in that, we need not do anything to the function, so no error is wanted. That problem is new in HEAD.) Per bug #18449 from Alexander Lakhin. Back-patch to v14 where we added new-style SQL functions. Discussion: https://postgr.es/m/18449-f8248467aaa294d5@postgresql.org