path: root/src
2021-06-01  Add fallback implementation for setenv()  (Michael Paquier)
This fixes the code compilation on Windows with MSVC and Kerberos, as a missing implementation of setenv() causes a compilation failure of the GSSAPI code. This was only reproducible when building the code with Kerberos, something that buildfarm animal hamerkop has fixed recently. This issue only happens on 12 and 13, as this code has been introduced in b0b39f7. HEAD is already able to compile properly thanks to 7ca37fb0, and this commit is a minimal cherry-pick of it. Thanks to Tom Lane for the discussion. Discussion: https://postgr.es/m/YLDtm5WGjPxm6ua4@paquier.xyz Backpatch-through: 12
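For context, a setenv() fallback of this kind is conventionally a thin wrapper over putenv(); the sketch below illustrates the idea only (my_setenv is a hypothetical name, not the function added by b0b39f7/7ca37fb0).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Minimal setenv() substitute built on putenv(), for toolchains such
     * as MSVC that lack setenv().  The "NAME=VALUE" string intentionally
     * stays allocated, since putenv() keeps a reference to it.
     */
    static int
    my_setenv(const char *name, const char *value, int overwrite)
    {
        char   *envstr;

        if (!overwrite && getenv(name) != NULL)
            return 0;           /* keep the existing value */

        envstr = malloc(strlen(name) + strlen(value) + 2);
        if (envstr == NULL)
            return -1;
        sprintf(envstr, "%s=%s", name, value);
        return putenv(envstr);
    }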
2021-05-31  Fix mis-planning of repeated application of a projection.  (Tom Lane)
create_projection_plan contains a hidden assumption (here made explicit by an Assert) that a projection-capable Path will yield a projection-capable Plan. Unfortunately, that assumption is violated only a few lines away, by create_projection_plan itself. This means that two stacked ProjectionPaths can yield an outcome where we try to jam the upper path's tlist into a non-projection-capable child node, resulting in an invalid plan. There isn't any good reason to have stacked ProjectionPaths; indeed the whole concept is faulty, since the set of Vars/Aggs/etc needed by the upper one wouldn't necessarily be available in the output of the lower one, nor could the lower one create such values if they weren't available from its input. Hence, we can fix this by adjusting create_projection_path to strip any top-level ProjectionPath from the subpath it's given. (This amounts to saying "oh, we changed our minds about what we need to project here".) The test case added here only fails in v13 and HEAD; before that, we don't attempt to shove the Sort into the parallel part of the plan, for reasons that aren't entirely clear to me. However, all the directly-related code looks generally the same as far back as v11, where the hazard was introduced (by d7c19e62a). So I've got no faith that the same type of bug doesn't exist in v11 and v12, given the right test case. Hence, back-patch the code changes, but not the irrelevant test case, into those branches. Per report from Bas Poot. Discussion: https://postgr.es/m/534fca83789c4a378c7de379e9067d4f@politie.nl
2021-05-31  Raise a timeout to 180s, in test 010_logical_decoding_timelines.pl.  (Noah Misch)
Per buildfarm member hornet. Also, update Pod documentation showing the lower value. Back-patch to v10, where the test first appeared.
2021-05-29  Fix race condition when sharing tuple descriptors.  (Thomas Munro)
Parallel query processes that called BlessTupleDesc() for identical tuple descriptors at the same moment could crash. There was code to handle that rare case, but it dereferenced a bogus DSA pointer. Repair. Back-patch to 11, where commit cc5f8136 added support for sharing tuple descriptors in parallel queries. Reported-by: Eric Thinnes <e.thinnes@gmx.de> Discussion: https://postgr.es/m/99aaa2eb-e194-bf07-c29a-1a76b4f2bcf9%40gmx.de
2021-05-28  fix syntax error  (Andrew Dunstan)
2021-05-28  Report configured port in MSVC built pg_config  (Andrew Dunstan)
This is a long standing omission, discovered when trying to write code that relied on it. Backpatch to all live branches.
2021-05-27  Fix MSVC scripts when building with GSSAPI/Kerberos  (Michael Paquier)
The deliverables of upstream Kerberos on Windows are installed with paths that do not match our MSVC scripts. First, the include folder was named "inc/" in our scripts, but the upstream MSIs use "include/". Second, the build would fail with 64-bit environments as the libraries are named differently. This commit adjusts the MSVC scripts to be compatible with the latest installations of upstream, and I have checked that the compilation was able to work with the 32-bit and 64-bit installations. Special thanks to Kondo Yuta for the help in investigating the situation in hamerkop, which had an incorrect configuration for the GSS compilation. Reported-by: Brian Ye Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org Backpatch-through: 9.6
2021-05-25  Improve docs and error messages for parallel vacuum.  (Amit Kapila)
The error messages, docs, and one of the options were using 'parallel degree' to indicate parallelism used by vacuum command. We normally use 'parallel workers' at other places so change it for parallel vacuum accordingly. Author: Bharath Rupireddy Reviewed-by: Dilip Kumar, Amit Kapila Backpatch-through: 13 Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
2021-05-25  Disallow SSL renegotiation  (Michael Paquier)
SSL renegotiation is already disabled as of 48d23c72; however, this does not prevent the server from complying with a client willing to use renegotiation. In the last couple of years, renegotiation had its set of security issues and flaws (like the recent CVE-2021-3449), and it could be possible to crash the backend with a client attempting renegotiation. This commit takes one extra step by disabling renegotiation in the backend in the same way as SSL compression (f9264d15) or tickets (97d3a0b0). OpenSSL 1.1.0h has added an option named SSL_OP_NO_RENEGOTIATION able to achieve that. In older versions there is an option called SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS that was undocumented, and could be set within the SSL object created when the TLS connection opens, but I have decided not to use it, as it feels trickier to rely on, and it is not official. Note that this option is not usable in OpenSSL < 1.1.0h as the internal contents of the *SSL object are hidden from applications. SSL renegotiation concerns protocols up to TLSv1.2. Per original report from Robert Haas, with a patch based on a suggestion by Andres Freund. Author: Michael Paquier Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/YKZBXx7RhU74FlTE@paquier.xyz Backpatch-through: 9.6
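As a rough sketch of the OpenSSL option named above, assuming an SSL_CTX is at hand (this is not the actual be-secure-openssl.c change):

    #include <openssl/ssl.h>

    /*
     * Sketch only: refuse client renegotiation requests where OpenSSL
     * (1.1.0h or newer) exposes the option.  The hook point and error
     * handling in the real change are omitted.
     */
    static void
    disable_ssl_renegotiation(SSL_CTX *context)
    {
    #ifdef SSL_OP_NO_RENEGOTIATION
        SSL_CTX_set_options(context, SSL_OP_NO_RENEGOTIATION);
    #else
        (void) context;         /* nothing to do on older OpenSSL */
    #endif
    }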
2021-05-22  Fix access to no-longer-open relcache entry in logical-rep worker.  (Tom Lane)
If we redirected a replicated tuple operation into a partition child table, and then tried to fire AFTER triggers for that event, the relation cache entry for the child table was already closed. This has no visible ill effects as long as the entry is still there and still valid, but an unluckily-timed cache flush could result in a crash or other misbehavior. To fix, postpone the ExecCleanupTupleRouting call (which is what closes the child table) until after we've fired triggers. This requires a bit of refactoring so that the cleanup function can have access to the necessary state. In HEAD, I took the opportunity to simplify some of worker.c's function APIs based on use of the new ApplyExecutionData struct. However, it doesn't seem safe/practical to back-patch that aspect, at least not without a lot of analysis of possible interactions with a04daa97a. In passing, add an Assert to afterTriggerInvokeEvents to catch such cases. This seems worthwhile because we've grown a number of fairly unstructured ways of calling AfterTriggerEndQuery. Back-patch to v13, where worker.c grew the ability to deal with partitioned target tables. Discussion: https://postgr.es/m/3382681.1621381328@sss.pgh.pa.us
2021-05-21  Disallow whole-row variables in GENERATED expressions.  (Tom Lane)
This was previously allowed, but I think that was just an oversight. It's a clear violation of the rule that a generated column cannot depend on itself or other generated columns. Moreover, because the code was relying on the assumption that no such cross-references exist, it was pretty easy to crash ALTER TABLE and perhaps other places. Even if you managed not to crash, you got quite unstable, implementation-dependent results. Per report from Vitaly Ustinov. Back-patch to v12 where GENERATED came in. Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
2021-05-21  Fix usage of "tableoid" in GENERATED expressions.  (Tom Lane)
We consider this supported (though I've got my doubts that it's a good idea, because tableoid is not immutable). However, several code paths failed to fill the field in soon enough, causing such a GENERATED expression to see zero or the wrong value. This occurred when ALTER TABLE adds a new GENERATED column to a table with existing rows, and during regular INSERT or UPDATE on a foreign table with GENERATED columns. Noted during investigation of a report from Vitaly Ustinov. Back-patch to v12 where GENERATED came in. Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
2021-05-21  Restore the portal-level snapshot after procedure COMMIT/ROLLBACK.  (Tom Lane)
COMMIT/ROLLBACK necessarily destroys all snapshots within the session. The original implementation of intra-procedure transactions just cavalierly did that, ignoring the fact that this left us executing in a rather different environment than normal. In particular, it turns out that handling of toasted datums depends rather critically on there being an outer ActiveSnapshot: otherwise, when SPI or the core executor pop whatever snapshot they used and return, it's unsafe to dereference any toasted datums that may appear in the query result. It's possible to demonstrate "no known snapshots" and "missing chunk number N for toast value" errors as a result of this oversight. Historically this outer snapshot has been held by the Portal code, and that seems like a good plan to preserve. So add infrastructure to pquery.c to allow re-establishing the Portal-owned snapshot if it's not there anymore, and add enough bookkeeping support that we can tell whether it is or not. We can't, however, just re-establish the Portal snapshot as part of COMMIT/ROLLBACK. As in normal transaction start, acquiring the first snapshot should wait until after SET and LOCK commands. Hence, teach spi.c about doing this at the right time. (Note that this patch doesn't fix the problem for any PLs that try to run intra-procedure transactions without using SPI to execute SQL commands.) This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD, rename that to allow_nonatomic. replication/logical/worker.c also needs some fixes, because it wasn't careful to hold a snapshot open around AFTER trigger execution. That code doesn't use a Portal, which I suspect someday we're gonna have to fix. But for now, just rearrange the order of operations. This includes back-patching the recent addition of finish_estate() to centralize the cleanup logic there. This also back-patches commit 2ecfeda3e into v13, to improve the test coverage for worker.c (it was that test that exposed that worker.c's snapshot management is wrong). Per bug #15990 from Andreas Wicht. Back-patch to v11 where intra-procedure COMMIT was added. Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
2021-05-21  Fix deadlock for multiple replicating truncates of the same table.  (Amit Kapila)
While applying the truncate change, the logical apply worker acquires RowExclusiveLock on the relation being truncated. This allowed two apply workers to truncate the same relation at the same time, which led to a deadlock. The reason was that one of the workers, after updating the pg_class tuple, tried to acquire a SHARE lock on the relation and started to wait for the second worker, which had acquired RowExclusiveLock on the relation. When the second worker then tried to update the pg_class tuple, it started to wait for the first worker, which led to the deadlock. Fix it by acquiring AccessExclusiveLock on the relation before applying the truncate change, as we do for a normal truncate operation. Author: Peter Smith, test case by Haiying Tang Reviewed-by: Dilip Kumar, Amit Kapila Backpatch-through: 11 Discussion: https://postgr.es/m/CAHut+PsNm43p0jM+idTvWwiGZPcP0hGrHMPK9TOAkc+a4UpUqw@mail.gmail.com
2021-05-20  Avoid detoasting failure after COMMIT inside a plpgsql FOR loop.  (Tom Lane)
exec_for_query() normally tries to prefetch a few rows at a time from the query being iterated over, so as to reduce executor entry/exit overhead. Unfortunately this is unsafe if we have COMMIT or ROLLBACK within the loop, because there might be TOAST references in the data that we prefetched but haven't yet examined. Immediately after the COMMIT/ROLLBACK, we have no snapshots in the session, meaning that VACUUM is at liberty to remove recently-deleted TOAST rows. This was originally reported as a case triggering the "no known snapshots" error in init_toast_snapshot(), but even if you miss hitting that, you can get "missing toast chunk", as illustrated by the added isolation test case. To fix, just disable prefetching in non-atomic contexts. Maybe there will be performance complaints prompting us to work harder later, but it's not clear at the moment that this really costs much, and I doubt we'd want to back-patch any complicated fix. In passing, adjust that error message in init_toast_snapshot() to be a little clearer about the likely cause of the problem. Patch by me, based on earlier investigation by Konstantin Knizhnik. Per bug #15990 from Andreas Wicht. Back-patch to v11 where intra-procedure COMMIT was added. Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
2021-05-20  Clean up cpluspluscheck violation.  (Tom Lane)
"typename" is a C++ keyword, so pg_upgrade.h fails to compile in C++. Fortunately, there seems no likely reason for somebody to need to do that. Nonetheless, it's project policy that all .h files should pass cpluspluscheck, so rename the argument to fix that. Oversight in 57c081de0; back-patch as that was. (The policy requiring pg_upgrade.h to pass cpluspluscheck only goes back to v12, but it seems best to keep this code looking the same in all branches.)
2021-05-18  Fix typo and outdated information in README.barrier  (David Rowley)
README.barrier didn't seem to get the memo when atomics were added. Fix that. Author: Tatsuo Ishii, David Rowley Discussion: https://postgr.es/m/20210516.211133.2159010194908437625.t-ishii%40sraoss.co.jp Backpatch-through: 9.6, oldest supported release
2021-05-15  Be more careful about barriers when releasing BackgroundWorkerSlots.  (Tom Lane)
ForgetBackgroundWorker lacked any memory barrier at all, while BackgroundWorkerStateChange had one but unaccountably did additional manipulation of the slot after the barrier. AFAICS, the rule must be that the barrier is immediately before setting or clearing slot->in_use. It looks like back in 9.6 when ForgetBackgroundWorker was first written, there might have been some case for not needing a barrier there, but I'm not very convinced of that --- the fact that the load of bgw_notify_pid is in the caller doesn't seem to guarantee no memory ordering problem. So patch 9.6 too. It's likely that this doesn't fix any observable bug on Intel hardware, but machines with weaker memory ordering rules could have problems here. Discussion: https://postgr.es/m/4046084.1620244003@sss.pgh.pa.us
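A sketch of the barrier rule stated above, modeled with C11 atomics rather than PostgreSQL's own primitives; the slot layout and function name are hypothetical:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical stand-in for a background-worker slot. */
    typedef struct WorkerSlot
    {
        atomic_int  notify_pid;
        atomic_bool in_use;
    } WorkerSlot;

    /*
     * Finish every store other backends must observe, then the barrier,
     * then the store that publishes the slot as free.  pg_memory_barrier()
     * is modeled here by a C11 release fence.
     */
    static void
    forget_worker(WorkerSlot *slot)
    {
        atomic_store_explicit(&slot->notify_pid, 0, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);
        atomic_store_explicit(&slot->in_use, false, memory_order_relaxed);
    }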
2021-05-14  Harden nbtree deduplication posting split code.  (Peter Geoghegan)
Add a defensive "can't happen" error to code that handles nbtree posting list splits (promote an existing assertion). This avoids a segfault in the event of an insertion of a newitem that is somehow identical to an existing non-pivot tuple in the index. An nbtree index should never have two index tuples with identical TIDs. This scenario is not particularly unlikely in the event of any kind of corruption that leaves the index in an inconsistent state relative to the heap relation that is indexed. There are two known reports of preventable hard crashes. Doing nothing seems unacceptable given the general expectation that nbtree will cope reasonably well with corrupt data. Discussion: https://postgr.es/m/CAH2-Wz=Jr_d-dOYEEmwz0-ifojVNWho01eAqewfQXgKfoe114w@mail.gmail.com Backpatch: 13-, where nbtree deduplication was introduced.
2021-05-14  Prevent infinite insertion loops in spgdoinsert().  (Tom Lane)
Formerly we just relied on operator classes that assert longValuesOK to eventually shorten the leaf value enough to fit on an index page. That fails since the introduction of INCLUDE-column support (commit 09c1c6ab4), because the INCLUDE columns might alone take up more than a page, meaning no amount of leaf-datum compaction will get the job done. At least with spgtextproc.c, that leads to an infinite loop, since spgtextproc.c won't throw an error for not being able to shorten the leaf datum anymore. To fix without breaking cases that would otherwise work, add logic to spgdoinsert() to verify that the leaf tuple size is decreasing after each "choose" step. Some opclasses might not decrease the size on every single cycle, and in any case, alignment roundoff of the tuple size could obscure small gains. Therefore, allow up to 10 cycles without additional savings before throwing an error. (Perhaps this number will need adjustment, but it seems quite generous right now.) As long as we've developed this logic, let's back-patch it. The back branches don't have INCLUDE columns to worry about, but this seems like a good defense against possible bugs in operator classes. We already know that an infinite loop here is pretty unpleasant, so having a defense seems to outweigh the risk of breaking things. (Note that spgtextproc.c is actually the only known opclass with longValuesOK support, so that this is all moot for known non-core opclasses anyway.) Per report from Dilip Kumar. Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
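A sketch of the bookkeeping described above, with hypothetical names (the real logic is folded into spgdoinsert()'s loop):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Keep the smallest leaf-tuple size seen so far, and give up after too
     * many consecutive "choose" cycles without any shrinkage.  The limit
     * of 10 is the one mentioned above.
     */
    #define MAX_CYCLES_WITHOUT_SHRINK   10

    static void
    check_leaf_progress(size_t leaf_size, size_t *best_size, int *stalled)
    {
        if (leaf_size < *best_size)
        {
            *best_size = leaf_size;
            *stalled = 0;
        }
        else if (++(*stalled) > MAX_CYCLES_WITHOUT_SHRINK)
        {
            fprintf(stderr, "leaf tuple is not getting smaller; aborting insert\n");
            exit(EXIT_FAILURE); /* the real code raises ERROR instead */
        }
    }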
2021-05-14  Fix query-cancel handling in spgdoinsert().  (Tom Lane)
Knowing that a buggy opclass could cause an infinite insertion loop, spgdoinsert() intended to allow its loop to be interrupted by query cancel. However, that never actually worked, because in iterations after the first, we'd be holding buffer lock(s) which would cause InterruptHoldoffCount to be positive, preventing servicing of the interrupt. To fix, check if an interrupt is pending, and if so fall out of the insertion loop and service the interrupt after we've released the buffers. If it was indeed a query cancel, that's the end of the matter. If it was a non-canceling interrupt reason, make use of the existing provision to retry the whole insertion. (This isn't as wasteful as it might seem, since any upper-level index tuples we already created should be usable in the next attempt.) While there's no known instance of such a bug in existing release branches, it still seems like a good idea to back-patch this to all supported branches, since the behavior is fairly nasty if a loop does happen --- not only is it uncancelable, but it will quickly consume memory to the point of an OOM failure. In any case, this code is certainly not working as intended. Per report from Dilip Kumar. Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
2021-05-14  Refactor CHECK_FOR_INTERRUPTS() to add flexibility.  (Tom Lane)
Split up CHECK_FOR_INTERRUPTS() to provide an additional macro INTERRUPTS_PENDING_CONDITION(), which just tests whether an interrupt is pending without attempting to service it. This is useful in situations where the caller knows that interrupts are blocked, and would like to find out if it's worth the trouble to unblock them. Also add INTERRUPTS_CAN_BE_PROCESSED(), which indicates whether CHECK_FOR_INTERRUPTS() can be relied on to clear the pending interrupt. This commit doesn't actually add any uses of the new macros, but a follow-on bug fix will do so. Back-patch to all supported branches to provide infrastructure for that fix. Alvaro Herrera and Tom Lane Discussion: https://postgr.es/m/20210513155351.GA7848@alvherre.pgsql
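A simplified model of the split, not the exact miscadmin.h definitions:

    #include <stdbool.h>

    static volatile bool InterruptPending = false;

    static void
    ProcessInterrupts(void)
    {
        /* service the interrupt (query cancel, die, etc.) */
        InterruptPending = false;
    }

    /* Just test whether something is pending; never services it. */
    #define INTERRUPTS_PENDING_CONDITION()  (InterruptPending)

    /* Test and, if pending, service it. */
    #define CHECK_FOR_INTERRUPTS() \
        do { \
            if (INTERRUPTS_PENDING_CONDITION()) \
                ProcessInterrupts(); \
        } while (0)

In this model, INTERRUPTS_CAN_BE_PROCESSED() would additionally report whether CHECK_FOR_INTERRUPTS() could actually service the pending interrupt right now (for example, that no interrupt holdoff is in effect).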
2021-05-12  Rename the logical replication global "wrconn"  (Alvaro Herrera)
The worker.c global wrconn is only meant to be used by logical apply/tablesync workers, but there are other variables with the same name. To reduce future confusion, rename the global from "wrconn" to "LogRepWorkerWalRcvConn". While this is just cosmetic, it seems better to backpatch it all the way back to 10, where this code appeared, to avoid future backpatching issues. Author: Peter Smith <smithpb2250@gmail.com> Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
2021-05-11  Reduce runtime of privileges.sql test under CLOBBER_CACHE_ALWAYS.  (Tom Lane)
Several queries in the privileges regression test cause the planner to apply the plpgsql function "leak()" to every element of the histogram for atest12.b. Since commit 0c882e52a increased the size of that histogram to 10000 entries, the test invokes that function over 100000 times, which takes an absolutely unreasonable amount of time in clobber-cache-always mode. However, there's no real reason why that has to be a plpgsql function: for the purposes of this test, all that matters is that it not be marked leakproof. So we can replace the plpgsql implementation with a direct call of int4lt, which has the same behavior and is orders of magnitude faster. This is expected to cut several hours off the buildfarm cycle time for CCA animals. It has some positive impact in normal builds too, though that's probably lost in the noise. Back-patch to v13 where 0c882e52a came in. Discussion: https://postgr.es/m/575884.1620626638@sss.pgh.pa.us
2021-05-10  Fix mishandling of resjunk columns in ON CONFLICT ... UPDATE tlists.  (Tom Lane)
It's unusual to have any resjunk columns in an ON CONFLICT ... UPDATE list, but it can happen when MULTIEXPR_SUBLINK SubPlans are present. If it happens, the ON CONFLICT UPDATE code path would end up storing tuples that include the values of the extra resjunk columns. That's fairly harmless in the short run, but if new columns are added to the table then the values would become accessible, possibly leading to malfunctions if they don't match the datatypes of the new columns. This had escaped notice through a confluence of missing sanity checks, including:
* There's no cross-check that a tuple presented to heap_insert or heap_update matches the table rowtype. While it's difficult to check that fully at reasonable cost, we can easily add assertions that there aren't too many columns.
* The output-column-assignment cases in execExprInterp.c lacked any sanity checks on the output column numbers, which seems like an oversight considering there are plenty of assertion checks on input column numbers. Add assertions there too.
* We failed to apply nodeModifyTable's ExecCheckPlanOutput() to the ON CONFLICT UPDATE tlist. That wouldn't have caught this specific error, since that function is chartered to ignore resjunk columns; but it sure seems like a bad omission now that we've seen this bug.
In HEAD, the right way to fix this is to make the processing of ON CONFLICT UPDATE tlists work the same as regular UPDATE tlists now do, that is don't add "SET x = x" entries, and use ExecBuildUpdateProjection to evaluate the tlist and combine it with old values of the not-set columns. This adds a little complication to ExecBuildUpdateProjection, but allows removal of a comparable amount of now-dead code from the planner. In the back branches, the most expedient solution seems to be to (a) use an output slot for the ON CONFLICT UPDATE projection that actually matches the target table, and then (b) invent a variant of ExecBuildProjectionInfo that can be told to not store values resulting from resjunk columns, so it doesn't try to store into nonexistent columns of the output slot. (We can't simply ignore the resjunk columns altogether; they have to be evaluated for MULTIEXPR_SUBLINK to work.) This works back to v10. In 9.6, projections work much differently and we can't cheaply give them such an option. The 9.6 version of this patch works by inserting a JunkFilter when it's necessary to get rid of resjunk columns. In addition, v11 and up have the reverse problem when trying to perform ON CONFLICT UPDATE on a partitioned table. Through a further oversight, adjust_partition_tlist() discarded resjunk columns when re-ordering the ON CONFLICT UPDATE tlist to match a partition. This accidentally prevented the storing-bogus-tuples problem, but at the cost that MULTIEXPR_SUBLINK cases didn't work, typically crashing if more than one row has to be updated. Fix by preserving resjunk columns in that routine. (I failed to resist the temptation to add more assertions there too, and to do some minor code beautification.) Per report from Andres Freund. Back-patch to all supported branches. Security: CVE-2021-32028
2021-05-10  Prevent integer overflows in array subscripting calculations.  (Tom Lane)
While we were (mostly) careful about ensuring that the dimensions of arrays aren't large enough to cause integer overflow, the lower bound values were generally not checked. This allows situations where lower_bound + dimension overflows an integer. It seems that that's harmless so far as array reading is concerned, except that array elements with subscripts notionally exceeding INT_MAX are inaccessible. However, it confuses various array-assignment logic, resulting in a potential for memory stomps. Fix by adding checks that array lower bounds aren't large enough to cause lower_bound + dimension to overflow. (Note: this results in disallowing cases where the last subscript position would be exactly INT_MAX. In principle we could probably allow that, but there's a lot of code that computes lower_bound + dimension and would need adjustment. It seems doubtful that it's worth the trouble/risk to allow it.) Somewhat independently of that, array_set_element() was careless about possible overflow when checking the subscript of a fixed-length array, creating a different route to memory stomps. Fix that too. Security: CVE-2021-32027
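The added checks amount to doing the bound arithmetic in a wider type, roughly as in this hypothetical helper:

    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Would subscripts lower_bound .. lower_bound + dimension - 1 run off
     * the end of int?  Per the note above, a last subscript of exactly
     * INT_MAX is rejected too, so that later "lb + dim" arithmetic stays
     * safe.
     */
    static bool
    array_bounds_overflow(int lower_bound, int dimension)
    {
        int64_t last_subscript = (int64_t) lower_bound + dimension - 1;

        return last_subscript >= INT_MAX;
    }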
2021-05-10  Translation updates  (Peter Eisentraut)
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 832086c7a50768dd7a8c548ab063037741530ddf
2021-05-10  Emit dummy statements for probes.d probes when disabled  (Peter Eisentraut)
When building without --enable-dtrace, emit dummy do {} while (0) statements for the stubbed-out TRACE_POSTGRESQL_foo() macros instead of empty macros that totally elide the original probe statement. This fixes the warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body] introduced by b94409a02f. Author: Craig Ringer <craig.ringer@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/20210504221531.cfvpmmdfsou6eitb%40alap3.anarazel.de
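The difference between the two stub styles, shown with a made-up probe name:

    #include <stdbool.h>

    /*
     * Made-up probe name (the real TRACE_POSTGRESQL_* macros are generated
     * from probes.d).  An empty expansion leaves "if (cond) ;" behind and
     * triggers -Wempty-body; the dummy-statement form does not.
     */
    #ifdef ENABLE_DTRACE
    /* generated probe definition would appear here */
    #else
    #define TRACE_EXAMPLE_PROBE(arg)    do {} while (0)
    #endif

    static void
    demo(bool tracing_wanted, int value)
    {
        if (tracing_wanted)
            TRACE_EXAMPLE_PROBE(value);     /* no warning with the stub above */
        (void) value;
    }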
2021-05-07  AlterSubscription_refresh: avoid stomping on global variable  (Alvaro Herrera)
This patch replaces use of the global "wrconn" variable in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription). The global wrconn is only meant to be used for logical apply/tablesync worker. Abusing it this way is known to cause trouble if an apply worker manages to do a subscription refresh, such as reported by Jeremy Finzel and diagnosed by Andres Freund back in November 2020, at https://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi@alap3.anarazel.de Backpatch to 10. In branch master, also move the connection establishment to occur outside the PG_TRY block; this way we can remove a test for NULL in PG_FINALLY, and it also makes the code more consistent with similar code in the same file. Author: Peter Smith <peter.b.smith@fujitsu.com> Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Reviewed-by: Japin Li <japinli@hotmail.com> Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com
2021-05-05  jit: Fix warning reported by gcc-11 caused by dubious function signature.  (Andres Freund)
Reported-By: Erik Rijkers <er@xs4all.nl> Discussion: https://postgr.es/m/833107370.1313189.1619647621213@webmailclassic.xs4all.nl Backpatch: 13, where b059d2f45685 introduced the issue.
2021-05-05  Have ALTER CONSTRAINT recurse on partitioned tables  (Alvaro Herrera)
When ALTER TABLE .. ALTER CONSTRAINT changes deferrability properties in a partitioned table, we failed to propagate those changes correctly to partitions and to triggers. Repair by adding a recursion mechanism to affect all derived constraints and all derived triggers. (In particular, recurse to partitions even if their respective parents are already in the desired state: it is possible for the partitions to have been altered individually.) Because foreign keys involve tables on two sides, we cannot use the standard ALTER TABLE recursion mechanism, so we invent our own by following pg_constraint.conparentid down. When ALTER TABLE .. ALTER CONSTRAINT is invoked on the derived pg_constraint object that's automatically created in a partition as a result of a constraint added to its parent, raise an error instead of pretending to work and then failing to modify all the affected triggers. Before this commit such a command would be allowed but failed to affect all triggers, so it would silently misbehave. (Restoring dumps of existing databases is not affected, because pg_dump does not produce anything for such a derived constraint anyway.) Add some tests for the case. Backpatch to 11, where foreign key support was added to partitioned tables by commit 3de241dba86f. (A related change is commit f56f8f8da6af in pg12 which added support for FKs *referencing* partitioned tables; this is what forces us to use an ad-hoc recursion mechanism for this.) Diagnosed by Tom Lane from bug report from Ron L Johnson. As of this writing, no reviews were offered. Discussion: https://postgr.es/m/75fe0761-a291-86a9-c8d8-4906da077469@gmail.com Discussion: https://postgr.es/m/3144850.1607369633@sss.pgh.pa.us
2021-05-04  Fix OID passed to object-alter hook during ALTER CONSTRAINT  (Alvaro Herrera)
The OID of the constraint is used instead of the OID of the trigger -- an easy mistake to make. Apparently the object-alter hooks are not very well tested :-( Backpatch to 12, where this typo was introduced by 578b229718e8 Discussion: https://postgr.es/m/20210503231633.GA6994@alvherre.pgsql
2021-05-04  pg_dump: Fix dump of generated columns in partitions  (Peter Eisentraut)
The previous fix for dumping of inherited generated columns (0bf83648a52df96f7c8677edbbdf141bfa0cf32b) must not be applied to partitions, since, unlike normal inherited tables, they are always dumped separately and reattached. Reported-by: Santosh Udupi <email@hitha.net> Discussion: https://www.postgresql.org/message-id/flat/CACLRvHZ4a-%2BSM_159%2BtcrHdEqxFrG%3DW4gwTRnwf7Oj0UNj5R2A%40mail.gmail.com
2021-05-04  Fix ALTER TABLE / INHERIT with generated columns  (Peter Eisentraut)
When running ALTER TABLE t2 INHERIT t1, we must check that columns in t2 that correspond to a generated column in t1 are also generated and have the same generation expression. Otherwise, this would allow creating setups that a normal CREATE TABLE sequence would not allow. Discussion: https://www.postgresql.org/message-id/22de27f6-7096-8d96-4619-7b882932ca25@2ndquadrant.com
2021-05-03  Prevent lwlock dtrace probes from unnecessary work  (Peter Eisentraut)
If dtrace is compiled in but disabled, the lwlock dtrace probes still evaluate their arguments. Since PostgreSQL 13, T_NAME(lock) does nontrivial work, so it should be avoided if not needed. To fix, make these calls conditional on the *_ENABLED() macro corresponding to each probe. Reviewed-by: Craig Ringer <craig.ringer@enterprisedb.com> Discussion: https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com
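The guarded-call pattern, shown with stand-in macros rather than the generated TRACE_POSTGRESQL_LWLOCK_* ones:

    #include <stdio.h>

    /* Stand-ins for a probes.d-generated probe and its _ENABLED() test. */
    #define EXAMPLE_PROBE_ENABLED() (0)     /* dtrace flips this at runtime */
    #define EXAMPLE_PROBE(arg)      ((void) (arg))

    static const char *
    expensive_name_lookup(void)             /* models T_NAME(lock) */
    {
        puts("nontrivial work happens here");
        return "SomeLock";
    }

    int
    main(void)
    {
        /* Guarded form: the argument is never evaluated while disabled. */
        if (EXAMPLE_PROBE_ENABLED())
            EXAMPLE_PROBE(expensive_name_lookup());
        return 0;
    }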
2021-04-30  Disallow calling anything but plain functions via the fastpath API.  (Tom Lane)
Reject aggregates, window functions, and procedures. Aggregates failed anyway, though with a somewhat obscure error message. Window functions would hit an Assert or null-pointer dereference. Procedures seemed to work as long as you didn't try to do transaction control, but (a) transaction control is sort of the point of a procedure, and (b) it's not entirely clear that no bugs lurk in that path. Given the lack of testing of this area, it seems safest to be conservative in what we support. Also reject proretset functions, as the fastpath protocol can't support returning a set. Also remove an easily-triggered assertion that the given OID isn't 0; the subsequent lookups can handle that case themselves. Per report from Theodor-Arsenij Larionov-Trichkin. Back-patch to all supported branches. (The procedure angle only applies in v11+, of course.) Discussion: https://postgr.es/m/2039442.1615317309@sss.pgh.pa.us
2021-04-29  Fix some more omissions in pg_upgrade's tests for non-upgradable types.  (Tom Lane)
Commits 29aeda6e4 et al closed up some oversights involving not checking for non-upgradable types within container types, such as arrays and ranges. However, I only looked at version.c, failing to notice that there were substantially-equivalent tests in check.c. (The division of responsibility between those files is less than clear...) In addition, because genbki.pl does not guarantee that auto-generated rowtype OIDs will hold still across versions, we need to consider that the composite type associated with a system catalog or view is non-upgradable. It seems unlikely that someone would have a user column declared that way, but if they did, trying to read it in another PG version would likely draw "no such pg_type OID" failures, thanks to the type OID embedded in composite Datums. To support the composite and reg*-type cases, extend the recursive query that does the search to allow any base query that returns a column of pg_type OIDs, rather than limiting it to exactly one starting type. As before, back-patch to all supported branches. Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
2021-04-28  Fix use-after-release issue with pg_identify_object_as_address()  (Michael Paquier)
Spotted by buildfarm member prion, with -DRELCACHE_FORCE_RELEASE. Introduced in f7aab36. Discussion: https://postgr.es/m/2759018.1619577848@sss.pgh.pa.us Backpatch-through: 9.6
2021-04-28  Fix pg_identify_object_as_address() with event triggers  (Michael Paquier)
Attempting to use this function with event triggers failed, as, since its introduction in a676201, this code has never associated an object name with event triggers. This addresses the failure by adding the event trigger name to the set defining its object address. Note that regression tests are added within event_trigger and not object_address to avoid issues with concurrent connections in parallel schedules. Author: Joel Jacobson Discussion: https://postgr.es/m/3c905e77-a026-46ae-8835-c3f6cd1d24c8@www.fastmail.com Backpatch-through: 9.6
2021-04-22  Don't crash on reference to an un-available system column.  (Tom Lane)
Adopt a more consistent policy about what slot-type-specific getsysattr functions should do when system attributes are not available. To wit, they should all throw the same user-oriented error, rather than variously crashing or emitting developer-oriented messages. This closes an identifiable problem in commits a71cfc56b and 3fb93103a (in v13 and v12), so back-patch into those branches, along with a test case to try to ensure we don't break it again. It is not known that any of the former crash cases are reachable in HEAD, but this seems like a good safety improvement in any case. Discussion: https://postgr.es/m/141051591267657@mail.yandex.ru
2021-04-22  Fix bugs in RETURNING in cross-partition UPDATE cases.  (Tom Lane)
If the source and destination partitions don't have identical rowtypes (for example, one has dropped columns the other lacks), then the planSlot contents will be different because of that. If the query has a RETURNING list that tries to return resjunk columns out of the planSlot, that is columns from tables that were joined to the target table, we'd get errors or wrong answers. That's because we used the RETURNING list generated for the destination partition, which expects a planSlot matching that partition's subplan. The most practical fix seems to be to convert the updated destination tuple back to the source partition's rowtype, and then apply the RETURNING list generated for the source partition. This avoids making fragile assumptions about whether the per-subpartition subplans generated all the resjunk columns in the same order. This has been broken since v11 introduced cross-partition UPDATE. The lack of field complaints shows that non-identical partitions aren't a common case; therefore, don't stress too hard about making the conversion efficient. There's no such bug in HEAD, because commit 86dc90056 got rid of per-target-relation variance in the contents of the planSlot. Hence, patch v11-v13 only. Amit Langote and Etsuro Fujita, small changes by me Discussion: https://postgr.es/m/CA+HiwqE_UK1jTSNrjb8mpTdivzd3dum6mK--xqKq0Y9VmfwWQA@mail.gmail.com
2021-04-21  fix silly perl error in commit d064afc720  (Andrew Dunstan)
2021-04-21  Only ever test for non-127.0.0.1 addresses on Windows in PostgresNode  (Andrew Dunstan)
This has been found to cause hangs where tcp usage is forced. Alexey Kodratov Discussion: https://postgr.es/m/82e271a9a11928337fcb5b5e57b423c0@postgrespro.ru Backpatch to all live branches
2021-04-20  Fix planner failure in some cases of sorting by an aggregate.  (Tom Lane)
An oversight introduced by the incremental-sort patches caused "could not find pathkey item to sort" errors in some situations where a sort key involves an aggregate or window function. The basic problem here is that find_em_expr_usable_for_sorting_rel isn't properly modeling what prepare_sort_from_pathkeys will do later. Rather than hoping we can keep those functions in sync, let's refactor so that they actually share the code for identifying a suitable sort expression. With this refactoring, tlist.c's tlist_member_ignore_relabel is unused. I removed it in HEAD but left it in place in v13, in case any extensions are using it. Per report from Luc Vlaming. Back-patch to v13 where the problem arose. James Coleman and Tom Lane Discussion: https://postgr.es/m/91f3ec99-85a4-fa55-ea74-33f85a5c651f@swarm64.com
2021-04-20  Fix typo in comment  (Magnus Hagander)
Author: Julien Rouhaud Backpatch-through: 11 Discussion: https://postgr.es/m/20210420121659.odjueyd4rpilorn5@nol
2021-04-16  Allow TestLib::slurp_file to skip contents, and use as needed  (Andrew Dunstan)
In order to avoid getting old logfile contents certain functions in PostgresNode were doing one of two things. On Windows it rotated the logfile and restarted the server, while elsewhere it truncated the log file. Both of these are unnecessary. We borrow from the buildfarm which does this instead: note the size of the logfile before we start, and then when fetching the logfile skip to that position before accumulating contents. This is spelled differently on Windows but the effect is the same. This is largely centralized in TestLib's slurp_file function, which has a new optional parameter, the offset to skip to before starting to reading the file. Code in the client becomes much neater. Backpatch to all live branches. Michael Paquier, slightly modified by me. Discussion: https://postgr.es/m/YHajnhcMAI3++pJL@paquier.xyz
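For illustration, the same idea rendered in C (the actual change is Perl code in TestLib::slurp_file): note the size first, then read only the bytes appended after that offset.

    #include <stdio.h>
    #include <stdlib.h>

    /* Read the file contents starting at a caller-supplied offset. */
    static char *
    slurp_from_offset(const char *path, long offset)
    {
        FILE   *fp = fopen(path, "rb");
        long    end;
        char   *buf = NULL;

        if (fp == NULL)
            return NULL;
        fseek(fp, 0, SEEK_END);
        end = ftell(fp);
        if (offset < 0 || offset > end)
            offset = end;
        fseek(fp, offset, SEEK_SET);

        buf = malloc((size_t) (end - offset) + 1);
        if (buf != NULL)
        {
            size_t  nread = fread(buf, 1, (size_t) (end - offset), fp);

            buf[nread] = '\0';
        }
        fclose(fp);
        return buf;
    }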
2021-04-13  Fix some inappropriately-disallowed uses of ALTER ROLE/DATABASE SET.  (Tom Lane)
Most GUC check hooks that inspect database state have special checks that prevent them from throwing hard errors for state-dependent issues when source == PGC_S_TEST. This allows, for example, "ALTER DATABASE d SET default_text_search_config = foo" when the "foo" configuration hasn't been created yet. Without this, we have problems during dump/reload or pg_upgrade, because pg_dump has no idea about possible dependencies of GUC values and can't ensure a safe restore ordering. However, check_role() and check_session_authorization() hadn't gotten the memo about that, and would throw hard errors anyway. It's not entirely clear what is the use-case for "ALTER ROLE x SET role = y", but we've now heard two independent complaints about that bollixing an upgrade, so apparently some people are doing it. Hence, fix these two functions to act more like other check hooks with similar needs. (But I did not change their insistence on being inside a transaction, as it's still not apparent that setting either GUC from the configuration file would be wise.) Also fix check_temp_buffers, which had a different form of the disease of making state-dependent checks without any exception for PGC_S_TEST. A cursory survey of other GUC check hooks did not find any more issues of this ilk. (There are a lot of interdependencies among PGC_POSTMASTER and PGC_SIGHUP GUCs, which may be a bad idea, but they're not relevant to the immediate concern because they can't be set via ALTER ROLE/DATABASE.) Per reports from Charlie Hornsby and Nathan Bossart. Back-patch to all supported branches. Discussion: https://postgr.es/m/HE1P189MB0523B31598B0C772C908088DB7709@HE1P189MB0523.EURP189.PROD.OUTLOOK.COM Discussion: https://postgr.es/m/20160711223641.1426.86096@wrigleys.postgresql.org
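The PGC_S_TEST convention these hooks are being brought into line with looks roughly like this; the names are simplified stand-ins, not the real check_role():

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in; the real hook uses GucSource from guc.h. */
    typedef enum GucSource { PGC_S_DEFAULT, PGC_S_FILE, PGC_S_TEST, PGC_S_SESSION } GucSource;

    static bool
    object_exists(const char *name)
    {
        (void) name;            /* stand-in for a catalog lookup */
        return false;
    }

    /*
     * A state-dependent check hook: report the problem but accept the
     * value when source == PGC_S_TEST, so ALTER ROLE/DATABASE SET settings
     * can be restored before the referenced object exists.
     */
    static bool
    check_some_guc(char **newval, void **extra, GucSource source)
    {
        (void) extra;

        if (!object_exists(*newval))
        {
            if (source == PGC_S_TEST)
            {
                fprintf(stderr, "NOTICE: \"%s\" does not exist yet\n", *newval);
                return true;    /* tolerate it for dump/reload and pg_upgrade */
            }
            return false;       /* hard failure for an actual assignment */
        }
        return true;
    }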
2021-04-13  Redesign the caching done by get_cached_rowtype().  (Tom Lane)
Previously, get_cached_rowtype() cached a pointer to a reference-counted tuple descriptor from the typcache, relying on the ExprContextCallback mechanism to release the tupdesc refcount when the expression tree using the tupdesc was destroyed. This worked fine when it was designed, but the introduction of within-DO-block COMMITs broke it. The refcount is logged in a transaction-lifespan resource owner, but plpgsql won't destroy simple expressions made within the DO block (before its first commit) until the DO block is exited. That results in a warning about a leaked tupdesc refcount when the COMMIT destroys the original resource owner, and then an error about the active resource owner not holding a matching refcount when the expression is destroyed. To fix, get rid of the need to have a shutdown callback at all, by instead caching a pointer to the relevant typcache entry. Those survive for the life of the backend, so we needn't worry about the pointer becoming stale. (For registered RECORD types, we can still cache a pointer to the tupdesc, knowing that it won't change for the life of the backend.) This mechanism has been in use in plpgsql and expandedrecord.c since commit 4b93f5799, and seems to work well. This change requires modifying the ExprEvalStep structs used by the relevant expression step types, which is slightly worrisome for back-patching. However, there seems no good reason for extensions to be familiar with the details of these particular sub-structs. Per report from Rohit Bhogate. Back-patch to v11 where within-DO-block COMMITs became a thing. Discussion: https://postgr.es/m/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com
2021-04-13  Avoid improbable PANIC during heap_update.  (Tom Lane)
heap_update needs to clear any existing "all visible" flag on the old tuple's page (and on the new page too, if different). Per coding rules, to do this it must acquire pin on the appropriate visibility-map page while not holding exclusive buffer lock; which creates a race condition since someone else could set the flag whenever we're not holding the buffer lock. The code is supposed to handle that by re-checking the flag after acquiring buffer lock and retrying if it became set. However, one code path through heap_update itself, as well as one in its subroutine RelationGetBufferForTuple, failed to do this. The end result, in the unlikely event that a concurrent VACUUM did set the flag while we're transiently not holding lock, is a non-recurring "PANIC: wrong buffer passed to visibilitymap_clear" failure. This has been seen a few times in the buildfarm since recent VACUUM changes that added code paths that could set the all-visible flag while holding only exclusive buffer lock. Previously, the flag was (usually?) set only after doing LockBufferForCleanup, which would insist on buffer pin count zero, thus preventing the flag from becoming set partway through heap_update. However, it's clear that it's heap_update not VACUUM that's at fault here. What's less clear is whether there is any hazard from these bugs in released branches. heap_update is certainly violating API expectations, but if there is no code path that can set all-visible without a cleanup lock then it's only a latent bug. That's not 100% certain though, besides which we should worry about extensions or future back-patch fixes that could introduce such code paths. I chose to back-patch to v12. Fixing RelationGetBufferForTuple before that would require also back-patching portions of older fixes (notably 0d1fe9f74), which is more code churn than seems prudent to fix a hypothetical issue. Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us
2021-04-12  Use "-I." in directories holding Bison parsers, for Oracle compilers.  (Noah Misch)
With the Oracle Developer Studio 12.6 compiler, #line directives alter the current source file location for purposes of #include "..." directives. Hence, a VPATH build failed with 'cannot find include file: "specscanner.c"'. With two exceptions, parser-containing directories already add "-I. -I$(srcdir)"; eliminate the exceptions. Back-patch to 9.6 (all supported versions).