path: root/src
Age  Commit message  Author
2021-02-08  Fix permission checks on constraint violation errors on partitions.  (Heikki Linnakangas)
If a cross-partition UPDATE violates a constraint on the target partition, and the columns in the new partition are in a different physical order than in the parent, the error message can reveal columns that the user does not have SELECT permission on. A similar bug was fixed earlier in commit 804b6b6db4.

The cause of the bug is that the callers of the ExecBuildSlotValueDescription() function got confused when constructing the list of modified columns. If the tuple was routed from a parent, we converted the tuple to the parent's format, but the list of modified columns was grabbed directly from the child's RTE entry.

ExecUpdateLockMode() had a similar issue. That led to confusion over which columns are key columns, leading to the wrong tuple lock being taken on tables referenced by foreign keys, when a row is updated with INSERT ON CONFLICT UPDATE. A new isolation test is added for that corner case.

With this patch, the ri_RangeTableIndex field is no longer set for partitions that don't have an entry in the range table. Previously, it was set to the RTE entry of the parent relation, but that was confusing.

NOTE: This modifies the ResultRelInfo struct, replacing the ri_PartitionRoot field with ri_RootResultRelInfo. That's a bit risky to backpatch, because it breaks any extensions accessing the field. The change that ri_RangeTableIndex is not set for partitions could potentially break extensions, too. The ResultRelInfos are visible to FDWs at least, and this patch required small changes to postgres_fdw. Nevertheless, this seems like the least bad option. I don't think these fields are widely used in extensions; I don't think there are FDWs out there that use the FDW "direct update" API, other than postgres_fdw. If there is, you will get a compilation error, so hopefully it is caught quickly.

Backpatch to 11, where support for both cross-partition UPDATEs and unique indexes on partitioned tables was added.

Reviewed-by: Amit Langote
Security: CVE-2021-3393
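A minimal sketch of the kind of scenario described above, using hypothetical table, column, and role names (not taken from the CVE report): a partition attached with a different physical column order, a user with only column-level SELECT, and a cross-partition UPDATE that violates the partition's CHECK constraint.

    -- illustrative only; the exact reproduction depends on catalog details
    CREATE TABLE t (a int, b text, secret text) PARTITION BY LIST (a);
    CREATE TABLE t1 PARTITION OF t FOR VALUES IN (1);
    -- same columns as the parent, but in a different physical order
    CREATE TABLE t2 (secret text, b text, a int, CHECK (b <> 'x'));
    ALTER TABLE t ATTACH PARTITION t2 FOR VALUES IN (2);
    INSERT INTO t VALUES (1, 'y', 'sensitive');
    CREATE USER alice;
    GRANT SELECT (a, b), UPDATE ON t TO alice;   -- no SELECT on "secret"

    -- connected as alice: the row is routed from t1 to t2 and violates
    -- t2's CHECK constraint; before the fix, the error detail could show
    -- columns alice has no SELECT privilege on
    UPDATE t SET a = 2, b = 'x' WHERE a = 1;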
2021-02-07  Revert "Propagate CTE property flags when copying a CTE list into a rule."  (Tom Lane)
This reverts commit ed290896335414c6c069b9ccae1f3dcdd2fac6ba and equivalent back-branch commits. The issue is subtler than I thought, and it's far from new, so just before a release deadline is no time to be fooling with it. We'll consider what to do at a bit more leisure. Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
2021-02-06  Propagate CTE property flags when copying a CTE list into a rule.  (Tom Lane)
rewriteRuleAction() neglected this step, although it was careful to propagate other similar flags such as hasSubLinks or hasRowSecurity. Omitting to transfer hasRecursive is just cosmetic at the moment, but omitting hasModifyingCTE is a live bug, since the executor certainly looks at that. The proposed test case only fails back to v10, but since the executor examines hasModifyingCTE in 9.x as well, I suspect that a test case could be devised that fails in older branches. Given the nearness of the release deadline, though, I'm not going to spend time looking for a better test. Report and patch by Greg Nancarrow, cosmetic changes by me Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
2021-02-06  Disallow converting an inheritance child table to a view.  (Tom Lane)
Generally, members of inheritance trees must be plain tables (or, in more recent versions, foreign tables). ALTER TABLE INHERIT rejects creating an inheritance relationship that has a view at either end. When DefineQueryRewrite attempts to convert a relation to a view, it already had checks prohibiting doing so for partitioning parents or children as well as traditional-inheritance parents ... but it neglected to check that a traditional-inheritance child wasn't being converted. Since the planner assumes that any inheritance child is a table, this led to making plans that tried to do a physical scan on a view, causing failures (or even crashes, in recent versions). One could imagine trying to support such a case by expanding the view normally, but since the rewriter runs before the planner does inheritance expansion, it would take some very fundamental refactoring to make that possible. There are probably a lot of other parts of the system that don't cope well with such a situation, too. For now, just forbid it. Per bug #16856 from Yang Lin. Back-patch to all supported branches. (In versions before v10, this includes back-patching the portion of commit 501ed02cf that added has_superclass(). Perhaps the lack of that infrastructure partially explains the missing check.) Discussion: https://postgr.es/m/16856-0363e05c6e1612fd@postgresql.org
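For reference, the conversion path being closed off is the old trick of turning an empty table into a view by creating an ON SELECT rule named "_RETURN". A hypothetical sketch of the now-rejected sequence (names invented for illustration):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    -- creating a SELECT rule named "_RETURN" on an empty table converts it
    -- into a view; this is now rejected when the table is an inheritance child
    CREATE RULE "_RETURN" AS ON SELECT TO child
        DO INSTEAD SELECT 1 AS a;
    -- before the fix the rule was accepted, and a later scan of the
    -- inheritance tree could then fail or crash:
    SELECT * FROM parent;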
2021-02-05  Fix backslash-escaping multibyte chars in COPY FROM.  (Heikki Linnakangas)
If a multi-byte character is escaped with a backslash in TEXT mode input, and the encoding is one of the client-only encodings where the bytes after the first one can have an ASCII byte "embedded" in the char, we didn't skip the character correctly. After a backslash, we only skipped the first byte of the next character, so if it was a multi-byte character, we would try to process its second byte as if it was a separate character. If it was one of the characters with special meaning, like '\n', '\r', or another '\\', that would cause trouble.

One such example is the byte sequence '\x5ca45c2e666f6f' in Big5 encoding. That's supposed to be [backslash][two-byte character][.][f][o][o], but because the second byte of the two-byte character is 0x5c, we incorrectly treat it as another backslash. And because the next character is a dot, we parse it as an end-of-copy marker, and throw an "end-of-copy marker corrupt" error.

Backpatch to all supported versions.

Reviewed-by: John Naylor, Kyotaro Horiguchi
Discussion: https://www.postgresql.org/message-id/a897f84f-8dca-8798-3139-07da5bb38728%40iki.fi
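A rough way to exercise the byte sequence quoted above, assuming a hypothetical data file containing exactly those raw bytes; the file path and table name are invented for illustration:

    -- /tmp/big5.dat holds the raw bytes 5c a4 5c 2e 66 6f 6f, i.e.
    -- [backslash][Big5 char whose second byte is 0x5c][.][f][o][o]
    CREATE TABLE big5_test (t text);
    -- before the fix this could fail with "end-of-copy marker corrupt"
    COPY big5_test FROM '/tmp/big5.dat' WITH (FORMAT text, ENCODING 'BIG5');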
2021-02-04  Fix bug in HashAgg's selective-column-spilling logic.  (Tom Lane)
Commit 230230223 taught nodeAgg.c that, when spilling tuples from memory in an oversized hash aggregation, it only needed to spill input columns referenced in the node's tlist and quals. Unfortunately, that's wrong: we also have to save the grouping columns. The error is masked in common cases because the grouping columns also appear in the tlist, but that's not necessarily true. The main category of plans where it's not true seems to come from semijoins ("WHERE outercol IN (SELECT innercol FROM innertable)") where the innercol needs an implicit promotion to make it comparable to the outercol. The grouping column will be "innercol::promotedtype", but that expression appears nowhere in the Agg node's own tlist and quals; only the bare "innercol" is found in the tlist.

I spent quite a bit of time looking for a suitable regression test case for this, without much success. If the number of distinct values of the innercol is large enough to make spilling happen, the planner tends to prefer a non-HashAgg plan, at least for problem sizes that are reasonable to use in the regression tests. So, no new regression test. However, this patch does demonstrably fix the originally-reported test case.

Per report from s.p.e (at) gmx-topmail.de. Backpatch to v13 where the troublesome code came in.

Discussion: https://postgr.es/m/trinity-1c565d44-159f-488b-a518-caf13883134f-1611835701633@3c-app-gmx-bap78
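A sketch of the query shape involved, with hypothetical table and column names: the semijoin's grouping column is the implicitly promoted innercol::bigint, which never appears as-is in the Agg node's tlist. Whether the planner actually picks a spilling HashAgg depends on statistics, costs, and work_mem, as the commit message notes.

    CREATE TABLE outertable (outercol bigint);
    CREATE TABLE innertable (innercol int);    -- int, promoted to bigint
    SET work_mem = '64kB';                     -- encourage spilling
    SELECT * FROM outertable
    WHERE outercol IN (SELECT innercol FROM innertable);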
2021-02-04  Fix YA incremental sort bug.  (Tom Lane)
switchToPresortedPrefixMode() did the wrong thing if it detected a batch boundary just at the last tuple of a fullsort group. The initially-reported symptom was a "retrieved too many tuples in a bounded sort" error, but the test case added here just silently gives the wrong answer without this patch. I (tgl) am not really happy about committing this patch without review from the incremental-sort authors, but they seem AWOL and we are hard against a release deadline. This does demonstrably make some cases better, anyway. Per bug #16846 from Yoran Heling. Back-patch to v13 where incremental sort was introduced. Neil Chen Discussion: https://postgr.es/m/16846-ae49f51ac379a4cb@postgresql.org
2021-02-03  Avoid crash when rolling back within a prepared statement.  (Tom Lane)
If a portal is used to run a prepared CALL or DO statement that contains a ROLLBACK, PortalRunMulti fails because the portal's statement list gets cleared by the rollback. (Since the grammar doesn't allow CALL/DO in PREPARE, the only easy way to get to this is via extended query protocol, which treats all inputs as prepared statements.) It's difficult to avoid resetting the portal early because of resource-management issues, so work around this by teaching PortalRunMulti to be wary of portal->stmts having suddenly become NIL. The crash has only been seen to occur in v13 and HEAD (as a consequence of commit 1cff1b95a having added an extra touch of portal->stmts). But even before that, the code involved touching a List that the portal no longer has any claim on. In the test case at hand, the List will still exist because of another refcount on the cached plan; but I'm far from convinced that it's impossible for the cached plan to have been dropped by the time control gets back to PortalRunMulti. Hence, backpatch to v11 where nested transactions were added. Thomas Munro and Tom Lane, per bug #16811 from James Inform Discussion: https://postgr.es/m/16811-c1b599b2c6c2d622@postgresql.org
2021-02-03  pg_dump: Fix dumping of inherited generated columns  (Peter Eisentraut)
Generation expressions of generated columns are always inherited, so there is no need to set them separately in child tables, and there is no syntax to do so either. The code previously used the code paths for the handling of default values, for which different rules apply; in particular it might want to set a default value explicitly for an inherited column. This resulted in unrestorable dumps. For generated columns, just skip them in inherited tables. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
2021-02-02  Remove extra increment of plpgsql's statement counter for FOR loops.  (Tom Lane)
This left gaps in the internal statement numbering, which is not terribly harmful (else we'd have noticed sooner), but it's not great either. Oversight in bbd5c207b; backpatch to v12 where that came in. Pavel Stehule Discussion: https://postgr.es/m/CAFj8pRDXyQaJmpotNTQVc-t-WxdWZC35V2PnmwOaV1-taidFWA@mail.gmail.com
2021-01-30  Revive "snapshot too old" with wal_level=minimal and SET TABLESPACE.  (Noah Misch)
Given a permanent relation rewritten in the current transaction, the old_snapshot_threshold mechanism assumed the relation had never been subject to early pruning. Hence, a query could fail to report "snapshot too old" when the rewrite followed an early truncation. ALTER TABLE SET TABLESPACE is probably the only rewrite mechanism capable of exposing this bug. REINDEX sets indcheckxmin, avoiding the problem. CLUSTER has zeroed page LSNs since before old_snapshot_threshold existed, so old_snapshot_threshold has never cooperated with it. ALTER TABLE ... SET DATA TYPE makes the table look empty to every past snapshot, which is strictly worse. Back-patch to v13, where commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 broke this. Kyotaro Horiguchi and Noah Misch Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
2021-01-30  Fix error with CREATE PUBLICATION, wal_level=minimal, and new tables.  (Noah Misch)
CREATE PUBLICATION has failed spuriously when applied to a permanent relation created or rewritten in the current transaction. Make the same change to another site having the same semantic intent; the second instance has no user-visible consequences. Back-patch to v13, where commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 broke this. Kyotaro Horiguchi Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
2021-01-30  Fix CREATE INDEX CONCURRENTLY for simultaneous prepared transactions.  (Noah Misch)
In a cluster having used CREATE INDEX CONCURRENTLY while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. Fix this for future CREATE INDEX CONCURRENTLY by making it wait for prepared transactions like it waits for ordinary transactions. This expands the VirtualTransactionId structure domain to admit prepared transactions. It may be necessary to reindex to recover from past occurrences. Back-patch to 9.5 (all supported versions). Andrey Borodin, reviewed (in earlier versions) by Tom Lane and Michael Paquier. Discussion: https://postgr.es/m/2E712143-97F7-4890-B470-4A35142ABC82@yandex-team.ru
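An illustrative interleaving of the two sessions described above, using invented object names; it assumes max_prepared_transactions > 0. With the fix, CREATE INDEX CONCURRENTLY waits for the prepared transaction just as it would for an ordinary open transaction.

    -- session 1: leave a prepared transaction with an uncommitted row
    BEGIN;
    INSERT INTO accounts VALUES (1, 100);
    PREPARE TRANSACTION 'pt1';

    -- session 2: previously, this could build an index that silently
    -- misses session 1's row once it is committed
    CREATE INDEX CONCURRENTLY accounts_id_idx ON accounts (id);

    -- session 1, later:
    COMMIT PREPARED 'pt1';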
2021-01-28  Silence another gcc 11 warning.  (Tom Lane)
Per buildfarm and local experimentation, bleeding-edge gcc isn't convinced that the MemSet in reorder_function_arguments() is safe. Shut it up by adding an explicit check that pronargs isn't negative, and by changing MemSet to memset. (It appears that either change is enough to quiet the warning at -O2, but let's do both to be sure.)
2021-01-28  Remove bogus restriction from BEFORE UPDATE triggers  (Alvaro Herrera)
In trying to protect the user from inconsistent behavior, commit 487e9861d0cf "Enable BEFORE row-level triggers for partitioned tables" tried to prevent BEFORE UPDATE FOR EACH ROW triggers from moving the row from one partition to another. However, it turns out that the restriction is wrong in two ways: first, it fails spuriously, preventing valid situations from working, as in bug #16794; and second, it doesn't protect from any misbehavior, because tuple routing would cope anyway.

Fix by removing that restriction. We keep the same restriction on BEFORE INSERT FOR EACH ROW triggers, though. It is valid and useful there. In the future we could remove it by having tuple reroute work for inserts as it does for updates.

Backpatch to 13.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Phillip Menke <pg@pmenke.de>
Discussion: https://postgr.es/m/16794-350a655580fbb9ae@postgresql.org
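A sketch of a previously failing but valid case, with an invented schema: the BEFORE UPDATE row trigger does not touch the partition key, yet the UPDATE itself moves the row to another partition.

    CREATE TABLE meas (ts date, val int) PARTITION BY RANGE (ts);
    CREATE TABLE meas_2020 PARTITION OF meas
        FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
    CREATE TABLE meas_2021 PARTITION OF meas
        FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

    CREATE FUNCTION meas_touch() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        NEW.val := NEW.val + 1;   -- leaves the partition key (ts) alone
        RETURN NEW;
    END;
    $$;
    CREATE TRIGGER meas_before BEFORE UPDATE ON meas
        FOR EACH ROW EXECUTE FUNCTION meas_touch();

    INSERT INTO meas VALUES ('2020-06-01', 1);
    -- the UPDATE moves the row across partitions; previously this was
    -- rejected even though the trigger doesn't change the key, now it works
    UPDATE meas SET ts = '2021-06-01' WHERE val = 1;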
2021-01-28  Fix hash partition pruning with asymmetric partition sets.  (Tom Lane)
perform_pruning_combine_step() was not taught about the number of partition indexes used in hash partitioning; more embarrassingly, get_matching_hash_bounds() also had it wrong. These errors are masked in the common case where all the partitions have the same modulus and no partition is missing. However, with missing or unequal-size partitions, we could erroneously prune some partitions that need to be scanned, leading to silently wrong query answers. While a minimal-footprint fix for this could be to export get_partition_bound_num_indexes and make the incorrect functions use it, I'm of the opinion that that function should never have existed in the first place. It's not reasonable data structure design that PartitionBoundInfoData lacks any explicit record of the length of its indexes[] array. Perhaps that was all right when it could always be assumed equal to ndatums, but something should have been done about it as soon as that stopped being true. Putting in an explicit "nindexes" field makes both partition_bounds_equal() and partition_bounds_copy() simpler, safer, and faster than before, and removes explicit knowledge of the number-of-partition-indexes rules from some other places too. This change also makes get_hash_partition_greatest_modulus obsolete. I left that in place in case any external code uses it, but no core code does anymore. Per bug #16840 from Michał Albrycht. Back-patch to v11 where the hash partitioning code came in. (In the back branches, add the new field at the end of PartitionBoundInfoData to minimize ABI risks.) Discussion: https://postgr.es/m/16840-571a22976f829ad4@postgresql.org
2021-01-28  Make ecpg's rjulmdy() and rmdyjul() agree with their declarations.  (Tom Lane)
We had "short *mdy" in the extern declarations, but "short mdy[3]" in the actual function definitions. Per C99 these are equivalent, but recent versions of gcc have started to issue warnings about the inconsistency. Clean it up before the warnings get any more widespread. Back-patch, in case anyone wants to build older PG versions with bleeding-edge compilers. Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
2021-01-28  pgbench: Remove dead code  (Alvaro Herrera)
doConnect() never returns connections in state CONNECTION_BAD, so checking for that is pointless. Remove the code that does. This code has been dead since ba708ea3dc84, 20 years ago. Discussion: https://postgr.es/m/20210126195224.GA20361@alvherre.pgsql Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
2021-01-28  Don't add bailout adjustment for non-strict deserialize calls.  (Andrew Gierth)
When building aggregate expression steps, strict checks need a bailout jump for when a null value is encountered, so there is a list of steps that require later adjustment. Adding entries to that list for steps that aren't actually strict would be harmless, except that there is an Assert which catches them. This leads to spurious errors on assert-enabled builds, for data sets that trigger parallel aggregation of an aggregate with a non-strict deserialization function (no such aggregates exist in the core system).

Repair by not adding the adjustment entry when it's not needed. Backpatch to 11 where the code was introduced.

Per a report from Darafei (Komzpa) of the PostGIS project; analysis and patch by me.

Discussion: https://postgr.es/m/87mty7peb3.fsf@news-spur.riddles.org.uk
2021-01-26  Report the true database name on connection errors  (Alvaro Herrera)
When reporting connection errors, we might show a database name in the message that's not the one we actually tried to connect to, if the database was taken from libpq defaults instead of from user parameters. Fix such error messages to use PQdb(), which reports the correct name. (But, per commit 2930c05634bc, make sure not to try to print NULL.) Apply to branches 9.5 through 13. Branch master has already been changed differently by commit 58cd8dca3de0. Reported-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
2021-01-26  Code review for psql's helpSQL() function.  (Tom Lane)
The loops to identify word boundaries could access past the end of the input string. Likely that would never result in an actual crash, but it makes valgrind unhappy.

The logic to try different numbers of words didn't work when the input has two words but we only have a match to the first, e.g. "\h with select". (We must "continue" the pass loop, not "break".)

The logic to compute nl_count was bizarrely managed, and in at least two code paths could end up calling PageOutput with nl_count = 0, resulting in failing to paginate output that should have been fed to the pager. Also, in v12 and up, the nl_count calculation hadn't been updated to account for the addition of a URL.

The PQExpBuffer holding the command syntax details wasn't freed, resulting in a session-lifespan memory leak.

While here, improve some comments, choose a more descriptive name for a variable, and fix an inconsistent datatype choice for another variable.

Per bug #16837 from Alexander Lakhin. This code is very old, so back-patch to all supported branches.

Kyotaro Horiguchi and Tom Lane
Discussion: https://postgr.es/m/16837-479bcd56040c71b3@postgresql.org
2021-01-25  Don't clobber the calling user's credentials cache in Kerberos test.  (Tom Lane)
Embarrassing oversight in this test script, which fortunately is not run by default. Report and patch by Jacob Champion. Discussion: https://postgr.es/m/1fcb175bafef6560f47a8c31229fa7c938486b8d.camel@vmware.com
2021-01-25  Fix broken ruleutils support for function TRANSFORM clauses.  (Tom Lane)
I chanced to notice that this dumped core due to a faulty Assert. To add insult to injury, the output has been misformatted since v11. Obviously we need some regression testing here. Discussion: https://postgr.es/m/d1cc628c-3953-4209-957b-29427acc38c8@www.fastmail.com
2021-01-25  Fix hypothetical bug in heap backward scans  (David Rowley)
Both heapgettup() and heapgettup_pagemode() incorrectly set the first page to scan in a backward scan in which the number of pages to scan was specified by heap_setscanlimits(). The code incorrectly started the scan at the end of the relation when startBlk was 0, or otherwise at startBlk - 1, neither of which is correct when only scanning a subset of pages.

The fix here checks if heap_setscanlimits() has changed the number of pages to scan and if so we set the first page to scan as the final page in the specified range during backward scans.

Proper adjustment of this code was forgotten when heap_setscanlimits() was added in 7516f5259 back in 9.5. However, in practice, nothing in core code performs backward scans after having used heap_setscanlimits(); yet it is possible an extension uses the heap functions in this way, hence backpatch. An upcoming patch does use heap_setscanlimits() with backward scans, so this must be fixed before that can go in.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGc9h0_oVD2CtgBcxCS1N-qDYZSeBRnUh+0CWJA9cMaA@mail.gmail.com
Backpatch-through: 9.5, all supported versions
2021-01-24  Update time zone data files to tzdata release 2021a.  (Tom Lane)
DST law changes in Russia (Volgograd zone) and South Sudan. Historical corrections for Australia, Bahamas, Belize, Bermuda, Ghana, Israel, Kenya, Nigeria, Palestine, Seychelles, and Vanuatu. Notably, the Australia/Currie zone has been corrected to the point where it is identical to Australia/Hobart.
2021-01-21  Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.  (Tom Lane)
Previously, pull_varnos() took the relids of a PlaceHolderVar as being equal to the relids in its contents, but that fails to account for the possibility that we have to postpone evaluation of the PHV due to outer joins. This could result in a malformed plan. The known cases end up triggering the "failed to assign all NestLoopParams to plan nodes" sanity check in createplan.c, but other symptoms may be possible. The right value to use is the join level we actually intend to evaluate the PHV at. We can get that from the ph_eval_at field of the associated PlaceHolderInfo. However, there are some places that call pull_varnos() before the PlaceHolderInfos have been created; in that case, fall back to the conservative assumption that the PHV will be evaluated at its syntactic level. (In principle this might result in missing some legal optimization, but I'm not aware of any cases where it's an issue in practice.) Things are also a bit ticklish for calls occurring during deconstruct_jointree(), but AFAICS the ph_eval_at fields should have reached their final values by the time we need them. The main problem in making this work is that pull_varnos() has no way to get at the PlaceHolderInfos. We can fix that easily, if a bit tediously, in HEAD by passing it the planner "root" pointer. In the back branches that'd cause an unacceptable API/ABI break for extensions, so leave the existing entry points alone and add new ones with the additional parameter. (If an old entry point is called and encounters a PHV, it'll fall back to using the syntactic level, again possibly missing some valid optimization.) Back-patch to v12. The computation is surely also wrong before that, but it appears that we cannot reach a bad plan thanks to join order restrictions imposed on the subquery that the PlaceHolderVar came from. The error only became reachable when commit 4be058fe9 allowed trivial subqueries to be collapsed out completely, eliminating their join order restrictions. Per report from Stephan Springl. Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
2021-01-20  Further tweaking of PG_SYSROOT heuristics for macOS.  (Tom Lane)
It emerges that in some phases of the moon (perhaps to do with directory entry order?), xcrun will report that the SDK path is /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk which is normally a symlink to a version-numbered sibling directory. Our heuristic to skip non-version-numbered pathnames was rejecting that, which is the wrong thing to do. We'd still like to end up with a version-numbered PG_SYSROOT value, but we can have that by dereferencing the symlink. Like the previous fix, back-patch to all supported versions. Discussion: https://postgr.es/m/522433.1611089678@sss.pgh.pa.us
2021-01-20  Fix bug in detecting concurrent page splits in GiST insert  (Heikki Linnakangas)
In commit 9eb5607e699, I got the condition on checking for split or deleted page wrong: I used && instead of ||. The comment correctly said "concurrent split _or_ deletion". As a result, GiST insertion could miss a concurrent split, and insert into the wrong page. Duncan Sands demonstrated this with a test script that did a lot of concurrent inserts.

Backpatch to v12, where this was introduced. REINDEX is required to fix indexes that were affected by this bug.

Backpatch-through: 12
Reported-by: Duncan Sands
Discussion: https://www.postgresql.org/message-id/a9690483-6c6c-3c82-c8ba-dc1a40848f11%40deepbluecap.com
2021-01-20  Fix ALTER DEFAULT PRIVILEGES with duplicated objects  (Michael Paquier)
Specifying duplicated objects in this command would lead to unique constraint violations in pg_default_acl or "tuple already updated by self" errors. Similarly to GRANT/REVOKE, increment the command ID after each subcommand processing to allow this case to work transparently. A regression test is added by tweaking one of the existing queries of privileges.sql to stress this case. Reported-by: Andrus Author: Michael Paquier Reviewed-by: Álvaro Herrera Discussion: https://postgr.es/m/ae2a7dc1-9d71-8cba-3bb9-e4cb7eb1f44e@hot.ee Backpatch-through: 9.5
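A sketch of the duplication this fixes, with invented role names; listing the same role (or schema) twice previously triggered the errors quoted above, and each subcommand now runs under its own command ID.

    -- "alice" appears twice in the FOR ROLE list
    ALTER DEFAULT PRIVILEGES FOR ROLE alice, alice
        GRANT SELECT ON TABLES TO bob;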
2021-01-19  Remove faulty support for MergeAppend plan with WHERE CURRENT OF.  (Tom Lane)
Somebody extended search_plan_tree() to treat MergeAppend exactly like Append, which is 100% wrong, because unlike Append we can't assume that only one input node is actively returning tuples. Hence a cursor using a MergeAppend across a UNION ALL or inheritance tree could falsely match a WHERE CURRENT OF query at a row that isn't actually the cursor's current output row, but coincidentally has the same TID (in a different table) as the current output row. Delete the faulty code; this means that such a case will now return an error like 'cursor "foo" is not a simply updatable scan of table "bar"', instead of silently misbehaving. Users should not find that surprising though, as the same cursor query could have failed that way already depending on the chosen plan. (It would fail like that if the sort were done with an explicit Sort node instead of MergeAppend.) Expand the clearly-inadequate commentary to be more explicit about what this code is doing, in hopes of forestalling future mistakes. It's been like this for awhile, so back-patch to all supported branches. Discussion: https://postgr.es/m/482865.1611075182@sss.pgh.pa.us
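The kind of cursor affected looks roughly like the following, with invented table and column names; whether the plan actually uses a MergeAppend depends on the available indexes and costs.

    BEGIN;
    -- an ordered scan over an inheritance tree (or UNION ALL) may be
    -- implemented as a MergeAppend of per-child index scans
    DECLARE c CURSOR FOR SELECT * FROM parent_tbl ORDER BY sort_col;
    FETCH 1 FROM c;
    -- previously this could silently update a row in a different child
    -- table that merely had the same TID; it now fails with an error like
    -- 'cursor "c" is not a simply updatable scan of table "..."'
    UPDATE parent_tbl SET flag = true WHERE CURRENT OF c;
    COMMIT;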
2021-01-18  Avoid crash with WHERE CURRENT OF and a custom scan plan.  (Tom Lane)
execCurrent.c's search_plan_tree() assumed that ForeignScanStates and CustomScanStates necessarily have a valid ss_currentRelation. This is demonstrably untrue for postgres_fdw's remote join and remote aggregation plans, and non-leaf custom scans might not have an identifiable scan relation either. Avoid crashing by ignoring such nodes when the field is null. This solution will lead to errors like 'cursor "foo" is not a simply updatable scan of table "bar"' in cases where maybe we could have allowed WHERE CURRENT OF to work. That's not an issue for postgres_fdw's usages, since joins or aggregations would render WHERE CURRENT OF invalid anyway. But an otherwise-transparent upper level custom scan node might find this annoying. When and if someone cares to expend work on such a scenario, we could invent a custom-scan-provider callback to determine what's safe. Report and patch by David Geier, commentary by me. It's been like this for awhile, so back-patch to all supported branches. Discussion: https://postgr.es/m/0253344d-9bdd-11c4-7f0d-d88c02cd7991@swarm64.com
2021-01-16  Fix pg_dump for GRANT OPTION among initial privileges.  (Noah Misch)
The context is an object that no longer bears some aclitem that it bore initially. (A user issued REVOKE or GRANT statements upon the object.) pg_dump is forming SQL to reproduce the object ACL. Since initdb creates no ACL bearing GRANT OPTION, reaching this bug requires an extension where the creation script establishes such an ACL. No PGXN extension does that. If an installation did reach the bug, pg_dump would have omitted a semicolon, causing a REVOKE and the next SQL statement to fail. Separately, since the affected code exists to eliminate an entire aclitem, it wants plain REVOKE, not REVOKE GRANT OPTION FOR. Back-patch to 9.6, where commit 23f34fa4ba358671adab16773e79c17c92cbc870 first appeared. Discussion: https://postgr.es/m/20210109102423.GA160022@rfd.leadboat.com
2021-01-16  Prevent excess SimpleLruTruncate() deletion.  (Noah Misch)
Every core SLRU wraps around. With the exception of pg_notify, the wrap point can fall in the middle of a page. Account for this in the PagePrecedes callback specification and in SimpleLruTruncate()'s use of said callback. Update each callback implementation to fit the new specification. This changes SerialPagePrecedesLogically() from the style of asyncQueuePagePrecedes() to the style of CLOGPagePrecedes(). (Whereas pg_clog and pg_serial share a key space, pg_serial is nothing like pg_notify.) The bug fixed here has the same symptoms and user followup steps as 592a589a04bd456410b853d86bd05faa9432cbbb. Back-patch to 9.5 (all supported versions). Reviewed by Andrey Borodin and (in earlier versions) by Tom Lane. Discussion: https://postgr.es/m/20190202083822.GC32531@gust.leadboat.com
2021-01-15  Disallow CREATE STATISTICS on system catalogs  (Tomas Vondra)
Add a check that CREATE STATISTICS does not add extended statistics on system catalogs, similarly to indexes etc. It can be overridden using the allow_system_table_mods GUC.

This bug exists since 7b504eb282c, adding the extended statistics, so backpatch all the way back to PostgreSQL 10.

Author: Tomas Vondra
Reported-by: Dean Rasheed
Backpatch-through: 10
Discussion: https://postgr.es/m/CAEZATCXAPrrOKwEsyZKQ4uzzJQWBCt6QAvOcgqRGdWwT1zb%2BrQ%40mail.gmail.com
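A minimal sketch of a command that is now rejected unless allow_system_table_mods is enabled (the statistics name is invented; any system catalog and column pair would do):

    -- now raises an error instead of creating statistics on a catalog
    CREATE STATISTICS pg_class_stats (dependencies)
        ON relnamespace, relkind FROM pg_class;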
2021-01-15  Improve our heuristic for selecting PG_SYSROOT on macOS.  (Tom Lane)
In cases where Xcode is newer than the underlying macOS version, asking xcodebuild for the SDK path will produce a pointer to the SDK shipped with Xcode, which may end up building code that does not work on the underlying macOS version. It appears that in such cases, xcodebuild's answer also fails to match the default behavior of Apple's compiler: assuming one has installed Xcode's "command line tools", there will be an SDK for the OS's own version in /Library/Developer/CommandLineTools, and the compiler will default to using that. This is all pretty poorly documented, but experimentation suggests that "xcrun --show-sdk-path" gives the sysroot path that the compiler is actually using, at least in some cases. Hence, try that first, but revert to xcodebuild if xcrun fails (in very old Xcode, it is missing or lacks the --show-sdk-path switch). Also, "xcrun --show-sdk-path" may give a path that is valid but lacks any OS version identifier. We don't really want that, since most of the motivation for wiring -isysroot into the build flags at all is to ensure that all parts of a PG installation are built against the same SDK, even when considering extensions built later and/or on a different machine. Insist on finding "N.N" in the directory name before accepting the result. (Adding "--sdk macosx" to the xcrun call seems to produce the same answer as xcodebuild, but usually more quickly because it's cached, so we also try that as a fallback.) The core reason why we don't want to use Xcode's default SDK in cases like this is that Apple's technology for introducing new syscalls does not play nice with Autoconf: for example, configure will think that preadv/pwritev exist when using a Big Sur SDK, even when building on an older macOS version where they don't exist. It'd be nice to have a better solution to that problem, but this patch doesn't attempt to fix that. Per report from Sergey Shinderuk. Back-patch to all supported versions. Discussion: https://postgr.es/m/ed3b8e5d-0da8-6ebd-fd1c-e0ac80a4b204@postgrespro.ru
2021-01-15  Fix calculation of how much shared memory is required to store a TOC.  (Fujii Masao)
Commit ac883ac453 refactored shm_toc_estimate() but changed its calculation of shared memory size for the TOC incorrectly. Previously this could cause too much memory to be allocated. Back-patch to v11 where the bug was introduced.

Author: Takayuki Tsunakawa
Discussion: https://postgr.es/m/TYAPR01MB2990BFB73170E2C4921E2C4DFEA80@TYAPR01MB2990.jpnprd01.prod.outlook.com
2021-01-14  pg_dump: label PUBLICATION TABLE ArchiveEntries with an owner.  (Tom Lane)
This is the same fix as commit 9eabfe300 applied to INDEX ATTACH entries, but for table-to-publication attachments. As in that case, even though the backend doesn't record "ownership" of the attachment, we still ought to label it in the dump archive with the role name that should run the ALTER PUBLICATION command. The existing behavior causes the ALTER to be done by the original role that started the restore; that will usually work fine, but there may be corner cases where it fails. The bulk of the patch is concerned with changing struct PublicationRelInfo to include a pointer to the associated PublicationInfo object, so that we can get the owner's name out of that when the time comes. While at it, I rewrote getPublicationTables() to do just one query of pg_publication_rel, not one per table. Back-patch to v10 where this code was introduced. Discussion: https://postgr.es/m/1165710.1610473242@sss.pgh.pa.us
2021-01-14  Prevent drop of tablespaces used by partitioned relations  (Alvaro Herrera)
When a tablespace is used in a partitioned relation (per commits ca4103025dfe in pg12 for tables and 33e6c34c3267 in pg11 for indexes), it is possible to drop the tablespace, potentially causing various problems. One such was reported in bug #16577, where a rewriting ALTER TABLE causes a server crash.

Protect against this by using pg_shdepend to keep track of tablespaces when used for relations that don't keep physical files; we now abort a tablespace drop if we see that the tablespace is referenced from any partitioned relations.

Backpatch this to 11, where this problem has been latent all along. We don't try to create pg_shdepend entries for existing partitioned indexes/tables, but any ones that are modified going forward will be protected.

Note slight behavior change: when trying to drop a tablespace that contains both regular tables as well as partitioned ones, you'd previously get ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE and now you'll get ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST. Arguably, the latter is more correct.

It is possible to add protecting pg_shdepend entries for existing tables/indexes, by doing
    ALTER TABLE ONLY some_partitioned_table SET TABLESPACE pg_default;
    ALTER TABLE ONLY some_partitioned_table SET TABLESPACE original_tablespace;
for each partitioned table/index that is not in the database default tablespace. Because these partitioned objects do not have storage, no file needs to be actually moved, so it shouldn't take more time than what's required to acquire locks.

This query can be used to search for such relations:
    SELECT ... FROM pg_class WHERE relkind IN ('p', 'I') AND reltablespace <> 0

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/16577-881633a9f9894fd5@postgresql.org
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2021-01-14  Stabilize timeline switch regression test.  (Fujii Masao)
Commit fef5b47f6b added the regression test to check whether a standby is able to follow a primary on a newer timeline when WAL archiving is enabled. But the buildfarm member florican reported that this test failed because the requested WAL segment was removed and replication failed. This is a timing issue. Since the test neither uses a replication slot nor sets wal_keep_size, a checkpoint could remove the WAL segment that's still necessary for replication. This commit stabilizes the test by setting wal_keep_size.

Back-patch to v13 where the regression test that this commit stabilizes was added.

Author: Fujii Masao
Discussion: https://postgr.es/m/X//PsenxcC50jDzX@paquier.xyz
2021-01-14  Ensure that a standby is able to follow a primary on a newer timeline.  (Fujii Masao)
Commit 709d003fbd refactored WAL-reading code, but accidentally caused WalSndSegmentOpen() to fail to follow a timeline switch while reading from a historic timeline. This issue caused a standby to fail to follow a primary on a newer timeline when WAL archiving is enabled.

If there is a timeline switch within the segment, WalSndSegmentOpen() should read from the WAL segment belonging to the new timeline. But previously, since it failed to follow a timeline switch, it tried to read the WAL segment with the old timeline. When WAL archiving is enabled, that WAL segment with the old timeline doesn't exist because it's renamed to .partial. This led the primary to try to read a non-existent WAL segment, which caused replication to fail with the error "ERROR: requested WAL segment ... has already been removed".

This commit fixes WalSndSegmentOpen() so that it's able to follow a timeline switch, to ensure that a standby is able to follow a primary on a newer timeline even when WAL archiving is enabled.

This commit also adds a regression test to check whether a standby is able to follow a primary on a newer timeline when WAL archiving is enabled.

Back-patch to v13 where the bug was introduced.

Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi, tweaked by Fujii Masao
Reviewed-by: Alvaro Herrera, Fujii Masao
Discussion: https://postgr.es/m/20201209.174314.282492377848029776.horikyota.ntt@gmail.com
2021-01-13  Disallow a digit as the first character of a variable name in pgbench.  (Tom Lane)
The point of this restriction is to avoid trying to substitute variables into timestamp literal values, which may contain strings like '12:34'. There is a good deal more that should be done to reduce pgbench's tendency to substitute where it shouldn't. But this is sufficient to solve the case complained of by Jaime Soler, and it's simple enough to back-patch. Back-patch to v11; before commit 9d36a3866, pgbench had a slightly different definition of what a variable name is, and anyway it seems unwise to change long-stable branches for this. Fabien Coelho Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2006291740420.805678@pseudo
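An invented pgbench script illustrating the case: with the old rules, the ':34' and ':56' inside the timestamp literal could be taken as variable references and mishandled; now a variable name cannot start with a digit, so the literal is passed through unchanged.

    -- custom script, e.g. run with: pgbench -f insert_ts.sql mydb
    \set id random(1, 1000)
    INSERT INTO events (id, happened_at)
        VALUES (:id, '2021-01-13 12:34:56');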
2021-01-13  Fix memory leak in SnapBuildSerialize.  (Amit Kapila)
The memory for the snapshot was leaked while serializing it to disk during logical decoding. This memory will be freed only once the walsender stops streaming the changes. This can lead to a huge memory increase when the master logs standby snapshots too frequently, say, when the user is trying to create many replication slots.

Reported-by: funnyxj.fxj@alibaba-inc.com
Diagnosed-by: funnyxj.fxj@alibaba-inc.com
Author: Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/033ab54c-6393-42ee-8ec9-2b399b5d8cde.funnyxj.fxj@alibaba-inc.com
2021-01-12  pg_dump: label INDEX ATTACH ArchiveEntries with an owner.  (Tom Lane)
Although a partitioned index's attachment to its parent doesn't have separate ownership, the ArchiveEntry for it needs to be marked with an owner anyway, to ensure that the ALTER command is run by the appropriate role when restoring with --use-set-session-authorization. Without this, the ALTER will be run by the role that started the restore session, which will usually work but it's formally the wrong thing. Back-patch to v11 where this type of ArchiveEntry was added. In HEAD, add equivalent commentary to the just-added TABLE ATTACH case, which I'd made do the right thing already. Discussion: https://postgr.es/m/1094034.1610418498@sss.pgh.pa.us
2021-01-12  Fix thinko in comment  (Alvaro Herrera)
This comment has been wrong since its introduction in commit 2c03216d8311. Author: Masahiko Sawada <sawada.mshk@gmail.com> Discussion: https://postgr.es/m/CAD21AoAzz6qipFJBbGEaHmyWxvvNDp8httbwLR9tUQWaTjUs2Q@mail.gmail.com
2021-01-12  Fix relation descriptor leak.  (Amit Kapila)
We missed closing the relation descriptor while sending changes via the root of partitioned relations during logical replication. Author: Amit Langote and Mark Zhao Reviewed-by: Amit Kapila and Ashutosh Bapat Backpatch-through: 13, where it was introduced Discussion: https://postgr.es/m/tencent_41FEA657C206F19AB4F406BE9252A0F69C06@qq.com Discussion: https://postgr.es/m/tencent_6E296D2F7D70AFC90D83353B69187C3AA507@qq.com
2021-01-08  Fix ancient bug in parsing of BRE-mode regular expressions.  (Tom Lane)
brenext(), when parsing a '*' quantifier, forgot to return any "value" for the token; per the equivalent case in next(), it should return value 1 to indicate that greedy rather than non-greedy behavior is wanted. The result is that the compiled regexp could behave like 'x*?' rather than the intended 'x*', if we were unlucky enough to have a zero in v->nextvalue at this point. That seems to happen with some reliability if we have '.*' at the beginning of a BRE-mode regexp, although that depends on the initial contents of a stack-allocated struct, so it's not guaranteed to fail. Found by Alexander Lakhin using valgrind testing. This bug seems to be aboriginal in Spencer's code, so back-patch all the way. Discussion: https://postgr.es/m/16814-6c5e3edd2bdf0d50@postgresql.org
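One way to poke at BRE mode from SQL, assuming the 'b' regexp flag ("rest of RE is a BRE") and invented example strings; as the commit notes, whether a particular build misbehaved depended on uninitialized stack contents, so this is not a reliable reproducer.

    -- correct greedy behavior returns {axbx}; with the bug, '.*' could act
    -- like the non-greedy '.*?' and the result could be {ax}
    SELECT regexp_match('axbxc', '.*x', 'b');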
2021-01-07  Adjust createdb TAP tests to work on recent OpenBSD.  (Tom Lane)
We found last February that the error-case tests added by commit 008cf0409 failed on OpenBSD, because that platform doesn't really check locale names. At the time it seemed that that was only an issue for LC_CTYPE, but testing on a more recent version of OpenBSD shows that it's now equally lax about LC_COLLATE. Rather than dropping the LC_COLLATE test too, put back LC_CTYPE (reverting c4b0edb07), and adjust these tests to accept the different error message that we get if setlocale() doesn't reject a bogus locale name. The point of these tests is not really what the backend does with the locale name, but to show that createdb quotes funny locale names safely; so we're not losing test reliability this way. Back-patch as appropriate. Discussion: https://postgr.es/m/231373.1610058324@sss.pgh.pa.us
2021-01-07  Further second thoughts about idle_session_timeout patch.  (Tom Lane)
On reflection, the order of operations in PostgresMain() is wrong. These timeouts ought to be shut down before, not after, we do the post-command-read CHECK_FOR_INTERRUPTS, to guarantee that any timeout error will be detected there rather than at some ill-defined later point (possibly after having wasted a lot of work). This is really an error in the original idle_in_transaction_timeout patch, so back-patch to 9.6 where that was introduced.
2021-01-06  Detect the deadlocks between backends and the startup process.  (Fujii Masao)
The deadlocks that the recovery conflict on lock is involved in can happen between hot-standby backends and the startup process. If a backend takes an access exclusive lock on the table and that finally triggers the deadlock, the deadlock can be detected as expected. On the other hand, previously, if the startup process took an access exclusive lock and that finally triggered the deadlock, the deadlock could not be detected and could remain even after deadlock_timeout passed. This is a bug.

The cause of this bug was that the code for handling the recovery conflict on lock didn't take care of the deadlock case at all. It assumed that deadlocks involving the startup process and backends were able to be detected by the deadlock detector invoked within backends. But this assumption was incorrect. The startup process also should have invoked the deadlock detector if necessary.

To fix this bug, this commit makes the startup process invoke the deadlock detector if deadlock_timeout is reached while handling the recovery conflict on lock. Specifically, in that case, the startup process requests all the backends holding the conflicting locks to check themselves for deadlocks.

Back-patch to v9.6. v9.5 also has this bug, but per discussion we decided not to back-patch the fix to v9.5, because v9.5 doesn't have some infrastructure code (e.g., 37c54863cf) that this bug fix patch depends on. We could apply that code for the back-patch, but since the next minor version release is the final one for v9.5, it's risky to do that. If we unexpectedly introduce a new bug to v9.5 by the back-patch, there is no chance to fix it. We determined that the back-patch to v9.5 would give more risk than gain.

Author: Fujii Masao
Reviewed-by: Bertrand Drouvot, Masahiko Sawada, Kyotaro Horiguchi
Discussion: https://postgr.es/m/4041d6b6-cf24-a120-36fa-1294220f8243@oss.nttdata.com
2021-01-05  Add an explicit cast to double when using fabs().  (Dean Rasheed)
Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm.