path: root/src/include
Age | Commit message | Author
2023-08-24 | Avoid unnecessary plancache revalidation of utility statements. | Tom Lane

Revalidation of a plancache entry (after a cache invalidation event) requires acquiring a snapshot. Normally that is harmless, but not if the cached statement is one that needs to run without acquiring a snapshot. We were already aware of that for TransactionStmts, but for some reason hadn't extrapolated to the other statements that PlannedStmtRequiresSnapshot() knows mustn't set a snapshot. This can lead to unexpected failures of commands such as SET TRANSACTION ISOLATION LEVEL.

We can fix it in the same way, by excluding those command types from revalidation. However, we can do even better than that: there is no need to revalidate for any statement type for which parse analysis, rewrite, and plan steps do nothing interesting, which is nearly all utility commands. To mechanize this, invent a parser function stmt_requires_parse_analysis() that tells whether parse analysis does anything beyond wrapping a CMD_UTILITY Query around the raw parse tree. If that's what it does, then rewrite and plan will just skip the Query, so that it is not possible for the same raw parse tree to produce a different plan tree after cache invalidation.

stmt_requires_parse_analysis() is basically equivalent to the existing function analyze_requires_snapshot(), except that for obscure reasons that function omits ReturnStmt and CallStmt. It is unclear whether those were oversights or intentional. I have not been able to demonstrate a bug from not acquiring a snapshot while analyzing these commands, but at best it seems mighty fragile. It seems safer to acquire a snapshot for parse analysis of these commands too, which allows making stmt_requires_parse_analysis and analyze_requires_snapshot equivalent.

In passing this fixes a second bug, which is that ResetPlanCache would exclude ReturnStmts and CallStmts from revalidation. That's surely *not* safe, since they contain parsable expressions.

Per bug #18059 from Pavel Kulakov. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
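For illustration, a rough C sketch of the dispatch this describes; the statement-type list and the piggybacking of analyze_requires_snapshot() follow the text above, but the exact set of tags is an assumption, not the committed code.

```c
#include "nodes/parsenodes.h"

/*
 * Sketch of the new parser entry point described above.  Only these
 * statement types are assumed to be transformed into something beyond a
 * CMD_UTILITY wrapper, so only they can produce a different plan tree
 * after a cache invalidation.
 */
static bool
stmt_requires_parse_analysis(RawStmt *parseTree)
{
	switch (nodeTag(parseTree->stmt))
	{
		case T_InsertStmt:
		case T_DeleteStmt:
		case T_UpdateStmt:
		case T_MergeStmt:
		case T_SelectStmt:
		case T_ReturnStmt:
		case T_PLAssignStmt:
		case T_CallStmt:
		case T_DeclareCursorStmt:
		case T_ExplainStmt:
		case T_CreateTableAsStmt:
			return true;
		default:
			return false;
	}
}

/* Per the commit, the snapshot rule can now simply piggyback on this. */
static inline bool
analyze_requires_snapshot(RawStmt *parseTree)
{
	return stmt_requires_parse_analysis(parseTree);
}
```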
2023-08-24 | Update DECLARE_INDEX documentation | Peter Eisentraut

Update source code comments belonging to the changes in 6a6389a08b.

Discussion: https://www.postgresql.org/message-id/flat/75ae5875-3abc-dafc-8aec-73247ed41cde@eisentraut.org
2023-08-03 | Update comments on CustomPath struct. | Etsuro Fujita

Commit e7cb7ee14 allowed custom scan providers to create CustomPath paths for join relations as well, but missed updating the comments.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAPmGK15ODkN%2B%3DhkBCufj1HBW0x5OTb65Xuy7ryXchMdiCMpx_g%40mail.gmail.com
2023-07-28 | Disallow replacing joins with scans in problematic cases. | Etsuro Fujita

Commit e7cb7ee14, which introduced the infrastructure for FDWs and custom scan providers to replace joins with scans, failed to add support for handling pseudoconstant quals assigned to replaced joins in createplan.c, leading to an incorrect plan without a gating Result node when postgres_fdw replaced a join with such a qual.

To fix, we could add the support by 1) modifying the ForeignPath and CustomPath structs to store the list of RestrictInfo nodes to apply to the join, as in JoinPaths, if they represent foreign and custom scans replacing a join with a scan, and by 2) modifying create_scan_plan() in createplan.c to use that list in that case, instead of the baserestrictinfo list, to get pseudoconstant quals assigned to the join; but #1 would cause an ABI break. So fix by modifying the infrastructure to just disallow replacing joins with such quals.

Back-patch to all supported branches.

Reported by Nishant Sharma. Patch by me, reviewed by Nishant Sharma and Richard Guo.

Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
2023-07-25 | Fix the display of UNKNOWN message type in apply worker. | Amit Kapila

We include the message type while displaying an error context in the apply worker. Previously, if the message type was unknown, retrieving the message type string threw an error that would hide the original error. Instead, we now simply return a string indicating an unknown message type.

Reported-by: Ashutosh Bapat
Author: Euler Taveira, Amit Kapila
Reviewed-by: Ashutosh Bapat
Backpatch-through: 15
Discussion: https://postgr.es/m/CAExHW5suAEDW-mBZt_qu4RVxWZ1vL54-L+ci2zreYWebpzxYsA@mail.gmail.com
2023-07-13 | Handle DROP DATABASE getting interrupted | Andres Freund

Until now, when DROP DATABASE was interrupted at the wrong moment, the removal of the pg_database row would also roll back, even though some irreversible steps had already been taken. E.g. DropDatabaseBuffers() might have thrown out dirty buffers, or files could have been unlinked. But we continued to allow connections to such a corrupted database.

To fix this, mark databases invalid with an in-place update, just before starting to perform irreversible steps. As we can't add a new column in the back branches, we use pg_database.datconnlimit = -2 for this purpose. An invalid database cannot be connected to anymore, but can still be dropped.

Unfortunately we can't easily add output to psql's \l to indicate that some database is invalid; it doesn't fit in any of the existing columns.

Add tests verifying that an interrupted DROP DATABASE is handled correctly in the backend and in various tools.

Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
2023-07-06 | Fix type of iterator variable in SH_START_ITERATE | Andres Freund

Also add a comment to make the reasoning behind the Assert() more explicit (per Tom).

Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAocXNJ6s1VLz+hMamLAQAiewRoW17OJ6-+9GACKfj6iPQ@mail.gmail.com
Backpatch: 11-
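A toy model (not the simplehash.h template itself) of why the iterator's type matters: the table size is a uint64, so a narrower signed iterator can overflow before the loop terminates.

```c
#include <stdint.h>

/*
 * Toy illustration of the bug class: the hash table size is a uint64, so
 * an iterator declared with a narrower signed type can overflow (which is
 * undefined behavior for signed int) before reaching size.
 */
typedef struct Table
{
	uint64_t	size;			/* can legitimately exceed INT32_MAX */
} Table;

static void
visit_all_wrong(Table *tb)
{
	/* WRONG: i overflows once tb->size exceeds INT32_MAX + 1 */
	for (int i = 0; (uint64_t) i < tb->size; i++)
		;						/* ... visit bucket i ... */
}

static void
visit_all_right(Table *tb)
{
	/* RIGHT: the iterator's type matches the type of the bound */
	for (uint64_t i = 0; i < tb->size; i++)
		;						/* ... visit bucket i ... */
}
```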
2023-05-19 | Fix misbehavior of EvalPlanQual checks with multiple result relations. | Tom Lane

The idea of EvalPlanQual is that we replace the query's scan of the result relation with a single injected tuple, and see if we get a tuple out, thereby implying that the injected tuple still passes the query quals. (In join cases, other relations in the query are still scanned normally.)

This logic was not updated when commit 86dc90056 made it possible for a single DML query plan to have multiple result relations, when the query target relation has inheritance or partition children. We replaced the output for the current result relation successfully, but other result relations were still scanned normally; thus, if any other result relation contained a tuple satisfying the quals, we'd think the EPQ check passed, even if it did not pass for the injected tuple itself. This would lead to update or delete actions getting performed when they should have been skipped due to a conflicting concurrent update in READ COMMITTED isolation mode.

Fix by blocking all sibling result relations from emitting tuples during an EvalPlanQual recheck. In the back branches, the fix is complicated a bit by the need to not change the size of struct EPQState (else we'd have ABI-breaking changes in offsets in struct ModifyTableState). Like the back-patches of 3f7836ff6 and 4b3e37993, add a separately palloc'd struct to avoid that. The logic is the same as in HEAD otherwise.

This is only a live bug back to v14 where 86dc90056 came in. However, I chose to back-patch the test cases further, on the grounds that this whole area is none too well tested. I skipped doing so in v11 though, because none of the test cases applied cleanly, and it didn't quite seem worth extra work for a branch with only six months to live.

Per report from Ante Krešić (via Aleksander Alekseev)

Discussion: https://postgr.es/m/CAJ7c6TMBTN3rcz4=AjYhLPD_w3FFT0Wq_C15jxCDn8U4tZnH1g@mail.gmail.com
2023-05-19 | pageinspect: Fix gist_page_items() with included columns | Michael Paquier

Non-leaf pages of GiST indexes contain key attributes, leaf pages contain both key and non-key attributes, and gist_page_items() ignored the handling of non-key attributes. This caused a few problems when using gist_page_items() on a GiST index with INCLUDE:
- On a non-leaf page, the function would crash.
- On a leaf page, the function would work, but fail to display all the values for included attributes.

This commit fixes gist_page_items() to handle such cases in a more appropriate way, and now displays the values of key and non-key attributes for each item separately, in a style consistent with what ruleutils.c would generate for the attribute list, depending on the page type dealt with. In a way similar to how a record is displayed, values are double-quoted for key or non-key attributes if required. ruleutils.c did not provide a routine able to control whether non-key attributes should be displayed, so an extended() routine for index definitions is added to work around the leaf and non-leaf page differences.

While on it, this commit fixes a third problem related to the amount of data reported for key attributes. The code originally relied on BuildIndexValueDescription() (used for error reports on constraints) that would not print all the data stored in the index but the index opclass's input type, so this limited the amount of information available. This switch makes gist_page_items() much cheaper, as there is no need to run ACL checks for each item printed, which is not an issue anyway as superuser rights are required to execute the functions of pageinspect. Opclasses whose data cannot be displayed can rely on gist_page_items_bytea().

The documentation of this function was slightly incorrect for the output results generated on HEAD and v15, so adjust it on these branches.

Author: Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/17884-cb8c326522977acb@postgresql.org
Backpatch-through: 14
2023-05-19 | Fix handling of empty ranges and NULLs in BRIN | Tomas Vondra

BRIN indexes did not properly distinguish between summaries for empty (no rows) and all-NULL ranges, treating them as essentially the same thing. Summaries were initialized with allnulls=true, and opclasses simply reset allnulls to false when processing the first non-NULL value. This however produces incorrect results if the range starts with a NULL value (or a sequence of NULL values), in which case we forget the range contains NULL values when adding the first non-NULL value.

This happens because the allnulls flag is used for two separate purposes - to mark empty ranges (not representing any rows yet) and ranges containing only NULL values. Opclasses don't know which of these cases it is, and so don't know whether to set hasnulls=true. Setting the flag in both cases would make it correct, but it would also make BRIN indexes useless for queries with IS NULL clauses: all ranges start empty (and thus allnulls=true), so all ranges would end up with either allnulls=true or hasnulls=true.

The severity of the issue is somewhat reduced by the fact that it only happens when adding values to an existing summary with allnulls=true. This can happen e.g. for small tables (because a summary for the first range exists for all BRIN indexes), or for tables with a large fraction of NULL values in the indexed columns. Bulk summarization (e.g. during CREATE INDEX or automatic summarization) that processes all values at once is not affected by this issue. In this case the flags were updated in a slightly different way, not forgetting the NULL values.

To identify empty ranges we use a new flag, stored in an unused bit in the BRIN tuple header, so the on-disk format remains the same. A matching flag is added to BrinMemTuple, into a 3B gap after bt_placeholder. That means there's no risk of ABI breakage, although we don't actually pass the BrinMemTuple to any public API.

We could also skip storing index tuples for empty summaries, but then we'd have to always process such ranges - even if there are no rows in large parts of the table (e.g. after a bulk DELETE), it would still require reading the pages etc. So we store them, but ignore them when building the bitmap.

Backpatch to 11. The issue exists since BRIN indexes were introduced in 9.5, but older releases are already EOL.

Backpatch-through: 11
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
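A toy model of the flag logic described above; the struct and function names are illustrative stand-ins for BrinMemTuple and the opclass add-value path, with the new "empty" flag carrying the role that allnulls used to double for.

```c
#include <stdbool.h>

/*
 * Toy model of the per-range summary flags (names are illustrative, not
 * the BrinMemTuple fields).  Before the fix, allnulls served both roles,
 * and the NULL-ness of a leading run of NULLs was lost when the first
 * non-NULL value arrived.
 */
typedef struct RangeSummary
{
	bool		empty;			/* no rows summarized yet */
	bool		allnulls;		/* every summarized value was NULL */
	bool		hasnulls;		/* at least one summarized value was NULL */
} RangeSummary;

static void
summary_add_value(RangeSummary *s, bool isnull)
{
	if (s->empty)
	{
		/* first row ever: initialize the NULL flags from scratch */
		s->empty = false;
		s->allnulls = isnull;
		s->hasnulls = isnull;
		return;
	}

	if (isnull)
		s->hasnulls = true;
	else if (s->allnulls)
	{
		/* range held only NULLs so far; remember them via hasnulls */
		s->allnulls = false;
		s->hasnulls = true;
	}
}
```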
2023-05-10 | Fix assertion failure when updating stats_fetch_consistency in a transaction | Michael Paquier

An update of the GUC stats_fetch_consistency in a transaction was able to trigger an assertion when switching from cache to snapshot. In this case, when retrieving a pgstat entry after the switch, a new snapshot would be rebuilt, confusing pgstat_build_snapshot() because a snapshot is already cached with an unexpected mode ("cache").

In order to fix this problem, this commit adds a flag to force a snapshot clear each time this GUC is changed. While on it, some tests are added to check this behavior. Some optimizations in avoiding the snapshot clear should be possible depending on what is cached and the current GUC value, but this solution is simple, and ensures that the state of the cache is updated each time a new pgstat entry is fetched, hence being consistent with the level wanted by the client that has set the GUC.

Note that cache->none and snapshot->none would not cause issues, as a pgstat entry would be retrieved from shared memory on the second fetch attempt; however, a snapshot would still be cached. Similarly, none->snapshot and none->cache would build a new snapshot on the second fetch attempt. Finally, snapshot->cache would cache a new snapshot on the second attempt.

Reported-by: Alexander Lakhin
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/17804-2a118cd046f2d0e5@postgresql.org
Backpatch-through: 15
2023-04-28 | Fix crashes with CREATE SCHEMA AUTHORIZATION and schema elements | Michael Paquier

CREATE SCHEMA AUTHORIZATION with appended schema elements can lead to crashes when comparing the schema name of the query with the schemas used in the qualification of some clauses in the elements' queries.

The origin of the problem is that the transformation routine for the elements listed in a CREATE SCHEMA query uses as new, expected, schema name the one listed in CreateSchemaStmt itself. However, depending on the query, CreateSchemaStmt.schemaname may be NULL, being computed instead from the role specification of the query given by the AUTHORIZATION clause, which could be either:
- A user name string, with the new schema name being set to the same value as the role given.
- Guessed from CURRENT_USER, SESSION_USER or CURRENT_ROLE, with a new schema name computed from the security context where CREATE SCHEMA is running.

Regression tests are added for CREATE SCHEMA with some appended elements (some of them with schema qualifications), covering also some role specification patterns.

While on it, this simplifies the context structure used during the transformation of the elements listed in a CREATE SCHEMA query by removing the fields for the role specification and the role type. They were not used, and for the role specification this could be confusing, as the schema name may be extracted from that at the beginning of CreateSchemaCommand().

This issue has existed for a long time, so backpatch down to all the versions supported.

Reported-by: Song Hongyu
Author: Michael Paquier
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17909-f65c12dfc5f0451d@postgresql.org
Backpatch-through: 11
2023-04-12 | Fix detection of unseekable files for fseek() and ftello() with MSVC | Michael Paquier

Calling fseek() or ftello() on a handle to a non-seeking device such as a pipe or a communications device is not supported. Unfortunately, MSVC's flavor of these routines, _fseeki64() and _ftelli64(), do not return an error when given a pipe as handle. Some of the logic of pg_dump and restore relies on these routines to check if a handle is seekable, causing failures when passing the contents of pg_dump to pg_restore through a pipe, for example.

This commit introduces wrappers for fseeko() and ftello() on MSVC so that any callers are able to properly detect the cases of non-seekable handles. This relies mainly on GetFileType(), sharing a bit of code with the MSVC port for fstat(). The code in charge of getting a file type is refactored into a new file called win32common.c, shared by win32stat.c and the new win32fseek.c, which includes the MSVC ports for fseeko() and ftello().

Like 765f5df, this is backpatched down to 14, where the fstat() implementation for MSVC is able to handle files larger than 4GB in size. Using a TAP test for that is proving to be tricky as IPC::Run handles the pipes by itself; still, I have been able to check the fix manually.

Reported-by: Daniel Watzinger
Author: Juan José Santamaría Flecha, Michael Paquier
Discussion: https://postgr.es/m/CAC+AXB26a4EmxM2suXxPpJaGrqAdxracd7hskLg-zxtPB50h7A@mail.gmail.com
Backpatch-through: 14
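A simplified sketch of the wrapper idea (the real win32fseek.c handles more cases); pg_fseeko_sketch is a hypothetical name, not the committed function.

```c
#ifdef _MSC_VER
#include <windows.h>
#include <io.h>
#include <errno.h>
#include <stdio.h>

/*
 * Sketch: refuse to seek on handles that are not regular disk files,
 * since _fseeki64() silently "succeeds" on pipes.  GetFileType() is the
 * Win32 call the commit text says the fix relies on.
 */
static int
pg_fseeko_sketch(FILE *stream, long long offset, int origin)
{
	HANDLE		h = (HANDLE) _get_osfhandle(_fileno(stream));

	if (GetFileType(h) != FILE_TYPE_DISK)
	{
		errno = ESPIPE;			/* not seekable */
		return -1;
	}
	return _fseeki64(stream, offset, origin);
}
#endif							/* _MSC_VER */
```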
2023-03-21 | Fix race in parallel hash join batch cleanup, take II. | Thomas Munro

With unlucky timing and parallel_leader_participation=off (not the default), PHJ could attempt to access per-batch shared state just as it was being freed. There was code intended to prevent that by checking for a cleared pointer, but it was racy. Fix, by introducing an extra barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to access the per-batch state to find a batch to help with, and PHJ_BUILD_DONE means that it is too late. The last to detach will free the array of per-batch state as before, but now it will also atomically advance the phase, so that late attachers can avoid the hazard. This mirrors the way per-batch hash tables are freed (see phases PHJ_BATCH_PROBING and PHJ_BATCH_DONE).

An earlier attempt to fix this (commit 3b8981b6, later reverted) missed one special case. When the inner side is empty (the "empty inner optimization"), the build barrier would only make it to the PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from the hashtable. In that case, fast-forward the build barrier to PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold and we can still negotiate who is cleaning up.

Revealed by build farm failures, where BarrierAttach() failed a sanity check assertion, because the memory had been clobbered by dsa_free(). In non-assert builds, the result could be a segmentation fault.

Back-patch to all supported releases.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: David Geier <geidav.pg@gmail.com>
Tested-by: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
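Illustrative numbering of the build-barrier phases named above; the values are assumptions (only the ordering matters), and the helper function is a sketch, not the committed code.

```c
/*
 * PHJ_BUILD_RUNNING is the new phase: it separates "per-batch state is
 * safe to access" from "too late, the array may already be freed".
 */
#define PHJ_BUILD_ELECTING		0
#define PHJ_BUILD_ALLOCATING	1
#define PHJ_BUILD_HASHING_INNER	2
#define PHJ_BUILD_HASHING_OUTER	3
#define PHJ_BUILD_RUNNING		4	/* new: batches may be accessed */
#define PHJ_BUILD_DONE			5	/* per-batch state may be freed */

/* A late attacher can now decide safely (phase is a plain int here): */
static int
can_help_with_batches(int build_phase)
{
	return build_phase == PHJ_BUILD_RUNNING;
}
```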
2023-03-17 | Fix t_isspace(), etc., when datlocprovider=i and datctype=C. | Jeff Davis

Check whether the datctype is C to determine whether t_isspace() and related functions use isspace() or iswspace(). Previously, t_isspace() checked whether the database default collation was C, which is incorrect when the default collation uses the ICU provider.

Discussion: https://postgr.es/m/79e4354d9eccfdb00483146a6b9f6295202e7890.camel@j-davis.com
Reviewed-by: Peter Eisentraut
Backpatch-through: 15
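A sketch of the corrected dispatch; database_ctype_is_c is a stand-in for the real datctype check, and the function shape is illustrative rather than the committed t_isspace().

```c
#include <ctype.h>
#include <wctype.h>
#include <stdbool.h>

/*
 * Sketch: the decision must look at the database ctype (datctype), not at
 * whether the default collation is C, because with datlocprovider=i the
 * default collation is ICU while datctype can still be C.
 */
static bool
sketch_t_isspace(unsigned char c, wint_t wc, bool database_ctype_is_c)
{
	if (database_ctype_is_c)
		return isspace(c) != 0;	/* single-byte C-locale semantics */
	return iswspace(wc) != 0;	/* wide-character locale semantics */
}
```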
2023-03-13 | Fix concurrent update issues with MERGE. | Dean Rasheed

If MERGE attempts an UPDATE or DELETE on a table with BEFORE ROW triggers, or a cross-partition UPDATE (with or without triggers), and a concurrent UPDATE or DELETE happens, the merge code would fail. In some cases this would lead to a crash, while in others it would cause the wrong merge action to be executed, or no action at all.

The immediate cause of the crash was the trigger code calling ExecGetUpdateNewTuple() as part of the EPQ mechanism, which fails because during a merge ri_projectNew is NULL, since merge has its own per-action projection information, which ExecGetUpdateNewTuple() knows nothing about.

Fix by arranging for the trigger code to exit early, returning the TM_Result and TM_FailureData information, if a concurrent modification is detected, allowing the merge code to do the necessary EPQ handling in its own way. Similarly, prevent the cross-partition update code from doing any EPQ processing for a merge, allowing the merge code to work out what it needs to do.

This leads to a number of simplifications in nodeModifyTable.c. Most notably, the ModifyTableContext->GetUpdateNewTuple() callback is no longer needed, and mergeGetUpdateNewTuple() can be deleted, since there is no longer any requirement for get-update-new-tuple during a merge. Similarly, ModifyTableContext->cpUpdateRetrySlot is no longer needed. Thus ExecGetUpdateNewTuple() and the retry_slot handling of ExecCrossPartitionUpdate() can be restored to how they were in v14, before the merge code was added, and ExecMergeMatched() no longer needs any special-case handling for cross-partition updates.

While at it, tidy up ExecUpdateEpilogue() a bit, making it handle recheckIndexes locally, rather than passing it in as a parameter, ensuring that it is freed properly. This dates back to when it was split off from ExecUpdate() to support merge.

Per bug #17809 from Alexander Lakhin, and follow-up investigation of bug #17792, also from Alexander Lakhin.

Back-patch to v15, where MERGE was introduced, taking care to preserve backwards-compatibility of the trigger API in v15 for any extensions that might use it.

Discussion: https://postgr.es/m/17809-9e6650bef133f0fe%40postgresql.org
Discussion: https://postgr.es/m/17792-0f89452029662c36%40postgresql.org
2023-03-04 | Avoid failure when altering state of partitioned foreign-key triggers. | Tom Lane

Beginning in v15, if you apply ALTER TABLE ENABLE/DISABLE TRIGGER to a partitioned table, it also affects the partitions' cloned versions of the affected trigger(s). The initial implementation of this located the clones by name, but that fails on foreign-key triggers which have names incorporating their own OIDs. We can fix that, and also make the behavior more bulletproof in the face of user-initiated trigger renames, by identifying the cloned triggers by tgparentid.

Following the lead of earlier commits in this area, I took care not to break ABI in the v15 branch, even though I rather doubt there are any external callers of EnableDisableTrigger.

While here, update the documentation, which was not touched when the semantics were changed.

Per bug #17817 from Alan Hodgson. Back-patch to v15; older versions do not have this behavior.

Discussion: https://postgr.es/m/17817-31dfb7c2100d9f3d@postgresql.org
2023-02-25 | Fix MULTIEXPR_SUBLINK with partitioned target tables, yet again. | Tom Lane

We already tried to fix this in commits 3f7323cbb et al (and follow-on fixes), but now it emerges that there are still unfixed cases; moreover, these cases affect all branches, not only pre-v14. I thought we had eliminated all cases of making multiple clones of an UPDATE's target list when we nuked inheritance_planner. But it turns out we still do that in some partitioned-UPDATE cases, notably including INSERT ... ON CONFLICT UPDATE, because ExecInitPartitionInfo thinks it's okay to clone and modify the parent's targetlist.

This fix is based on a suggestion from Andres Freund: let's stop abusing the ParamExecData.execPlan mechanism, which was only ever meant to handle initplans, and instead solve the execution timing problem by having the expression compiler move MULTIEXPR_SUBLINK steps to the front of their expression step lists. This is feasible because (a) all branches still in support compile the entire targetlist of an UPDATE into a single ExprState, and (b) we know that all MULTIEXPR_SUBLINKs do need to be evaluated --- none could be buried inside a CASE, for example. There is a minor semantics change concerning the order of execution of the MULTIEXPR's subquery versus other parts of the parent targetlist, but that seems like something we can get away with.

By doing that, we no longer need to worry about whether different clones of a MULTIEXPR_SUBLINK share output Params; their usage of that data structure won't overlap.

Per bug #17800 from Alexander Lakhin. Back-patch to all supported branches. In v13 and earlier, we can revert 3f7323cbb and follow-on fixes; however, I chose to keep the SubPlan.subLinkId field added in ccbb54c72. We don't need that anymore in the core code, but it's cheap enough to fill, and removing a plan node field in a minor release seems like it'd be asking for trouble.

Andres Freund and Tom Lane

Discussion: https://postgr.es/m/17800-ff90866b3906c964@postgresql.org
2023-02-15 | Fix handling of SCRAM-SHA-256's channel binding with RSA-PSS certificates | Michael Paquier

OpenSSL 1.1.1 and newer versions have added support for RSA-PSS certificates, which requires the use of a specific routine in OpenSSL to determine which hash function to use for channel binding in SCRAM-SHA-256. X509_get_signature_nid(), the original routine the channel binding code has relied on, is not able to determine which hash algorithm to use for such certificates. However, X509_get_signature_info(), new to OpenSSL 1.1.1, is able to do it.

This commit switches the channel binding logic to rely on X509_get_signature_info() over X509_get_signature_nid(), which would be the choice when building with 1.1.1 or newer. The error could have been triggered on the client or the server, hence libpq and the backend need to have their related code paths patched. Note that attempting to load an RSA-PSS certificate with OpenSSL 1.1.0 or older leads to a failure due to an unsupported algorithm.

The discovery of relying on X509_get_signature_info() comes from Jacob, the tests have been written by Heikki (with a few tweaks from me), while I have bundled the whole together while adding the bits needed for MSVC and meson.

This issue exists since channel binding exists, so backpatch all the way down. Some tests are added in 15~, triggered if compiling with OpenSSL 1.1.1 or newer, where the certificate and key files can easily be generated for RSA-PSS.

Reported-by: Gunnar "Nick" Bluth
Author: Jacob Champion, Heikki Linnakangas
Discussion: https://postgr.es/m/17760-b6c61e752ec07060@postgresql.org
Backpatch-through: 11
2023-02-10 | Un-revert "Disable STARTUP_PROGRESS_TIMEOUT in standby mode." | Robert Haas

This reverts commit 1eadfbdd7eb0679ba8d45787aa8b2f06e76de20a and thus reinstates commit 98e7234242a652497c99d4d0d6f2bf9a75d4e921. It's a better time to commit this now that the release is over.

Discussion: http://postgr.es/m/3509384.1675878203@sss.pgh.pa.us
2023-02-06 | Revert "Disable STARTUP_PROGRESS_TIMEOUT in standby mode." | Robert Haas

This reverts commit 98e7234242a652497c99d4d0d6f2bf9a75d4e921. I forgot that we're about to wrap a release, and this fix isn't critical enough to justify committing it right before one.

Discussion: http://postgr.es/m/2676424.1675700113@sss.pgh.pa.us
2023-02-06 | Disable STARTUP_PROGRESS_TIMEOUT in standby mode. | Robert Haas

In standby mode, we don't actually report progress of recovery, but up until now, startup_progress_timeout_handler() nevertheless got called every log_startup_progress_interval seconds. That's an unnecessary expense, so avoid it.

Report by Thomas Munro. Patch by Bharath Rupireddy, reviewed by Simon Riggs, Thomas Munro, and me. Back-patch to v15, where the problem was introduced.

Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKCHSffAj8zZJKJvNX7ygnQFxVD6wm1d-2j3fVw%2BMafPQ%40mail.gmail.com
2023-01-20 | Make our back branches build under -fkeep-inline-functions. | Tom Lane

Add "#ifndef FRONTEND" where necessary to make pg_waldump build on compilers that don't elide unused static-inline functions. This back-patches relevant parts of commit 3e9ca5260, fixing build breakage from dc7420c2c and back-patching of f10f0ae42.

Per recently-resurrected buildfarm member castoroides. We aren't expecting castoroides to build anything newer than v11, but we might as well clean up the intermediate branches while at it.
2023-01-15 | Make new GENERATED-expressions code more bulletproof. | Tom Lane

In commit 8bf6ec3ba I assumed that no code path could reach ExecGetExtraUpdatedCols without having gone through ExecInitStoredGenerated. That turns out not to be the case in logical replication: if there's an ON UPDATE trigger on the target table, trigger.c will call this code before anybody has set up its generated columns. Having seen that, I don't have a lot of faith in there not being other such paths.

ExecGetExtraUpdatedCols can call ExecInitStoredGenerated for itself, as long as we are willing to assume that it is only called in CMD_UPDATE operations, which on the whole seems like a safer leap of faith.

Per report from Vitaly Davydov.

Discussion: https://postgr.es/m/d259d69652b8c2ff50e14cda3c236c7f@postgrespro.ru
2023-01-05 | Fix calculation of which GENERATED columns need to be updated. | Tom Lane

We were identifying the updatable generated columns of inheritance children by transposing the calculation made for their parent. However, there's nothing that says a traditional-inheritance child can't have generated columns that aren't there in its parent, or that have different dependencies than are in the parent's expression. (At present it seems that we don't enforce that for partitioning either, which is likely wrong to some degree or other; but the case clearly needs to be handled with traditional inheritance.)

Hence, drop the very-klugy-anyway "extraUpdatedCols" RTE field in favor of identifying which generated columns depend on updated columns during executor startup. In HEAD we can remove extraUpdatedCols altogether; in back branches, it's still there but always empty.

Another difference between the HEAD and back-branch versions of this patch is that in HEAD we can add the new bitmap field to ResultRelInfo, but that would cause an ABI break in back branches. Like 4b3e37993, add a List field at the end of struct EState instead.

Back-patch to v13. The bogus calculation is also being made in v12, but it doesn't have the same visible effect because we don't use it to decide which generated columns to recalculate; as a consequence of which the patch doesn't apply easily. I think that there might still be a demonstrable bug associated with trigger firing conditions, but that's such a weird corner-case usage that I'm content to leave it unfixed in v12.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/CA+HiwqFshLKNvQUd1DgwJ-7tsTp=dwv7KZqXC4j2wYBV1aCDUA@mail.gmail.com
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
2023-01-02 | perl: Hide warnings inside perl.h when using gcc compatible compiler | Andres Freund

New versions of perl trigger warnings within perl.h with our compiler flags. At least -Wdeclaration-after-statement and -Wshadow=compatible-local are known to be problematic.

To avoid these warnings, conditionally use #pragma GCC system_header before including plperl.h.

Alternatively, we could add the include paths for problematic headers with -isystem, but that is a larger hammer and is harder to search for. A more granular alternative would be to use #pragma GCC diagnostic push/ignored/pop, but gcc warns about unknown warnings being ignored, so every to-be-ignored-temporarily compiler warning would require its own pg_config.h symbol and #ifdef.

As the warnings are voluminous, it makes sense to backpatch this change. But don't do so yet; we first want to gather buildfarm coverage - it's e.g. possible that some compiler claiming to be gcc compatible has issues with the pragma.

Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221228182455.hfdwd22zztvkojy2@awork3.anarazel.de
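The shape of the fix, simplified from what a header like plperl.h would contain; the guard condition is an assumption, not the committed one.

```c
/*
 * Inside a header, #pragma GCC system_header marks the remainder of the
 * file as a system header, so warnings generated by the perl headers it
 * goes on to include are suppressed, without per-warning #ifdefs.
 */
#ifdef __GNUC__
#pragma GCC system_header
#endif

/* ... then include perl's headers, e.g. EXTERN.h and perl.h ... */
```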
2022-12-13 | Rethink handling of [Prevent|Is]InTransactionBlock in pipeline mode. | Tom Lane

Commits f92944137 et al. made IsInTransactionBlock() set the XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false", on the grounds that that kept its API promises equivalent to those of PreventInTransactionBlock(). This turns out to be a bad idea though, because it allows an ANALYZE in a pipelined series of commands to cause an immediate commit, which is unexpected.

Furthermore, if we return "false" then we have another issue, which is that ANALYZE will decide it's allowed to do internal commit-and-start-transaction sequences, thus possibly unexpectedly committing the effects of previous commands in the pipeline.

To fix the latter situation, invent another transaction state flag XACT_FLAGS_PIPELINING, which explicitly records the fact that we have executed some extended-protocol command and not yet seen a commit for it. Then, require that flag to not be set before allowing IsInTransactionBlock() to return "false". Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT without fear of causing problems. This means that the API guarantees of IsInTransactionBlock now diverge from PreventInTransactionBlock, which is mildly annoying, but it seems OK given the very limited usage of IsInTransactionBlock. (In any case, a caller preferring the old behavior could always set NEEDIMMEDIATECOMMIT for itself.)

For consistency also require XACT_FLAGS_PIPELINING to not be set in PreventInTransactionBlock. This too is meant to prevent commands such as CREATE DATABASE from silently committing previous commands in a pipeline.

Per report from Peter Eisentraut. As before, back-patch to all supported branches (which sadly no longer includes v10).

Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
2022-12-08 | Remove new structure member from ResultRelInfo. | Etsuro Fujita

In commit ffbb7e65a, I added a ModifyTableState member to ResultRelInfo to save the owning ModifyTableState for use by nodeModifyTable.c when performing batch inserts, but as pointed out by Tom Lane, that changed the array stride of es_result_relations, and that would break any previously-compiled extension code that accesses that array. Fix by removing that member from ResultRelInfo and instead adding a List member at the end of EState to save such ModifyTableStates.

Per report from Tom Lane. Back-patch to v14, like the previous commit; I chose to apply the patch to HEAD as well, to make back-patching easy.

Discussion: http://postgr.es/m/4065383.1669395453%40sss.pgh.pa.us
2022-12-05 | Fix Memoize to work with partitionwise joining. | Tom Lane

A couple of places weren't up to speed for this. By sheer good luck, we didn't fail but just selected a non-memoized join plan, at least in the test case we have. Nonetheless, it's a bug, and I'm not quite sure that it couldn't have worse consequences in other examples. So back-patch to v14 where Memoize came in.

Richard Guo

Discussion: https://postgr.es/m/CAMbWs48GkNom272sfp0-WeD6_0HSR19BJ4H1c9ZKSfbVnJsvRg@mail.gmail.com
2022-12-02 | Fix incorrect output from pgoutput when using column lists. | Amit Kapila

For updates and deletes, we were not honoring the column list for old tuple values while sending tuple data via pgoutput. This results in pgoutput emitting more columns than expected.

This is not a problem for built-in logical replication, as we simply ignore additional columns based on the relation information sent previously, which didn't have those columns. However, some other users of the pgoutput plugin may expect the columns as per the column list. Also, sending extra columns unnecessarily consumes network bandwidth, defeating the purpose of the column list feature.

Reported-by: Gunnar Morling
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/CADGJaX9kiRZ-OH0EpWF5Fkyh1ZZYofoNRCrhapBfdk02tj5EKg@mail.gmail.com
2022-11-29 | Improve heuristics for compressing the KnownAssignedXids array. | Tom Lane

Previously, we'd compress only when the active range of array entries reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids). If max_connections is large, the first term could result in not compressing for a long time, resulting in much wastage of cycles in hot-standby backends scanning the array to take snapshots. Get rid of that term, and just bound it to 2 * pArray->numKnownAssignedXids.

That however creates the opposite risk, that we might spend too much effort compressing. Hence, consider compressing only once every 128 commit records. (This frequency was chosen by benchmarking. While we only tried one benchmark scenario, the results seem stable over a fairly wide range of frequencies.)

Also, force compression when processing RecoveryInfo WAL records (which should be infrequent); the old code could perform compression then, but would do so only after the same array-range check as for the transaction-commit path.

Also, opportunistically run compression if the startup process is about to wait for WAL, though not oftener than once a second. This should prevent cases where we waste lots of time by leaving the array not-compressed for long intervals due to low WAL traffic.

Lastly, add a simple check to keep us from uselessly compressing when the array storage is already compact.

Back-patch, as the performance problem is worse in pre-v14 branches than in HEAD.

Simon Riggs and Michail Nikolaev, with help from Tom Lane and Andres Freund.

Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
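One way the commit-path heuristics could compose, as a hedged sketch; the names, the function shape, and the placement of the 128-record rate limit are assumptions based on the text above, not the committed code.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the decision rule on the transaction-commit path: skip when
 * the storage is already compact, require the active range to be at least
 * twice the live entry count, and rate-limit actual compression to once
 * per 128 commit records.
 */
static bool
should_compress_sketch(int head, int tail, int numKnownAssignedXids,
					   uint32 *commits_since_compress)
{
	int			active = head - tail;

	if (active == numKnownAssignedXids)
		return false;			/* already compact: nothing to squeeze */

	if (active < 2 * numKnownAssignedXids)
		return false;			/* not enough wasted slots yet */

	if (++(*commits_since_compress) < 128)
		return false;			/* rate-limit work on the commit path */

	*commits_since_compress = 0;
	return true;
}
```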
2022-11-25 | Fix handling of pending inserts in nodeModifyTable.c. | Etsuro Fujita

Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added to nodeModifyTable.c code to flush pending inserts to the foreign-table result relation(s) before completing processing of the ModifyTable node, but the code failed to take into account the case where the INSERT query has modifying CTEs, leading to incorrect results. Also, that commit failed to flush pending inserts before firing BEFORE ROW triggers so that rows are visible to such triggers.

In that commit we scanned through EState's es_tuple_routing_result_relations or es_opened_result_relations list to find the foreign-table result relations to which pending inserts are flushed, but that would be inefficient in some cases. So to fix, 1) add a List member to EState to record the insert-pending result relations, and 2) modify nodeModifyTable.c so that it adds the foreign-table result relation to the list in ExecInsert() if appropriate, and flushes pending inserts properly using the list where needed.

While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(), which was added by that commit.

Back-patch to v14, where that commit appeared.

Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
2022-11-21 | Add comments and a missing CHECK_FOR_INTERRUPTS in ts_headline. | Tom Lane

I just spent an annoying amount of time reverse-engineering the 100%-undocumented API between ts_headline and the text search parser's prsheadline function. Add some commentary about that while it's fresh in mind. Also remove some unused macros in wparser_def.c.

While at it, I noticed that when commit 78e73e875 added a CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed doing so in the parallel function TS_phrase_execute, which surely needs one just as much.

Back-patch because of the missing CHECK_FOR_INTERRUPTS. Might as well back-patch the rest of this too.
2022-11-04 | Fix CREATE DATABASE so we can pg_upgrade DBs with OIDs above 2^31. | Tom Lane

Commit aa0105141 repeated one of the oldest mistakes in our book: thinking that OID is the same as int32. It isn't of course, and unsurprisingly the first person who came along with a database OID above 2 billion broke it. Repair.

Per bug #17677 from Sergey Pankov. Back-patch to v15.

Discussion: https://postgr.es/m/17677-a99fa067d7ed71c9@postgresql.org
2022-11-03 | Add casts to simplehash.h to silence C++ warnings. | Tom Lane

Casting the result of palloc etc. to the intended type is more per project style anyway. (The fact that cpluspluscheck doesn't notice these problems is because it doesn't expand any macros, which seems like a troubling shortcoming. Don't have a good idea about improving that.)

Back-patch to v13, which is as far as the patch applies cleanly; doesn't seem worth working harder.

David Geier

Discussion: https://postgr.es/m/aa5d88a3-71f4-3455-11cf-82de0372c941@gmail.com
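The change in miniature, as a self-contained example; palloc0_standin stands in for palloc0, and Entry for the macro-expanded element type.

```c
#include <stdlib.h>

typedef struct Entry
{
	int			key;
} Entry;

typedef struct Table
{
	Entry	   *data;
	size_t		size;
} Table;

/* stand-in for palloc0: the point is only the void * return type */
static void *
palloc0_standin(size_t n)
{
	return calloc(1, n);
}

static void
table_alloc(Table *tb)
{
	/*
	 * Without the explicit cast, this line is accepted by a C compiler but
	 * rejected by a C++ one, which will not implicitly convert void * to
	 * Entry *.  The cast makes it valid in both languages.
	 */
	tb->data = (Entry *) palloc0_standin(sizeof(Entry) * tb->size);
}
```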
2022-11-02 | Allow use of __sync_lock_test_and_set for spinlocks on any machine. | Tom Lane

If we have no special-case code in s_lock.h for the current platform, but the compiler has __sync_lock_test_and_set, use that instead of failing. It's unlikely that anybody's __sync_lock_test_and_set would be so awful as to be worse than our semaphore-based fallback, but if it is, they can (continue to) use --disable-spinlocks.

This allows removal of the RISC-V special case installed by commit c32fcac56, which generated exactly the same code but only on that platform. Usefully, the RISC-V buildfarm animals should now test at least the int variant of this patch.

I've manually tested both variants on ARM by dint of removing the ARM-specific stanza. We don't want to drop that, because it already has some special knowledge and is likely to grow more over time. Likewise, this is not meant to preclude installing special cases for other arches if that proves worthwhile.

Per discussion of a request to install the same code for loongarch64. Like the previous patch, we might as well back-patch to supported branches.

Discussion: https://postgr.es/m/761ac43d44b84d679ba803c2bd947cc0@HSMAILSVR04.hs.handsome.com.cn
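A minimal generic test-and-set spinlock of the kind described, using the GCC __sync builtins; this is a sketch, not s_lock.h's code.

```c
typedef int slock_t;

static inline int
tas(volatile slock_t *lock)
{
	/* returns the previous value: nonzero means somebody held the lock */
	return __sync_lock_test_and_set(lock, 1);
}

static inline void
s_unlock(volatile slock_t *lock)
{
	__sync_lock_release(lock);	/* stores 0 with release semantics */
}

static inline void
s_lock(volatile slock_t *lock)
{
	while (tas(lock))
	{
		/* spin; a real implementation would back off and eventually abort */
	}
}
```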
2022-11-02 | Fix outdated comment in tuplesort.h | David Rowley

This was outdated by 77bae396d.

Backpatch-through: 15, where 77bae396d was added
2022-10-24 | Update some comments that should've covered MERGE | Alvaro Herrera

Oversight in 7103ebb7aae8. Backpatch to 15.

Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48gnDjZXq3-b56dVpQCNUJ5hD9kdtWN4QFwKCEapspNsA@mail.gmail.com
2022-10-21 | pg_basebackup: Fix cross-platform tablespace relocation. | Robert Haas

Specifically, when pg_basebackup is invoked with -Tx=y, don't error out if x could plausibly be an absolute path either on Windows or on non-Windows systems. We don't know whether the remote system is running the same OS as the local system, so it's not appropriate to assume that our local rule about absolute pathnames is the same as the rule on the remote system.

Patch by me, reviewed by Tom Lane, Andrew Dunstan, and Davinder Singh.

Discussion: http://postgr.es/m/CA+TgmoY+jC3YiskomvYKDPK3FbrmsDU7_8+wMHt02HOdJeRb0g@mail.gmail.com
2022-10-18 | Rename SetSingleFuncCall() to InitMaterializedSRF() | Michael Paquier

Per discussion, the existing name of the routine used to initialize a SRF function with materialize mode is unpopular, so rename it. Equally, the flags of this function are renamed, as of:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS

The previous function and flags introduced in 9e98583 are kept around for compatibility purposes, so any extension code already compiled with v15 continues to work as-is. The declarations introduced here for compatibility will be removed from HEAD in a follow-up commit.

The new names have been suggested by Andres Freund and Melanie Plageman.

Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15
2022-10-16 | Rename parser token REF to REF_P to avoid a symbol conflict. | Tom Lane

In the latest version of Apple's macOS SDK, <sys/socket.h> fails to compile if "REF" is #define'd as something. Apple may or may not agree that this is a bug, and even if they do accept the bug report I filed, they probably won't fix it very quickly. In the meantime, our back branches will all fail to compile gram.y. v15 and HEAD currently escape the problem thanks to the refactoring done in 98e93a1fc, but that's purely accidental. Moreover, since that patch removed a widely-visible inclusion of <netdb.h>, back-patching it seems too likely to break third-party code.

Instead, change the token's code name to REF_P, following our usual convention for naming parser tokens that are likely to have symbol conflicts. The effects of that should be localized to the grammar and immediately surrounding files, so it seems like a safer answer.

Per project policy that we want to keep recently-out-of-support branches buildable on modern systems, back-patch all the way to 9.2.

Discussion: https://postgr.es/m/1803927.1665938411@sss.pgh.pa.us
2022-10-08 | pgstat: Prevent stats reset from corrupting slotname by removing slotname | Andres Freund

Previously PgStat_StatReplSlotEntry contained the slotname, which was mainly used when writing out the stats during shutdown, to identify the slot in the serialized data (at runtime the index in ReplicationSlotCtl->replication_slots is used, but that can change during a restart).

Unfortunately the slotname was overwritten when the slot's stats were reset. That turned out to only cause "real" problems if the slot was active during the reset, triggering an assertion failure at the next pgstat_report_replslot(). In other paths the stats were re-initialized during pgstat_acquire_replslot().

Fix this by removing slotname from PgStat_StatReplSlotEntry. Instead we can get the slot's name from the slot itself. Besides fixing a bug, this also is architecturally cleaner (a name is not really statistics). This is safe because stats, for a slot removed while shut down, will not be restored at startup.

In 15 the slotname is not removed, but renamed, to avoid changing the stats format. In master, bump PGSTAT_FILE_FORMAT_ID.

This commit does not contain a test for the fix. I think this can only be tested by a tap test starting pg_recvlogical in the background and checking pg_recvlogical's output. That type of test is notoriously hard to make reliable, so committing it shortly before the release is wrapped seems like a bad idea.

Reported-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/YxfagaTXUNa9ggLb@ahch-to
Backpatch: 15-, where the bug was introduced in 5891c7a8ed8f
2022-10-03 | Revert "Optimize order of GROUP BY keys". | Tom Lane

This reverts commit db0d67db2401eb6238ccc04c6407a4fd4f985832 and several follow-on fixes.

The idea of making a cost-based choice of the order of the sorting columns is not fundamentally unsound, but it requires cost information and data statistics that we don't really have. For example, relying on procost to distinguish the relative costs of different sort comparators is pretty pointless so long as most such comparator functions are labeled with cost 1.0. Moreover, estimating the number of comparisons done by Quicksort requires more than just an estimate of the number of distinct values in the input: you also need some idea of the sizes of the larger groups, if you want an estimate that's good to better than a factor of three or so. That's data that's often unknown or not very reliable. Worse, to arrive at estimates of the number of calls made to the lower-order-column comparison functions, the code needs to make estimates of the numbers of distinct values of multiple columns, which are necessarily even less trustworthy than per-column stats. Even if all the inputs are perfectly reliable, the cost algorithm as-implemented cannot offer useful information about how to order sorting columns beyond the point at which the average group size is estimated to drop to 1.

Close inspection of the code added by db0d67db2 shows that there are also multiple small bugs. These could have been fixed, but there's not much point if we don't trust the estimates to be accurate in-principle.

Finally, the changes in cost_sort's behavior made for very large changes (often a factor of 2 or so) in the cost estimates for all sorting operations, not only those for multi-column GROUP BY. That naturally changes plan choices in many situations, and there's precious little evidence to show that the changes are for the better. Given the above doubts about whether the new estimates are really trustworthy, it's hard to summon much confidence that these changes are better on the average.

Since we're hard up against the release deadline for v15, let's revert these changes for now. We can always try again later.

Note: in v15, I left T_PathKeyInfo in place in nodes.h even though it's unreferenced. Removing it would be an ABI break, and it seems a bit late in the release cycle for that.

Discussion: https://postgr.es/m/TYAPR01MB586665EB5FB2C3807E893941F5579@TYAPR01MB5866.jpnprd01.prod.outlook.com
2022-09-17 | Make check_usermap() parameter names consistent. | Peter Geoghegan

The function has a bool argument named "case_insensitive", but that was spelled "case_sensitive" in the declaration. Make them consistent now to avoid confusion in the future.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
Backpatch: 10-
2022-09-14 | Expand palloc/pg_malloc API for more type safety | Peter Eisentraut

This adds additional variants of palloc, pg_malloc, etc. that encapsulate common usage patterns and provide more type safety.

Specifically, this adds palloc_object(), palloc_array(), and repalloc_array(), which take the type name of the object to be allocated as their first argument and cast the return value as a pointer to that type. There are also palloc0_object() and palloc0_array() variants for initializing with zero, and pg_malloc_*() variants of all of the above.

Inspired by the talloc library.

This is backpatched from master so that future backpatchable code can make use of these APIs. This patch by itself does not contain any users of these APIs.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/bb755632-2a43-d523-36f8-a1e7a389a907@enterprisedb.com
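The new wrappers reduce to one-line macros of roughly this shape, sketched here over PostgreSQL's palloc family; the commented usage lines are illustrative, not taken from the patch.

```c
/*
 * Sketch of the type-safe allocation wrappers described above: the type
 * name appears once, and the cast cannot drift out of sync with the size
 * computation.  (palloc, palloc0 and repalloc are PostgreSQL's allocators.)
 */
#define palloc_object(type)			((type *) palloc(sizeof(type)))
#define palloc0_object(type)		((type *) palloc0(sizeof(type)))
#define palloc_array(type, count)	((type *) palloc(sizeof(type) * (count)))
#define palloc0_array(type, count)	((type *) palloc0(sizeof(type) * (count)))
#define repalloc_array(pointer, type, count) \
	((type *) repalloc(pointer, sizeof(type) * (count)))

/* illustrative usage:
 *   MyStruct *obj  = palloc_object(MyStruct);
 *   int      *xids = palloc_array(int, nxids);
 */
```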
2022-09-14 | Simplify handling of compression level with compression specifications | Michael Paquier

PG_COMPRESSION_OPTION_LEVEL is removed from the compression specification logic, and instead the compression level is always assigned with each library's default if nothing is directly given. This centralizes the checks on the compression methods supported by a given build, and always assigns a default compression level when parsing a compression specification. This results in complaining at an earlier stage than previously about whether a build supports a compression method, aka when parsing a specification in the backend or the frontend, and not when processing it.

zstd, lz4 and zlib are able to handle in their respective routines setting up the compression level the case of a default value, hence the backend or frontend code (pg_receivewal or pg_basebackup) now has no need to know what the default compression level should be if nothing is specified: the logic is now done so that the specification parsing assigns it. It can also be enforced by passing down a "level" set to the default value, which the backend will accept (the replication protocol is for example able to handle a command like BASE_BACKUP (COMPRESSION_DETAIL 'gzip:level=-1')).

This code simplification fixes an issue with pg_basebackup --gzip introduced by ffd5365, where the tarball of the streamed WAL segments would be created as pg_wal.tar.gz with uncompressed contents, while the intention is to compress the segments with gzip at a default level. The origin of the confusion comes from the handling of the default compression level of gzip (-1 or Z_DEFAULT_COMPRESSION), where a value of 0 was getting assigned, which is what walmethods.c would consider as equivalent to no compression when streaming WAL segments with its tar methods. Always assigning the compression level removes the confusion of some code paths considering a value of 0 set in a specification as either no compression or a default compression level.

Note that 010_pg_basebackup.pl has to be adjusted to skip a few tests where the shape of the compression detail string for client and server-side compression was checked using gzip. This is a result of the code simplification, as gzip specifications cannot be used if a build does not support it.

Reported-by: Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/1400032.1662217889@sss.pgh.pa.us
Backpatch-through: 15
2022-09-08 | Fix recovery_prefetch with low maintenance_io_concurrency. | Thomas Munro

We should process completed IOs *before* trying to start more, so that it is always possible to decode one more record when the decoded record queue is empty, even if maintenance_io_concurrency is set so low that a single earlier WAL record might have saturated the IO queue.

That bug was hidden because the effect of maintenance_io_concurrency was arbitrarily clamped to be at least 2. Fix the ordering, and also remove that clamp. We need a special case for 0, which is now treated the same as recovery_prefetch=off, but otherwise the number is used directly. This allows for testing with 1, which would have made the problem obvious in simple test scenarios.

Also add an explicit error message for missing contrecords. It was a bit strange that we didn't report an error already, and became a latent bug with prefetching, since the internal state that tracks aborted contrecords would not survive retrying, as revealed by 026_overwrite_contrecord.pl with this adjustment. Reporting an error prevents that.

Back-patch to 15.

Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220831140128.GS31833%40telsasoft.com
2022-09-06 | Fix new pg_publication_tables query. | Tom Lane

The addition of published column names forgot to filter on attisdropped, leading to cases where you could see "........pg.dropped.1........" or the like as a reportedly-published column. While we're here, rewrite the new subquery to get a more efficient plan for it.

Hou Zhijie, per report from Jaime Casanova. Back-patch to v15 where the bug was introduced. (Sadly, this means we need a post-beta4 catversion bump before beta4 has even hit the streets. I see no good alternative though.)

Discussion: https://postgr.es/m/Yxa1SU4nH2HfN3/i@ahch-to
2022-09-03 | Fix cache invalidation bug in recovery_prefetch. | Thomas Munro

XLogPageRead() can retry internally after a pread() system call has succeeded, in the case of short reads, and page validation failures while in standby mode (see commit 0668719801). Due to an oversight in commit 3f1ce973, these cases could leave stale data in the internal cache of xlogreader.c without marking it invalid. The main defense against stale cached data on failure to read a page was in the error handling path of the calling function ReadPageInternal(), but that wasn't quite enough for errors handled internally by XLogPageRead()'s retry loop if we then exited with XLREAD_WOULDBLOCK.

1. ReadPageInternal() now marks the cache invalid before calling the page_read callback, by setting state->readLen to 0. It'll be set to a non-zero value only after a successful read. It'll stay valid as long as the caller requests data in the cached range.

2. XLogPageRead() no longer performs internal retries while reading ahead. While such retries should work, the general philosophy is that we should give up prefetching if anything unusual happens so we can handle it when recovery catches up, to reduce the complexity of the system. Let's do that here too.

3. While here, a new function XLogReaderResetError() improves the separation between xlogrecovery.c and xlogreader.c, where the former previously clobbered the latter's internal error buffer directly. The new function makes this more explicit, and also clears a related flag, without which a standby would needlessly retry in the outer function.

Thanks to Noah Misch for tracking down the conditions required for a rare build farm failure in src/bin/pg_ctl/t/003_promote.pl, and providing a reproducer.

Back-patch to 15.

Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20220807003627.GA4168930%40rfd.leadboat.com
2022-09-01 | Revert SQL/JSON features | Andrew Dunstan

This reverts the following and makes some associated cleanups:

commit f79b803dc: Common SQL/JSON clauses
commit f4fb45d15: SQL/JSON constructors
commit 5f0adec25: Make STRING an unreserved_keyword.
commit 33a377608: IS JSON predicate
commit 1a36bc9db: SQL/JSON query functions
commit 606948b05: SQL JSON functions
commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
commit 4e34747c8: JSON_TABLE
commit fadb48b00: PLAN clauses for JSON_TABLE
commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
commit 14d3f24fa: Further improve jsonb_sqljson parallel test
commit a6baa4bad: Documentation for SQL/JSON features
commit b46bcf7a4: Improve readability of SQL/JSON documentation.
commit 112fdb352: Fix finalization for json_objectagg and friends
commit fcdb35c32: Fix transformJsonBehavior
commit 4cd8717af: Improve a couple of sql/json error messages
commit f7a605f63: Small cleanups in SQL/JSON code
commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
commit a1e7616d6: Rework SQL/JSON documentation
commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
commit 3c633f32b: Only allow returning string types or bytea from json_serialize
commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.

The release notes are also adjusted.

Backpatch to release 15.

Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org