path: root/src/include
Age    Commit message    Author
2020-10-29  Don't use custom OID symbols in pg_type.dat, either.  (Tom Lane)
On the same reasoning as in commit 36b931214, forbid using custom oid_symbol macros in pg_type as well as pg_proc, so that we always rely on the predictable macro names generated by genbki.pl. We do continue to grant grandfather status to the names CASHOID and LSNOID, although those are now considered deprecated aliases for the preferred names MONEYOID and PG_LSNOID. This is because there's likely to be client-side code using the old names, and this bout of neatnik-ism doesn't quite seem worth breaking client code. There might be a case for grandfathering EVTTRIGGEROID, too, since externally-maintained PLs may reference that symbol. But renaming such references to EVENT_TRIGGEROID doesn't seem like a particularly heavy lift --- we make far more significant backend API changes in every major release. For now I didn't add that, but we could reconsider if there's pushback. The other names changed here seem pretty unlikely to have any outside uses. Again, we could add alias macros if there are complaints, but for now I didn't. As before, no need for a catversion bump. John Naylor Discussion: https://postgr.es/m/CAFBsxsHpCbjfoddNGpnnnY5pHwckWfiYkMYSF74PmP1su0+ZOw@mail.gmail.com
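To illustrate the grandfathering described above, here is a minimal sketch of how the deprecated names can be kept as aliases for the genbki.pl-generated macros. This is not the literal generated header; the OID values shown are simply the long-standing assignments for the money and pg_lsn types.

    /* sketch of the genbki.pl-generated names (see pg_type_d.h for the real output) */
    #define MONEYOID   790
    #define PG_LSNOID  3220

    /* grandfathered, deprecated spellings kept for client-side code */
    #define CASHOID    MONEYOID
    #define LSNOID     PG_LSNOID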
2020-10-29  Track statistics for streaming of changes from ReorderBuffer.  (Amit Kapila)
This adds the statistics about transactions streamed to the decoding output plugin from ReorderBuffer. Users can query the pg_stat_replication_slots view to check these stats and call pg_stat_reset_replication_slot to reset the stats of a particular slot. Users can pass NULL in pg_stat_reset_replication_slot to reset stats of all the slots. Commit 9868167500 has added the basic infrastructure to capture the stats of slot and this commit extends the statistics collector to track additional information about slots. Bump the catversion as we have added new columns in the catalog entry. Author: Ajin Cherian and Amit Kapila Reviewed-by: Sawada Masahiko and Dilip Kumar Discussion: https://postgr.es/m/CAA4eK1+chpEomLzgSoky-D31qev19AmECNiEAietPQUGEFhtVA@mail.gmail.com
2020-10-28  Centralize horizon determination for temp tables, fixing bug due to skew.  (Andres Freund)
This fixes a bug in the edge case where, for a temp table, heap_page_prune() can end up with a different horizon than heap_vacuum_rel(), which can trigger errors like "ERROR: cannot freeze committed xmax ...".

The bug was introduced due to interaction of a7212be8b9e "Set cutoff xmin more aggressively when vacuuming a temporary table." with dc7420c2c92 "snapshot scalability: Don't compute global horizons while building snapshots.".

The problem is caused by lazy_scan_heap() assuming that the only reason its HeapTupleSatisfiesVacuum() call would return HEAPTUPLE_DEAD is if the tuple is a HOT tuple, or if the tuple's inserting transaction has aborted since the heap_page_prune() call. But after a7212be8b9e that was also possible in other cases for temp tables, because heap_page_prune() uses a different visibility test after dc7420c2c92.

The fix is fairly simple: Move the special case logic for temp tables from vacuum_set_xid_limits() to the infrastructure introduced in dc7420c2c92. That ensures that the horizon used for pruning is at least as aggressive as the one used by lazy_scan_heap(). The concrete horizon used for temp tables is slightly different than the logic in dc7420c2c92, but should always be as aggressive as before (see comments).

A significant benefit of centralizing the logic in procarray.c is that the more aggressive horizons for temp tables now apply not just to VACUUM but also to e.g. HOT pruning and the nbtree killtuples logic.

Because isTopLevel is not needed by vacuum_set_xid_limits() anymore, I undid the related changes from a7212be8b9e.

This commit also adds an isolation test ensuring that the more aggressive vacuuming and pruning of temp tables keeps working.

Debugged-By: Amit Kapila <amit.kapila16@gmail.com>
Debugged-By: Tom Lane <tgl@sss.pgh.pa.us>
Debugged-By: Ashutosh Sharma <ashu.coek88@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20201014203103.72oke6hqywcyhx7s@alap3.anarazel.de
Discussion: https://postgr.es/m/20201015083735.derdzysdtqdvxshp@alap3.anarazel.de
2020-10-28  Calculate extraUpdatedCols in query rewriter, not parser.  (Tom Lane)
It's unsafe to do this at parse time because addition of generated columns to a table would not invalidate stored rules containing UPDATEs on the table ... but there might now be dependent generated columns that were not there when the rule was made. This also fixes an oversight that rewriteTargetView failed to update extraUpdatedCols when transforming an UPDATE on an updatable view. (Since the new calculation is downstream of that, rewriteTargetView doesn't actually need to do anything; but before, there was a demonstrable bug there.) In v13 and HEAD, this leads to easily-visible bugs because (since commit c6679e4fc) we won't recalculate generated columns that aren't listed in extraUpdatedCols. In v12 this bitmap is mostly just used for trigger-firing decisions, so you'd only notice a problem if a trigger cared whether a generated column had been updated. I'd complained about this back in May, but then forgot about it until bug #16671 from Michael Paul Killian revived the issue. Back-patch to v12 where this field was introduced. If existing stored rules contain any extraUpdatedCols values, they'll be ignored because the rewriter will overwrite them, so the bug will be fixed even for existing rules. (But note that if someone were to update to 13.1 or 12.5, store some rules with UPDATEs on tables having generated columns, and then downgrade to a prior minor version, they might observe issues similar to what this patch fixes. That seems unlikely enough to not be worth going to a lot of effort to fix.) Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
2020-10-28  Don't use custom OID symbols in pg_proc.dat.  (Tom Lane)
We have a perfectly good convention for OID macros for built-in functions already, so making custom symbols is just introducing unnecessary deviation from the convention. Remove the one case that had snuck in, and add an error check in genbki.pl to discourage future instances. Although this touches pg_proc.dat, there's no need for a catversion bump since the actual catalog data isn't changed. John Naylor Discussion: https://postgr.es/m/CAFBsxsHpCbjfoddNGpnnnY5pHwckWfiYkMYSF74PmP1su0+ZOw@mail.gmail.com
2020-10-28  Fix foreign-key selectivity estimation in the presence of constants.  (Tom Lane)
get_foreign_key_join_selectivity() looks for join clauses that equate the two sides of the FK constraint. However, if we have a query like "WHERE fktab.a = pktab.a and fktab.a = 1", it won't find any such join clause, because equivclass.c replaces the given clauses with "fktab.a = 1 and pktab.a = 1", which can be enforced at the scan level, leaving nothing to be done for column "a" at the join level. We can fix that expectation without much trouble, but then a new problem arises: applying the foreign-key-based selectivity rule produces a rowcount underestimate, because we're effectively double-counting the selectivity of the "fktab.a = 1" clause. So we have to cancel that selectivity out of the estimate. To fix, refactor process_implied_equality() so that it can pass back the new RestrictInfo to its callers in equivclass.c, allowing the generated "fktab.a = 1" clause to be saved in the EquivalenceClass's ec_derives list. Then it's not much trouble to dig out the relevant RestrictInfo when we need to adjust an FK selectivity estimate. (While at it, we can also remove the expensive use of initialize_mergeclause_eclasses() to set up the new RestrictInfo's left_ec and right_ec pointers. The equivclass.c code can set those basically for free.) This seems like clearly a bug fix, but I'm hesitant to back-patch it, first because there's some API/ABI risk for extensions and second because we're usually loath to destabilize plan choices in stable branches. Per report from Sigrid Ehrenreich. Discussion: https://postgr.es/m/1019549.1603770457@sss.pgh.pa.us Discussion: https://postgr.es/m/AM6PR02MB5287A0ADD936C1FA80973E72AB190@AM6PR02MB5287.eurprd02.prod.outlook.com
2020-10-28  Add pg_relation_check_pages() to check on-disk pages of a relation  (Michael Paquier)
This makes use of CheckBuffer() introduced in c780a7a, adding a SQL wrapper able to do checks for all the pages of a relation. By default, all the fork types of a relation are checked, and it is possible to check only a given relation fork. Note that if the relation given in input has no physical storage or is temporary, then no errors are generated, allowing full-database checks when coupled with a simple scan of pg_class for example.

This is not limited to clusters with data checksums enabled, as clusters without data checksums can still apply checks on pages using the page headers or for the case of a page full of zeros.

This function returns a set of tuples consisting of:
- The physical file where a broken page has been detected (without the segment number as that can be AM-dependent, which can be guessed from the block number for heap). A relative path from PGPATH is used.
- The block number of the broken page.

By default, only superusers have access to this function but execution rights can be granted to other users.

The feature introduced here is still minimal, and more improvements could be done, like:
- Addition of a start and end block number to run checks on a range of blocks, which would apply only if one fork type is checked.
- Addition of some progress reporting.
- Throttling, with configuration parameters in function input or potentially some cost-based GUCs.

Regression tests are added for positive cases in the main regression test suite, and TAP tests are added for cases involving the emulation of page corruptions.

Bump catalog version.

Author: Julien Rouhaud, Michael Paquier
Reviewed-by: Masahiko Sawada, Justin Pryzby
Discussion: https://postgr.es/m/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com
2020-10-28  Add CheckBuffer() to check on-disk pages without shared buffer loading  (Michael Paquier)
CheckBuffer() is designed to be a concurrent-safe function able to run sanity checks on a relation page without loading it into the shared buffers. The operation is done using a lock on the partition involved in the shared buffer mapping hashtable and an I/O lock for the buffer itself, preventing the risk of false positives due to any concurrent activity. The primary use of this function is the detection of on-disk corruptions for relation pages. If a page is found in shared buffers, the on-disk page is checked if not dirty (a follow-up checkpoint would flush a valid version of the page if dirty anyway), as it could be possible that a page was present for a long time in shared buffers with its on-disk version corrupted. Such a scenario could lead to a corrupted cluster if a host is plugged off for example. If the page is not found in shared buffers, its on-disk state is checked. PageIsVerifiedExtended() is used to apply the same sanity checks as when a page gets loaded into shared buffers. This function will be used by an upcoming patch able to check the state of on-disk relation pages using a SQL function. Author: Julien Rouhaud, Michael Paquier Reviewed-by: Masahiko Sawada Discussion: https://postgr.es/m/CAOBaU_aVvMjQn=ge5qPiJOPMmOj5=ii3st5Q0Y+WuLML5sR17w@mail.gmail.com
2020-10-27  Add select_common_typmod()  (Peter Eisentraut)
This accompanies select_common_type() and select_common_collation(). Typmods were previously combined using hand-coded logic in several places. The logic in select_common_typmod() isn't very exciting, but it makes the code more compact and readable in a few locations, and in the future we can perhaps do more complicated things if desired. As a small enhancement, the type unification of the direct and aggregate arguments of hypothetical-set aggregates now unifies the typmod as well using this new function, instead of just dropping it. Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Discussion: https://www.postgresql.org/message-id/flat/97df3af9-8b5e-fb7f-a029-3eb7e80d7af9@2ndquadrant.com
2020-10-26  In INSERT/UPDATE, use the table's real tuple descriptor as target.  (Tom Lane)
Previously, ExecInitModifyTable relied on ExecInitJunkFilter, and thence ExecCleanTypeFromTL, to build the target descriptor from the query tlist. While we just checked (in ExecCheckPlanOutput) that the tlist produces compatible output, this is not a great substitute for the relation's actual tuple descriptor that's available from the relcache. For one thing, dropped columns will not be correctly marked attisdropped; it's a bit surprising that we've gotten away with that this long. But the real reason for being concerned with this is that using the table's descriptor means that the slot will have correct attrmissing data, allowing us to revert the klugy fix of commit ba9f18abd. (This commit undoes that one's changes in trigger.c, but keeps the new test case.) Thus we can solve the bogus-trigger-tuple problem with fewer cycles rather than more. No back-patch, since this doesn't fix any additional bug, and it seems somewhat more likely to have unforeseen side effects than ba9f18abd's narrow fix. Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
2020-10-26  Extend PageIsVerified() to handle more custom options  (Michael Paquier)
This is useful for checks of relation pages without having to load the pages into the shared buffers, and two cases can make use of that: page verification in base backups and the online, lock-safe, flavor. Compatibility is kept with past versions using a macro that calls the new extended routine with the set of options compatible with the original version. Extracted from a larger patch by the same author. Author: Anastasia Lubennikova Reviewed-by: Michael Paquier, Julien Rouhaud Discussion: https://postgr.es/m/608f3476-0598-2514-2c03-e05c7d2b0cbd@postgrespro.ru
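As a rough sketch of the compatibility arrangement described above (the flag names and exact macro shape are reconstructed from memory, so check bufpage.h before relying on them), the historical entry point can be kept as a macro over the extended routine:

    /* sketch only: options accepted by the extended routine */
    #define PIV_LOG_WARNING   (1 << 0)   /* log a WARNING on a bad page */
    #define PIV_REPORT_STAT   (1 << 1)   /* report the failure to pgstat */

    /* the original call keeps its old behavior */
    #define PageIsVerified(page, blkno) \
        PageIsVerifiedExtended(page, blkno, PIV_LOG_WARNING | PIV_REPORT_STAT)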
2020-10-23  Improve performance of Unicode {de,re}composition in the backend  (Michael Paquier)
This replaces the existing binary search with two perfect hash functions for the composition and the decomposition in the backend code, at the cost of slightly-larger binaries there (35kB in libpgcommon_srv.a). Per the measurements done, this improves the speed of the recomposition and decomposition by up to 30~40 times for the NFC and NFKC conversions, while all other operations get at least 40% faster. This is not as "good" as what libicu has, but it closes the gap a lot as per the feedback from Daniel Verite.

The decomposition table remains the same, getting used for the binary search in the frontend code, where we care more about the size of the libraries like libpq over performance as this gets involved only in code paths related to the SCRAM authentication. In consequence, note that the perfect hash function for the recomposition needs to use a new inverse lookup array back to the existing decomposition table.

The size of all frontend deliverables remains unchanged, even with --enable-debug, including libpq.

Author: John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAFBsxsHUuMFCt6-pU+oG-F1==CmEp8wR+O+bRouXWu6i8kXuqA@mail.gmail.com
2020-10-22  Extend amcheck to check heap pages.  (Robert Haas)
Mark Dilger, reviewed by Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, Amul Sul, and by me. Some last-minute cosmetic revisions by me. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
2020-10-21  Review format of code generated by PerfectHash.pm  (Michael Paquier)
80f8eb7 added some code generated by PerfectHash.pm to the normalization quick check headers that is incompatible with the gitattributes settings for this repository, as whitespace followed a set of tabs for the first element of a line in the table. Instead of adding a new exception to gitattributes, rework the generated format so that right padding with spaces is used instead of left padding. This keeps the generated table in a readable shape with its set of columns, making an update of gitattributes unnecessary.

Reported-by: Peter Eisentraut
Author: John Naylor
Discussion: https://postgr.es/m/d601b3b5-a3c7-5457-2f84-3d6513d690fc@2ndquadrant.com
2020-10-20  Change the attribute name in pg_stat_replication_slots view.  (Amit Kapila)
Change the attribute 'name' to 'slot_name' in the pg_stat_replication_slots view to make it clearer and consistent with other places, like the pg_stat_wal_receiver view, where we display the same attribute.

In passing, fix a typo in one of the macros in the related code.

Bump the catversion as we have modified the name in the catalog as well.

Reported-by: Noriyoshi Shinoda
Author: Noriyoshi Shinoda
Reviewed-by: Sawada Masahiko and Amit Kapila
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
2020-10-19  Fix list-munging bug that broke SQL function result coercions.  (Tom Lane)
Since commit 913bbd88d, check_sql_fn_retval() can either insert type coercion steps in-line in the Query that produces the SQL function's results, or generate a new top-level Query to perform the coercions, if modifying the Query's output in-place wouldn't be safe. However, it appears that the latter case has never actually worked, because the code tried to inject the new Query back into the query list it was passed ... which is not the list that will be used for later processing when we execute the SQL function "normally" (without inlining it). So we ended up with no coercion happening at run-time, leading to wrong results or crashes depending on the datatypes involved. While the regression tests look like they cover this area well enough, through a huge bit of bad luck all the test cases that exercise the separate-Query path were checking either inline-able cases (which accidentally didn't have the bug) or cases that are no-ops at runtime (e.g., varchar to text), so that the failure to perform the coercion wasn't obvious. The fact that the cases that don't work weren't allowed at all before v13 probably contributed to not noticing the problem sooner, too. To fix, get rid of the separate "flat" list of Query nodes and instead pass the real two-level list that is going to be used later. I chose to make the same change in check_sql_fn_statements(), although that has no actual bug, just so that we don't need that data structure at all. This is an API change, as evidenced by the adjustments needed to callers outside functions.c. That's a bit scary to be doing in a released branch, but so far as I can tell from a quick search, there are no outside callers of these functions (and they are sufficiently specific to our semantics for SQL-language functions that it's not apparent why any extension would need to call them). In any case, v13 already changed the API of check_sql_fn_retval() compared to prior branches. Per report from pinker. Back-patch to v13 where this code came in. Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com
2020-10-19  Remove PartitionRoutingInfo struct.  (Heikki Linnakangas)
The extra indirection needed to access its members via its enclosing ResultRelInfo seems pointless. Move all the fields from PartitionRoutingInfo to ResultRelInfo.

Author: Amit Langote
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com
2020-10-19  Revise child-to-root tuple conversion map management.  (Heikki Linnakangas)
Store the tuple conversion map to convert a tuple from a child table's format to the root format in a new ri_ChildToRootMap field in ResultRelInfo. It is initialized if transition tuple capture for FOR STATEMENT triggers or INSERT tuple routing on a partitioned table is needed. Previously, ModifyTable kept the maps in the per-subplan ModifyTableState->mt_per_subplan_tupconv_maps array, or when tuple routing was used, in ResultRelInfo->ri_Partitioninfo->pi_PartitionToRootMap. The new field replaces both of those. Now that the child-to-root tuple conversion map is always available in ResultRelInfo (when needed), remove the TransitionCaptureState.tcs_map field. The callers of Exec*Trigger() functions no longer need to set or save it, which is much less confusing and bug-prone. Also, as a future optimization, this will allow us to delay creating the map for a given result relation until the relation is actually processed during execution. Author: Amit Langote Discussion: https://www.postgresql.org/message-id/CA%2BHiwqHtCWLdK-LO%3DNEsvOdHx%2B7yv4mE_zYK0i3BH7dXb-wxog%40mail.gmail.com
2020-10-19  Clean up code to resolve the "root target relation" in nodeModifyTable.c  (Heikki Linnakangas)
When executing DML on a partitioned table or on a table with inheritance children, statement-level triggers must be fired against the table given in the original statement. The code to look that up was a bit messy and duplicative. Commit 501ed02cf6 added a helper function, getASTriggerResultRelInfo() (later renamed to getTargetResultRelInfo()) for it, but for some reason it was only used when firing AFTER STATEMENT triggers and the code to fire BEFORE STATEMENT triggers duplicated the logic.

Determine the target relation in ExecInitModifyTable(), and set it always in ModifyTableState. Code that used to call getTargetResultRelInfo() can now use ModifyTableState->rootResultRelInfo directly.

Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com
2020-10-15  llvmjit: Work around bug in LLVM 3.9 causing crashes after 72559438f92.  (Andres Freund)
Unfortunately in LLVM 3.9 LLVMGetAttributeCountAtIndex(func, index) crashes when called with an index that has 0 attributes. Since there's no way to work around this in the C API, add a small C++ wrapper doing so. The only reason this didn't fail before 72559438f92 is that there always are function attributes... Author: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20201016001254.w2nfj7gd74jmb5in@alap3.anarazel.de Backpatch: 11-, like 72559438f92
2020-10-15  doc: improve description of synchronous_commit modes  (Bruce Momjian)
Previously it wasn't clear exactly what each of the synchronous_commit modes accomplished. This clarifies that, and adds a table describing it. Only backpatched through 9.6 since 9.5 doesn't have all the options. Reported-by: kghost0@gmail.com Discussion: https://postgr.es/m/159741195522.14321.13812604195366728976@wrigleys.postgresql.org Backpatch-through: 9.6
2020-10-15  Review logical replication tablesync code  (Alvaro Herrera)
Most importantly, remove optimization in LogicalRepSyncTableStart that skips the normal walrcv_startstreaming/endstreaming dance. The optimization is not critically important for production uses anyway, since it only fires in cases with no activity, and saves an uninteresting amount of work even then. Critically, it obscures bugs by hiding the interesting code path from test cases. Also: in GetSubscriptionRelState, remove pointless relation open; access pg_subscription_rel->srsubstate with GETSTRUCT as is typical rather than SysCacheGetAttr; remove unused 'missing_ok' argument. In wait_for_relation_state_change, use explicit catalog snapshot invalidation rather than obscurely (and expensively) through GetLatestSnapshot. In various places: sprinkle comments more liberally and rewrite a number of them. Other cosmetic code improvements. No backpatch, since no bug is being fixed here. Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Reviewed-by: Petr Jelínek <petr.jelinek@2ndquadrant.com> Discussion: https://postgr.es/m/20201010190637.GA5774@alvherre.pgsql
2020-10-15  Add documentation link to attributes supported by Clang  (Peter Eisentraut)
2020-10-15  Execute invalidation messages for each XLOG_XACT_INVALIDATIONS message during logical decoding.  (Amit Kapila)
Prior to commit c55040ccd0 we had no way of knowing the invalidations before commit. So, while decoding, we used to execute all the invalidations at each command end, as we had no way of knowing which invalidations happened before that command. Due to this, transactions involving large amounts of DDL used to take more time and also led to high CPU usage. But now we know the specific invalidations at each command end, so we execute only the required invalidations.

It has been observed that decoding of a transaction containing truncation of a table with 1000 partitions would be finished in 1s whereas before this patch it used to take 4-5 minutes.

Author: Dilip Kumar
Reviewed-by: Amit Kapila and Keisuke Kuroda
Discussion: https://postgr.es/m/CANDwggKYveEtXjXjqHA6RL3AKSHMsQyfRY6bK+NqhAWJyw8psQ@mail.gmail.com
2020-10-14  Remove es_result_relation_info from EState.  (Heikki Linnakangas)
Maintaining 'es_result_relation_info' correctly at all times has become cumbersome, especially with partitioning where each partition gets its own result relation info. Having to set and reset it across arbitrary operations has caused bugs in the past. This changes all the places that used 'es_result_relation_info', to receive the currently active ResultRelInfo via function parameters instead. Author: Amit Langote Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com
2020-10-14  Include result relation info in direct modify ForeignScan nodes.  (Heikki Linnakangas)
FDWs that can perform an UPDATE/DELETE remotely using the "direct modify" set of APIs need to access the ResultRelInfo of the target table. That's currently available in EState.es_result_relation_info, but the next commit will remove that field. This commit adds a new resultRelation field in ForeignScan, to store the target relation's RT index, and the corresponding ResultRelInfo in ForeignScanState. The FDW's PlanDirectModify callback is expected to set 'resultRelation' along with 'operation'. The core code doesn't need them for anything, they are for the convenience of FDW's Begin- and IterateDirectModify callbacks. Authors: Amit Langote, Etsuro Fujita Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com
2020-10-14  Use https for gnu.org links  (Peter Eisentraut)
Mostly already done, but there were some stragglers.
2020-10-13  Create ResultRelInfos later in InitPlan, index them by RT index.  (Heikki Linnakangas)
Instead of allocating all the ResultRelInfos upfront in one big array, allocate them in ExecInitModifyTable(). es_result_relations is now an array of ResultRelInfo pointers, rather than an array of structs, and it is indexed by the RT index. This simplifies things: we get rid of the separate concept of a "result rel index", and don't need to set it in setrefs.c anymore. This also allows follow-up optimizations (not included in this commit yet) to skip initializing ResultRelInfos for target relations that were not needed at runtime, and removal of the es_result_relation_info pointer. The EState arrays of regular result rels and root result rels are merged into one array. Similarly, the resultRelations and rootResultRelations lists in PlannedStmt are merged into one. It's not actually clear to me why they were kept separate in the first place, but now that the es_result_relations array is indexed by RT index, it certainly seems pointless. The PlannedStmt->resultRelations list is now only needed for ExecRelationIsTargetRelation(). One visible effect of this change is that ExecRelationIsTargetRelation() will now return 'true' also for the partition root, if a partitioned table is updated. That seems like a good thing, although the function isn't used in core code, and I don't see any reason for an FDW to call it on a partition root. Author: Amit Langote Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com
2020-10-11  Choose ppc compare_exchange constant path for more operand values.  (Noah Misch)
The implementation uses smaller code when the "expected" operand is a small constant, but the implementation needlessly defined the set of acceptable constants more narrowly than the ABI does. Core PostgreSQL and PGXN don't use the constant path at all, so this is future-proofing. Back-patch to v13, where commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371 introduced this code. Reviewed by Tom Lane. Reported by Christoph Berg. Discussion: https://postgr.es/m/20201009092825.GD889580@msg.df7cb.de
2020-10-11  For ppc gcc, implement 64-bit compare_exchange and fetch_add with asm.  (Noah Misch)
While xlc defines __64BIT__, gcc does not. Due to this oversight in commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371, gcc builds continued implementing 64-bit atomics by way of intrinsics. Back-patch to v13, where that commit first appeared. Reviewed by Tom Lane. Discussion: https://postgr.es/m/20201011051043.GA1724101@rfd.leadboat.com
2020-10-11  Use perfect hash for NFC and NFKC Unicode Normalization quick check  (Michael Paquier)
This makes the normalization quick check about 30% faster for NFC and 50% faster for NFKC than the binary search used previously. The hash lookup reuses the existing array of bit fields used for the binary search to get the quick check property and is generated as part of "make update-unicode" in src/common/unicode/. Author: John Naylor Reviewed-by: Mark Dilger, Michael Paquier Discussion: https://postgr.es/m/CACPNZCt4fbJ0_bGrN5QPt34N4whv=mszM0LMVQdoa2rC9UMRXA@mail.gmail.com
2020-10-10  Recognize network-failure errnos as indicating hard connection loss.  (Tom Lane)
Up to now, only ECONNRESET (and EPIPE, in most but not quite all places) received special treatment in our error handling logic. This patch changes things so that related error codes such as ECONNABORTED are also recognized as indicating that the connection's dead and unlikely to come back. We continue to think, however, that only ECONNRESET and EPIPE should be reported as probable server crashes; the other cases indicate network connectivity problems but prove little about the server's state. Thus, there's no change in the error message texts that are output for such cases. The key practical effect is that errcode_for_socket_access() will report ERRCODE_CONNECTION_FAILURE rather than ERRCODE_INTERNAL_ERROR for a network failure. It's expected that this will fix buildfarm member lorikeet's failures since commit 32a9c0bdf, as that seems to be due to not treating ECONNABORTED equivalently to ECONNRESET. The set of errnos treated this way now includes ECONNABORTED, EHOSTDOWN, EHOSTUNREACH, ENETDOWN, ENETRESET, and ENETUNREACH. Several of these were second-class citizens in terms of their handling in places like get_errno_symbol(), so upgrade the infrastructure where necessary. As committed, this patch assumes that all these symbols are defined everywhere. POSIX specifies all of them except EHOSTDOWN, but that seems to exist on all platforms of interest; we'll see what the buildfarm says about that. Probably this should be back-patched, but let's see what the buildfarm thinks of it first. Fujii Masao and Tom Lane Discussion: https://postgr.es/m/2621622.1602184554@sss.pgh.pa.us
2020-10-09  Fix our Windows stat() emulation to handle file sizes > 4GB.  (Tom Lane)
Hack things so that our idea of "struct stat" is equivalent to Windows' struct __stat64, allowing it to have a wide enough st_size field. Instead of relying on native stat(), use GetFileInformationByHandle(). This avoids a number of issues with Microsoft's multiple and rather slipshod emulations of stat(). We still need to jump through hoops to deal with ERROR_DELETE_PENDING, though :-( Pull the relevant support code out of dirmod.c and put it into its own file, win32stat.c. Still TODO: do we need to do something different with lstat(), rather than treating it identically to stat()? Juan José Santamaría Flecha, reviewed by Emil Iggland; based on prior work by Michael Paquier, Sergey Zubkovsky, and others Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905CF5099@g01jpexmbkw24 Discussion: https://postgr.es/m/15858-9572469fd3b73263@postgresql.org
2020-10-08  Track statistics for spilling of changes from ReorderBuffer.  (Amit Kapila)
This adds the statistics about transactions spilled to disk from ReorderBuffer. Users can query the pg_stat_replication_slots view to check these stats and call pg_stat_reset_replication_slot to reset the stats of a particular slot. Users can pass NULL in pg_stat_reset_replication_slot to reset stats of all the slots. This commit extends the statistics collector to track this information about slots. Author: Sawada Masahiko and Amit Kapila Reviewed-by: Amit Kapila and Dilip Kumar Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
2020-10-07  Prevent internal overflows in date-vs-timestamp and related comparisons.  (Tom Lane)
The date-vs-timestamp, date-vs-timestamptz, and timestamp-vs-timestamptz comparators all worked by promoting the first type to the second and then doing a simple same-type comparison. This works fine, except when the conversion result is out of range, in which case we throw an entirely avoidable error. The sources of such failures are (a) type date can represent dates much farther in the future than the timestamp types can; (b) timezone rotation might cause a just-in-range timestamp value to become a just-out-of-range timestamptz value. Up to now we just ignored these corner-case issues, but now we have an actual user complaint (bug #16657 from Huss EL-Sheikh), so let's do something about it. It turns out that commit 52ad1e659 already built all the necessary infrastructure to support error-free comparisons, but neglected to actually use it in the main-line code paths. Fix that, do a little bit of code style review, and remove the now-duplicate logic in jsonpath_exec.c. Back-patch to v13 where 52ad1e659 came in. We could take this back further by back-patching said infrastructure, but given the small number of complaints so far, I don't feel a great need to. Discussion: https://postgr.es/m/16657-cde2f876d8cc7971@postgresql.org
2020-10-05  Support for OUT parameters in procedures  (Peter Eisentraut)
Unlike for functions, OUT parameters for procedures are part of the signature. Therefore, they have to be listed in pg_proc.proargtypes as well as mentioned in ALTER PROCEDURE and DROP PROCEDURE. Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/2b8490fe-51af-e671-c504-47359dc453c5@2ndquadrant.com
2020-10-04  Make postgres.bki use the same literal-string syntax as postgresql.conf.  (Tom Lane)
The BKI file's string quoting conventions were previously quite weird, perhaps as a result of repurposing a function built to scan single-quoted strings to scan double-quoted ones. Change to use the same rules as we use in GUC files, allowing some simplifications in genbki.pl and initdb.c. While at it, completely remove the backend's scanstr() function, which was essentially a duplicate of the string dequoting code in guc-file.l. Instead export that one (under a less generic name than it had) and let bootscanner.l use it. Now we can clarify that scansup.c exists only to support the main lexer. We could alternatively have removed GUC_scanstr, but this way seems better since the previous arrangement could mislead a reader into thinking that scanstr() had something to do with the main lexer's handling of string literals. Maybe it did once, but if so it was a long time ago. This patch does not bump catversion, since the initially-installed catalog contents don't change. Note however that successful initdb after applying this patch will require up-to-date postgres.bki as well as postgres and initdb executables. In passing, remove a bunch of very-long-obsolete #include's in bootparse.y and bootscanner.l. John Naylor Discussion: https://postgr.es/m/CACPNZCtDpd18T0KATTmCggO2GdVC4ow86ypiq5ENff1VnauL8g@mail.gmail.com
2020-10-02  Add pg_stat_wal statistics view.  (Fujii Masao)
This view shows the statistics about WAL activity. Currently it has only two columns: wal_buffers_full and stats_reset. wal_buffers_full column indicates the number of times WAL data was written to the disk because WAL buffers got full. This information is useful when tuning wal_buffers. stats_reset column indicates the time at which these statistics were last reset. pg_stat_wal view is also the basic infrastructure to expose other various statistics about WAL activity later. Bump PGSTAT_FILE_FORMAT_ID due to the change in pgstat format. Bump catalog version. Author: Masahiro Ikeda Reviewed-by: Takayuki Tsunakawa, Kyotaro Horiguchi, Amit Kapila, Fujii Masao Discussion: https://postgr.es/m/188bd3f2d2233cf97753b5ced02bb050@oss.nttdata.com
2020-09-28  Add for_each_from, to simplify loops starting from non-first list cells.  (Tom Lane)
We have a dozen or so places that need to iterate over all but the first cell of a List. Prior to v13 this was typically written as

    for_each_cell(lc, lnext(list_head(list)))

Commit 1cff1b95a changed these to

    for_each_cell(lc, list, list_second_cell(list))

This patch introduces a new macro for_each_from() which expresses the start point as a list index, allowing these to be written as

    for_each_from(lc, list, 1)

This is marginally more efficient, since ForEachState.i can be initialized directly instead of backing into it from a ListCell address. It also seems clearer and less typo-prone.

Some of the remaining uses of for_each_cell() look like they could profitably be changed to for_each_from(), but here I confined myself to changing uses of list_second_cell().

Also, fix for_each_cell_setup() and for_both_cell_setup() to const-ify their arguments; that's a simple oversight in 1cff1b95a.

Back-patch into v13, on the grounds that (1) the const-ification is a minor bug fix, and (2) it's better for back-patching purposes if we only have two ways to write these loops rather than three.

In HEAD, also remove list_third_cell() and list_fourth_cell(), which were also introduced in 1cff1b95a, and are unused as of cc99baa43. It seems unlikely that any third-party code would have started to use them already; anyone who has can be directed to list_nth_cell instead.

Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
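A short usage sketch of the new macro, built directly from the forms quoted above; the function and list names are made up for illustration:

    /* Iterate over every element of "args" except the first one. */
    static void
    process_tail(List *args)
    {
        ListCell   *lc;

        for_each_from(lc, args, 1)
        {
            Node       *n = (Node *) lfirst(lc);

            /* ... work on n here ... */
        }
    }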
2020-09-29  Revert "Change SHA2 implementation based on OpenSSL to use EVP digest routines"  (Michael Paquier)
This reverts commit e21cbb4, as the switch to EVP routines requires a more careful design where we would need to have at least our wrapper routines return a status instead of issuing an error by themselves to let the caller do the error handling. The memory handling was also incorrect and could cause leaks in the backend if a failure happened, requiring most likely a callback to do the necessary cleanup as the only clean way to be able to allocate an EVP context requires the use of an allocation within OpenSSL. The potential rework of the wrappers also impacts the fallback implementation when not building with OpenSSL. Originally, prairiedog has reported a compilation failure, but after discussion with Tom Lane this needs a better design. Discussion: https://postgr.es/m/20200928073330.GC2316@paquier.xyz
2020-09-28  Change SHA2 implementation based on OpenSSL to use EVP digest routines  (Michael Paquier)
The use of low-level hash routines is not recommended by upstream OpenSSL since 2000, and pgcrypto already switched to EVP as of 5ff4a67. Note that this also fixes a failure with SCRAM authentication when using FIPS in OpenSSL, but as there have been few complaints about this problem and as this causes an ABI breakage, no backpatch is done. Author: Michael Paquier, Alessandro Gherardi Reviewed-by: Daniel Gustafsson Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz Discussion: https://postgr.es/m/20180911030250.GA27115@paquier.xyz
2020-09-28  Improve pg_list.h's linitial(), lsecond() and co macros  (David Rowley)
Prior to this commit, the linitial(), lsecond(), lthird(), lfourth() macros and their int and Oid list cousins would call their corresponding inlined function to fetch the cell of interest. Those inline functions were kind enough to return NULL if the particular cell did not exist. Unfortunately, the care that these functions took was of no relevance to the calling macros as they proceeded to directly dereference the returned value without any regard to whether that value was NULL or not. If it had been, we'd have segfaulted. Of course, the fact that we would have segfaulted on misuse of these macros just goes to prove that nobody is relying on the empty or list too small checks. So here we just get rid of those checks completely. The existing inline functions have been left alone as someone may be using those directly. We just replace the call within each macro to use list_nth_cell(). For the llast*() case we require a new list_last_cell() inline function to get away from the multiple evaluation hazard that we'd get if we fetched ->length on the macro's parameter. Author: David Rowley Reviewed-by: Tom Lane Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
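The resulting macro definitions look roughly like this; this is a sketch based on the description above, not a verbatim copy of pg_list.h:

    /* fetch cells directly; an out-of-range index is now caught by
     * list_nth_cell()'s assertions rather than returning NULL */
    #define linitial(l)   lfirst(list_nth_cell(l, 0))
    #define lsecond(l)    lfirst(list_nth_cell(l, 1))
    #define lthird(l)     lfirst(list_nth_cell(l, 2))

    /* llast() goes through the new helper so the List argument is
     * evaluated only once */
    #define llast(l)      lfirst(list_last_cell(l))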
2020-09-27  Move resolution of AlternativeSubPlan choices to the planner.  (Tom Lane)
When commit bd3daddaf introduced AlternativeSubPlans, I had some ambitions towards allowing the choice of subplan to change during execution. That has not happened, or even been thought about, in the ensuing twelve years; so it seems like a failed experiment.

So let's rip that out and resolve the choice of subplan at the end of planning (in setrefs.c) rather than during executor startup. This has a number of positive benefits:

* Removal of a few hundred lines of executor code, since AlternativeSubPlans need no longer be supported there.

* Removal of executor-startup overhead (particularly, initialization of subplans that won't be used).

* Removal of incidental costs of having a larger plan tree, such as tree-scanning and copying costs in the plancache; not to mention setrefs.c's own costs of processing the discarded subplans.

* EXPLAIN no longer has to print a weird (and undocumented) representation of an AlternativeSubPlan choice; it sees only the subplan actually used. This should mean less confusion for users.

* Since setrefs.c knows which subexpression of a plan node it's working on at any instant, it's possible to adjust the estimated number of executions of the subplan based on that. For example, we should usually estimate more executions of a qual expression than a targetlist expression. The implementation used here is pretty simplistic, because we don't want to expend a lot of cycles on the issue; but it's better than ignoring the point entirely, as the executor had to.

That last point might possibly result in shifting the choice between hashed and non-hashed EXISTS subplans in a few cases, but in general this patch isn't meant to change planner choices. Since we're doing the resolution so late, it's really impossible to change any plan choices outside the AlternativeSubPlan itself.

Patch by me; thanks to David Rowley for review.

Discussion: https://postgr.es/m/1992952.1592785225@sss.pgh.pa.us
2020-09-26  Fix the logical replication from HEAD to lower versions.  (Amit Kapila)
Commit 464824323e changed the logical replication protocol to allow the streaming of in-progress transactions and used the new version of protocol irrespective of the server version. Use the appropriate version of the protocol based on the server version. Reported-by: Ashutosh Sharma Author: Dilip Kumar Reviewed-by: Ashutosh Sharma and Amit Kapila Discussion: https://postgr.es/m/CAE9k0P=9OpXcNrcU5Gsvd5MZ8GFpiN833vNHzX6Uc=8+h1ft1Q@mail.gmail.com
2020-09-25  Defer flushing of SLRU files.  (Thomas Munro)
Previously, we called fsync() after writing out individual pg_xact, pg_multixact and pg_commit_ts pages due to cache pressure, leading to regular I/O stalls in user backends and recovery. Collapse requests for the same file into a single system call as part of the next checkpoint, as we already did for relation files, using the infrastructure developed by commit 3eb77eba. This can cause a significant improvement to recovery performance, especially when it's otherwise CPU-bound. Hoist ProcessSyncRequests() up into CheckPointGuts() to make it clearer that it applies to all the SLRU mini-buffer-pools as well as the main buffer pool. Rearrange things so that data collected in CheckpointStats includes SLRU activity. Also remove the Shutdown{CLOG,CommitTS,SUBTRANS,MultiXact}() functions, because they were redundant after the shutdown checkpoint that immediately precedes them. (I'm not sure if they were ever needed, but they aren't now.) Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (parts) Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com> Discussion: https://postgr.es/m/CA+hUKGLJ=84YT+NvhkEEDAuUtVHMfQ9i-N7k_o50JmQ6Rpj_OQ@mail.gmail.com
2020-09-24  Expose oldSnapshotControl definition via new header.  (Robert Haas)
This makes it possible for code outside snapmgr.c to examine the contents of this data structure. This commit does not add any code which actually does so; a subsequent commit will make that change. Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar. Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
2020-09-23  Improve behavior of tsearch_readline(), and remove t_readline().  (Tom Lane)
Commit fbeb9da22, which added the tsearch_readline APIs, left t_readline() in place as a compatibility measure. But that function has been unused and deprecated for twelve years now, so that seems like enough time to remove it. Doing so, and merging t_readline's code into tsearch_readline, aids in making several useful improvements:

* The hard-wired 4K limit on line length in tsearch data files is removed, by using a StringInfo buffer instead of a fixed-size buffer.

* We can buy back the per-line palloc/pfree added by 3ea7e9550 in the common case where encoding conversion is not required.

* We no longer need a separate pg_verify_mbstr call, as that functionality was folded into encoding conversion some time ago.

(We could have done some of this stuff while keeping t_readline as a separate API, but there seems little point, since there's no reason for anyone to still be using t_readline directly.)

Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
2020-09-23  Improve error cursor positions for problems with partition bounds.  (Tom Lane)
We failed to pass down the query string to check_new_partition_bound, so that its attempts to provide error cursor positions were for naught; one must have the query string to get parser_errposition to do anything. Adjust its API to require a ParseState to be passed down. Also, improve the logic inside check_new_partition_bound so that the cursor points at the partition bound for the specific column causing the issue, when one can be identified. That part is also for naught if we can't determine the query position of the column with the problem. Improve transformPartitionBoundValue so that it makes sure that const-simplified partition expressions will be properly labeled with positions. In passing, skip calling evaluate_expr if the value is already a Const, which is surely the most common case. Alexandra Wang, Ashwin Agrawal, Amit Langote; reviewed by Ashutosh Bapat Discussion: https://postgr.es/m/CACiyaSopZoqssfMzgHk6fAkp01cL6vnqBdmTw2C5_KJaFR_aMg@mail.gmail.com Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com
2020-09-22  Rethink API for pg_get_line.c, one more time.  (Tom Lane)
Further experience says that the appending behavior offered by pg_get_line_append is useful to only a very small minority of callers. For most, the requirement to reset the buffer after each line is just an error-prone nuisance. Hence, invent another alternative call pg_get_line_buf, which takes care of that detail. Noted while reviewing a patch from Daniel Gustafsson. Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
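A minimal usage sketch, assuming pg_get_line_buf(FILE *, StringInfo) reads one line per call into a buffer that it resets itself and returns false at EOF; the file name is hypothetical and the exact signature should be checked against the header before use:

    StringInfoData buf;
    FILE       *fp;

    fp = AllocateFile("some_list.txt", "r");    /* hypothetical input file */
    initStringInfo(&buf);
    while (fp != NULL && pg_get_line_buf(fp, &buf))
    {
        /* buf.data holds the current line; no manual reset needed */
        elog(DEBUG1, "read: %s", buf.data);
    }
    if (fp != NULL)
        FreeFile(fp);
    pfree(buf.data);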
2020-09-19  Remove large fill factor support from dynahash.c.  (Thomas Munro)
Since ancient times we have had support for a fill factor (maximum load factor) to be set for a dynahash hash table, but:

1. It was an integer, whereas for in-memory hash tables interesting load factor targets are probably somewhere near the 0.75-1.0 range.

2. It was implemented in a way that performed an expensive division operation that regularly showed up in profiles.

3. We are not aware of anyone ever having used a non-default value.

Therefore, remove support, effectively fixing it at 1.

Author: Jakub Wartak <Jakub.Wartak@tomtom.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/VI1PR0701MB696044FC35013A96FECC7AC8F62D0%40VI1PR0701MB6960.eurprd07.prod.outlook.com