path: root/src/include
Age  Commit message  Author
2021-01-16  Prevent excess SimpleLruTruncate() deletion.  (Noah Misch)
Every core SLRU wraps around. With the exception of pg_notify, the wrap point can fall in the middle of a page. Account for this in the PagePrecedes callback specification and in SimpleLruTruncate()'s use of said callback. Update each callback implementation to fit the new specification. This changes SerialPagePrecedesLogically() from the style of asyncQueuePagePrecedes() to the style of CLOGPagePrecedes(). (Whereas pg_clog and pg_serial share a key space, pg_serial is nothing like pg_notify.) The bug fixed here has the same symptoms and user followup steps as 592a589a04bd456410b853d86bd05faa9432cbbb. Back-patch to 9.5 (all supported versions). Reviewed by Andrey Borodin and (in earlier versions) by Tom Lane. Discussion: https://postgr.es/m/20190202083822.GC32531@gust.leadboat.com
2021-01-14  Prevent drop of tablespaces used by partitioned relations  (Alvaro Herrera)
When a tablespace is used in a partitioned relation (per commits ca4103025dfe in pg12 for tables and 33e6c34c3267 in pg11 for indexes), it is possible to drop the tablespace, potentially causing various problems. One such was reported in bug #16577, where a rewriting ALTER TABLE causes a server crash.

Protect against this by using pg_shdepend to keep track of tablespaces when used for relations that don't keep physical files; we now abort a tablespace drop if we see that the tablespace is referenced from any partitioned relations.

Backpatch this to 11, where this problem has been latent all along. We don't try to create pg_shdepend entries for existing partitioned indexes/tables, but any ones that are modified going forward will be protected.

Note slight behavior change: when trying to drop a tablespace that contains both regular tables as well as partitioned ones, you'd previously get ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE and now you'll get ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST. Arguably, the latter is more correct.

It is possible to add protecting pg_shdepend entries for existing tables/indexes, by doing

    ALTER TABLE ONLY some_partitioned_table SET TABLESPACE pg_default;
    ALTER TABLE ONLY some_partitioned_table SET TABLESPACE original_tablespace;

for each partitioned table/index that is not in the database default tablespace. Because these partitioned objects do not have storage, no file needs to be actually moved, so it shouldn't take more time than what's required to acquire locks.

This query can be used to search for such relations:

    SELECT ... FROM pg_class WHERE relkind IN ('p', 'I') AND reltablespace <> 0

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/16577-881633a9f9894fd5@postgresql.org
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
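For illustration, one plausible concrete form of that search (the column list and the join to pg_tablespace are assumptions; the commit message deliberately elides them):

    SELECT c.relname, c.relkind, t.spcname
    FROM pg_class c
    JOIN pg_tablespace t ON t.oid = c.reltablespace
    WHERE c.relkind IN ('p', 'I')   -- partitioned tables and partitioned indexes
      AND c.reltablespace <> 0;     -- i.e. not in the database default tablespace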
2021-01-06  Detect deadlocks between backends and the startup process.  (Fujii Masao)
Deadlocks involving a recovery conflict on a lock can occur between hot-standby backends and the startup process. If a backend takes the access exclusive lock that finally triggers the deadlock, the deadlock is detected as expected. Previously, however, if the startup process took the access exclusive lock that finally triggered the deadlock, the deadlock could not be detected and could persist even after deadlock_timeout passed. This is a bug.

The cause was that the code for handling a recovery conflict on a lock didn't handle the deadlock case at all. It assumed that deadlocks involving the startup process and backends could be detected by the deadlock detector invoked within the backends, but that assumption was incorrect: the startup process also needs to invoke the deadlock detector when necessary.

To fix this, this commit makes the startup process invoke the deadlock detector if deadlock_timeout is reached while handling a recovery conflict on a lock. Specifically, in that case, the startup process requests all the backends holding the conflicting locks to check themselves for deadlocks.

Back-patch to v9.6. v9.5 also has this bug, but per discussion we decided not to back-patch the fix there, because v9.5 lacks some infrastructure code (e.g., 37c54863cf) that this fix depends on. We could back-patch that code as well, but since the next minor release is the final one for v9.5, doing so is risky: if the back-patch unexpectedly introduced a new bug in v9.5, there would be no chance to fix it. We judged that back-patching to v9.5 carried more risk than gain.

Author: Fujii Masao
Reviewed-by: Bertrand Drouvot, Masahiko Sawada, Kyotaro Horiguchi
Discussion: https://postgr.es/m/4041d6b6-cf24-a120-36fa-1294220f8243@oss.nttdata.com
2020-12-28  Fix assorted issues in backend's GSSAPI encryption support.  (Tom Lane)
Unrecoverable errors detected by GSSAPI encryption can't just be reported with elog(ERROR) or elog(FATAL), because attempting to send the error report to the client is likely to lead to infinite recursion or loss of protocol sync. Instead make this code do what the SSL encryption code has long done, which is to just report any such failure to the server log (with elevel COMMERROR), then pretend we've lost the connection by returning errno = ECONNRESET.

Along the way, fix confusion about whether message translation is done by pg_GSS_error() or its callers (the latter should do it), and make the backend version of that function work more like the frontend version.

Avoid allocating the port->gss struct until it's needed; we surely don't need to allocate it in the postmaster.

Improve logging of "connection authorized" messages with GSS enabled. (As part of this, I back-patched the code changes from dc11f31a1.)

Make BackendStatusShmemSize() account for the GSS-related space that will be allocated by CreateSharedBackendStatus(). This omission could possibly cause out-of-shared-memory problems with very high max_connections settings.

Remove arbitrary, pointless restriction that only GSS authentication can be used on a GSS-encrypted connection.

Improve documentation; notably, document the fact that libpq now prefers GSS encryption over SSL encryption if both are possible.

Per report from Mikael Gustavsson. Back-patch to v12 where this code was introduced.

Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
2020-12-24  Fix race condition between shutdown and unstarted background workers.  (Tom Lane)
If a database shutdown (smart or fast) is commanded between the time some process decides to request a new background worker and the time that the postmaster can launch that worker, then nothing happens because the postmaster won't launch any bgworkers once it's exited PM_RUN state. This is fine ... unless the requesting process is waiting for that worker to finish (or even for it to start); in that case the requestor is stuck, and only manual intervention will get us to the point of being able to shut down. To fix, cancel pending requests for workers when the postmaster sends shutdown (SIGTERM) signals, and similarly cancel any new requests that arrive after that point. (We can optimize things slightly by only doing the cancellation for workers that have waiters.) To fit within the existing bgworker APIs, the "cancel" is made to look like the worker was started and immediately stopped, causing deregistration of the bgworker entry. Waiting processes would have to deal with premature worker exit anyway, so this should introduce no bugs that weren't there before. We do have a side effect that registration records for restartable bgworkers might disappear when theoretically they should have remained in place; but since we're shutting down, that shouldn't matter. Back-patch to v10. There might be value in putting this into 9.6 as well, but the management of bgworkers is a bit different there (notably see 8ff518699) and I'm not convinced it's worth the effort to validate the patch for that branch. Discussion: https://postgr.es/m/661570.1608673226@sss.pgh.pa.us
2020-12-21  Disallow SRFs when considering sorts below Gather Merge  (Tomas Vondra)
While we do allow SRFs in ORDER BY, scan/join processing should not consider such cases - such sorts should only happen via final Sort atop a ProjectSet. So make sure we don't try adding such sorts below Gather Merge, just like we do for expressions that are volatile and/or not parallel safe. Backpatch to PostgreSQL 13, where this code was introduced as part of the Incremental Sort patch. Author: James Coleman Reviewed-by: Tomas Vondra Backpatch-through: 13 Discussion: https://postgr.es/m/CAAaqYe8cK3g5CfLC4w7bs=hC0mSksZC=H5M8LSchj5e5OxpTAg@mail.gmail.com Discussion: https://postgr.es/m/295524.1606246314%40sss.pgh.pa.us
2020-12-21  Check parallel safety in generate_useful_gather_paths  (Tomas Vondra)
Commit ebb7ae839d ensured we ignore pathkeys with volatile expressions when considering adding a sort below a Gather Merge. Turns out we need to care about parallel safety of the pathkeys too, otherwise we might try sorting e.g. on results of a correlated subquery (as demonstrated by a report from Luis Roberto).

Initial investigation by Tom Lane, patch by James Coleman. Backpatch to 13, where the code was introduced (as part of Incremental Sort).

Reported-by: Luis Roberto
Author: James Coleman
Reviewed-by: Tomas Vondra
Backpatch-through: 13
Discussion: https://postgr.es/m/622580997.37108180.1604080457319.JavaMail.zimbra%40siscobra.com.br
Discussion: https://postgr.es/m/CAAaqYe8cK3g5CfLC4w7bs=hC0mSksZC=H5M8LSchj5e5OxpTAg@mail.gmail.com
2020-12-01  Ensure that expandTableLikeClause() re-examines the same table.  (Tom Lane)
As it stood, expandTableLikeClause() re-did the same relation_openrv call that transformTableLikeClause() had done. However there are scenarios where this would not find the same table as expected. We hold lock on the LIKE source table, so it can't be renamed or dropped, but another table could appear before it in the search path. This explains the odd behavior reported in bug #16758 when cloning a table as a temp table of the same name. This case worked as expected before commit 502898192 introduced the need to open the source table twice, so we should fix it. To make really sure we get the same table, let's re-open it by OID not name. That requires adding an OID field to struct TableLikeClause, which is a little nervous-making from an ABI standpoint, but as long as it's at the end I don't think there's any serious risk. Per bug #16758 from Marc Boeren. Like the previous patch, back-patch to all supported branches. Discussion: https://postgr.es/m/16758-840e84a6cfab276d@postgresql.org
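A minimal sketch of the bug #16758 scenario, with illustrative names: the unqualified LIKE source resolves normally at parse time, but the just-created temp table of the same name appears earlier in the search path when the clause is expanded a second time.

    -- Hypothetical example; names are illustrative.
    CREATE TABLE t (a int DEFAULT 42, b text);
    -- Clone it as a temp table with the same unqualified name:
    CREATE TEMP TABLE t (LIKE t INCLUDING DEFAULTS);
    -- Before the fix, the second lookup inside expandTableLikeClause() could
    -- resolve "t" to the new temp table rather than the original source table.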
2020-11-30  Fix missing outfuncs.c support for IncrementalSortPath.  (Tom Lane)
For debugging purposes, Path nodes are supposed to have outfuncs support, but this was overlooked in the original incremental sort patch. While at it, clean up a couple other minor oversights, as well as bizarre choice of return type for create_incremental_sort_path(). (All the existing callers just cast it to "Path *" immediately, so they don't care, but some future caller might care.) outfuncs.c fix by Zhijie Hou, the rest by me Discussion: https://postgr.es/m/324c4d81d8134117972a5b1f6cdf9560@G08CNEXMBPEKD05.g08.fujitsu.local
2020-11-24  Properly check index mark/restore in ExecSupportsMarkRestore.  (Andrew Gierth)
Previously this code assumed that all IndexScan nodes supported mark/restore, which is not true since it depends on optional index AM support functions. This could lead to errors about missing support functions in rare edge cases of mergejoins with no sort keys, where an unordered non-btree index scan was placed on the inner path without a protecting Materialize node. (Normally, the fact that merge join requires ordered input would avoid this error.) Backpatch all the way since this bug is ancient. Per report from Eugen Konkov on irc. Discussion: https://postgr.es/m/87o8jn50be.fsf@news-spur.riddles.org.uk
2020-11-10  Fix and simplify some usages of TimestampDifference().  (Tom Lane)
Introduce TimestampDifferenceMilliseconds() to simplify callers that would rather have the difference in milliseconds, instead of the select()-oriented seconds-and-microseconds format. This gets rid of at least one integer division per call, and it eliminates some apparently-easy-to-mess-up arithmetic.

Two of these call sites were in fact wrong:

* pg_prewarm's autoprewarm_main() forgot to multiply the seconds by 1000, thus ending up with a delay 1000X shorter than intended. That doesn't quite make it a busy-wait, but close.

* postgres_fdw's pgfdw_get_cleanup_result() thought it needed to compute microseconds not milliseconds, thus ending up with a delay 1000X longer than intended. Somebody along the way had noticed this problem but misdiagnosed the cause, and imposed an ad-hoc 60-second limit rather than fixing the units. This was relatively harmless in context, because we don't care that much about exactly how long this delay is; still, it's wrong.

There are a few more callers of TimestampDifference() that don't have a direct need for seconds-and-microseconds, but can't use TimestampDifferenceMilliseconds() either because they do need microsecond precision or because they might possibly deal with intervals long enough to overflow 32-bit milliseconds. It might be worth inventing another API to improve that, but that seems outside the scope of this patch; so those callers are untouched here.

Given the fact that we are fixing some bugs, and the likelihood that future patches might want to back-patch code that uses this new API, back-patch to all supported branches.

Alexey Kondratov and Tom Lane

Discussion: https://postgr.es/m/3b1c053a21c07c1ed5e00be3b2b855ef@postgrespro.ru
2020-11-08  In INSERT/UPDATE, use the table's real tuple descriptor as target.  (Tom Lane)
This back-patches commit 20d3fe900 into the v12 and v13 branches. At the time I thought that commit was not fixing any observable bug, but Bertrand Drouvot showed otherwise: adding a dropped column to the previously-considered scenario crashes v12 and v13, unless the dropped column happens to be an integer. That is, of course, because the tupdesc we derive from the plan output tlist fails to describe the dropped column accurately, so that we'll do the wrong thing with a tuple in which that column isn't NULL. There is no bug in pre-v12 branches because they already did use the table's real tuple descriptor for any trigger-returned tuple. It seems that this set of bugs can be blamed on the changes that removed es_trig_tuple_slot, though I've not attempted to pin that down precisely. Although there's no code change needed in HEAD, update the test case to include a dropped column there too. Discussion: https://postgr.es/m/db5d97c8-f48a-51e2-7b08-b73d5434d425@amazon.com Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
2020-11-03  Fix get_useful_pathkeys_for_relation for volatile expressions  (Tomas Vondra)
When considering Incremental Sort below a Gather Merge, we need to be a bit more careful when matching pathkeys to EC members. It's not enough to find a member whose Vars are all in the current relation's target; volatile expressions in particular need to be contained in the target, otherwise it's too early to use the pathkey. Reported-by: Jaime Casanova Author: James Coleman Reviewed-by: Tomas Vondra Backpatch-through: 13, where the incremental sort code was added Discussion: https://postgr.es/m/CAJGNTeNaxpXgBVcRhJX%2B2vSbq%2BF2kJqGBcvompmpvXb7pq%2BoFA%40mail.gmail.com
2020-11-02  Fix unportable use of getnameinfo() in pg_hba_file_rules view.  (Tom Lane)
fill_hba_line() thought it could get away with passing sizeof(struct sockaddr_storage) rather than the actual addrlen previously returned by getaddrinfo(). While that appears to work on many platforms, it does not work on FreeBSD 11: you get back a failure, which leads to the view showing NULL for the address and netmask columns in all rows. The POSIX spec for getnameinfo() is pretty clearly on FreeBSD's side here: you should pass the actual address length. So it seems plausible that there are other platforms where this coding also fails, and we just hadn't noticed. Also, IMO the fact that getnameinfo() failure leads to a NULL output is pretty bogus in itself. Our pg_getnameinfo_all() wrapper is careful to emit "???" on failure, and we should use that in such cases. NULL should only be emitted in rows that don't have IP addresses. Per bug #16695 from Peter Vandivier. Back-patch to v10 where this code was added. Discussion: https://postgr.es/m/16695-a665558e2f630be7@postgresql.org
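A quick way to see the symptom described above is to inspect the view directly; before the fix, on affected platforms the address and netmask columns came back NULL in every row:

    SELECT line_number, type, address, netmask, auth_method, error
    FROM pg_hba_file_rules;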
2020-11-02  Extend PageIsVerified() to handle more custom options  (Michael Paquier)
This is useful for checks of relation pages without having to load the pages into the shared buffers, and two cases can make use of that: page verification in base backups and the online, lock-safe flavor of such checks.

Compatibility is kept with past versions using a routine that calls the new extended routine with the set of options compatible with the original version. Contrary to d401c576, a macro cannot be used, as there may be external code relying on the presence of the original routine.

This is applied down to 11, where this will be used by a follow-up commit addressing a set of issues with page verification in base backups.

Extracted from a larger patch by the same author.

Author: Anastasia Lubennikova
Reviewed-by: Michael Paquier, Julien Rouhaud
Discussion: https://postgr.es/m/608f3476-0598-2514-2c03-e05c7d2b0cbd@postgrespro.ru
Backpatch-through: 11
2020-11-01  Preserve index data in pg_statistic across REINDEX CONCURRENTLY  (Michael Paquier)
Statistics associated with an index were lost after running REINDEX CONCURRENTLY, while the non-concurrent case preserves them correctly. The concurrent and non-concurrent operations need to be consistent for the end-user, and missing statistics would force the user to wait for a new analyze to happen, which could take some time depending on the activity of the existing autovacuum workers. This issue is fixed by copying any existing entries in pg_statistic associated with the old index to the new one. Note that this copy is already done with the data of the index in the stats collector.

Reported-by: Fabrízio de Royes Mello
Author: Michael Paquier, Fabrízio de Royes Mello
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAFcNs+qpFPmiHd1oTXvcPdvAHicJDA9qBUSujgAhUMJyUMb+SA@mail.gmail.com
Backpatch-through: 12
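A sketch of how one might check the behavior, assuming an expression index (whose statistics live in pg_statistic under the index's own OID); names are illustrative:

    CREATE TABLE stats_demo (a int);
    INSERT INTO stats_demo SELECT generate_series(1, 10000);
    CREATE INDEX stats_demo_expr_idx ON stats_demo ((a % 10));
    ANALYZE stats_demo;
    -- Note: REINDEX CONCURRENTLY cannot run inside a transaction block.
    REINDEX INDEX CONCURRENTLY stats_demo_expr_idx;
    -- With the fix, the expression statistics should still be present:
    SELECT * FROM pg_stats WHERE tablename = 'stats_demo_expr_idx';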
2020-10-28  Calculate extraUpdatedCols in query rewriter, not parser.  (Tom Lane)
It's unsafe to do this at parse time because addition of generated columns to a table would not invalidate stored rules containing UPDATEs on the table ... but there might now be dependent generated columns that were not there when the rule was made. This also fixes an oversight that rewriteTargetView failed to update extraUpdatedCols when transforming an UPDATE on an updatable view. (Since the new calculation is downstream of that, rewriteTargetView doesn't actually need to do anything; but before, there was a demonstrable bug there.) In v13 and HEAD, this leads to easily-visible bugs because (since commit c6679e4fc) we won't recalculate generated columns that aren't listed in extraUpdatedCols. In v12 this bitmap is mostly just used for trigger-firing decisions, so you'd only notice a problem if a trigger cared whether a generated column had been updated. I'd complained about this back in May, but then forgot about it until bug #16671 from Michael Paul Killian revived the issue. Back-patch to v12 where this field was introduced. If existing stored rules contain any extraUpdatedCols values, they'll be ignored because the rewriter will overwrite them, so the bug will be fixed even for existing rules. (But note that if someone were to update to 13.1 or 12.5, store some rules with UPDATEs on tables having generated columns, and then downgrade to a prior minor version, they might observe issues similar to what this patch fixes. That seems unlikely enough to not be worth going to a lot of effort to fix.) Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
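A sketch of the kind of v13-visible case affected (names are illustrative): an UPDATE routed through an updatable view should still recompute dependent generated columns.

    CREATE TABLE gtest (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
    INSERT INTO gtest (a) VALUES (1);
    CREATE VIEW gtest_v AS SELECT a FROM gtest;
    UPDATE gtest_v SET a = 10;
    SELECT * FROM gtest;   -- with the fix, b is recomputed to 20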
2020-10-19  Fix list-munging bug that broke SQL function result coercions.  (Tom Lane)
Since commit 913bbd88d, check_sql_fn_retval() can either insert type coercion steps in-line in the Query that produces the SQL function's results, or generate a new top-level Query to perform the coercions, if modifying the Query's output in-place wouldn't be safe. However, it appears that the latter case has never actually worked, because the code tried to inject the new Query back into the query list it was passed ... which is not the list that will be used for later processing when we execute the SQL function "normally" (without inlining it). So we ended up with no coercion happening at run-time, leading to wrong results or crashes depending on the datatypes involved. While the regression tests look like they cover this area well enough, through a huge bit of bad luck all the test cases that exercise the separate-Query path were checking either inline-able cases (which accidentally didn't have the bug) or cases that are no-ops at runtime (e.g., varchar to text), so that the failure to perform the coercion wasn't obvious. The fact that the cases that don't work weren't allowed at all before v13 probably contributed to not noticing the problem sooner, too. To fix, get rid of the separate "flat" list of Query nodes and instead pass the real two-level list that is going to be used later. I chose to make the same change in check_sql_fn_statements(), although that has no actual bug, just so that we don't need that data structure at all. This is an API change, as evidenced by the adjustments needed to callers outside functions.c. That's a bit scary to be doing in a released branch, but so far as I can tell from a quick search, there are no outside callers of these functions (and they are sufficiently specific to our semantics for SQL-language functions that it's not apparent why any extension would need to call them). In any case, v13 already changed the API of check_sql_fn_retval() compared to prior branches. Per report from pinker. Back-patch to v13 where this code came in. Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com
2020-10-15  llvmjit: Work around bug in LLVM 3.9 causing crashes after 72559438f92.  (Andres Freund)
Unfortunately in LLVM 3.9 LLVMGetAttributeCountAtIndex(func, index) crashes when called with an index that has 0 attributes. Since there's no way to work around this in the C API, add a small C++ wrapper doing so. The only reason this didn't fail before 72559438f92 is that there always are function attributes... Author: Andres Freund <andres@anarazel.de> Discussion: https://postgr.es/m/20201016001254.w2nfj7gd74jmb5in@alap3.anarazel.de Backpatch: 11-, like 72559438f92
2020-10-15  doc: improve description of synchronous_commit modes  (Bruce Momjian)
Previously it wasn't clear exactly what each of the synchronous_commit modes accomplished. This clarifies that, and adds a table describing it. Only backpatched through 9.6 since 9.5 doesn't have all the options. Reported-by: kghost0@gmail.com Discussion: https://postgr.es/m/159741195522.14321.13812604195366728976@wrigleys.postgresql.org Backpatch-through: 9.6
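As a quick reference for the levels the documentation now tabulates (these are the actual setting values; the one-line summaries are simplified and assume synchronous standbys are configured):

    SET synchronous_commit = off;           -- don't wait for local WAL flush; recent commits
                                            -- can be lost in a crash, but no corruption
    SET synchronous_commit = local;         -- wait for local WAL flush only
    SET synchronous_commit = remote_write;  -- also wait until standbys have written the WAL
    SET synchronous_commit = on;            -- also wait until standbys have flushed the WAL
    SET synchronous_commit = remote_apply;  -- also wait until standbys have applied the WAL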
2020-10-11  Choose ppc compare_exchange constant path for more operand values.  (Noah Misch)
The implementation uses smaller code when the "expected" operand is a small constant, but the implementation needlessly defined the set of acceptable constants more narrowly than the ABI does. Core PostgreSQL and PGXN don't use the constant path at all, so this is future-proofing. Back-patch to v13, where commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371 introduced this code. Reviewed by Tom Lane. Reported by Christoph Berg. Discussion: https://postgr.es/m/20201009092825.GD889580@msg.df7cb.de
2020-10-11  For ppc gcc, implement 64-bit compare_exchange and fetch_add with asm.  (Noah Misch)
While xlc defines __64BIT__, gcc does not. Due to this oversight in commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371, gcc builds continued implementing 64-bit atomics by way of intrinsics. Back-patch to v13, where that commit first appeared. Reviewed by Tom Lane. Discussion: https://postgr.es/m/20201011051043.GA1724101@rfd.leadboat.com
2020-10-07  Prevent internal overflows in date-vs-timestamp and related comparisons.  (Tom Lane)
The date-vs-timestamp, date-vs-timestamptz, and timestamp-vs-timestamptz comparators all worked by promoting the first type to the second and then doing a simple same-type comparison. This works fine, except when the conversion result is out of range, in which case we throw an entirely avoidable error. The sources of such failures are (a) type date can represent dates much farther in the future than the timestamp types can; (b) timezone rotation might cause a just-in-range timestamp value to become a just-out-of-range timestamptz value. Up to now we just ignored these corner-case issues, but now we have an actual user complaint (bug #16657 from Huss EL-Sheikh), so let's do something about it. It turns out that commit 52ad1e659 already built all the necessary infrastructure to support error-free comparisons, but neglected to actually use it in the main-line code paths. Fix that, do a little bit of code style review, and remove the now-duplicate logic in jsonpath_exec.c. Back-patch to v13 where 52ad1e659 came in. We could take this back further by back-patching said infrastructure, but given the small number of complaints so far, I don't feel a great need to. Discussion: https://postgr.es/m/16657-cde2f876d8cc7971@postgresql.org
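For example, a comparison like the following used to raise "date out of range for timestamp", because the date lies beyond the range representable by timestamp; with this fix it simply returns true:

    SELECT DATE '5874897-01-01' > TIMESTAMP '2000-01-01';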
2020-09-28  Add for_each_from, to simplify loops starting from non-first list cells.  (Tom Lane)
We have a dozen or so places that need to iterate over all but the first cell of a List. Prior to v13 this was typically written as

    for_each_cell(lc, lnext(list_head(list)))

Commit 1cff1b95a changed these to

    for_each_cell(lc, list, list_second_cell(list))

This patch introduces a new macro for_each_from() which expresses the start point as a list index, allowing these to be written as

    for_each_from(lc, list, 1)

This is marginally more efficient, since ForEachState.i can be initialized directly instead of backing into it from a ListCell address. It also seems clearer and less typo-prone. Some of the remaining uses of for_each_cell() look like they could profitably be changed to for_each_from(), but here I confined myself to changing uses of list_second_cell().

Also, fix for_each_cell_setup() and for_both_cell_setup() to const-ify their arguments; that's a simple oversight in 1cff1b95a.

Back-patch into v13, on the grounds that (1) the const-ification is a minor bug fix, and (2) it's better for back-patching purposes if we only have two ways to write these loops rather than three.

In HEAD, also remove list_third_cell() and list_fourth_cell(), which were also introduced in 1cff1b95a, and are unused as of cc99baa43. It seems unlikely that any third-party code would have started to use them already; anyone who has can be directed to list_nth_cell instead.

Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
2020-09-16  Fix bogus completion tag usage in walsender  (Alvaro Herrera)
Since commit fd5942c18f97 (2012, 9.3-era), walsender has been sending completion tags for certain replication commands twice -- and they're not even consistent. Apparently neither libpq nor JDBC have a problem with it, but it's not kosher. Fix by removing the EndCommand() call in the common code path for them all, and inserting calls to EndReplicationCommand() specifically in those places where it's needed. EndReplicationCommand() is a new simple function to send the completion tag for replication commands. Do this instead of sending a generic SELECT completion tag for them all, which was also pretty bogus (if innocuous).

While at it, change StartReplication() to use EndReplicationCommand() instead of pg_puttextmessage().

In commit 2f9661311b83, I failed to realize that replication commands are not close-enough kin of regular SQL commands, so the DROP_REPLICATION_SLOT tag I added is undeserved and a type pun. Take it out.

Backpatch to 13, where the latter commit appeared. The duplicate tag has been sent since 9.3, but since nothing is broken, it doesn't seem worth fixing.

Per complaints from Tom Lane.

Discussion: https://postgr.es/m/1347966.1600195735@sss.pgh.pa.us
2020-09-16  Fix bogus cache-invalidation logic in logical replication worker.  (Tom Lane)
The code recorded cache invalidation events by zeroing the "localreloid" field of affected cache entries. However, it's possible for an inval event to occur even while we have the entry open and locked. So an ill-timed inval could result in "cache lookup failed for relation 0" errors, if the worker's code tried to use the cleared field. We can fix that by creating a separate bool field to record whether the entry needs to be revalidated. (In the back branches, cram the bool into what had been padding space, to avoid an ABI break in the somewhat unlikely event that any extension is looking at this struct.) Also, rearrange the logic in logicalrep_rel_open so that it does the right thing in cases where table_open would fail. We should retry the lookup by name in that case, but we didn't. The real-world impact of this is probably small. In the first place, the error conditions are very low probability, and in the second place, the worker would just exit and get restarted. We only noticed because in a CLOBBER_CACHE_ALWAYS build, the failure can occur repeatedly, preventing the worker from making progress. Nonetheless, it's clearly a bug, and it impedes a useful type of testing; so back-patch to v10 where this code was introduced. Discussion: https://postgr.es/m/1032727.1600096803@sss.pgh.pa.us
2020-09-11  logtape.c: do not preallocate for tapes when sorting  (Jeff Davis)
The preallocation logic is only useful for HashAgg, so disable it when sorting. Also, adjust an out-of-date comment. Reviewed-by: Peter Geoghegan Discussion: https://postgr.es/m/CAH2-Wzn_o7tE2+hRVvwSFghRb75AJ5g-nqGzDUqLYMexjOAe=g@mail.gmail.com Backpatch-through: 13
2020-09-04  C comment: correct use of 64-"byte" cache line size  (Bruce Momjian)
Reported-by: Kelly Min Discussion: https://postgr.es/m/CAPSbxatOiQO90LYpSC3+svAU9-sHgDfEP4oFhcEUt_X=DqFA9g@mail.gmail.com Backpatch-through: 9.5
2020-08-31  C comment: remove mention of use of t_hoff WAL structure member  (Bruce Momjian)
Reported-by: Antonin Houska Discussion: https://postgr.es/m/21643.1595353537@antos Backpatch-through: 9.5
2020-08-30  Mark factorial operator, and postfix operators in general, as deprecated.  (Tom Lane)
Per discussion, we're planning to remove parser support for postfix operators in order to simplify the grammar. So it behooves us to put out a deprecation notice at least one release before that. There is only one built-in postfix operator, ! for factorial. Label it deprecated in the docs and in pg_description, and adjust some examples that formerly relied on it. (The sister prefix operator !! is also deprecated. We don't really have to remove that one, but since we're suggesting that people use factorial() instead, it seems better to remove both operators.) Also state in the CREATE OPERATOR ref page that postfix operators in general are going away. Although this changes the initial contents of pg_description, I did not force a catversion bump; it doesn't seem essential. In v13, also back-patch 4c5cf5431, so that there's someplace for the <link>s to point to. Mark Dilger and John Naylor, with some adjustments by me Discussion: https://postgr.es/m/BE2DF53D-251A-4E26-972F-930E523580E9@enterprisedb.com
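The deprecated spellings and the recommended replacement, for reference:

    SELECT 5 !;            -- deprecated postfix factorial operator
    SELECT !! 5;           -- deprecated prefix variant
    SELECT factorial(5);   -- preferred spelling; all three return 120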
2020-08-21  Fix handling of CREATE TABLE LIKE with inheritance.  (Tom Lane)
If a CREATE TABLE command uses both LIKE and traditional inheritance, Vars in CHECK constraints and expression indexes that are absorbed from a LIKE parent table tended to get mis-numbered, resulting in wrong answers and/or bizarre error messages (though probably not any actual crashes, thanks to validation occurring in the executor). In v12 and up, the same could happen to Vars in GENERATED expressions, even in cases with no LIKE clause but multiple traditional-inheritance parents. The cause of the problem for LIKE is that parse_utilcmd.c supposed it could renumber such Vars correctly during transformCreateStmt(), which it cannot since we have not yet accounted for columns added via inheritance. Fix that by postponing processing of LIKE INCLUDING CONSTRAINTS, DEFAULTS, GENERATED, INDEXES till after we've performed DefineRelation(). The error with GENERATED and multiple inheritance is a simple oversight in MergeAttributes(); it knows it has to renumber Vars in inherited CHECK constraints, but forgot to apply the same processing to inherited GENERATED expressions (a/k/a defaults). Per bug #16272 from Tom Gottfried. The non-GENERATED variants of the issue are ancient, presumably dating right back to the addition of CREATE TABLE LIKE; hence back-patch to all supported branches. Discussion: https://postgr.es/m/16272-6e32da020e9a9381@postgresql.org
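A sketch of the problematic shape (names are illustrative): columns inherited from the traditional parent are added ahead of the LIKE-derived ones, so Vars in the absorbed CHECK constraint must be renumbered after DefineRelation() rather than during parse transformation.

    CREATE TABLE like_parent (a int, b int, CONSTRAINT b_gt_a CHECK (b > a));
    CREATE TABLE inh_parent (x int);
    -- LIKE and traditional inheritance combined in one command:
    CREATE TABLE child (LIKE like_parent INCLUDING CONSTRAINTS)
      INHERITS (inh_parent);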
2020-08-21  Fix a few typos in JIT comments and README  (David Rowley)
Reviewed-by: Abhijit Menon-Sen Reviewed-by: Andres Freund Discussion: https://postgr.es/m/CAApHDvobgmCs6CohqhKTUf7D8vffoZXQTCBTERo9gbOeZmvLTw%40mail.gmail.com Backpatch-through: 11, where JIT was added
2020-08-19  Suppress unnecessary RelabelType nodes in yet more cases.  (Tom Lane)
Commit a477bfc1d fixed eval_const_expressions() to ensure that it didn't generate unnecessary RelabelType nodes, but I failed to notice that some other places in the planner had the same issue. Really noplace in the planner should be using plain makeRelabelType(), for fear of generating expressions that should be equal() to semantically equivalent trees, but aren't. An example is that because canonicalize_ec_expression() failed to be careful about this, we could end up with an equivalence class containing both a plain Const, and a Const-with-RelabelType representing exactly the same value. So far as I can tell this led to no visible misbehavior, but we did waste a bunch of cycles generating and evaluating "Const = Const-with-RelabelType" to prove such entries are redundant. Hence, move the support function added by a477bfc1d to where it can be more generally useful, and use it in the places where planner code previously used makeRelabelType. Back-patch to v12, like the previous patch. While I have no concrete evidence of any real misbehavior here, it's certainly possible that I overlooked a case where equivalent expressions that aren't equal() could cause a user-visible problem. In any case carrying extra RelabelType nodes through planning to execution isn't very desirable. Discussion: https://postgr.es/m/1311836.1597781384@sss.pgh.pa.us
2020-08-15  Prevent concurrent SimpleLruTruncate() for any given SLRU.  (Noah Misch)
The SimpleLruTruncate() header comment states the new coding rule. To achieve this, add locktype "frozenid" and two LWLocks. This closes a rare opportunity for data loss, which manifested as "apparent wraparound" or "could not access status of transaction" errors. Data loss is more likely in pg_multixact, due to released branches' thin margin between multiStopLimit and multiWrapLimit. If a user's physical replication primary logged ": apparent wraparound" messages, the user should rebuild standbys of that primary regardless of symptoms. At less risk is a cluster having emitted "not accepting commands" errors or "must be vacuumed" warnings at some point. One can test a cluster for this data loss by running VACUUM FREEZE in every database. Back-patch to 9.5 (all supported versions). Discussion: https://postgr.es/m/20190218073103.GA1434723@rfd.leadboat.com
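The check suggested above, spelled out; run it in every database of the cluster (for example via vacuumdb --all --freeze from the shell):

    -- May surface "could not access status of transaction ..." errors if the
    -- cluster has been affected by this data loss.
    VACUUM FREEZE;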
2020-08-14  Be more careful about the shape of hashable subplan clauses.  (Tom Lane)
nodeSubplan.c expects that the testexpr for a hashable ANY SubPlan has the form of one or more OpExprs whose LHS is an expression of the outer query's, while the RHS is an expression over Params representing output columns of the subquery. However, the planner only went as far as verifying that the clauses were all binary OpExprs. This works 99.99% of the time, because the clauses have the right shape when emitted by the parser --- but it's possible for function inlining to break that, as reported by PegoraroF10. To fix, teach the planner to check that the LHS and RHS contain the right things, or more accurately don't contain the wrong things. Given that this has been broken for years without anyone noticing, it seems sufficient to just give up hashing when it happens, rather than go to the trouble of commuting the clauses back again (which wouldn't necessarily work anyway). While poking at that, I also noticed that nodeSubplan.c had a baked-in assumption that the number of hash clauses is identical to the number of subquery output columns. Again, that's fine as far as parser output goes, but it's not hard to break it via function inlining. There seems little reason for that assumption though --- AFAICS, the only thing it's buying us is not having to store the number of hash clauses explicitly. Adding code to the planner to reject such cases would take more code than getting nodeSubplan.c to cope, so I fixed it that way. This has been broken for as long as we've had hashable SubPlans, so back-patch to all supported branches. Discussion: https://postgr.es/m/1549209182255-0.post@n3.nabble.com
2020-08-14  Fix postmaster's behavior during smart shutdown.  (Tom Lane)
Up to now, upon receipt of a SIGTERM ("smart shutdown" command), the postmaster has immediately killed all "optional" background processes, and subsequently refused to launch new ones while it's waiting for foreground client processes to exit. No doubt this seemed like an OK policy at some point; but it's a pretty bad one now, because it makes for a seriously degraded environment for the remaining clients:

* Parallel queries are killed, and new ones fail to launch. (And our parallel-query infrastructure utterly fails to deal with the case in a reasonable way --- it just hangs waiting for workers that are not going to arrive. There is more work needed in that area IMO.)

* Autovacuum ceases to function. We can tolerate that for awhile, but if bulk-update queries continue to run in the surviving client sessions, there's eventually going to be a mess. In the worst case the system could reach a forced shutdown to prevent XID wraparound.

* The bgwriter and walwriter are also stopped immediately, likely resulting in performance degradation.

Hence, let's rearrange things so that the only immediate change in behavior is refusing to let in new normal connections. Once the last normal connection is gone, shut everything down as though we'd received a "fast" shutdown.

To implement this, remove the PM_WAIT_BACKUP and PM_WAIT_READONLY states, instead staying in PM_RUN or PM_HOT_STANDBY while normal connections remain. A subsidiary state variable tracks whether or not we're letting in new connections in those states.

This also allows having just one copy of the logic for killing child processes in smart and fast shutdown modes. I moved that logic into PostmasterStateMachine() by inventing a new state PM_STOP_BACKENDS.

Back-patch to 9.6 where parallel query was added. In principle this'd be a good idea in 9.5 as well, but the risk/reward ratio is not as good there, since lack of autovacuum is not a problem during typical uses of smart shutdown.

Per report from Bharath Rupireddy.

Patch by me, reviewed by Thomas Munro

Discussion: https://postgr.es/m/CALj2ACXAZ5vKxT9P7P89D87i3MDO9bfS+_bjMHgnWJs8uwUOOw@mail.gmail.com
2020-08-10  Move connect.h from fe_utils to src/include/common.  (Noah Misch)
Any libpq client can use the header. Clients include backend components postgres_fdw, dblink, and logical replication apply worker. Back-patch to v10, because another fix needs this. In released branches, just copy the header and keep the original.
2020-08-02  Use int64 instead of long in incremental sort code  (David Rowley)
Windows 64-bit has 4-byte long values, which are not suitable for tracking disk space usage in the incremental sort code. Let's just make all these fields int64s.

Author: James Coleman
Discussion: https://postgr.es/m/CAApHDvpky%2BUhof8mryPf5i%3D6e6fib2dxHqBrhp0Qhu0NeBhLJw%40mail.gmail.com
Backpatch-through: 13, where the incremental sort code was added
2020-07-29  Add hash_mem_multiplier GUC.  (Peter Geoghegan)
Add a GUC that acts as a multiplier on work_mem. It gets applied when sizing executor node hash tables that were previously size constrained using work_mem alone. The new GUC can be used to preferentially give hash-based nodes more memory than the generic work_mem limit. It is intended to enable admin tuning of the executor's memory usage. Overall system throughput and system responsiveness can be improved by giving hash-based executor nodes more memory (especially over sort-based alternatives, which are often much less sensitive to being memory constrained). The default value for hash_mem_multiplier is 1.0, which is also the minimum valid value. This means that hash-based nodes continue to apply work_mem in the traditional way by default. hash_mem_multiplier is generally useful. However, it is being added now due to concerns about hash aggregate performance stability for users that upgrade to Postgres 13 (which added disk-based hash aggregation in commit 1f39bce0). While the old hash aggregate behavior risked out-of-memory errors, it is nevertheless likely that many users actually benefited. Hash agg's previous indifference to work_mem during query execution was not just faster; it also accidentally made aggregation resilient to grouping estimate problems (at least in cases where this didn't create destabilizing memory pressure). hash_mem_multiplier can provide a certain kind of continuity with the behavior of Postgres 12 hash aggregates in cases where the planner incorrectly estimates that all groups (plus related allocations) will fit in work_mem/hash_mem. This seems necessary because hash-based aggregation is usually much slower when only a small fraction of all groups can fit. Even when it isn't possible to totally avoid hash aggregates that spill, giving hash aggregation more memory will reliably improve performance (the same cannot be said for external sort operations, which appear to be almost unaffected by memory availability provided it's at least possible to get a single merge pass). The PostgreSQL 13 release notes should advise users that increasing hash_mem_multiplier can help with performance regressions associated with hash aggregation. That can be taken care of by a later commit. Author: Peter Geoghegan Reviewed-By: Álvaro Herrera, Jeff Davis Discussion: https://postgr.es/m/20200625203629.7m6yvut7eqblgmfo@alap3.anarazel.de Discussion: https://postgr.es/m/CAH2-WzmD%2Bi1pG6rc1%2BCjc4V6EaFJ_qSuKCCHVnH%3DoruqD-zqow%40mail.gmail.com Backpatch: 13-, where disk-based hash aggregation was introduced.
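For example, to let hash-based executor nodes use twice the generic work_mem limit while leaving sort memory unchanged (values are illustrative):

    SET work_mem = '64MB';
    SET hash_mem_multiplier = 2.0;   -- hash tables may now use up to ~128MB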
2020-07-28  HashAgg: use better cardinality estimate for recursive spilling.  (Jeff Davis)
Use HyperLogLog to estimate the group cardinality in a spilled partition. This estimate is used to choose the number of partitions if we recurse.

The previous behavior was to use the number of tuples in a spilled partition as the estimate for the number of groups, which led to overpartitioning. That could cause the number of batches to be much higher than expected (with each batch being very small), which made it harder to interpret EXPLAIN ANALYZE results.

Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/a856635f9284bc36f7a77d02f47bbb6aaf7b59b3.camel@j-davis.com
Backpatch-through: 13
2020-07-27  Remove hashagg_avoid_disk_plan GUC.  (Peter Geoghegan)
Note: This GUC was originally named enable_hashagg_disk when it appeared in commit 1f39bce0, which added disk-based hash aggregation. It was subsequently renamed in commit 92c58fd9. Author: Peter Geoghegan Reviewed-By: Jeff Davis, Álvaro Herrera Discussion: https://postgr.es/m/9d9d1e1252a52ea1bad84ea40dbebfd54e672a0f.camel%40j-davis.com Backpatch: 13-, where disk-based hash aggregation was introduced.
2020-07-26  Fix LookupTupleHashEntryHash() pipeline-stall issue.  (Jeff Davis)
Refactor hash lookups in nodeAgg.c to improve performance. Author: Andres Freund and Jeff Davis Discussion: https://postgr.es/m/20200612213715.op4ye4q7gktqvpuo%40alap3.anarazel.de Backpatch-through: 13
2020-07-24  Replace TS_execute's TS_EXEC_CALC_NOT flag with TS_EXEC_SKIP_NOT.  (Tom Lane)
It's fairly silly that ignoring NOT subexpressions is TS_execute's default behavior. It's wrong on its face and it encourages errors of omission. Moreover, the only two remaining callers that aren't specifying CALC_NOT are in ts_headline calculations, and it's very arguable that those are bugs: if you've specified "!foo" in your query, why would you want to get a headline that includes "foo"? Hence, rip that out and change the default behavior to be to calculate NOT accurately. As a concession to the slim chance that there is still somebody somewhere who needs the incorrect behavior, provide a new SKIP_NOT flag to explicitly request that. Back-patch into v13, mainly because it seems better to change this at the same time as the previous commit's rejiggering of TS_execute related APIs. Any outside callers affected by this change are probably also affected by that one. Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
2020-07-24  Fix assorted bugs by changing TS_execute's callback API to ternary logic.  (Tom Lane)
Text search sometimes failed to find valid matches, for instance '!crew:A'::tsquery might fail to locate 'crew:1B'::tsvector during an index search. The root of the issue is that TS_execute's callback functions were not changed to use ternary (yes/no/maybe) reporting when we made the search logic itself do so. It's somewhat annoying to break that API, but on the other hand we now see that any code using plain boolean logic is almost certainly broken since the addition of phrase search. There seem to be very few outside callers of this code anyway, so we'll just break them intentionally to get them to adapt. This allows removal of tsginidx.c's private re-implementation of TS_execute, since that's now entirely duplicative. It's also no longer necessary to avoid use of CALC_NOT in tsgistidx.c, since the underlying callbacks can now do something reasonable. Back-patch into v13. We can't change this in stable branches, but it seems not quite too late to fix it in v13. Tom Lane and Pavel Borisov Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
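The pair from the commit message, for reference; the direct (non-index) evaluation returns true, and the bug was that a GIN or GiST index search could fail to report the row as a match:

    SELECT 'crew:1B'::tsvector @@ '!crew:A'::tsquery;  -- true: "crew" carries weight B, not A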
2020-07-20  Correctly mark pg_subscription_rel.srsublsn as nullable.  (Tom Lane)
The code has always set this column to NULL when it's not valid, but the catalog header's description failed to reflect that, as did the SGML docs, as did some of the code. To prevent future coding errors of the same ilk, let's hide the field from C code as though it were variable-length (which, in a sense, it is). As with commit 72eab84a5, we can only fix this cleanly in HEAD and v13; the problem extends further back but we'll need some klugery in the released branches. Discussion: https://postgr.es/m/367660.1595202498@sss.pgh.pa.us
2020-07-20  Rename wal_keep_segments to wal_keep_size.  (Fujii Masao)
max_slot_wal_keep_size, which was added in v13, and wal_keep_segments are the GUC parameters that specify how much WAL to retain for standby servers. While max_slot_wal_keep_size accepts a number of bytes of WAL, wal_keep_segments accepts a number of WAL files. This difference in setting units between two similar parameters could confuse users.

To alleviate this, this commit renames wal_keep_segments to wal_keep_size and makes users specify an amount of WAL in it instead of a number of WAL files. There was also the idea in the discussion of renaming max_slot_wal_keep_size to max_slot_wal_keep_segments, but we have been moving away from measuring in segments (for example, checkpoint_segments was replaced by max_wal_size), so we concluded that renaming wal_keep_segments to wal_keep_size was the better choice.

Back-patch to v13 where max_slot_wal_keep_size was added.

Author: Fujii Masao
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/574b4ea3-e0f9-b175-ead2-ebea7faea855@oss.nttdata.com
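A sketch of the before/after configuration (sizes are illustrative; with the default 16MB segment size, 64 segments correspond to 1GB):

    -- v12 and earlier:
    --   wal_keep_segments = 64
    -- v13 and later:
    ALTER SYSTEM SET wal_keep_size = '1GB';
    SELECT pg_reload_conf();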
2020-07-19  Correctly mark pg_subscription.subslotname as nullable.  (Tom Lane)
Due to the layout of this catalog, subslotname has to be explicitly marked BKI_FORCE_NULL, else initdb will default to the assumption that it's non-nullable. Since, in fact, CREATE/ALTER SUBSCRIPTION will store null values there, the existing marking is just wrong, and has been since this catalog was invented. We haven't noticed because not much in the system actually depends on attnotnull being truthful. However, JIT'ed tuple deconstruction does depend on that in some cases, allowing crashes or wrong answers in queries that inspect pg_subscription. Commit 9de77b545 quite accidentally exposed this on the buildfarm members that force JIT activation. Back-patch to v13. The problem goes further back, but we cannot force initdb in released branches, so some klugier solution will be needed there. Before working on that, push this simple fix to try to get the buildfarm back to green. Discussion: https://postgr.es/m/4118109.1595096139@sss.pgh.pa.us
2020-07-14  Fix comments related to table AMs  (Michael Paquier)
Incorrect function names were referenced. As this fixes some portions of tableam.h, which is mentioned in the docs as something to look at when implementing a table AM, backpatch down to 12 where it was introduced.

Author: Hironobu Suzuki
Discussion: https://postgr.es/m/8fe6d672-28dd-3f1d-7aed-ac2f6d599d3f@interdb.jp
Backpatch-through: 12
2020-07-12  HashAgg: before spilling tuples, set unneeded columns to NULL.  (Jeff Davis)
This is a replacement for 4cad2534. Instead of projecting all tuples going into a HashAgg, only remove unnecessary attributes when actually spilling. This avoids the regression for the in-memory case. Discussion: https://postgr.es/m/a2fb7dfeb4f50aa0a123e42151ee3013933cb802.camel%40j-davis.com Backpatch-through: 13
2020-07-13  Revert "Track statistics for spilling of changes from ReorderBuffer".  (Amit Kapila)
The stats added by this commit were available only for WALSenders; however, users might want to see them for backends doing logical decoding via the SQL API. Users might also want to reset and access these stats across server restarts, which was not possible with the current patch.

List of commits reverted:

    caa3c4242c   Don't call elog() while holding spinlock.
    e641b2a995   Doc: Update the documentation for spilled transaction statistics.
    5883f5fe27   Fix unportable printf format introduced in commit 9290ad198.
    9290ad198b   Track statistics for spilling of changes from ReorderBuffer.

Additionally, remove the release notes entry for this feature.

Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com