2018-04-20  Change more places to be less trusting of RestrictInfo.is_pushed_down.  (Tom Lane)

On further reflection, commit e5d83995e didn't go far enough: pretty much everywhere in the planner that examines a clause's is_pushed_down flag ought to be changed to use the more complicated behavior where we also check the clause's required_relids. Otherwise we could make incorrect decisions about whether, say, a clause is safe to use as a hash clause.

Some (many?) of these places are safe as-is, either because they are never reached while considering a parameterized path, or because there are additional checks that would reject a pushed-down clause anyway. However, it seems smarter to just code them all the same way rather than rely on easily-broken reasoning of that sort.

In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should be used in place of direct tests on the is_pushed_down flag.

Like the previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
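Based on the description above, the new macro plausibly looks something like this (a sketch; treat the exact definition as approximate):

    /* Is the clause pushed down, relative to the given set of joined rels? */
    #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \
        ((rinfo)->is_pushed_down || \
         !bms_is_subset((rinfo)->required_relids, joinrelids))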
2018-04-19  Fix broken extract_actual_join_clauses call in 9.6 postgres_fdw.  (Tom Lane)

In commits e5d83995e et al, I changed the signature of extract_actual_join_clauses, thinking that it was not called from anywhere but createplan.c. I missed that postgres_fdw uses it in the 9.6 branch only.

This opens up the question of whether any third-party modules might be calling it, and whether we need to take steps to avoid an API break for them. But for the moment, just get the buildfarm green again.
2018-04-19  Fix incorrect handling of join clauses pushed into parameterized paths.  (Tom Lane)

In some cases a clause attached to an outer join can be pushed down into the outer join's RHS even though the clause is not degenerate --- this can happen if we choose to make a parameterized path for the RHS. If the clause ends up attached to a lower outer join, we'd misclassify it as being a "join filter" not a plain "filter" condition at that node, leading to wrong query results.

To fix, teach extract_actual_join_clauses to examine each join clause's required_relids, not just its is_pushed_down flag. (The latter now seems vestigial, or at least in need of rethinking, but we won't do anything so invasive as redefining it in a bug-fix patch.)

This has been wrong since we introduced parameterized paths in 9.2, though it's evidently hard to hit given the lack of previous reports. The test case used here involves a lateral function call, and I think that a lateral reference may be required to get the planner to select a broken plan; though I wouldn't swear to that. In any case, even if LATERAL is needed to trigger the bug, it still affects all supported branches, so back-patch to all.

Per report from Andreas Karlsson. Thanks to Andrew Gierth for preliminary investigation.

Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
2018-04-19  Enlarge find_other_exec's meager fgets buffer  (Alvaro Herrera)

The buffer was 100 bytes long, which is barely sufficient when the version string gets longer (such as by configure --with-extra-version). Set it to MAXPGPATH.

Author: Nikhil Sontakke
Discussion: https://postgr.es/m/CAMGcDxfLfpYU_Jru++L6ARPCOyxr0W+2O3Q54TDi5XdYeU36ow@mail.gmail.com
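In code terms the change is tiny; a sketch, with illustrative variable names rather than the actual ones in find_other_exec:

    char        line[MAXPGPATH];    /* was a 100-byte buffer */

    if (fgets(line, sizeof(line), pgver) == NULL)
        return -1;              /* version string unreadable */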
2018-04-18  Better fix for deadlock hazard in CREATE INDEX CONCURRENTLY.  (Tom Lane)

Commit 54eff5311 did not account for the possibility that we'd have a transaction snapshot due to default_transaction_isolation being set high enough to require one. The transaction snapshot is enough to hold back our advertised xmin and thus risk deadlock anyway. The only way to get rid of that snap is to start a new transaction, so let's do that instead. Also throw in an assert checking that we really have gotten to a state where no xmin is being advertised.

Back-patch to 9.4, like the previous commit.

Discussion: https://postgr.es/m/CAMkU=1ztk3TpQdcUNbxq93pc80FrXUjpDWLGMeVBDx71GHNwZQ@mail.gmail.com
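In outline, the fix has roughly this shape (a hedged sketch; MyPgXact is the backend's PGXACT entry in branches of that era, and the real CREATE INDEX CONCURRENTLY code is more involved):

    /* The only way to drop the transaction snapshot is to begin anew. */
    CommitTransactionCommand();
    StartTransactionCommand();

    /* We ought now to be advertising no xmin whatsoever. */
    Assert(MyPgXact->xmin == InvalidTransactionId);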
2018-04-16  Fix broken collation-aware searches in SP-GiST text opclass.  (Tom Lane)

spg_text_leaf_consistent() supposed that it should compare only Min(querylen, entrylen) bytes of the two strings, and then deal with any excess bytes in one string or the other by assuming the longer string is greater if the prefixes are equal. Quite aside from the fact that that's just wrong in some locales (e.g., 'ch' is not less than 'd' in cs_CZ), it also risked passing incomplete multibyte characters to strcoll(), with ensuing bad results. Instead, just pass the full strings to varstr_cmp, and let it decide what to do about unequal-length strings.

Fortunately, this error doesn't imply any index corruption, it's just that searches might return the wrong set of entries.

Per report from Emre Hasegeli, though this is not his patch. Thanks to Peter Geoghegan for review and discussion.

This code was born broken, so back-patch to all supported branches. In HEAD, I failed to resist the temptation to do a bit of cosmetic cleanup/pgindent'ing on 710d90da1, too.

Discussion: https://postgr.es/m/CAE2gYzzb6K51VnTq5i5p52z+j9p2duEa-K1T3RrC_GQEynAKEg@mail.gmail.com
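The essence of the fix, in a hedged sketch (variable names illustrative):

    int32       r;

    /* Hand varstr_cmp both complete strings; it copes with unequal
     * lengths, locale rules, and multibyte boundaries itself. */
    r = varstr_cmp(querydata, querylen, entrydata, entrylen, collation);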
2018-04-15  Fix potentially-unportable code in contrib/adminpack.  (Tom Lane)

Spelling access(2)'s second argument as "2" is just horrid. POSIX makes no promises as to the numeric values of W_OK and related macros. Even if it accidentally works as intended on every supported platform, it's still unreadable and inconsistent with adjacent code.

In passing, don't spell "NULL" as "0" either. Yes, that's legal C; no, it's not project style.

Back-patch, just in case the unportability is real and not theoretical. (Most likely, even if a platform had different bit assignments for access()'s modes, there'd not be an observable behavior difference here; but I'm being paranoid today.)
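The point in miniature (filename stands for whatever path is being checked):

    #include <unistd.h>

    /* unreadable, and POSIX doesn't fix W_OK's numeric value: */
    if (access(filename, 2) != 0)
        return false;

    /* readable and portable: */
    if (access(filename, W_OK) != 0)
        return false;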
2018-04-13  In libpq, free any partial query result before collecting a server error.  (Tom Lane)

We'd throw away the partial result anyway after parsing the error message. Throwing it away beforehand costs nothing and reduces the risk of out-of-memory failure. Also, at least in systems that behave like glibc/Linux, if the partial result was very large then the error PGresult would get allocated at high heap addresses, preventing the heap storage used by the partial result from being released to the OS until the error PGresult is freed.

In psql >= 9.6, we hold onto the error PGresult until another error is received (for \errverbose), so that this behavior causes a seeming memory leak to persist for awhile, as in a recent complaint from Darafei Praliaskouski. This is a potential performance regression from older versions, justifying back-patching at least that far. But similar behavior may occur in other client applications, so it seems worth just back-patching to all supported branches.

Discussion: https://postgr.es/m/CAC8Q8tJ=7cOkPePyAbJE_Pf691t8nDFhJp0KZxHvnq_uicfyVg@mail.gmail.com
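Within libpq the change plausibly reduces to something like this (a sketch; conn->result is the connection's partial-result field in libpq's internals):

    /* Before parsing the incoming error message, discard any partial
     * result we'd accumulated; it would be thrown away afterwards
     * anyway, and freeing it first reduces out-of-memory risk. */
    if (conn->result)
    {
        PQclear(conn->result);
        conn->result = NULL;
    }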
2018-04-12  Fix bogus affix-merging code.  (Tom Lane)

NISortAffixes() compared successive compound affixes incorrectly, thus possibly failing to merge identical affixes, or (less likely) merging ones that shouldn't be merged. The user-visible effects of this are unclear, to me anyway.

Per bug #15150 from Alexander Lakhin. It's been broken for a long time, so back-patch to all supported branches.

Arthur Zakirov

Discussion: https://postgr.es/m/152353327780.31225.13445405496721177988@wrigleys.postgresql.org
2018-04-11  Ignore nextOid when replaying an ONLINE checkpoint.  (Tom Lane)

The nextOid value is from the start of the checkpoint and may well be stale compared to values from more recent XLOG_NEXTOID records. Previously, we adopted it anyway, allowing the OID counter to go backwards during a crash. While this should be harmless, it contributed to the severity of the bug fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned immediately following a crash. Without this error, that issue would only have arisen when TOAST objects just younger than a multiple of 2^32 OIDs were deleted and then not vacuumed in time to avoid a conflict.

Pavan Deolasee

Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4OLosXHf9w@mail.gmail.com
2018-04-11  Do not select new object OIDs that match recently-dead entries.  (Tom Lane)

When selecting a new OID, we take care to avoid picking one that's already in use in the target table, so as not to create duplicates after the OID counter has wrapped around. However, up to now we used SnapshotDirty when scanning for pre-existing entries. That ignores committed-dead rows, so that we could select an OID matching a deleted-but-not-yet-vacuumed row. While that mostly worked, it has two problems:

* If recently deleted, the dead row might still be visible to MVCC snapshots, creating a risk for duplicate OIDs when examining the catalogs within our own transaction. Such duplication couldn't be visible outside the object-creating transaction, though, and we've heard few if any field reports corresponding to such a symptom.

* When selecting a TOAST OID, deleted toast rows definitely *are* visible to SnapshotToast, and will remain so until vacuumed away. This leads to a conflict that will manifest in errors like "unexpected chunk number 0 (expected 1) for toast value nnnnn". We've been seeing reports of such errors from the field for years, but the cause was unclear before.

The fix is simple: just use SnapshotAny to search for conflicting rows. This results in a slightly longer window before object OIDs can be recycled, but that seems unlikely to create any large problems.

Pavan Deolasee

Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4OLosXHf9w@mail.gmail.com
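The selection loop described above, as a hedged sketch (choose_new_oid and its arguments are illustrative stand-ins for the actual catalog code, which lives in GetNewOidWithIndex):

    static Oid
    choose_new_oid(Relation rel, Oid indexId, AttrNumber oidcolumn)
    {
        Oid         newOid;
        SysScanDesc scan;
        ScanKeyData key;
        bool        collides;

        do
        {
            newOid = GetNewObjectId();

            ScanKeyInit(&key, oidcolumn,
                        BTEqualStrategyNumber, F_OIDEQ,
                        ObjectIdGetDatum(newOid));

            /* SnapshotAny, not SnapshotDirty: see committed-dead rows too */
            scan = systable_beginscan(rel, indexId, true,
                                      SnapshotAny, 1, &key);
            collides = HeapTupleIsValid(systable_getnext(scan));
            systable_endscan(scan);
        } while (collides);

        return newOid;
    }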
2018-04-11  Make local copy of client hostnames in backend status array.  (Heikki Linnakangas)

The other strings, application_name and query string, were snapshotted to local memory in pgstat_read_current_status(), but we forgot to do that for client hostnames. As a result, the client hostname would appear to change in the local copy, if the client disconnected.

Backpatch to all supported versions.

Author: Edmund Horner
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com
2018-04-10  Fix incorrect close() call in dsm_impl_mmap().  (Tom Lane)

One improbable error-exit path in this function used close() where it should have used CloseTransientFile(). This is unlikely to be hit in the field, and I think the consequences wouldn't be awful (just an elog(LOG) bleat later). But a bug is a bug, so back-patch to 9.4 where this code came in.

Pan Bian

Discussion: https://postgr.es/m/152056616579.4966.583293218357089052@wrigleys.postgresql.org
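The change in miniature (fd stands for the transient file descriptor involved):

    /* wrong: bypasses fd.c's transient-file bookkeeping */
    close(fd);

    /* right: releases the descriptor through fd.c */
    CloseTransientFile(fd);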
2018-04-10  Remove wrongly backpatched piece of code in cube.c  (Teodor Sigaev)

Due to sloppy division of changes between f50c80dbb (which was not back-patched) and 563a053bd, this piece of code was wrongly backpatched to REL_10_STABLE and REL9_6_STABLE. This code never causes a real error because its condition is never satisfied, but it's dead code, which needs to be removed.

Alexander Korotkov, per gripe from Tom Lane
2018-04-08  Doc: clarify explanation of pg_dump usage.  (Tom Lane)

This section confusingly used both "infile" and "outfile" to refer to the same file, i.e. the textual output of pg_dump. Use "dumpfile" for both cases, per suggestion from Jonathan Katz.

Discussion: https://postgr.es/m/152311295239.31235.6487236091906987117@wrigleys.postgresql.org
2018-04-07  Remove overzealous assertions in pg_atomic_flag code.  (Andres Freund)

The atomics code asserts proper alignment in various places. That's mainly because the alignment of 64bit integers is not sufficient for atomic operations on all platforms. Some ABIs only have four byte alignment, but don't have atomic behavior when crossing page boundaries.

The flags code isn't affected by that however, as the type alignment always is sufficient for atomic operations. Nevertheless the code asserted alignment requirements. Before 8c3debbb it was only broken on hppa; after that commit it probably affects further platforms. Thus remove the assertions for pg_atomic_flag operators.

Per buildfarm animal pademelon.

Discussion: https://postgr.es/m/7223.1523124425@sss.pgh.pa.us
Backpatch: 9.5-
2018-04-06  Fix and improve pg_atomic_flag fallback implementation.  (Andres Freund)

The atomics fallback implementation for pg_atomic_flag was broken, returning the inverted value from pg_atomic_test_set_flag(). This was unnoticed because a) atomic flags were unused until recently b) the test code wasn't run when the fallback implementation was in use (because it didn't allow testing some edge cases).

Fix the bug, and improve the fallback so it has the same behaviour as the non-fallback implementation in the problematic edge cases. That breaks ABI compatibility in the back branches when fallbacks are in use, but given they were broken until now...

Author: Andres Freund
Reported-by: Daniel Gustafsson
Discussion: https://postgr.es/m/FB948276-7B32-4B77-83E6-D00167F8EEB4@yesql.se
            https://postgr.es/m/20180406233854.uni2h3mbnveczl32@alap3.anarazel.de
Backpatch: 9.5-, where the atomics abstraction was introduced.
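For reference, the behaviour the fallback must now match, as a hedged usage sketch of the pg_atomic_flag API:

    #include "port/atomics.h"

    static pg_atomic_flag lock_flag;

    static void
    demo(void)
    {
        pg_atomic_init_flag(&lock_flag);

        /* Test-and-set of a clear flag must return true (not false, as
         * the broken fallback had it); further calls return false until
         * the flag is cleared again. */
        if (pg_atomic_test_set_flag(&lock_flag))
            pg_atomic_clear_flag(&lock_flag);
    }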
2018-04-05  doc: remove mention of the DMOZ catalog in ltree docs  (Bruce Momjian)

Discussion: https://postgr.es/m/CAF4Au4xYem_W3KOuxcKct7=G4j8Z3uO9j3DUKTFJqUsfp_9pQg@mail.gmail.com
Author: Oleg Bartunov
Backpatch-through: 9.3
2018-04-04  docs: update ltree URL for the DMOZ catalog  (Bruce Momjian)

Reported-by: bbrincat@gmail.com
Discussion: https://postgr.es/m/152283596377.1441.11672249301622760943@wrigleys.postgresql.org
Author: Oleg Bartunov
Backpatch-through: 9.3
2018-04-04  Also fix the descriptions in pg_config.h.win32.  (Heikki Linnakangas)
I missed pg_config.h.win32 in the previous commit that fixed these in pg_config.h.in.
2018-04-04  Fix incorrect description of USE_SLICING_BY_8_CRC32C.  (Heikki Linnakangas)
And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK, spotted by Daniel Gustafsson.
2018-04-02  doc: document "IS NOT DOCUMENT"  (Bruce Momjian)

Reported-by: scott.ure@caseware.com
Discussion: https://postgr.es/m/152056505045.4963.16783351661813640274@wrigleys.postgresql.org
Author: Euler Taveira
Backpatch-through: 9.3
2018-03-31  Fix assorted issues in parallel vacuumdb.  (Tom Lane)

Avoid storing the result of PQsocket() in a pgsocket variable; it's declared as int, and the no-socket test is properly written as "x < 0" not "x == PGINVALID_SOCKET". This accidentally had no bad effect because we never got to init_slot() with a bad connection, but it's still wrong.

Actually, it seems like we should avoid storing the result for a long period at all. The function's not so expensive that it's worth avoiding, and the existing coding technique here would fail if anyone tried to PQreset the connection during the life of the program. Hence, just re-call PQsocket every time we construct a select(2) mask.

Speaking of select(), GetIdleSlot imagined that it could compute the select mask once and continue to use it over multiple calls to select_loop(), which is pretty bogus since that would stomp on the mask on return. This could only matter if the function's outer loop iterated more than once, which is unlikely (it'd take some connection receiving data, but not enough to complete its command). But if it did happen, we'd acquire "tunnel vision" and stop watching the other connections for query termination, with the effect of losing parallelism.

Another way in which GetIdleSlot could lose parallelism is that once PQisBusy returns false, it would lock in on that connection and do PQgetResult until that returns NULL; in some cases that could result in blocking. (Perhaps this can never happen in vacuumdb due to the limited set of commands that it can issue, but I'm not quite sure of that, and even if true today it's not a future-proof assumption.) Refactor the code to do that properly, so that it risks blocking in PQgetResult only in cases where we need to wait anyway.

Another loss-of-parallelism problem, which *is* easily demonstrable, is that any setup queries issued during prepare_vacuum_command() were always issued on the last-to-be-created connection, whether or not that was idle. Long-running operations on that connection thus prevented issuance of additional operations on the other ones, except in the limited cases where no preparatory query was needed. Instead, wait till we've identified a free connection and use that one.

Also, avoid core dump due to undersized malloc request in the case that no tables are identified to be vacuumed.

The bogus no-socket test was noted by CharSyam, the other problems identified in my own code review. Back-patch to 9.5 where parallel vacuumdb was introduced.

Discussion: https://postgr.es/m/CAMrLSE6etb33-192DTEUGkV-TsvEcxtBDxGWG1tgNOMnQHwgDA@mail.gmail.com
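The first two points in code form, as a hedged sketch (slots and concurrentCons are illustrative names for vacuumdb's connection array; the mask is rebuilt, and PQsocket re-called, on every iteration):

    fd_set      slotset;
    int         maxFd = 0;
    int         i;

    FD_ZERO(&slotset);
    for (i = 0; i < concurrentCons; i++)
    {
        int         sock = PQsocket(slots[i].connection);

        /* PQsocket returns int; the no-socket test is "< 0" */
        if (sock < 0)
            continue;
        FD_SET(sock, &slotset);
        if (sock > maxFd)
            maxFd = sock;
    }
    (void) select(maxFd + 1, &slotset, NULL, NULL, NULL);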
2018-03-30  Fix bogus provolatile/proparallel markings on a few built-in functions.  (Tom Lane)

Richard Yen reported that pg_upgrade failed if the target cluster had force_parallel_mode = on, because binary_upgrade_create_empty_extension() is marked parallel restricted, allowing it to be executed in parallel mode, which complains because it tries to acquire an XID.

In general, no function that might try to modify database data should be considered parallel safe or restricted, since execution of it might force XID acquisition. We found several other examples of this mistake.

Furthermore, functions that execute user-supplied SQL queries or query fragments, or pull data from user-supplied cursors, had better be marked both volatile and parallel unsafe, because we don't know what the supplied query or cursor might try to do. There were several tsquery and XML functions that had the wrong proparallel marking for this, and some of them were even mislabeled as to volatility.

All these bugs are old, dating back to 9.6 for the proparallel mistakes and much further for the provolatile mistakes. We can't force a catversion bump in the back branches, but we can at least ensure that installations initdb'd in future have the right values.

Thomas Munro and Tom Lane

Discussion: https://postgr.es/m/CAEepm=2sNDScSLTfyMYu32Q=ob98ZGW-vM_2oLxinzSABGQ6VA@mail.gmail.com
2018-03-30  docs: add parameter with brackets around varbit()  (Bruce Momjian)

Reported-by: scott.ure@caseware.com
Discussion: https://postgr.es/m/152074343671.1853.18284519607571497106@wrigleys.postgresql.org
Author: Euler Taveira
Backpatch-through: 9.3
2018-03-29  Fix handling of files that the source server removes while pg_rewind is running.  (Fujii Masao)

After processing the filemap to build the list of chunks that will be fetched from the source to rewind the target server, it is possible that a file which was previously processed is removed from the source. A simple example of such an occurrence is a WAL segment which gets recycled on the target in-between.

When the filemap is processed, files not categorized as relation files are first truncated to prepare for the full copy which is going to be taken from the source, divided into a set of chunks. However, for a recycled WAL segment, this would result in a segment which has a zero-byte size. With such an empty file, post-rewind recovery thinks that records are saved, but they actually are not, because of the truncation which happened when processing the filemap, resulting in data loss.

In order to fix the problem, make sure that files which are found as removed on the source when receiving chunks of them are deleted on the target server as well, for consistency.

Back-patch to 9.5 where pg_rewind was added.

Author: Tsunakawa Takayuki
Reviewed-by: Michael Paquier
Reported-by: Tsunakawa Takayuki
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8DAAA2%40G01JPEXMBYT05
2018-03-28  Fix actual and potential double-frees around tuplesort usage.  (Tom Lane)

tuplesort_gettupleslot() passed back tuples allocated in the tuplesort's own memory context, even when the caller was responsible for freeing them. This created a double-free hazard, because some callers might destroy the tuplesort object (via tuplesort_end) before trying to clean up the last returned tuple. To avoid this, change the API to specify that the tuple is allocated in the caller's memory context. v10 and HEAD already did things that way, but in 9.5 and 9.6 this is a live bug that can demonstrably cause crashes with some grouping-set usages.

In 9.5 and 9.6, this requires doing an extra tuple copy in some cases, which is unfortunate. But the amount of refactoring needed to avoid it seems excessive for a back-patched change, especially since the cases where an extra copy happens are less performance-critical.

Likewise change tuplesort_getdatum() to return pass-by-reference Datums in the caller's context not the tuplesort's context. There seem to be no live bugs among its callers, but clearly the same sort of situation could happen in future.

For other tuplesort fetch routines, continue to allocate the memory in the tuplesort's context. This is a little inconsistent with what we now do for tuplesort_gettupleslot() and tuplesort_getdatum(), but that's preferable to adding new copy overhead in the back branches where it's clearly unnecessary. These other fetch routines provide the weakest possible guarantees about tuple memory lifespan from v10 on, anyway, so this actually seems more consistent overall.

Adjust relevant comments to reflect these API redefinitions.

Arguably, we should change the pre-9.5 branches as well, but since there are no known failure cases there, it seems not worth the risk.

Peter Geoghegan, per report from Bernd Helmle. Reviewed by Kyotaro Horiguchi; thanks also to Andreas Seltenreich for extracting a self-contained test case.

Discussion: https://postgr.es/m/1512661638.9720.34.camel@oopsware.de
2018-03-26  Fix thinko in comment  (Alvaro Herrera)

The listed numbers disagreed with the ones being used in the symbols; but instead of just fixing the numbers in the comment, use the symbolic name instead, which seems clearer.

This has been wrong all along, so apply back to 9.5 where BRIN was introduced.

Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/5ff514f2-8b1e-6366-b11c-8e2ed442562d@2ndquadrant.com
2018-03-25  Doc: add example of type resolution in nested UNIONs.  (Tom Lane)

Section 10.5 didn't say explicitly that multiple UNIONs are resolved pairwise. Since the resolution algorithm is described as taking any number of inputs, readers might well think that a query like "select x union select y union select z" would be resolved by considering x, y, and z in one resolution step. But that's not what happens (and I think that behavior is per SQL spec). Add an example clarifying this point.

Per bug #15129 from Philippe Beaudoin.

Discussion: https://postgr.es/m/152196085023.32649.9916472370480121694@wrigleys.postgresql.org
2018-03-25  Doc: remove extra comma in syntax summary for array_fill().  (Tom Lane)

Noted by Scott Ure. Back-patch to all supported branches.

Discussion: https://postgr.es/m/152199346794.4544.1888397173908716912@wrigleys.postgresql.org
2018-03-23  Don't qualify type pg_catalog.text in extend-extensions-example.  (Noah Misch)
Extension scripts begin execution with pg_catalog at the front of the search path, so type names reliably refer to pg_catalog. Remove these superfluous qualifications. Earlier <programlisting> of this <sect1> already omitted them. Back-patch to 9.3 (all supported versions).
2018-03-23  Fix make rules that generate multiple output files.  (Tom Lane)

For years, our makefiles have correctly observed that "there is no correct way to write a rule that generates two files". However, what we did is to provide empty rules that "generate" the secondary output files from the primary one, and that's not right either. Depending on the details of the creating process, the primary file might end up timestamped later than one or more secondary files, causing subsequent make runs to consider the secondary file(s) out of date. That's harmless in a plain build, since make will just re-execute the empty rule and nothing happens. But it's fatal in a VPATH build, since make will expect the secondary file to be rebuilt in the build directory. This would manifest as "file not found" failures during VPATH builds from tarballs, if we were ever unlucky enough to ship a tarball with apparently out-of-date secondary files. (It's not clear whether that has ever actually happened, but it definitely could.)

To ensure that secondary output files have timestamps >= their primary's, change our makefile convention to be that we provide a "touch $@" action not an empty rule. Also, make sure that this rule actually gets invoked during a distprep run, else the hazard remains.

It's been like this a long time, so back-patch to all supported branches.

In HEAD, I skipped the changes in src/backend/catalog/Makefile, because those rules are due to get replaced soon in the bootstrap data format patch, and there seems no need to create a merge issue for that patch. If for some reason we fail to land that patch in v11, we'll need to back-fill the changes in that one makefile from v10.

Discussion: https://postgr.es/m/18556.1521668179@sss.pgh.pa.us
2018-03-22  Fix tuple counting in SP-GiST index build.  (Tom Lane)

Count the number of tuples in the index honestly, instead of assuming that it's the same as the number of tuples in the heap. (It might be different if the index is partial.)

Back-patch to all supported versions.

Tomas Vondra

Discussion: https://postgr.es/m/3b3d8eac-c709-0d25-088e-b98339a1b28a@2ndquadrant.com
2018-03-22  Fix errors in contrib/bloom index build.  (Tom Lane)

Count the number of tuples in the index honestly, instead of assuming that it's the same as the number of tuples in the heap. (It might be different if the index is partial.)

Fix counting of tuples in current index page, too. This error would have led to failing to write out the final page of the index if it contained exactly one tuple, so that the last tuple of the relation would not get indexed.

Back-patch to 9.6 where contrib/bloom was added.

Tomas Vondra and Tom Lane

Discussion: https://postgr.es/m/3b3d8eac-c709-0d25-088e-b98339a1b28a@2ndquadrant.com
2018-03-21  Fix mishandling of quoted-list GUC values in pg_dump and ruleutils.c.  (Tom Lane)

Code that prints out the contents of setconfig or proconfig arrays in SQL format needs to handle GUC_LIST_QUOTE variables differently from other ones, because for those variables, flatten_set_variable_args() already applied a layer of quoting. The value can therefore safely be printed as-is, and indeed must be, or flatten_set_variable_args() will muck it up completely on reload. For all other GUC variables, it's necessary and sufficient to quote the value as a SQL literal.

We'd recognized the need for this long ago, but mis-analyzed the need slightly, thinking that all GUC_LIST_INPUT variables needed the special treatment. That's actually wrong, since a valid value of a LIST variable might include characters that need quoting, although no existing variables accept such values. More to the point, we hadn't made any particular effort to keep the various places that deal with this up-to-date with the set of variables that actually need special treatment, meaning that we'd do the wrong thing with, for example, temp_tablespaces values. This affects dumping of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET commands.

In ruleutils.c we can fix it reasonably honestly by exporting a guc.c function that allows discovering the flags for a given GUC variable. But pg_dump doesn't have easy access to that, so continue the old method of having a hard-wired list of affected variable names. At least we can fix it to have just one list not two, and update the list to match current reality.

A remaining problem with this is that it only works for built-in GUC variables. pg_dump's list obviously knows nothing of third-party extensions, and even the "ask guc.c" method isn't bulletproof since the relevant extension might not be loaded. There's no obvious solution to that, so for now, we'll just have to discourage extension authors from inventing custom GUCs that need GUC_LIST_QUOTE.

This has been busted for a long time, so back-patch to all supported branches.

Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and Pavel Stehule

Discussion: https://postgr.es/m/20180111064900.GA51030@paquier.xyz
2018-03-21  Fix typo.  (Tatsuo Ishii)
Patch by me.
2018-03-21  Rework word_similarity documentation, make it closer to the actual algorithm.  (Teodor Sigaev)

word_similarity previously claimed to return the similarity of the closest word in the string, but it actually returns the similarity of a substring. Also fix mistyped comments.

Author: Alexander Korotkov
Reviewed-by: David Steele, Liudmila Mantrova
Discussion: https://www.postgresql.org/message-id/flat/CY4PR17MB13207ED8310F847CF117EED0D85A0@CY4PR17MB1320.namprd17.prod.outlook.com
            https://www.postgresql.org/message-id/flat/f43b242d-000c-f4c8-cb8b-d37e9752cd93%40postgrespro.ru
2018-03-20  Doc: typo fix, "PG_" should be "TG_" here.  (Tom Lane)

Too much PG on the brain in commit 769159fd3, evidently. Noted by marcelhuberfoo@gmail.com.

Discussion: https://postgr.es/m/152154834496.11957.17112112802418832865@wrigleys.postgresql.org
2018-03-19  Prevent query-lifespan memory leakage of SP-GiST traversal values.  (Tom Lane)

The original coding of the SP-GiST scan traversalValue feature (commit ccd6eb49a) arranged for traversal values to be stored in the query's main executor context. That's fine if there's only one index scan per query, but if there are many, we have a memory leak as successive scans create new traversal values. Fix it by creating a separate memory context for traversal values, which we can reset during spgrescan().

Back-patch to 9.6 where this code was introduced.

In principle, adding the traversalCxt field to SpGistScanOpaqueData creates an ABI break in the back branches. But I (tgl) have little sympathy for extensions including spgist_private.h, so I'm not very worried about that. Alternatively we could stick the new field at the end of the struct in back branches, but that has its own downsides.

Anton Dignös, reviewed by Alexander Kuzmenkov

Discussion: https://postgr.es/m/CALNdv1jb6y2Te-m8xHLxLX12RsBmZJ1f4hESX7J0HjgyOhA9eA@mail.gmail.com
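The shape of the fix, as a hedged sketch (context name and exact placement approximate):

    /* at scan start: give traversal values their own context */
    so->traversalCxt = AllocSetContextCreate(CurrentMemoryContext,
                                             "SP-GiST traversal-value context",
                                             ALLOCSET_DEFAULT_SIZES);

    /* in spgrescan(): discard traversal values from the previous scan */
    MemoryContextReset(so->traversalCxt);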
2018-03-19  Fix some corner-case issues in REFRESH MATERIALIZED VIEW CONCURRENTLY.  (Tom Lane)

refresh_by_match_merge() has some issues in the way it builds a SQL query to construct the "diff" table:

1. It doesn't require the selected unique index(es) to be indimmediate.
2. It doesn't pay attention to the particular equality semantics enforced by a given index, but just assumes that they must be those of the column datatype's default btree opclass.
3. It doesn't check that the indexes are btrees.
4. It's insufficiently careful to ensure that the parser will pick the intended operator when parsing the query. (This would have been a security bug before CVE-2018-1058.)
5. It's not careful about indexes on system columns.

The way to fix #4 is to make use of the existing code in ri_triggers.c for generating an arbitrary binary operator clause. I chose to move that to ruleutils.c, since that seems a more reasonable place to be exporting such functionality from than ri_triggers.c.

While #1, #3, and #5 are just latent given existing feature restrictions, and #2 doesn't arise in the core system for lack of alternate opclasses with different equality behaviors, #4 seems like an issue worth back-patching. That's the bulk of the change anyway, so just back-patch the whole thing to 9.4 where this code was introduced.

Discussion: https://postgr.es/m/13836.1521413227@sss.pgh.pa.us
2018-03-19  Fix performance hazard in REFRESH MATERIALIZED VIEW CONCURRENTLY.  (Tom Lane)

Jeff Janes discovered that commit 7ca25b7de made one of the queries run by REFRESH MATERIALIZED VIEW CONCURRENTLY perform badly. The root cause is bad cardinality estimation for correlated quals, but a principled solution to that problem is some way off, especially since the planner lacks any statistics about whole-row variables. Moreover, in non-error cases this query produces no rows, meaning it must be run to completion; but use of LIMIT 1 encourages the planner to pick a fast-start, slow-completion plan, exactly not what we want. Remove the LIMIT clause, and instead rely on the count parameter we pass to SPI_execute() to prevent excess work if the query does return some rows.

While we've heard no field reports of planner misbehavior with this query, it could be that people are having performance issues that haven't reached the level of pain needed to cause a bug report. In any case, that LIMIT clause can't possibly do anything helpful with any existing version of the planner, and it demonstrably can cause bad choices in some cases, so back-patch to 9.4 where the code was introduced.

Thomas Munro

Discussion: https://postgr.es/m/CAMkU=1z-JoGymHneGHar1cru4F1XDfHqJDzxP_CtK5cL3DOfmg@mail.gmail.com
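The mechanism relied on here, in a hedged sketch (querybuf and the error text are illustrative, not the committed code):

    /* Let SPI's tcount parameter cap the work instead of a LIMIT clause:
     * execution stops as soon as one row has been produced, without
     * nudging the planner toward a fast-start plan. */
    if (SPI_execute(querybuf.data, false, 1) != SPI_OK_SELECT)
        elog(ERROR, "SPI_execute failed");
    if (SPI_processed > 0)
        elog(ERROR, "new data for materialized view contains duplicate rows");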
2018-03-18  Doc: note that statement-level view triggers require an INSTEAD OF trigger.  (Tom Lane)

If a view lacks an INSTEAD OF trigger, DML on it can only work by rewriting the command into a command on the underlying base table(s). Then we will fire triggers attached to those table(s), not those for the view. This seems appropriate from a consistency standpoint, but nowhere was the behavior explicitly documented, so let's do that.

There was some discussion of throwing an error or warning if a statement trigger is created on a view without creating a row INSTEAD OF trigger. But a simple implementation of that would result in dump/restore ordering hazards. Given that it's been like this all along, and we hadn't heard a complaint till now, a documentation improvement seems sufficient.

Per bug #15106 from Pu Qun. Back-patch to all supported branches.

Discussion: https://postgr.es/m/152083391168.1215.16892140713507052796@wrigleys.postgresql.org
2018-03-18  Fix pg_recvlogical for pre-10 versions  (Magnus Hagander)

In e170b8c8, protection against a modified search_path was added. However, PostgreSQL versions prior to 10 do not accept SQL commands over a replication connection, so the protection would generate a syntax error. Since we cannot run SQL commands on such a connection, we are also not vulnerable to the issue that e170b8c8 fixes, so we can just skip this command for older versions.

Author: Michael Paquier <michael@paquier.xyz>
2018-03-17  Fix overflow handling in plpgsql's integer FOR loops.  (Tom Lane)

The test to exit the loop if the integer control value would overflow an int32 turns out not to work on some ICC versions, as it's dependent on the assumption that the compiler will execute the code as written rather than "optimize" it. ICC lacks any equivalent of gcc's -fwrapv switch, so it was optimizing on the assumption of no integer overflow, and that breaks this. Rewrite into a form that in fact does not do any overflowing computations.

Per Tomas Vondra and buildfarm member fulmar. It's been like this for a long time, although it was not till we added a regression test case covering the behavior (in commit dd2243f2a) that the problem became apparent.

Back-patch to all supported versions.

Discussion: https://postgr.es/m/50562fdc-0876-9843-c883-15b8566c7511@2ndquadrant.com
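One way to write such a test without overflowing, as a hedged sketch (control and step stand for the loop's int32 control value and BY step; not the exact committed code):

    /* Decide, before incrementing, whether the next value would leave
     * the int32 range; neither branch performs an overflowing computation. */
    if (step > 0)
    {
        if (control > PG_INT32_MAX - step)
            break;
    }
    else
    {
        if (control < PG_INT32_MIN - step)
            break;
    }
    control += step;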
2018-03-17  Fix WHERE CURRENT OF when the referenced cursor uses an index-only scan.  (Tom Lane)

"UPDATE/DELETE WHERE CURRENT OF cursor_name" failed, with an error message like "cannot extract system attribute from virtual tuple", if the cursor was using an index-only scan for the target table. Fix it by digging the current TID out of the indexscan state.

It seems likely that the same failure could occur for CustomScan plans and perhaps some FDW plan types, so that leaving this to be treated as an internal error with an obscure message isn't as good an idea as it first seemed. Hence, add a bit of heaptuple.c infrastructure to let us deliver a more on-topic message. I chose to make the message match what you get for the case where execCurrentOf can't identify the target scan node at all, "cursor "foo" is not a simply updatable scan of table "bar"". Perhaps it should be different, but we can always adjust that later.

In the future, it might be nice to provide hooks that would let custom scan providers and/or FDWs deal with this in other ways; but that's not a suitable topic for a back-patchable bug fix.

It's been like this all along, so back-patch to all supported branches.

Yugo Nagata and Tom Lane

Discussion: https://postgr.es/m/20180201013349.937dfc5f.nagata@sraoss.co.jp
2018-03-16  Fix query-lifespan memory leakage in repeatedly executed hash joins.  (Tom Lane)

ExecHashTableCreate allocated some memory that wasn't freed by ExecHashTableDestroy, specifically the per-hash-key function information. That's not a huge amount of data, but if one runs a query that repeats a hash join enough times, it builds up. Fix by arranging for the data in question to be kept in the hashtable's hashCxt instead of leaving it "loose" in the query-lifespan executor context. (This ensures that we'll also clean up anything that the hash functions allocate in fn_mcxt.)

Per report from Amit Khandekar. It's been like this forever, so back-patch to all supported branches.

Discussion: https://postgr.es/m/CAJ3gD9cFofAWGvcxLOxDHC=B0hjtW8yGmUsF2hdGh97CM38=7g@mail.gmail.com
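The pattern of the fix, as a hedged sketch (names follow the description above):

    MemoryContext oldcxt;

    /* Keep the per-hash-key function information in the hashtable's own
     * context, so that ExecHashTableDestroy's deletion of hashCxt
     * reclaims it along with everything else. */
    oldcxt = MemoryContextSwitchTo(hashtable->hashCxt);
    /* ... look up and store the hash function info here ... */
    MemoryContextSwitchTo(oldcxt);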
2018-03-16  Doc: explicitly point out that enum values can't be dropped.  (Tom Lane)

This was not stated in so many words anywhere. Document it to make clear that it's a design limitation and not just an oversight or documentation omission.

Discussion: https://postgr.es/m/152089733343.1222.6927268289645380498@wrigleys.postgresql.org
2018-03-15  test_ddl_deparse: rename matview  (Alvaro Herrera)

Should have done this in e69f5e0efca ...

Per note from Tom Lane.
2018-03-15  test_ddl_deparse: Don't use pg_class as source for a matview  (Alvaro Herrera)

Doing so causes a pg_upgrade of a database containing these objects to fail whenever pg_class changes. And it's pointless anyway: we have more interesting tables anyhow.

Discussion: https://postgr.es/m/CAD5tBc+s8pW9WvH2+_z=B4x95FD4QuzZKcaMpff_9H4rS0VU1A@mail.gmail.com
2018-03-14  Fix double frees in ecpg.  (Michael Meskes)
Patch by Patrick Krecker <patrick@judicata.com>