2019-01-24  Remove infinite-loop hazards in ecpg test suite.  (Tom Lane)
A report from Andrew Dunstan showed that an ecpglib breakage that causes repeated query failures could lead to infinite loops in some ecpg test scripts, because they contain "while(1)" loops with no exit condition other than successful test completion. That might be all right for manual testing, but it seems entirely unacceptable for automated test environments such as our buildfarm. We don't want buildfarm owners to have to intervene manually when a test goes wrong. To fix, just change all those while(1) loops to exit after at most 100 iterations (which is more than any of them expect to iterate). This seems sufficient since we'd see discrepancies in the test output if any loop executed the wrong number of times. I tested this by dint of intentionally breaking ecpg_do_prologue to always fail, and verifying that the tests still got to completion. Back-patch to all supported branches, since the whole point of this exercise is to protect the buildfarm against future mistakes. Discussion: https://postgr.es/m/18693.1548302004@sss.pgh.pa.us
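As an illustration of the pattern described above, here is a minimal standalone sketch (the step function is hypothetical, not taken from the actual test scripts): the loop keeps the success-based exit but also gives up after a fixed iteration budget, so a broken library can no longer hang an automated run.

    #include <stdio.h>

    /* Hypothetical stand-in for one iteration of an ecpg test query. */
    static int
    run_one_step(int i)
    {
        return i >= 5;              /* pretend the test completes on step 5 */
    }

    int
    main(void)
    {
        int         i;

        for (i = 0; i < 100; i++)   /* was: while (1), with no failure exit */
        {
            if (run_one_step(i))
                break;
        }
        if (i == 100)
            fprintf(stderr, "giving up after 100 iterations\n");
        return 0;
    }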
2019-01-23  Blind attempt to fix _configthreadlocale() failures on MinGW.  (Tom Lane)
Apparently, some builds of MinGW contain a version of _configthreadlocale() that always returns -1, indicating failure. Rather than treating that as a curl-up-and-die condition, soldier on as though the function didn't exist. This leaves us without thread safety on such MinGW versions, but we didn't have it anyway. Discussion: https://postgr.es/m/d06a16bc-52d6-9f0d-2379-21242d7dbe81@2ndQuadrant.com
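A minimal sketch of the recovery strategy (the wrapper function is illustrative, not the actual ecpglib code): a -1 return from _configthreadlocale() is treated as "feature unavailable" rather than as a fatal error.

    #include <locale.h>
    #include <stdio.h>

    /* Returns 1 if per-thread locales are in effect, 0 if unavailable. */
    static int
    enable_thread_locale(void)
    {
    #ifdef _WIN32
        if (_configthreadlocale(_ENABLE_PER_THREAD_LOCALE) == -1)
            return 0;           /* broken/stub implementation: soldier on */
        return 1;
    #else
        return 0;               /* the function doesn't exist here */
    #endif
    }

    int
    main(void)
    {
        printf("per-thread locale %s\n",
               enable_thread_locale() ? "enabled" : "unavailable");
        return 0;
    }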
2019-01-23  Fix misc typos in comments.  (Heikki Linnakangas)
Spotted mostly by Fabien Coelho. Discussion: https://www.postgresql.org/message-id/alpine.DEB.2.21.1901230947050.16643@lancre
2019-01-21  Avoid thread-safety problem in ecpglib.  (Tom Lane)
ecpglib attempts to force the LC_NUMERIC locale to "C" while reading server output, to avoid problems with strtod() and related functions. Historically it's just issued setlocale() calls to do that, but that has major problems if we're in a threaded application. setlocale() itself is not required by POSIX to be thread-safe (and indeed is not, on recent OpenBSD). Moreover, its effects are process-wide, so that we could cause unexpected results in other threads, or another thread could change our setting. On platforms having uselocale(), which is required by POSIX:2008, we can avoid these problems by using uselocale() instead. Windows goes its own way as usual, but we can make it safe by using _configthreadlocale(). Platforms having neither continue to use the old code, but that should be pretty much nobody among current systems. (Subsequent buildfarm results show that recent NetBSD versions still lack uselocale(), but it's not a big problem because they also do not support non-"C" settings for LC_NUMERIC.) Back-patch of commits 8eb4a9312 and ee27584c4. Michael Meskes and Tom Lane; thanks also to Takayuki Tsunakawa. Discussion: https://postgr.es/m/31420.1547783697@sss.pgh.pa.us
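A minimal standalone sketch of the uselocale() approach (illustrative, not the ecpglib code): only the calling thread is switched to the "C" locale around strtod(), and its previous locale is restored afterwards, so other threads are never affected.

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a double in the "C" locale without touching other threads. */
    static double
    parse_double_c_locale(const char *s)
    {
        locale_t    clocale = newlocale(LC_NUMERIC_MASK, "C", (locale_t) 0);
        locale_t    oldlocale;
        double      val;

        if (clocale == (locale_t) 0)
            return strtod(s, NULL);     /* fall back, as on pre-POSIX:2008 systems */
        oldlocale = uselocale(clocale); /* affects this thread only */
        val = strtod(s, NULL);
        uselocale(oldlocale);           /* restore the thread's previous locale */
        freelocale(clocale);
        return val;
    }

    int
    main(void)
    {
        printf("%g\n", parse_double_c_locale("3.14"));
        return 0;
    }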
2019-01-21  Flush relcache entries when their FKs are meddled with  (Alvaro Herrera)
Back in commit 100340e2dcd0, we made relcache entries keep lists of the foreign keys applying to the relation -- but we forgot to update CacheInvalidateHeapTuple to flush those entries when new FKs got created or existing ones updated/deleted. No bugs appear to have been reported that would be explained by this omission, but I noticed the problem while working on an unrelated bugfix which clearly showed it. Fix by adding relcache flush on relevant foreign key changes. Backpatch to 9.6, like the aforementioned commit. Discussion: https://postgr.es/m/201901211927.7mmhschxlejh@alvherre.pgsql Reviewed-by: Tom Lane
2019-01-19Revert "Add valgrind suppressions for wcsrtombs optimizations"Tomas Vondra
This reverts commit 71b2951ccc433a6dd799efedff9f5927c32440f6. Per discussion, it's not desirable to add valgrind suppressions for code outside our own code base (e.g. glibc in this case), especially when the suppressions may be platform-specific. There are better ways to deal with that, e.g. by providing local suppressions. Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
2019-01-18  Use our own getopt() on OpenBSD.  (Tom Lane)
Recent OpenBSD (at least 5.9 and up) has a version of getopt(3) that will not cope with the "-:" spec we use to accept double-dash options in postgres.c and postmaster.c. Admittedly, that's a hack because POSIX only requires getopt() to allow alphanumeric option characters. I have no desire to find another way, however, so let's just do what we were already doing on Solaris: force use of our own src/port/getopt.c implementation. In passing, improve some of the comments around said implementation. Per buildfarm and local testing. Back-patch to all supported branches. Discussion: https://postgr.es/m/30197.1547835700@sss.pgh.pa.us
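For reference, a standalone sketch of the "-:" trick (simplified, not the actual postmaster code): listing '-' in the optstring as an option letter that takes an argument makes a permissive getopt() hand back "--name=value" as option '-' with optarg set to "name=value". Run as "./demo -D /tmp --work_mem=64MB", this prints both options under glibc or src/port/getopt.c; OpenBSD's getopt() will not cope with the spec, which is what the commit works around.

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        int         c;

        while ((c = getopt(argc, argv, "D:-:")) != -1)
        {
            switch (c)
            {
                case 'D':
                    printf("data directory: %s\n", optarg);
                    break;
                case '-':       /* --name=value, thanks to "-:" */
                    printf("long option: %s\n", optarg);
                    break;
                default:
                    return 1;
            }
        }
        return 0;
    }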
2019-01-17  Postpone aggregate checks until after collation is assigned.  (Andrew Gierth)
Previously, parseCheckAggregates was run before assign_query_collations, but this causes problems if any expression has already had a collation assigned by some transform function (e.g. transformCaseExpr) before parseCheckAggregates runs. The differing collations would cause expressions not to be recognized as equal to the ones in the GROUP BY clause, leading to spurious errors about unaggregated column references. The result was that CASE expr WHEN val ... would fail when "expr" contained a GROUPING() expression or matched one of the group by expressions, and where collatable types were involved; whereas the supposedly identical CASE WHEN expr = val ... would succeed. Backpatch all the way; this appears to have been wrong ever since collations were introduced. Per report from Guillaume Lelarge, analysis and patch by me. Discussion: https://postgr.es/m/CAECtzeVSO_US8C2Khgfv54ZMUOBR4sWq+6_bLrETnWExHT=rFg@mail.gmail.com Discussion: https://postgr.es/m/87muo0k0c7.fsf@news-spur.riddles.org.uk
2019-01-13  fix typo  (Andrew Dunstan)
2019-01-13  Make DLSUFFIX easily discoverable by build scripts  (Andrew Dunstan)
This will enable things like the buildfarm client to discover more reliably if certain libraries have been installed. Discussion: https://postgr.es/m/859e7c91-7ef4-d4b4-2ca2-8046e0cbee09@2ndQuadrant.com Backpatch to all live branches.
2019-01-11  Avoid sharing PARAM_EXEC slots between different levels of NestLoop.  (Tom Lane)
Up to now, createplan.c attempted to share PARAM_EXEC slots for NestLoopParams across different plan levels, if the same underlying Var was being fed down to different righthand-side subplan trees by different NestLoops. This was, I think, more of an artifact of using subselect.c's PlannerParamItem infrastructure than an explicit design goal, but anyway that was the end result. This works well enough as long as the plan tree is executing synchronously, but the feature whereby Gather can execute the parallelized subplan locally breaks it. An upper NestLoop node might execute for a row retrieved from a parallel worker, and assign a value for a PARAM_EXEC slot from that row, while the leader's copy of the parallelized subplan is suspended with a different active value of the row the Var comes from. When control eventually returns to the leader's subplan, it gets the wrong answers if the same PARAM_EXEC slot is being used within the subplan, as reported in bug #15577 from Bartosz Polnik. This is pretty reminiscent of the problem fixed in commit 46c508fbc, and the proper fix seems to be the same: don't try to share PARAM_EXEC slots across different levels of controlling NestLoop nodes. This requires decoupling NestLoopParam handling from PlannerParamItem handling, although the logic remains somewhat similar. To avoid bizarre division of labor between subselect.c and createplan.c, I decided to move all the param-slot-assignment logic for both cases out of those files and put it into a new file paramassign.c. Hopefully it's a bit better documented now, too. A regression test case for this might be nice, but we don't know a test case that triggers the problem with a suitably small amount of data. Back-patch to 9.6 where we added Gather nodes. It's conceivable that related problems exist in older branches; but without some evidence for that, I'll leave the older branches alone. Discussion: https://postgr.es/m/15577-ca61ab18904af852@postgresql.org
2019-01-03  Improve ANALYZE's handling of concurrent-update scenarios.  (Tom Lane)
This patch changes the rule for whether or not a tuple seen by ANALYZE should be included in its sample. When we last touched this logic, in commit 51e1445f1, we weren't thinking very hard about tuples being UPDATEd by a long-running concurrent transaction. In such a case, we might see the pre-image as either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see the post-image not at all, or as INSERT_IN_PROGRESS. Since the existing code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS tuples, this leads to concurrently-updated rows being omitted from the sample entirely. That's not very helpful, and it's especially the wrong thing if the concurrent transaction ends up rolling back. The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if they were live. This makes the "sample it" and "count it" decisions the same, which seems good for consistency. It's clearly the right thing if the concurrent transaction ends up rolling back; in effect, we are sampling as though IN_PROGRESS transactions haven't happened yet. Also, this combination of choices ensures maximum robustness against the different combinations of whether and in which state we might see the pre- and post-images of an update. It's slightly annoying that we end up recording immediately-out-of-date stats in the case where the transaction does commit, but on the other hand the stats are fine for columns that didn't change in the update. And the alternative of sampling INSERT_IN_PROGRESS rows instead seems like a bad idea, because then the sampling would be inconsistent with the way rows are counted for the stats report. Per report from Mark Chambers; thanks to Jeff Janes for diagnosing what was happening. Back-patch to all supported versions. Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com
2019-01-03  Update ssl test certificates and keys  (Peter Eisentraut)
Debian testing and newer now require that RSA and DHE keys are at least 2048 bits long and no longer allow SHA-1 for signatures in certificates. This is currently causing the ssl tests to fail there because the test certificates and keys were created in violation of those conditions. Update the parameters used to create the test files and create a new set of test files. Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> Reported-by: Michael Paquier <michael@paquier.xyz> Discussion: https://www.postgresql.org/message-id/flat/20180917131340.GE31460%40paquier.xyz
2019-01-02  Don't believe MinMaxExpr is leakproof without checking.  (Tom Lane)
MinMaxExpr invokes the btree comparison function for its input datatype, so it's only leakproof if that function is. Many such functions are indeed leakproof, but others are not, and we should not just assume that they are. Hence, adjust contain_leaked_vars to verify the leakproofness of the referenced function explicitly. I didn't add a regression test because it would need to depend on some particular comparison function being leaky, and that's a moving target, per discussion. This has been wrong all along, so back-patch to supported branches. Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us
2018-12-31  pg_regress: Promptly detect failed postmaster startup.  (Noah Misch)
Detect it the way pg_ctl's wait_for_postmaster() does. When pg_regress spawned a postmaster that failed startup, we were detecting that only with "pg_regress: postmaster did not respond within 60 seconds". Back-patch to 9.4 (all supported versions). Reviewed by Tom Lane. Discussion: https://postgr.es/m/20181231172922.GA199150@gust.leadboat.com
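A minimal sketch of the detection technique, assuming a POSIX environment (names are illustrative, not the pg_regress code): while waiting for the server to come up, also poll the child with waitpid(..., WNOHANG) so that a death during startup is reported immediately instead of after the 60-second timeout.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Returns 0, with a report, if the spawned child has already exited. */
    static int
    child_still_running(pid_t pid)
    {
        int         status;

        if (waitpid(pid, &status, WNOHANG) == pid)
        {
            fprintf(stderr, "child exited with status %d during startup\n",
                    status);
            return 0;
        }
        return 1;               /* still running */
    }

    int
    main(void)
    {
        pid_t       pid = fork();

        if (pid == 0)
            _exit(1);           /* simulate a postmaster failing at startup */
        sleep(1);               /* in reality: check inside the wait loop */
        if (!child_still_running(pid))
            return 1;           /* fail fast instead of timing out */
        return 0;
    }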
2018-12-29  pg_rewind: Add missing newline to error message  (Peter Eisentraut)
2018-12-27  Have DISCARD ALL/TEMP remove leftover temp tables  (Alvaro Herrera)
Previously, it would only remove temp tables created in the same session; but if the session uses the BackendId of a previously crashed backend that left temp tables around, those would not get removed. Since autovacuum would not drop them either (because it sees that the BackendId is in use by the current session), these can cause annoying xid-wraparound warnings. Apply to branches 9.4 to 10. This is not a problem since version 11, because commit 943576bddcb5 added state tracking that makes autovacuum realize that those temp tables are not ours, so it removes them. This is useful to handle in DISCARD, because even though it does not handle all situations, it does handle the common one where a connection pooler keeps the same session open for an indefinitely long time. Discussion: https://postgr.es/m/20181226190834.wsk2wzott5yzrjiq@alvherre.pgsql Reviewed-by: Takayuki Tsunakawa, Michaël Paquier
2018-12-27  Make autovacuum more selective about temp tables to keep  (Alvaro Herrera)
When temp tables are in danger of XID wraparound, autovacuum drops them; however, it preserves those that are owned by a working session. This is desirable, except when the session is connected to a different database (because the temp tables cannot be from that session), so make it keep the temp tables only if the backend is in the same database as the temp tables. This is not bulletproof: it fails to detect temp tables left by a session whose backend ID is reused in the same database but the new session does not use temp tables. Commit 943576bddcb5 fixes that case too, for branches 11 and up (which is why we don't apply this fix to those branches), but back-patching that one is not universally agreed on. Discussion: https://postgr.es/m/20181214162843.37g6h3txto43akrb@alvherre.pgsql Reviewed-by: Takayuki Tsunakawa, Michaël Paquier
2018-12-27  Ignore inherited temp relations from other sessions when truncating  (Michael Paquier)
Inheritance trees can include temporary tables if the parent is permanent, which makes possible the presence of multiple temporary children from different sessions. Trying to issue a TRUNCATE on the parent in this scenario causes a failure, so, as with any other queries, just ignore such cases, which makes TRUNCATE work transparently. This makes truncation behave like other DML queries on a parent table, whose work must also be applied to the children. A set of isolation tests is added to cover basic cases. Reported-by: Zhou Digoal Author: Amit Langote, Michael Paquier Discussion: https://postgr.es/m/15565-ce67a48d0244436a@postgresql.org Backpatch-through: 9.4
2018-12-26  Fix portability failure introduced in commits d2b0b60e7 et al.  (Tom Lane)
I made a frontend fprintf() format use %m, forgetting that that's only safe in HEAD not the back branches; prior to 96bf88d52 and d6c55de1f, it would work on glibc platforms but not elsewhere. Revert to using %s ... strerror(errno) as the code did before. We could have left HEAD as-is, but for code consistency across branches, I chose to apply this patch there too. Per Coverity and a few buildfarm members.
2018-12-24  Prioritize history files when archiving  (Michael Paquier)
At the end of recovery, the post-promotion process creates a new history file, followed by the last partial segment of the previous timeline. Given that timing, the archiver would first try to archive the last partial segment and only then the history file. This can delay the detection of a newly-taken timeline, particularly when the last partial segment takes a long time to transfer, since that delays the moment the history file of the new timeline gets archived. Depending on the circumstances, this can cause a promoted standby to pick a timeline already taken by another instance when multiple instances look at archives in the same location. This commit changes the order of archiving so that history files are archived ahead of other file types, which reduces the likelihood of the same timeline being taken (though it does not reduce the window to zero), and it makes the archiver behave more consistently with the startup process doing its post-promotion business. Author: David Steele Reviewed-by: Michael Paquier, Kyotaro Horiguchi Discussion: https://postgr.es/m/929068cf-69e1-bba2-9dc0-e05986aed471@pgmasters.net Backpatch-through: 9.5
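The ordering rule can be illustrated with a small standalone comparator (this mimics the idea, not the actual pgarch.c scan): any file whose name ends in ".history" sorts ahead of everything else, and otherwise plain lexicographic order is used, which for WAL segment names is chronological order.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    is_history(const char *name)
    {
        size_t      len = strlen(name);

        return len > 8 && strcmp(name + len - 8, ".history") == 0;
    }

    static int
    archive_order(const void *a, const void *b)
    {
        const char *fa = *(const char *const *) a;
        const char *fb = *(const char *const *) b;
        int         ha = is_history(fa);
        int         hb = is_history(fb);

        if (ha != hb)
            return hb - ha;     /* history files first */
        return strcmp(fa, fb);
    }

    int
    main(void)
    {
        const char *files[] = {
            "000000010000000000000002",
            "00000002.history",
            "000000010000000000000002.partial",
        };

        qsort(files, 3, sizeof(files[0]), archive_order);
        for (int i = 0; i < 3; i++)
            puts(files[i]);     /* the history file comes out first */
        return 0;
    }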
2018-12-22  Fix ancient compiler warnings and typos in !HAVE_SYMLINK code  (Peter Eisentraut)
This has never been correct since this code was introduced.
2018-12-21  Check for conflicting queries during replay of gistvacuumpage()  (Alexander Korotkov)
013ebc0a7b implements so-called GiST microvacuum. That is, gistgettuple() marks index tuples as dead when kill_prior_tuple is set. Later, when new tuple insertion claims page space, those dead index tuples are physically deleted from the page. When this deletion is replayed on a standby, it might conflict with read-only queries, but 013ebc0a7b doesn't handle that. This may lead to disappearance of some tuples from read-only snapshots on the standby. This commit implements resolution of conflicts between replay of GiST microvacuum and standby queries. On the master we implement a new WAL record type, XLOG_GIST_DELETE, which carries the necessary information. On stable releases we have to be tricky to keep WAL compatibility: the information required for conflict processing is just appended to the data of the XLOG_GIST_PAGE_UPDATE record, so a PostgreSQL version that doesn't know about conflict processing will simply ignore it. Reported-by: Andres Freund Diagnosed-by: Andres Freund Discussion: https://postgr.es/m/20181212224524.scafnlyjindmrbe6%40alap3.anarazel.de Author: Alexander Korotkov Backpatch-through: 9.6
2018-12-19  Fix ADD IF NOT EXISTS used in conjunction with ALTER TABLE ONLY  (Greg Stark)
The flag for IF NOT EXISTS was only being passed down in the normal recursing case. It's been this way since the feature was originally added to 9.6 in commit 2cd40adb85, so back-patch to 9.6.
2018-12-18  Fix ancient thinko in mergejoin cost estimation.  (Tom Lane)
"rescanratio" was computed as 1 + rescanned-tuples / total-inner-tuples, which is sensible if it's to be multiplied by total-inner-tuples or a cost value corresponding to scanning all the inner tuples. But in reality it was (mostly) multiplied by inner_rows or a related cost, numbers that take into account the possibility of stopping short of scanning the whole inner relation thanks to a limited key range in the outer relation. This'd still make sense if we could expect that stopping short would result in a proportional decrease in the number of tuples that have to be rescanned. It does not, however. The argument that establishes the validity of our estimate for that number is independent of whether we scan all of the inner relation or stop short, and experimentation also shows that stopping short doesn't reduce the number of rescanned tuples. So the correct calculation is 1 + rescanned-tuples / inner_rows, and we should be sure to multiply that by inner_rows or a corresponding cost value. Most of the time this doesn't make much difference, but if we have both a high rescan rate (due to lots of duplicate values) and an outer key range much smaller than the inner key range, then the error can be significant, leading to a large underestimate of the cost associated with rescanning. Per report from Vijaykumar Jain. This thinko appears to go all the way back to the introduction of the rescan estimation logic in commit 70fba7043, so back-patch to all supported branches. Discussion: https://postgr.es/m/CAE7uO5hMb_TZYJcZmLAgO6iD68AkEK6qCe7i=vZUkCpoKns+EQ@mail.gmail.com
2018-12-17  Fix use-after-free bug when renaming constraints  (Michael Paquier)
This is an oversight from recent commit b13fd344. While at it, tweak the previous test with a better name for the renamed primary key. Detected by buildfarm member prion, which forces relation cache release with -DRELCACHE_FORCE_RELEASE. Back-patch down to 9.4, like the previous commit.
2018-12-17  Make constraint rename issue relcache invalidation on target relation  (Michael Paquier)
When a constraint gets renamed, it may have a target relation associated with it (domain constraints, for example, do not). Not invalidating the target relation's cache when issuing the rename can result in issues with subsequent commands that refer to the old constraint name through the relation cache, causing various failures. One pattern spotted was using CREATE TABLE LIKE after a constraint rename. Reported-by: Stuart <sfbarbee@gmail.com> Author: Amit Langote Reviewed-by: Michael Paquier Discussion: https://postgr.es/m/2047094.V130LYfLq4@station53.ousa.org
2018-12-16  Make error handling in parallel pg_upgrade less bogus.  (Tom Lane)
reap_child() basically ignored the possibility of either an error in waitpid() itself or a child process failure on signal. We don't really need to do more than report and crash hard, but proceeding as though nothing is wrong is definitely Not Acceptable. The error report for nonzero child exit status was pretty off-point, as well. Noted while fooling around with child-process failure detection logic elsewhere. It's been like this a long time, so back-patch to all supported branches.
2018-12-13  Prevent GIN deleted pages from being reclaimed too early  (Alexander Korotkov)
When GIN vacuum deletes a posting tree page, it assumes that no concurrent searchers can access it, thanks to ginStepRight() locking two pages at once. However, since 9.4 searches can skip parts of posting trees while descending from the root. That leads to the risk that a page is deleted and reclaimed before a concurrent search can access it. This commit prevents that risk by waiting for every transaction that might still reference the page to finish. For binary compatibility, we can't change GinPageOpaqueData to store the corresponding transaction id; instead we reuse the page header's pd_prune_xid field, which is unused in index pages. Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com Author: Andrey Borodin, Alexander Korotkov Reviewed-by: Alexander Korotkov Backpatch-through: 9.4
2018-12-13  Prevent deadlock in ginRedoDeletePage()  (Alexander Korotkov)
On standby, ginRedoDeletePage() can run concurrently with read-only queries. Those queries can traverse the posting tree in two ways. 1) Using rightlinks via ginStepRight(), which locks the next page before unlocking its left sibling. 2) Using downlinks via ginFindLeafPage(), which locks at most one page at a time. The original lock order was: page, parent, left sibling. That lock order can deadlock with ginStepRight(). In order to prevent deadlock, this commit changes the lock order to: left sibling, page, parent. Note that the position of the parent in the locking order seems insignificant, because we only lock one page at a time while traversing downlinks. Reported-by: Chen Huajun Diagnosed-by: Chen Huajun, Peter Geoghegan, Andrey Borodin Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com Author: Alexander Korotkov Backpatch-through: 9.4
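The principle behind the fix can be shown with a generic lock-ordering sketch (pthread mutexes standing in for buffer locks; this is not the GIN code): as long as every code path acquires locks in the one agreed order, left sibling, then page, then parent, two concurrent traversals can never each hold a lock the other is waiting on.

    #include <pthread.h>

    static pthread_mutex_t left_sibling = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t page = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t parent = PTHREAD_MUTEX_INITIALIZER;

    static void
    redo_delete_page(void)
    {
        /* corrected order: left sibling, then page, then parent */
        pthread_mutex_lock(&left_sibling);
        pthread_mutex_lock(&page);
        pthread_mutex_lock(&parent);
        /* ... unlink the page from its siblings and parent ... */
        pthread_mutex_unlock(&parent);
        pthread_mutex_unlock(&page);
        pthread_mutex_unlock(&left_sibling);
    }

    int
    main(void)
    {
        redo_delete_page();
        return 0;
    }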
2018-12-12  Repair bogus EPQ plans generated for postgres_fdw foreign joins.  (Tom Lane)
postgres_fdw's postgresGetForeignPlan() assumes without checking that the outer_plan it's given for a join relation must have a NestLoop, MergeJoin, or HashJoin node at the top. That's been wrong at least since commit 4bbf6edfb (which could cause insertion of a Sort node on top) and it seems like a pretty unsafe thing to Just Assume even without that. Through blind good fortune, this doesn't seem to have any worse consequences today than strange EXPLAIN output, but it's clearly trouble waiting to happen. To fix, test the node type explicitly before touching Join-specific fields, and avoid jamming the new tlist into a node type that can't do projection. Export a new support function from createplan.c to avoid building low-level knowledge about the latter into FDWs. Back-patch to 9.6 where the faulty coding was added. Note that the associated regression test cases don't show any changes before v11, apparently because the tests back-patched with 4bbf6edfb don't actually exercise the problem case before then (there's no top-level Sort in those plans). Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
2018-12-12  Repair bogus handling of multi-assignment Params in upper plan levels.  (Tom Lane)
Our support for multiple-set-clauses in UPDATE assumes that the Params referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan in the targetlist of the plan node that calculates the updated row. (Yeah, it's a hack...) In some PG branches it's possible that a Result node gets inserted between the primary calculation of the update tlist and the ModifyTable node. setrefs.c did the wrong thing in this case and left the upper-level Params as Params, causing a crash at runtime. What it should do is replace them with "outer" Vars referencing the child plan node's output. That's a result of careless ordering of operations in fix_upper_expr_mutator, so we can fix it just by reordering the code. Fix fix_join_expr_mutator similarly for consistency, even though join nodes could never appear in such a context. (In general, it seems likely to be a bit cheaper to use Vars than Params in such situations anyway, so this patch might offer a tiny performance improvement.) The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff was introduced, so back-patch that far. However, this may be a live bug only in 9.6.x and 10.x, as the other branches don't seem to want to calculate the final tlist below the Result node. (That plan shape change between branches might be a mini-bug in itself, but I'm not really interested in digging into the reasons for that right now. Still, add a regression test memorializing what we expect there, so we'll notice if it changes again.) Per bug report from Eduards Bezverhijs. Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com
2018-12-11  Fix test_rls_hooks to assign expression collations properly.  (Tom Lane)
This module overlooked this necessary fixup step on the results of transformWhereClause(). It accidentally worked anyway, because the constructed expression involved type "name" which is not collatable, but it fell over while I was experimenting with changing "name" to be collatable. Back-patch, not because there's any live bug here in back branches, but because somebody might use this code as a model for some real application and then not understand why it doesn't work.
2018-12-10  Raise some timeouts to 180s, in test code.  (Noah Misch)
Slow runs of buildfarm members chipmunk, hornet and mandrill saw the shorter timeouts expire. The 180s timeout in poll_query_until has been trouble-free since 2a0f89cd717ce6d49cdc47850577823682167e87 introduced it two years ago, so use 180s more widely. Back-patch to 9.6, where the first of these timeouts was introduced. Reviewed by Michael Paquier. Discussion: https://postgr.es/m/20181209001601.GC2973271@rfd.leadboat.com
2018-12-10  Add stack depth checks to key recursive functions in backend/nodes/*.c.  (Tom Lane)
Although copyfuncs.c has a check_stack_depth call in its recursion, equalfuncs.c, outfuncs.c, and readfuncs.c lacked one. This seems unwise. Likewise fix planstate_tree_walker(), in branches where that exists. Discussion: https://postgr.es/m/30253.1544286631@sss.pgh.pa.us
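A generic standalone sketch of the kind of guard being added (an explicit depth counter stands in for check_stack_depth(), which compares an actual stack address against a configured limit): the recursive walker reports an error on pathologically deep input instead of overflowing the stack.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_DEPTH 10000

    typedef struct Node
    {
        struct Node *left;
        struct Node *right;
    } Node;

    static void
    walk(const Node *node, int depth)
    {
        if (node == NULL)
            return;
        if (depth > MAX_DEPTH)  /* the moral equivalent of check_stack_depth() */
        {
            fprintf(stderr, "expression too complex\n");
            exit(EXIT_FAILURE);
        }
        walk(node->left, depth + 1);
        walk(node->right, depth + 1);
    }

    int
    main(void)
    {
        Node        leaf = {NULL, NULL};
        Node        root = {&leaf, NULL};

        walk(&root, 0);
        return 0;
    }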
2018-12-07  Fix misapplication of pgstat_count_truncate to wrong relation.  (Tom Lane)
The stanza of ExecuteTruncate[Guts] that truncates a target table's toast relation re-used the loop local variable "rel" to reference the toast rel. This was safe enough when written, but commit d42358efb added code below that that supposed "rel" still pointed to the parent table. Therefore, the stats counter update was applied to the wrong relcache entry (the toast rel not the user rel); and if we were unlucky and that relcache entry had been flushed during reindex_relation, very bad things could ensue. (I'm surprised that CLOBBER_CACHE_ALWAYS testing hasn't found this. I'm even more surprised that the problem wasn't detected during the development of d42358efb; it must not have been tested in any case with a toast table, as the incorrect stats counts are very obvious.) To fix, replace use of "rel" in that code branch with a more local variable. Adjust test cases added by d42358efb so that some of them use tables with toast tables. Per bug #15540 from Pan Bian. Back-patch to 9.5 where d42358efb came in. Discussion: https://postgr.es/m/15540-01078812338195c0@postgresql.org
2018-12-06  Improve our response to invalid format strings, and detect more cases.  (Tom Lane)
Places that are testing for *printf failure ought to include the format string in their error reports, since bad-format-string is one of the more likely causes of such failure. This both makes it easier to find and repair the mistake, and provides at least some useful info to the user who stumbles across such a problem. Also, tighten snprintf.c to report EINVAL for an invalid flag or final character in a format %-spec (including the case where the %-spec is missing a final character altogether). This seems like better project policy, and it also allows removing an instruction or two from the hot code path. Back-patch the error reporting change in pvsnprintf, since it should be harmless and may be helpful; but not the snprintf.c change. Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an invalid translated format string. These changes don't fix that error, but they should improve matters next time we make such a mistake. Discussion: https://postgr.es/m/15511-1d8b6a0bc874112f@postgresql.org
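A minimal standalone sketch of the reporting policy (an illustrative wrapper, not pvsnprintf itself): when a *printf variant fails, the complaint includes the format string, since a bad format is among the most likely causes.

    #include <errno.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void
    checked_snprintf(char *buf, size_t size, const char *fmt, ...)
    {
        va_list     args;
        int         rc;

        va_start(args, fmt);
        errno = 0;
        rc = vsnprintf(buf, size, fmt, args);
        va_end(args);
        if (rc < 0)
        {
            /* name the format string: it is probably the culprit */
            fprintf(stderr, "vsnprintf failed for format \"%s\": %s\n",
                    fmt, strerror(errno));
            exit(EXIT_FAILURE);
        }
    }

    int
    main(void)
    {
        char        buf[64];

        checked_snprintf(buf, sizeof(buf), "%d bottles", 99);
        puts(buf);
        return 0;
    }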
2018-11-29  Ensure static libraries have correct mod time even if ranlib messes it up.  (Tom Lane)
In at least Apple's version of ranlib, the output file is updated to have a mod time equal to the max of the timestamps of its components, and that data only has seconds precision. On a filesystem with sub-second file timestamp precision --- say, APFS --- this can result in the finished static library appearing older than its input files, which causes useless rebuilds and possible outright failures in parallel makes. We've only seen this reported in the field from people using Apple's ranlib with a non-Apple make, because Apple's make doesn't know about sub-second timestamps either so it doesn't decide rebuilds are needed. But Apple's ranlib presumably shares code with at least some BSDen, so it's not that unlikely that the same problem could arise elsewhere. To fix, just "touch" the output file after ranlib finishes. We seem to need this in only one place. There are other calls of ranlib in our makefiles, but they are working on intermediate files whose timestamps are not actually important, or else on an installed static library for which sub-second timestamp precision is unlikely to matter either. (Also, so far as I can tell, Apple's ranlib doesn't mess up the file timestamp in the latter usage anyhow.) In passing, change "ranlib" to "$(RANLIB)" in one place that was bypassing the make macro for no good reason. Per bug #15525 from Jack Kelly (via Alyssa Ross). Back-patch to all supported branches. Discussion: https://postgr.es/m/15525-a30da084f17a1faa@postgresql.org
2018-11-29  Fix handling of synchronous replication for stopping WAL senders  (Michael Paquier)
This fixes an oversight from c6c3334, which forgot that if a subset of WAL senders are stopping and in a sync state, other WAL senders could still be waiting for a WAL position to be synced while committing a transaction. The stopping senders would then fail to release waiters, potentially breaking synchronous replication guarantees. This commit makes sure that WAL senders that are stopping are still able to release waiters and are tracked properly. On 9.4, this can also trigger an assertion failure at instance shutdown when, for example, max_wal_senders is set to 1 and the WAL sender cannot find itself in a synchronous state. Reported-by: Paul Guo Author: Paul Guo, Michael Paquier Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com Backpatch-through: 9.4
2018-11-28  Don't set PAM_RHOST for Unix sockets.  (Thomas Munro)
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix sockets. This caused Linux PAM's libaudit integration to make DNS requests for that name. It's not exactly clear what value PAM_RHOST should have in that case, but it seems clear that we shouldn't set it to an unresolvable name, so don't do that. Back-patch to 9.6. Bug #15520. Author: Thomas Munro Reviewed-by: Peter Eisentraut Reported-by: Albert Schabhuetl Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
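The shape of the fix, as an illustrative fragment (assuming libpam's <security/pam_appl.h>; this is not the actual auth.c code): PAM_RHOST is set only when there is a real remote host, so modules hanging off PAM, such as libaudit, never try to resolve a placeholder name.

    #include <security/pam_appl.h>

    static void
    maybe_set_rhost(pam_handle_t *pamh, const char *remote_host,
                    int is_unix_socket)
    {
        /* for Unix sockets, leave PAM_RHOST unset entirely */
        if (!is_unix_socket)
            (void) pam_set_item(pamh, PAM_RHOST, remote_host);
    }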
2018-11-28  Do not decode TOAST data for table rewrites  (Tomas Vondra)
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged using XLOG / FPI records, and thus (correctly) ignored in decoding. But the associated TOAST table is WAL-logged as plain INSERT records, and so was logically decoded and passed to the reorder buffer. That has severe consequences with TOAST tables of non-trivial size. Firstly, the reorder buffer has to keep all those changes, possibly spilling them to a file, incurring I/O costs and disk space. Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into a hash table, which got discarded only after processing the row from the main heap. But as the main heap is not decoded for rewrites, this never happened, so all the TOAST data accumulated in memory, resulting either in excessive memory consumption or OOM. The fix is simple, as commit e9edc1ba already introduced infrastructure (namely the HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST tables, but it only applied it to system tables. So simply use it for all TOAST data in raw_heap_insert(). That would however solve only the memory consumption issue - the TOAST changes would still be decoded and added to the reorder buffer, and spilled to disk (although without TOAST tuple data, so much smaller). But we can solve that by tweaking DecodeInsert() to just ignore such INSERT records altogether, using the XLH_INSERT_CONTAINS_NEW_TUPLE flag, instead of skipping them later in ReorderBufferCommit(). Review: Masahiko Sawada Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com Backpatch: 9.4-, where logical decoding was introduced
2018-11-26  Fix ac218aa4f6 to work on versions before 9.5.  (Andres Freund)
Unfortunately ac218aa4f6 missed the fact that a reference to 'pg_catalog.regnamespace'::regclass wouldn't work before that type is known. Fix that, by replacing the regtype usage with a join to pg_type. Reported-By: Tom Lane Author: Andres Freund Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us Backpatch: 9.5-, like ac218aa4f6
2018-11-26  Update pg_upgrade test for reg* to include regrole and regnamespace.  (Andres Freund)
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were added in 9.5, pg_upgrade's check for reg* types wasn't updated. While regrole currently is safe, regnamespace is not. It seems unlikely that anybody uses regnamespace inside catalog tables across a pg_upgrade, but the tests should be correct nevertheless. While at it, reorder the types checked in the query to be alphabetical; otherwise it's annoying to compare the existing list against the types being tested for. Author: Andres Freund Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com Backpatch: 9.5-, as regrole/regnamespace
2018-11-26  Fix translation of special characters in psql's LaTeX output modes.  (Tom Lane)
latex_escaped_print() mistranslated \ and failed to provide any translation for # ^ and ~, all of which would typically lead to LaTeX document syntax errors. In addition it didn't translate < > and |, which would typically render as unexpected characters. To some extent this represents shortcomings in ancient versions of LaTeX, which if memory serves had no easy way to render these control characters as ASCII text. But that's been fixed for, um, decades. In any case there is no value in emitting guaranteed-to-fail output for these characters. Noted while fooling with test cases added by commit 9a98984f4. Back-patch the code change to all supported versions.
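A standalone sketch of a corrected mapping, in the spirit of the fix (the exact replacement strings used by psql may differ): each LaTeX special character is rewritten to a sequence that renders as the literal character.

    #include <stdio.h>

    static const char *
    latex_escape(char c)
    {
        switch (c)
        {
            case '\\': return "\\textbackslash{}";
            case '#':  return "\\#";
            case '$':  return "\\$";
            case '%':  return "\\%";
            case '&':  return "\\&";
            case '_':  return "\\_";
            case '{':  return "\\{";
            case '}':  return "\\}";
            case '^':  return "\\^{}";
            case '~':  return "\\~{}";
            case '<':  return "\\textless{}";
            case '>':  return "\\textgreater{}";
            case '|':  return "\\textbar{}";
            default:   return NULL;     /* emit the character itself */
        }
    }

    int
    main(void)
    {
        const char *s = "50% of $10 & #5 ~ x^2";

        for (; *s; s++)
        {
            const char *rep = latex_escape(*s);

            if (rep)
                fputs(rep, stdout);
            else
                putchar(*s);
        }
        putchar('\n');
        return 0;
    }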
2018-11-24  Update additional float4/8 expected-output files.  (Tom Lane)
I forgot that the back branches have more variant files than HEAD :-(. Per buildfarm. Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
2018-11-24  Fix float-to-integer coercions to handle edge cases correctly.  (Tom Lane)
ftoi4 and its sibling coercion functions did their overflow checks in a way that looked superficially plausible, but actually depended on an assumption that the MIN and MAX comparison constants can be represented exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8, and dtoi8, resulting in a possibility that values near the MAX limit will be wrongly converted (to negative values) when they need to be rejected. Also, because we compared before rounding off the fractional part, the other three functions threw errors for values that really ought to get rounded to the min or max integer value. Fix by doing rint() first (requiring an assumption that it handles NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already), and by comparing to values that should coerce to float exactly, namely INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies between these six functions. This back-patches commits cbdb8b4c0 and 452b637d4. In the 9.4 branch, also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and related constants to c.h, so that these functions can rely on them. Per bug #15519 from Victor Petrovykh. Patch by me; thanks to Andrew Gierth for analysis and discussion. Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
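A minimal standalone version of the corrected conversion rule (illustrative; the server code uses its own error reporting and the PG_INT32_MIN constant): round first, then range-check against bounds that are exactly representable in the float domain, namely INT32_MIN and -INT32_MIN.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int32_t
    float4_to_int4(float num)
    {
        num = rintf(num);       /* round before range checking */
        if (isnan(num) ||
            !(num >= (float) INT32_MIN && num < -((float) INT32_MIN)))
        {
            fprintf(stderr, "integer out of range\n");
            exit(EXIT_FAILURE);
        }
        return (int32_t) num;
    }

    int
    main(void)
    {
        printf("%d\n", float4_to_int4(2147483520.0f)); /* largest float4 below 2^31 */
        printf("%d\n", float4_to_int4(-0.4f));         /* rounds to 0, not an error */
        return 0;
    }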
2018-11-19  PANIC on fsync() failure.  (Thomas Munro)
On some operating systems, it doesn't make sense to retry fsync(), because dirty data cached by the kernel may have been dropped on write-back failure. In that case the only remaining copy of the data is in the WAL. A subsequent fsync() could appear to succeed, but not have flushed the data. That means that a future checkpoint could apparently complete successfully but have lost data. Therefore, violently prevent any future checkpoint attempts by panicking on the first fsync() failure. Note that we already did the same for WAL data; this change extends that behavior to non-temporary data files. Provide a GUC data_sync_retry to control this new behavior, for users of operating systems that don't eject dirty data, and possibly forensic/testing uses. If it is set to on and the write-back error was transient, a later checkpoint might genuinely succeed (on a system that does not throw away buffers on failure); if the error is permanent, later checkpoints will continue to fail. The GUC defaults to off, meaning that we panic. Back-patch to all supported releases. There is still a narrow window for error-loss on some operating systems: if the file is closed and later reopened and a write-back error occurs in the intervening time, but the inode has the bad luck to be evicted due to memory pressure before we reopen, we could miss the error. A later patch will address that with a scheme for keeping files with dirty data open at all times, but we judge that to be too complicated to back-patch. Author: Craig Ringer, with some adjustments by Thomas Munro Reported-by: Craig Ringer Reviewed-by: Robert Haas, Thomas Munro, Andres Freund Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
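A sketch of the policy in miniature (the data_sync_retry name mirrors the GUC described above, but the code is illustrative, not the server's fsync machinery): an fsync() failure on a permanent data file is treated as unrecoverable unless retries have been explicitly requested.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int  data_sync_retry = 0;    /* default: do not trust a later retry */

    static void
    flush_or_die(int fd, const char *path)
    {
        if (fsync(fd) != 0)
        {
            if (!data_sync_retry)
            {
                /* the kernel may have dropped the dirty pages: no safe retry */
                fprintf(stderr, "PANIC: could not fsync file \"%s\": %s\n",
                        path, strerror(errno));
                abort();
            }
            /* retry mode: report, and let a later checkpoint try again */
            fprintf(stderr, "could not fsync file \"%s\": %s\n",
                    path, strerror(errno));
        }
    }

    int
    main(void)
    {
        int         fd = open("fsync_demo.tmp", O_CREAT | O_WRONLY, 0600);

        if (fd < 0 || write(fd, "x", 1) != 1)
            return 1;
        flush_or_die(fd, "fsync_demo.tmp");
        close(fd);
        return 0;
    }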
2018-11-19  Don't forget about failed fsync() requests.  (Thomas Munro)
If fsync() fails, md.c must keep the request in its bitmap, so that future attempts will try again. Back-patch to all supported releases. Author: Thomas Munro Reviewed-by: Amit Kapila Reported-by: Andrew Gierth Discussion: https://postgr.es/m/87y3i1ia4w.fsf%40news-spur.riddles.org.uk
2018-11-18  Add valgrind suppressions for wcsrtombs optimizations  (Tomas Vondra)
wcsrtombs (called through wchar2char from common functions like lower, upper, etc.) uses various optimizations that may look like access to uninitialized data, triggering valgrind reports. For example AVX2 instructions load data in 256-bit chunks, and gconv does something similar with 32-bit chunks. This is faster than accessing the bytes one by one, and the uninitialized part of the buffer is not actually used. So suppress the bogus reports. The exact stack depends on possible optimizations - it might be AVX, SSE (as in the report by Aleksander Alekseev) or something else. Hence the last frame is wildcarded, to deal with this. Backpatch all the way back to 9.4. Author: Tomas Vondra Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain
2018-11-14  Second try at fixing numeric data passed through an ECPG SQLDA.  (Tom Lane)
In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the grounds that we should duplicate the state of the numeric value's digit buffer even when all the digits are zeroes. However, that still isn't quite right, because another possible state of the digit buffer is buf == digits == NULL (this occurs for a NaN). As the code now stands, it'll invoke memcpy with a NULL source address and zero bytecount, which we know a few platforms crash on. Hence, reinstate the no-copy short-circuit, but make it test specifically for buf != NULL rather than some other condition. In hindsight, the ndigits test (added by commit f2ae9f9c3) was almost certainly meant to fix the NaN case not the all-zeroes case as the associated thread alleged. As before, back-patch to all supported versions. Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24
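A minimal standalone illustration of the hazard and the guard (illustrative, not the sqlda.c code): memcpy() with a NULL source pointer is undefined behavior even when the byte count is zero, and some platforms do crash, so the copy is short-circuited unless the source buffer actually exists.

    #include <stdio.h>
    #include <string.h>

    static void
    copy_digit_buffer(char *dst, const char *buf, size_t nbytes)
    {
        /* a NaN leaves buf == NULL; never hand memcpy a NULL source */
        if (buf != NULL)
            memcpy(dst, buf, nbytes);
    }

    int
    main(void)
    {
        char        dst[4] = "";

        copy_digit_buffer(dst, NULL, 0);    /* NaN-style value: safe no-op */
        copy_digit_buffer(dst, "123", 4);
        printf("%s\n", dst);
        return 0;
    }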