path: root/src
2016-07-19  Remove very-obsolete estimates of shmem usage from postgresql.conf.sample.  (Tom Lane)
runtime.sgml used to contain a table of estimated shared memory consumption rates for max_connections and some other GUCs. Commit 390bfc643 removed that on the well-founded grounds that (a) we weren't maintaining the entries well and (b) it no longer mattered so much once we got out from under SysV shmem limits. But it missed that there were even-more-obsolete versions of some of those numbers in comments in postgresql.conf.sample. Remove those too. Back-patch to 9.3 where the aforesaid commit went in.
2016-07-19  Fix MSVC build for changes in zic.  (Tom Lane)
Ooops, I missed back-patching commit f5f15ea6a along with the other stuff.
2016-07-19  Sync back-branch copies of the timezone code with IANA release tzcode2016c.  (Tom Lane)
Back-patch commit 1c1a7cbd6a1600c9, along with subsequent portability fixes, into all active branches. Also, back-patch commits 696027727 and 596857043 (addition of zic -P option) into 9.1 and 9.2, just to reduce differences between the branches. src/timezone/ is now largely identical in all active branches, except that in 9.1, pgtz.c retains the initial-timezone-selection code that was moved over to initdb in 9.2. Ordinarily we wouldn't risk this much code churn in back branches, but it seems necessary in this case, because among the changes are two feature additions in the "zic" zone data file compiler (a larger limit on the number of allowed DST transitions, and addition of a "%z" escape in zone abbreviations). IANA have not yet started to use those features in their tzdata files, but presumably they will before too long. If we don't update then we'll be unable to adopt new timezone data. Also, installations built with --with-system-tzdata (which includes most distro-supplied builds, I believe) might fail even if we don't update our copies of the data files. There are assorted bug fixes too, mostly affecting obscure timezones or post-2037 dates. Discussion: <13601.1468868947@sss.pgh.pa.us>
2016-07-16  Fix crash in close_ps() for NaN input coordinates.  (Tom Lane)
The Assert() here seems unreasonably optimistic. Andreas Seltenreich found that it could fail with NaNs in the input geometries, and it seems likely to me that it might fail in corner cases due to roundoff error, even for ordinary input values. As a band-aid, make the function return SQL NULL instead of crashing. Report: <87d1md1xji.fsf@credativ.de>
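A minimal sketch of the guard, assuming the usual fmgr argument-fetch macros and the Point/LSEG field layout; the placement within close_ps() is illustrative:

    Point  *pt   = PG_GETARG_POINT_P(0);
    LSEG   *lseg = PG_GETARG_LSEG_P(1);

    /* Return SQL NULL rather than Assert-crashing on NaN coordinates */
    if (isnan(pt->x) || isnan(pt->y) ||
        isnan(lseg->p[0].x) || isnan(lseg->p[0].y) ||
        isnan(lseg->p[1].x) || isnan(lseg->p[1].y))
        PG_RETURN_NULL();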
2016-07-15  Fix torn-page, unlogged xid and further risks from heap_update().  (Andres Freund)
When heap_update needs to look for a page for the new tuple version, because the current one doesn't have sufficient free space, or when columns have to be processed by the tuple toaster, it has to release the lock on the old page while doing so. Otherwise there'd be lock ordering and lock nesting issues. To prevent concurrent sessions from trying to update / delete / lock the tuple while the page's content lock is released, the tuple's xmax is set to the current session's xid. That unfortunately was done without any WAL logging, thereby violating the rule that no XIDs may appear on disk without a corresponding WAL record. If the database were to crash / fail over when the page level lock is released, and some activity led to the page being written out to disk, the xid could end up being reused; potentially leading to the row becoming invisible. There might be additional risks by not having t_ctid point at the tuple itself, without having set the appropriate lock infomask fields. To fix, compute the appropriate xmax/infomask combination for locking the tuple, and perform WAL logging using the existing XLOG_HEAP_LOCK record. That allows the fix to be backpatched. This issue has existed for a long time. There appear to have been partial attempts at preventing dangers, but these were never fully implemented, and were removed a long time ago, in 11919160 (cf. HEAP_XMAX_UNLOGGED). In master / 9.6, there's an additional issue, namely that the visibilitymap's freeze bit isn't reset at that point yet. Since that's a new issue, introduced only in a892234f830, that'll be fixed in a separate commit. Author: Masahiko Sawada and Andres Freund Reported-By: Different aspects by Thomas Munro, Noah Misch, and others Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com Backpatch: 9.1/all supported versions
2016-07-15  Make HEAP_LOCK/HEAP2_LOCK_UPDATED replay reset HEAP_XMAX_INVALID.  (Andres Freund)
0ac5ad5 started to compress infomask bits in WAL records. Unfortunately the replay routines for XLOG_HEAP_LOCK/XLOG_HEAP2_LOCK_UPDATED forgot to reset the HEAP_XMAX_INVALID (and some other) hint bits. Luckily that's not problematic in the majority of cases, because after a crash/on a standby row locks aren't meaningful. Unfortunately that does not hold true in the presence of prepared transactions. This means that after a crash, or after promotion, row level locks held by a prepared, but not yet committed, transaction might not be enforced. Discussion: 20160715192319.ubfuzim4zv3rqnxv@alap3.anarazel.de Backpatch: 9.3, the oldest branch on which 0ac5ad5 is present.
2016-07-15  Avoid serializability errors when locking a tuple with a committed update  (Alvaro Herrera)
When key-share locking a tuple that has been not-key-updated, and the update is a committed transaction, in some cases we raised serializability errors: "ERROR: could not serialize access due to concurrent update". Because the key-share doesn't conflict with the update, the error is unnecessary and inconsistent with the case that the update hasn't committed yet. This causes problems for some usage patterns, even if it can be claimed that it's sufficient to retry the aborted transaction: given a steady stream of updating transactions and a long locking transaction, the long transaction can be starved indefinitely despite multiple retries. To fix, we recognize that HeapTupleSatisfiesUpdate can return HeapTupleUpdated when an updating transaction has committed, and that we need to deal with that case exactly as if it were a non-committed update: verify whether the two operations conflict, and if not, carry on normally. If they do conflict, however, there is a difference: in the HeapTupleBeingUpdated case we can just sleep until the concurrent transaction is gone, while in the HeapTupleUpdated case this is not possible and we must raise an error instead. Per trouble report from Olivier Dony. In addition to a couple of test cases that verify the changed behavior, I added a test case to verify the behavior that remains unchanged, namely that errors are raised when an update that modifies the key is used. That must still generate serializability errors. One pre-existing test case changes behavior; per discussion, the new behavior is actually the desired one. Discussion: https://www.postgresql.org/message-id/560AA479.4080807@odoo.com https://www.postgresql.org/message-id/20151014164844.3019.25750@wrigleys.postgresql.org Backpatch to 9.3, where the problem appeared.
2016-07-14  Fix GiST index build for NaN values in geometric types.  (Tom Lane)
GiST index build could go into an infinite loop when presented with boxes (or points, circles or polygons) containing NaN component values. This happened essentially because the code assumed that x == x is true for any "double" value x; but it's not true for NaNs. The looping behavior was not the only problem though: we also attempted to sort the items using simple double comparisons. Since NaNs violate the trichotomy law, qsort could (in principle at least) get arbitrarily confused and mess up the sorting of ordinary values as well as NaNs. And we based splitting choices on box size calculations that could produce NaNs, again resulting in undesirable behavior. To fix, replace all comparisons of doubles in this logic with float8_cmp_internal, which is NaN-aware and is careful to sort NaNs consistently, higher than any non-NaN. Also rearrange the box size calculation to not produce NaNs; instead it should produce an infinity for a box with NaN on one side and not-NaN on the other. I don't by any means claim that this solves all problems with NaNs in geometric values, but it should at least make GiST index insertion work reliably with such data. It's likely that the index search side of things still needs some work, and probably regular geometric operations too. But with this patch we're laying down a convention for how such cases ought to behave. Per bug #14238 from Guang-Dih Lei. Back-patch to 9.2; the code used before commit 7f3bd86843e5aad8 is quite different and doesn't lock up on my simple test case, nor on the submitter's dataset. Report: <20160708151747.1426.60150@wrigleys.postgresql.org> Discussion: <28685.1468246504@sss.pgh.pa.us>
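The sorting convention described above can be sketched as a plain C comparator; the backend's float8_cmp_internal follows the same scheme, but this version is only an illustration:

    #include <math.h>

    /* qsort-style comparator: NaN compares equal to NaN and greater than any
     * ordinary value, so the trichotomy law holds and sorting stays sane. */
    static int
    nan_aware_float8_cmp(double a, double b)
    {
        if (isnan(a))
            return isnan(b) ? 0 : 1;
        if (isnan(b))
            return -1;
        if (a < b)
            return -1;
        if (a > b)
            return 1;
        return 0;
    }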
2016-07-09  Fix TAP tests and MSVC scripts for pathnames with spaces.  (Tom Lane)
Change assorted places in our Perl code that did things like system("prog $path/file"); to do it more like system('prog', "$path/file"); which is safe against spaces and other special characters in the path variable. The latter was already the prevailing style, but a few bits of code hadn't gotten this memo. Back-patch to 9.4 as relevant. Michael Paquier, Kyotaro Horiguchi Discussion: <20160704.160213.111134711.horiguchi.kyotaro@lab.ntt.co.jp>
2016-07-01  Be more paranoid in ruleutils.c's get_variable().  (Tom Lane)
We were merely Assert'ing that the Var matched the RTE it's supposedly from. But if the user passes incorrect information to pg_get_expr(), the RTE might in fact not match; this led either to Assert failures or core dumps, as reported by Chris Hanks in bug #14220. To fix, just convert the Asserts to test-and-elog. Adjust an existing test-and-elog elsewhere in the same function to be consistent in wording. (If we really felt these were user-facing errors, we might promote them to ereport's; but I can't convince myself that they're worth translating.) Back-patch to 9.3; the problematic code doesn't exist before that, and a quick check says that 9.2 doesn't crash on such cases. Michael Paquier and Thomas Munro Report: <20160629224349.1407.32667@wrigleys.postgresql.org>
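The conversion follows the usual pattern of turning a debug-only assertion into an always-on check with a coherent message; the condition shown here is illustrative, not the exact one in get_variable():

    /* before: only checked in assert-enabled builds, a crash otherwise */
    Assert(attnum <= list_length(rte->eref->colnames));

    /* after: always checked, reports an error instead of dumping core */
    if (attnum > list_length(rte->eref->colnames))
        elog(ERROR, "invalid attnum %d for relation \"%s\"",
             attnum, rte->eref->aliasname);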
2016-06-30  Fix typo in ReorderBufferIterTXNInit().  (Tom Lane)
This looks like it would cause changes from subtransactions to be missed by the iterator being constructed, if those changes had been spilled to disk previously. This implies that large subtransactions might be lost (in whole or in part) by logical replication. Found and fixed by Petru-Florin Mihancea, per bug #14208. Report: <20160622144830.5791.22512@wrigleys.postgresql.org>
2016-06-27  Fix CREATE MATVIEW/CREATE TABLE AS ... WITH NO DATA to not plan the query.  (Tom Lane)
Previously, these commands always planned the given query and went through executor startup before deciding not to actually run the query if WITH NO DATA is specified. This behavior is problematic for pg_dump because it may cause errors to be raised that we would rather not see before a REFRESH MATERIALIZED VIEW command is issued. See for example bug #13907 from Marian Krucina. This change is not sufficient to fix that particular bug, because we also need to tweak pg_dump to issue the REFRESH later, but it's a necessary step on the way. A user-visible side effect of doing things this way is that the returned command tag for WITH NO DATA cases will now be "CREATE MATERIALIZED VIEW" or "CREATE TABLE AS", not "SELECT 0". We could preserve the old behavior but it would take more code, and arguably that was just an implementation artifact not intended behavior anyhow. In 9.5 and HEAD, also get rid of the static variable CreateAsReladdr, which was trouble waiting to happen; there is not any prohibition on nested CREATE commands. Back-patch to 9.3 where CREATE MATERIALIZED VIEW was introduced. Michael Paquier and Tom Lane Report: <20160202161407.2778.24659@wrigleys.postgresql.org>
2016-06-24  Fix handling of multixacts predating pg_upgrade  (Alvaro Herrera)
After pg_upgrade, it is possible that some tuples' Xmax have multixacts corresponding to the old installation; such multixacts cannot have running members anymore. In many code sites we already know not to read them and clobber them silently, but at least when VACUUM tries to freeze a multixact or determine whether one needs freezing, there's an attempt to resolve it to its member transactions by calling GetMultiXactIdMembers, and if the multixact value is "in the future" with regards to the current valid multixact range, an error like this is raised: ERROR: MultiXactId 123 has not been created yet -- apparent wraparound and vacuuming fails. Per discussion with Andrew Gierth, it is completely bogus to try to resolve multixacts coming from before a pg_upgrade, regardless of where they stand with regards to the current valid multixact range. It's possible to get from under this problem by doing SELECT FOR UPDATE of the problem tuples, but if tables are large, this is slow and tedious, so a more thorough solution is desirable. To fix, we realize that multixacts in xmax created in 9.2 and previous have a specific bit pattern that is never used in 9.3 and later (we already knew this, per comments and infomask tests sprinkled in various places, but we weren't leveraging this knowledge appropriately). Whenever the infomask of the tuple matches that bit pattern, we just ignore the multixact completely as if Xmax wasn't set; or, in the case of tuple freezing, we act as if an unwanted value is set and clobber it without decoding. This guarantees that no errors will be raised, and that the values will be progressively removed until all tables are clean. Most callers of GetMultiXactIdMembers are patched to recognize directly that the value is a removable "empty" multixact and avoid calling GetMultiXactIdMembers altogether. To avoid changing the signature of GetMultiXactIdMembers() in back branches, we keep the "allow_old" boolean flag but rename it to "from_pgupgrade"; if the flag is true, we always return an empty set instead of looking up the multixact. (I suppose we could remove the argument in the master branch, but I chose not to do so in this commit). This was broken all along, but the error-facing message appeared first because of commit 8e9a16ab8f7f and was partially fixed in a25c2b7c4db3. This fix, backpatched all the way back to 9.3, goes approximately in the same direction as a25c2b7c4db3 but should cover all cases. Bug analysis by Andrew Gierth and Álvaro Herrera. A number of public reports match this bug: https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
2016-06-22  Make "postgres -C guc" print "" not "(null)" for null-valued GUCs.  (Tom Lane)
Commit 0b0baf262 et al made this case print "(null)" on the grounds that that's what happened on platforms that didn't crash. But neither behavior was actually intentional. What we should print is just an empty string, for compatibility with the behavior of SHOW and other ways of examining string GUCs. Those code paths don't distinguish NULL from empty strings, so we should not here either. Per gripe from Alain Radix. Like the previous patch, back-patch to 9.2 where -C option was introduced. Discussion: <CA+YdpwxPUADrmxSD7+Td=uOshMB1KkDN7G7cf+FGmNjjxMhjbw@mail.gmail.com>
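A sketch of the intended -C behavior (GetConfigOption is the real lookup routine; the surrounding code is illustrative):

    const char *config_val = GetConfigOption(config_name, false, false);

    /* A NULL-valued string GUC prints as an empty line, matching SHOW */
    puts(config_val != NULL ? config_val : "");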
2016-06-20  Add missing check for malloc failure in plpgsql_extra_checks_check_hook().  (Tom Lane)
Per report from Andreas Seltenreich. Back-patch to affected versions. Report: <874m8nn0hv.fsf@elite.ansel.ydns.eu>
2016-06-17  Finish up XLOG_HINT renaming  (Alvaro Herrera)
Commit b8fd1a09f3 renamed XLOG_HINT to XLOG_FPI, but neglected two places. Backpatch to 9.3, like that commit.
2016-06-16  Fix validation of overly-long IPv6 addresses.  (Tom Lane)
The inet/cidr types sometimes failed to reject IPv6 inputs with too many colon-separated fields, instead translating them to '::/0'. This is the result of a thinko in the original ISC code that seems to be as yet unreported elsewhere. Per bug #14198 from Stefan Kaltenbrunner. Report: <20160616182222.5798.959@wrigleys.postgresql.org>
2016-06-16  Avoid crash in "postgres -C guc" for a GUC with a null string value.  (Tom Lane)
Emit "(null)" instead, which was the behavior all along on platforms that don't crash, eg OS X. Per report from Jehan-Guillaume de Rorthais. Back-patch to 9.2 where -C option was introduced. Michael Paquier Report: <20160615204036.2d35d86a@firost>
2016-06-15  Widen buffer for headers in psql's \watch command.  (Tom Lane)
This is to make sure there's enough room for translated versions of the message. HEAD's already addressed this issue, but back-patch a simple increase in the array size. Discussion: <20160612145532.GA22965@postgresql.kr>
2016-06-13  Fix multiple minor infelicities in aclchk.c error reports.  (Tom Lane)
pg_type_aclmask reported the wrong type's OID when complaining that it could not find a type's typelem. It also failed to provide a suitable errcode when the initially given OID doesn't exist (which is a user-facing error, since that OID can be user-specified). pg_foreign_data_wrapper_aclmask and pg_foreign_server_aclmask likewise lacked errcode specifications. Trivial cosmetic adjustments too. The wrong-type-OID problem was reported by Petru-Florin Mihancea in bug #14186; the other issues noted by me while reading the code. These errors all seem to be aboriginal in the respective routines, so back-patch as necessary. Report: <20160613163159.5798.52928@wrigleys.postgresql.org>
2016-06-09  Clarify documentation of ceil/ceiling/floor functions.  (Tom Lane)
Document these as "nearest integer >= argument" and "nearest integer <= argument", which will hopefully be less confusing than the old formulation. New wording is from Matlab via Dean Rasheed. I changed the pg_description entries as well as the SGML docs. In the back branches, this will only affect installations initdb'd in the future, but it should be harmless otherwise. Discussion: <CAEZATCW3yzJo-NMSiQs5jXNFbTsCEftZS-Og8=FvFdiU+kYuSA@mail.gmail.com>
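The distinction matters mainly for negative arguments, as a quick check with the C library equivalents illustrates:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* ceil: nearest integer >= argument; floor: nearest integer <= argument */
        printf("ceil(-42.8)  = %g\n", ceil(-42.8));     /* -42 */
        printf("floor(-42.8) = %g\n", floor(-42.8));    /* -43 */
        printf("ceil(42.2)   = %g\n", ceil(42.2));      /*  43 */
        printf("floor(42.2)  = %g\n", floor(42.2));     /*  42 */
        return 0;
    }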
2016-06-07  nls-global.mk: search build dir for source files, too  (Alvaro Herrera)
In VPATH builds, the build directory was not being searched for files in GETTEXT_FILES, leading to failure to construct the .pot files. This has bitten me all along, but never hard enough to get it fixed; I suppose not a lot of people use VPATH and NLS-enabled builds, and those that do, don't do "make update-po" often. This is a longstanding problem, so backpatch all the way back.
2016-06-06  Don't reset changes_since_analyze after a selective-columns ANALYZE.  (Tom Lane)
If we ANALYZE only selected columns of a table, we should not postpone auto-analyze because of that; other columns may well still need stats updates. As committed, the counter is left alone if a column list is given, whether or not it includes all analyzable columns of the table. Per complaint from Tomasz Ostrowski. It's been like this a long time, so back-patch to all supported branches. Report: <ef99c1bd-ff60-5f32-2733-c7b504eb960c@ato.waw.pl>
2016-06-03  Suppress -Wunused-result warnings about write(), again.  (Tom Lane)
Adopt the same solution as in commit aa90e148ca70a235, but this time let's put the ugliness inside the write_stderr() macro, instead of expecting each call site to deal with it. Back-port that decision into psql/common.c where I got the macro from in the first place. Per gripe from Peter Eisentraut.
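A minimal sketch of the macro-level approach: consume write()'s return value inside the macro so that -Wunused-result stays quiet at every call site (the macro actually used in psql/common.c may differ in detail):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define write_stderr(str) \
        do { \
            const char *str_ = (str); \
            int rc_; \
            rc_ = write(fileno(stderr), str_, strlen(str_)); \
            (void) rc_; \
        } while (0)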
2016-06-02  Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.  (Tom Lane)
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar signals and set a flag that was tested in various data-transfer loops. This was prone to errors of omission (cf commit 3c8aa6654); and even if the client-side response was prompt, we did nothing that would cause long-running SQL commands (e.g. CREATE INDEX) to terminate early. Also, the master process would effectively do nothing at all upon receipt of SIGINT; the only reason it seemed to work was that in typical scenarios the signal would also be delivered to the child processes. We should support termination when a signal is delivered only to the master process, though. Windows builds had no console interrupt handler, so they would just fall over immediately at control-C, again leaving long-running SQL commands to finish unmolested. To fix, remove the flag-checking approach altogether. Instead, allow the Unix signal handler to send a cancel request directly and then exit(1). In the master process, also have it forward the signal to the children. On Windows, add a console interrupt handler that behaves approximately the same. The main difference is that a single execution of the Windows handler can send all the cancel requests since all the info is available in one process, whereas on Unix each process sends a cancel only for its own database connection. In passing, fix an old problem that DisconnectDatabase tends to send a cancel request before exiting a parallel worker, even if nothing went wrong. This is at least a waste of cycles, and could lead to unexpected log messages, or maybe even data loss if it happened in pg_restore (though in the current code the problem seems to affect only pg_dump). The cause was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY state, causing PQtransactionStatus() to report PQTRANS_ACTIVE. That's normally harmless because the next PQexec() will silently clear the PGASYNC_BUSY state; but in a parallel worker we might exit without any additional SQL commands after a COPY step. So add an extra PQgetResult() call after a COPY to allow libpq to return to PGASYNC_IDLE state. This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore were introduced. Thanks to Kyotaro Horiguchi for Windows testing and code suggestions. Original-Patch: <7005.1464657274@sss.pgh.pa.us> Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
2016-06-01  Clean up some minor inefficiencies in parallel dump/restore.  (Tom Lane)
Parallel dump did a totally pointless query to find out the name of each table to be dumped, which it already knows. Parallel restore runs issued lots of redundant SET commands because _doSetFixedOutputState() was invoked once per TOC item rather than just once at connection start. While the extra queries are insignificant if you're dumping or restoring large tables, it still seems worth getting rid of them. Also, give the responsibility for selecting the right client_encoding for a parallel dump worker to setup_connection() where it naturally belongs, instead of having ad-hoc code for that in CloneArchive(). And fix some minor bugs like use of strdup() where pg_strdup() would be safer. Back-patch to 9.3, mostly to keep the branches in sync in an area that we're still finding bugs in. Discussion: <5086.1464793073@sss.pgh.pa.us>
2016-05-31  Avoid useless closely-spaced writes of statistics files.  (Tom Lane)
The original intent in the stats collector was that we should not write out stats data oftener than every PGSTAT_STAT_INTERVAL msec. Backends will not make requests at all if they see the existing data is newer than that, and the stats collector is supposed to disregard requests having a cutoff_time older than its most recently written data, so that close-together requests don't result in multiple writes. But the latter part of that got broken in commit 187492b6c2e8cafc, so that if two backends concurrently decide the existing stats are too old, the collector would write the data twice. (In principle the collector's logic would still merge requests as long as the second one arrives before we've actually written data ... but since the message collection loop would write data immediately after processing a single inquiry message, that never happened in practice, and in any case the window in which it might work would be much shorter than PGSTAT_STAT_INTERVAL.) To fix, improve pgstat_recv_inquiry so that it checks whether the cutoff time is too old, and doesn't add a request to the queue if so. This means that we do not need DBWriteRequest.request_time, because the decision is taken before making a queue entry. And that means that we don't really need the DBWriteRequest data structure at all; an OID list of database OIDs will serve and allow removal of some rather verbose and crufty code. In passing, improve the comments in this area, which have been rather neglected. Also change backend_read_statsfile so that it's not silently relying on MyDatabaseId to have some particular value in the autovacuum launcher process. It accidentally worked as desired because MyDatabaseId is zero in that process; but that does not seem like a dependency we want, especially with no documentation about it. Although this patch is mine, it turns out I'd rediscovered a known bug, for which Tomas Vondra had already submitted a patch that's functionally equivalent to the non-cosmetic aspects of this patch. Thanks to Tomas for reviewing this version. Back-patch to 9.3 where the bug was introduced. Prior-Discussion: <1718942738eb65c8407fcd864883f4c8@fuzzy.cz> Patch: <4625.1464202586@sss.pgh.pa.us>
2016-05-29  Fix missing abort checks in pg_backup_directory.c.  (Tom Lane)
Parallel restore from directory format failed to respond to control-C in a timely manner, because there were no checkAborting() calls in the code path that reads data from a file and sends it to the backend. If any worker was in the midst of restoring data for a large table, you'd just have to wait. This fix doesn't do anything for the problem of aborting a long-running server-side command, but at least it fixes things for data transfers. Back-patch to 9.3 where parallel restore was introduced.
2016-05-29  Remove pg_dump/parallel.c's useless "aborting" flag.  (Tom Lane)
This was effectively dead code, since the places that tested it could not be reached after we entered the on-exit-cleanup routine that would set it. It seems to have been a leftover from a design in which error abort would try to send fresh commands to the workers --- a design which could never have worked reliably, of course. Since the flag is not cross-platform, it complicates reasoning about the code's behavior, which we could do without. Although this is effectively just cosmetic, back-patch anyway, because there are some actual bugs in the vicinity of this behavior. Discussion: <15583.1464462418@sss.pgh.pa.us>
2016-05-28  Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c.  (Tom Lane)
The commentary in this file was in extremely sad shape. The author(s) had clearly never heard of the project convention that a function header comment should provide an API spec of some sort for that function. Much of it was flat out wrong, too --- maybe it was accurate when written, but if so it had not been updated to track subsequent code revisions. Rewrite and rearrange to try to bring it up to speed, and annotate some of the places where more work is needed. (I've refrained from actually fixing anything of substance ... yet.) Also, rename a couple of functions for more clarity as to what they do, do some very minor code rearrangement, remove some pointless Asserts, fix an incorrect Assert in readMessageFromPipe, and add a missing socket close in one error exit from pgpipe(). The last would be a bug if we tried to continue after pgpipe() failure, but since we don't, it's just cosmetic at present. Although this is only cosmetic, back-patch to 9.3 where parallel.c was added. It's sufficiently invasive that it'll pose a hazard for future back-patching if we don't. Discussion: <25239.1464386067@sss.pgh.pa.us>
2016-05-27  Clean up thread management in parallel pg_dump for Windows.  (Tom Lane)
Since we start the worker threads with _beginthreadex(), we should use _endthreadex() to terminate them. We got this right in the normal-exit code path, but not so much during an error exit from a worker. In addition, be sure to apply CloseHandle to the thread handle after each thread exits. It's not clear that these oversights cause any user-visible problems, since the pg_dump run is about to terminate anyway. Still, it's clearly better to follow Microsoft's API specifications than ignore them. Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including being a bit less random about where to cast between uintptr_t and HANDLE, and being sure to clear the worker identity field for each dead worker (not that false matches should be possible later, but let's be careful). Original observation and patch by Armin Schöffmann, cosmetic improvements by Michael Paquier and me. (Armin's patch also included closing sockets in ShutdownWorkersHard(), but that's been dealt with already in commit df8d2d8c4.) Back-patch to 9.3 where parallel pg_dump was introduced. Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
2016-05-27  Be more predictable about reporting "lock timeout" vs "statement timeout".  (Tom Lane)
If both timeout indicators are set when we arrive at ProcessInterrupts, we've historically just reported "lock timeout". However, some buildfarm members have been observed to fail isolationtester's timeouts test by reporting "lock timeout" when the statement timeout was expected to fire first. The cause seems to be that the process is allowed to sleep longer than expected (probably due to heavy machine load) so that the lock timeout happens before we reach the point of reporting the error, and then this arbitrary tiebreak rule does the wrong thing. We can improve matters by comparing the scheduled timeout times to decide which error to report. I had originally proposed greatly reducing the 1-second window between the two timeouts in the test cases. On reflection that is a bad idea, at least for the case where the lock timeout is expected to fire first, because that would assume that it takes negligible time to get from statement start to the beginning of the lock wait. Thus, this patch doesn't completely remove the risk of test failures on slow machines. Empirically, however, the case this handles is the one we are seeing in the buildfarm. The explanation may be that the other case requires the scheduler to take the CPU away from a busy process, whereas the case fixed here only requires the scheduler to not give the CPU back right away to a process that has been woken from a multi-second sleep (and, perhaps, has been swapped out meanwhile). Back-patch to 9.3 where the isolationtester timeouts test was added. Discussion: <8693.1464314819@sss.pgh.pa.us>
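A sketch of the tiebreak, assuming an accessor that reports each timeout's scheduled firing time (the helper name here is an assumption):

    /* If both fired, report whichever was scheduled to trigger first. */
    if (lock_timeout_occurred && stmt_timeout_occurred &&
        get_timeout_finish_time(STATEMENT_TIMEOUT) <
        get_timeout_finish_time(LOCK_TIMEOUT))
        lock_timeout_occurred = false;      /* prefer "statement timeout" */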
2016-05-26  Make pg_dump behave more sanely when built without HAVE_LIBZ.  (Tom Lane)
For some reason the code to emit a warning and switch to uncompressed output was placed down in the guts of pg_backup_archiver.c. This is definitely too late in the case of parallel operation (and I rather wonder if it wasn't too late for other purposes as well). Put it in pg_dump.c's option-processing logic, which seems a much saner place. Also, the default behavior with custom or directory output format was to emit the warning telling you the output would be uncompressed. This seems unhelpful, so silence that case. Back-patch to 9.3 where parallel dump was introduced. Kyotaro Horiguchi, adjusted a bit by me Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
2016-05-26  In Windows pg_dump, ensure idle workers will shut down during error exit.  (Tom Lane)
The Windows coding of ShutdownWorkersHard() thought that setting termEvent was sufficient to make workers exit after an error. But that only helps if a worker is busy and passes through checkAborting(). An idle worker will just sit, resulting in pg_dump failing to exit until the user gives up and hits control-C. We should close the write end of the command pipe so that idle workers will see socket EOF and exit, as the Unix coding was already doing. Back-patch to 9.3 where parallel pg_dump was introduced. Kyotaro Horiguchi
2016-05-25  Avoid hot standby cancels from VAC FREEZE  (Alvaro Herrera)
VACUUM FREEZE generated false cancelations of standby queries on an otherwise idle master. Caused by an off-by-one error on cutoff_xid which goes back to original commit. Analysis and report by Marco Nenciarini Bug fix by Simon Riggs This is a correct backpatch of commit 66fbcb0d2e to branches 9.1 through 9.4. That commit was backpatched to 9.0 originally, but it was immediately reverted in 9.0-9.4 because it didn't compile.
2016-05-25  Ensure that backends see up-to-date statistics for shared catalogs.  (Tom Lane)
Ever since we split the statistics collector's reports into per-database files (commit 187492b6c2e8cafc), backends have been seeing stale statistics for shared catalogs. This is because the inquiry message only prompts the collector to write the per-database file for the requesting backend's own database. Stats for shared catalogs are in a separate file for "DB 0", which didn't get updated. In normal operation this was partially masked by the fact that the autovacuum launcher would send an inquiry message at least once per autovacuum_naptime that asked for "DB 0"; so the shared-catalog stats would never be more than a minute out of date. However the problem becomes very obvious with autovacuum disabled, as reported by Peter Eisentraut. To fix, redefine the semantics of inquiry messages so that both the specified DB and DB 0 will be dumped. (This might seem a bit inefficient, but we have no good way to know whether a backend's transaction will look at shared-catalog stats, so we have to read both groups of stats whenever we request stats. Sending two inquiry messages would definitely not be better.) Back-patch to 9.3 where the bug was introduced. Report: <56AD41AC.1030509@gmx.net>
2016-05-25  Fix broken error handling in parallel pg_dump/pg_restore.  (Tom Lane)
In the original design for parallel dump, worker processes reported errors by sending them up to the master process, which would print the messages. This is unworkably fragile for a couple of reasons: it risks deadlock if a worker sends an error at an unexpected time, and if the master has already died for some reason, the user will never get to see the error at all. Revert that idea and go back to just always printing messages to stderr. This approach means that if all the workers fail for similar reasons (eg, bad password or server shutdown), the user will see N copies of that message, not only one as before. While that's slightly annoying, it's certainly better than not seeing any message; not to mention that we shouldn't assume that only the first failure is interesting. An additional problem in the same area was that the master failed to disable SIGPIPE (at least until much too late), which meant that sending a command to an already-dead worker would cause the master to crash silently. That was bad enough in itself but was made worse by the total reliance on the master to print errors: even if the worker had reported an error, you would probably not see it, depending on timing. Instead disable SIGPIPE right after we've forked the workers, before attempting to send them anything. Additionally, the master relies on seeing socket EOF to realize that a worker has exited prematurely --- but on Windows, there would be no EOF since the socket is attached to the process that includes both the master and worker threads, so it remains open. Make archive_close_connection() close the worker end of the sockets so that this acts more like the Unix case. It's not perfect, because if a worker thread exits without going through exit_nicely() the closures won't happen; but that's not really supposed to happen. This has been wrong all along, so back-patch to 9.3 where parallel dump was introduced. Report: <2458.1450894615@sss.pgh.pa.us>
2016-05-24  Fetch XIDs atomically during vac_truncate_clog().  (Tom Lane)
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid in-place, it's unsafe to assume that successive reads of those values will give consistent results. Fetch each one just once to ensure sane behavior in the minimum calculation. Noted while reviewing Alexander Korotkov's patch in the same area. Discussion: <8564.1464116473@sss.pgh.pa.us>
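A sketch of the pattern; the point is simply that each in-place-updated field is read exactly once into a local variable before being used in the minimum calculation (variable names are illustrative):

    /* Single fetch of each field; a second read could see a different value
     * because another backend may overwrite the pg_database tuple in place. */
    TransactionId dbfrozenxid = dbform->datfrozenxid;
    MultiXactId   dbminmxid   = dbform->datminmxid;

    if (TransactionIdPrecedes(dbfrozenxid, frozenXID))
        frozenXID = dbfrozenxid;
    if (MultiXactIdPrecedes(dbminmxid, minMulti))
        minMulti = dbminmxid;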
2016-05-24  Avoid consuming an XID during vac_truncate_clog().  (Tom Lane)
vac_truncate_clog() uses its own transaction ID as the comparison point in a sanity check that no database's datfrozenxid has already wrapped around "into the future". That was probably fine when written, but in a lazy vacuum we won't have assigned an XID, so calling GetCurrentTransactionId() causes an XID to be assigned when otherwise one would not be. Most of the time that's not a big problem ... but if we are hard up against the wraparound limit, consuming XIDs during antiwraparound vacuums is a very bad thing. Instead, use ReadNewTransactionId(), which not only avoids this problem but is in itself a better comparison point to test whether wraparound has already occurred. Report and patch by Alexander Korotkov. Back-patch to all versions. Report: <CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvM7-s6A@mail.gmail.com>
2016-05-23  Fix latent crash in do_text_output_multiline().  (Tom Lane)
do_text_output_multiline() would fail (typically with a null pointer dereference crash) if its input string did not end with a newline. Such cases do not arise in our current sources; but it certainly could happen in future, or in extension code's usage of the function, so we should fix it. To fix, replace "eol += len" with "eol = text + len". While at it, make two cosmetic improvements: mark the input string const, and rename the argument from "text" to "txt" to dodge pgindent strangeness (since "text" is a typedef name). Even though this problem is only latent at present, it seems like a good idea to back-patch the fix, since it's a very simple/safe patch and it's not out of the realm of possibility that we might in future back-patch something that expects sane behavior from do_text_output_multiline(). Per report from Hao Lee. Report: <CAGoxFiFPAGyPAJLcFxTB5cGhTW2yOVBDYeqDugYwV4dEd1L_Ag@mail.gmail.com>
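The fix is easiest to see in a sketch of the scanning loop (emit_line() stands in for the real per-line output call and is hypothetical):

    #include <string.h>

    extern void emit_line(const char *line, int len);   /* hypothetical */

    static void
    output_multiline(const char *txt)
    {
        const char *ptr = txt;

        while (*ptr)
        {
            const char *eol = strchr(ptr, '\n');
            int         len;

            if (eol)
            {
                len = eol - ptr;
                eol++;                  /* step over the newline */
            }
            else
            {
                len = strlen(ptr);
                eol = ptr + len;        /* the fix: point at the NUL, don't advance a NULL pointer */
            }
            emit_line(ptr, len);
            ptr = eol;
        }
    }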
2016-05-21  Fix missing initialization of buffer_std field in _bt_newroot().  (Tom Lane)
This would only have any effect if the old root page needed to have a full-page image made (ie, this was the first mod to it since a checkpoint), *and* if the byte left uninitialized chanced to contain zero. In that case the WAL code would fail to remove the "hole" from the full-page image, which would bloat the WAL log a bit but not have any effect worse than that. Found by buildfarm member skink, whose valgrind run noticed the use of an uninitialized value. Apparently timing in the regression tests is such that the triggering condition is rare, or valgrind testing would have seen this before. Oversight in commit 40dae7ec537c5619fc93ad602c62f37be786d161. This bug affects only the 9.4 branch, since in later branches refactoring of the WAL-log-creation APIs fixed it. Report: <20160521203101.jp5yxquhhkabvo56@alap3.anarazel.de>
2016-05-12  Fix obsolete comment  (Alvaro Herrera)
2016-05-10  Fix autovacuum for shared relations  (Alvaro Herrera)
The table-skipping logic in autovacuum would fail to consider that multiple workers could be processing the same shared catalog in different databases. This normally wouldn't be a problem: firstly because autovacuum workers not for wraparound would simply ignore tables in which they cannot acquire lock, and secondly because most of the time these tables are small enough that even if multiple for-wraparound workers are stuck in the same catalog, they would be over pretty quickly. But in cases where the catalogs are severely bloated it could become a problem. Backpatch all the way back, because the problem has been there since the beginning. Reported by Ondřej Světlík Discussion: https://www.postgresql.org/message-id/572B63B1.3030603%40flexibee.eu https://www.postgresql.org/message-id/572A1072.5080308%40flexibee.eu
2016-05-09  Stamp 9.4.8.  (tag: REL9_4_8)  (Tom Lane)
2016-05-09  Translation updates  (Peter Eisentraut)
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1f2562b35928021c6463a1e5f82f1682486fb4cf
2016-05-07  Distrust external OpenSSL clients; clear err queue  (Peter Eisentraut)
OpenSSL has an unfortunate tendency to mix per-session state error handling with per-thread error handling. This can cause problems when programs that link to libpq with OpenSSL enabled have some other use of OpenSSL; without care, one caller of OpenSSL may cause problems for the other caller. Backend code might similarly be affected, for example when a third party extension independently uses OpenSSL without taking the appropriate precautions. To fix, don't trust other users of OpenSSL to clear the per-thread error queue. Instead, clear the entire per-thread queue ahead of certain I/O operations when it appears that there might be trouble (these I/O operations mostly need to call SSL_get_error() to check for success, which relies on the queue being empty). This is slightly aggressive, but it's pretty clear that the other callers have a very dubious claim to ownership of the per-thread queue. Do this in both frontend and backend code. Finally, be more careful about clearing our own error queue, so as to not cause these problems ourselves. It's possible that control previously did not always reach SSLerrmessage(), where ERR_get_error() was supposed to be called to clear the queue's earliest code. Make sure ERR_get_error() is always called, so as to spare other users of OpenSSL the possibility of similar problems caused by libpq (as opposed to problems caused by a third party OpenSSL library like PHP's OpenSSL extension). Again, do this in both frontend and backend code. See bug #12799 and https://bugs.php.net/bug.php?id=68276 Based on patches by Dave Vitek and Peter Eisentraut. From: Peter Geoghegan <pg@bowt.ie>
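In code, the precaution amounts to something like the following around each SSL I/O call (a sketch using the standard OpenSSL API; the surrounding variables are illustrative):

    #include <openssl/err.h>
    #include <openssl/ssl.h>

    /* Clear stale codes another OpenSSL user may have left in the per-thread
     * queue, so SSL_get_error() reflects only the call we are about to make. */
    ERR_clear_error();

    n = SSL_read(ssl, buf, sizeof(buf));
    if (n <= 0)
    {
        int sslerr = SSL_get_error(ssl, n);

        /* ... handle SSL_ERROR_WANT_READ/WRITE, syscall errors, and so on ... */
    }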
2016-05-06  Fix pg_upgrade to not fail when new-cluster TOAST rules differ from old.  (Tom Lane)
This patch essentially reverts commit 4c6780fd17aa43ed, in favor of a much simpler solution for the case where the new cluster would choose to create a TOAST table but the old cluster doesn't have one: just don't create a TOAST table. The existing code failed in at least two different ways if the situation arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the lock sanity check in create_toast_table failed; (2) pg_upgrade did not provide a pg_type OID for the new toast table, so that the crosscheck in TypeCreate failed. While both these problems were introduced by later patches, they show that the hack being used to cause TOAST table creation is overwhelmingly fragile (and untested). I also note that before the TypeCreate crosscheck was added, the code would have resulted in assigning an indeterminate pg_type OID to the toast table, possibly causing a later OID conflict in that catalog; so that it didn't really work even when committed. If we simply don't create a TOAST table, there will only be a problem if the code tries to store a tuple that's wider than a page, and field compression isn't sufficient to get it under a page. Given that the TOAST creation threshold is intended to be about a quarter of a page, it's very hard to believe that cross-version differences in the do-we-need-a-toast- table heuristic could result in an observable problem. So let's just follow the old version's conclusion about whether a TOAST table is needed. (If we ever do change needs_toast_table() so much that this conclusion doesn't apply, we can devise a solution at that time, and hopefully do it in a less klugy way than 4c6780fd17aa43ed did.) Back-patch to 9.3, like the previous patch. Discussion: <8110.1462291671@sss.pgh.pa.us>
2016-05-06  Fix possible read past end of string in to_timestamp().  (Tom Lane)
to_timestamp() handles the TH/th format codes by advancing over two input characters, whatever those are. It failed to notice whether there were two characters available to be skipped, making it possible to advance the pointer past the end of the input string and keep on parsing. A similar risk existed in the handling of "Y,YYY" format: it would advance over three characters after the "," whether or not three characters were available. In principle this might be exploitable to disclose contents of server memory. But the security team concluded that it would be very hard to use that way, because the parsing loop would stop upon hitting any zero byte, and TH/th format codes can't be consecutive --- they have to follow some other format code, which would have to match whatever data is there. So it seems impractical to examine memory very much beyond the end of the input string via this bug; and the input string will always be in local memory not in disk buffers, making it unlikely that anything very interesting is close to it in a predictable way. So this doesn't quite rise to the level of needing a CVE. Thanks to Wolf Roediger for reporting this bug.
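A sketch of the hardened skip for a TH/th suffix, assuming s walks the NUL-terminated input: advance only over characters that are actually present instead of unconditionally stepping over two bytes.

    /* old, unsafe form: s += 2;  -- could run past the terminating NUL */
    if (*s)
        s++;
    if (*s)
        s++;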
2016-05-05  Update time zone data files to tzdata release 2016d.  (Tom Lane)
DST law changes in Russia (Magadan, Tomsk regions) and Venezuela. Historical corrections for Russia. There are new zone names Europe/Kirov and Asia/Tomsk reflecting the fact that these regions now have different time zone histories from adjacent regions.
2016-04-29  Fix mishandling of equivalence-class tests in parameterized plans.  (Tom Lane)
Given a three-or-more-way equivalence class, such as X.Y = Y.Y = Z.Z, it was possible for the planner to omit one of the quals needed to enforce that all members of the equivalence class are actually equal. This only happened in the case of a parameterized join node for two of the relations, that is a plan tree like Nested Loop -> Scan X -> Nested Loop -> Scan Y -> Scan Z Filter: Z.Z = X.X The eclass machinery normally expects to apply X.X = Y.Y when those two relations are joined, but in this shape of plan tree they aren't joined until the top node --- and, if the lower nested loop is marked as parameterized by X, the top node will assume that the relevant eclass condition(s) got pushed down into the lower node. On the other hand, the scan of Z assumes that it's only responsible for constraining Z.Z to match any one of the other eclass members. So one or another of the required quals sometimes fell between the cracks, depending on whether consideration of the eclass in get_joinrel_parampathinfo() for the lower nested loop chanced to generate X.X = Y.Y or X.X = Z.Z as the appropriate constraint there. If it generated the latter, it'd erroneously suppose that the Z scan would take care of matters. To fix, force X.X = Y.Y to be generated and applied at that join node when this case occurs. This is *extremely* hard to hit in practice, because various planner behaviors conspire to mask the problem; starting with the fact that the planner doesn't really like to generate a parameterized plan of the above shape. (It might have been impossible to hit it before we tweaked things to allow this plan shape for star-schema cases.) Many thanks to Alexander Kirkouski for submitting a reproducible test case. The bug can be demonstrated in all branches back to 9.2 where parameterized paths were introduced, so back-patch that far.