path: root/src
2015-05-18  Add error-throwing wrappers for the printf family of functions.  (Noah Misch)
All known standard library implementations of these functions can fail with ENOMEM. A caller neglecting to check for failure would experience missing output, information exposure, or a crash. Check return values within wrappers and code, currently just snprintf.c, that bypasses the wrappers. The wrappers do not return after an error, so their callers need not check. Back-patch to 9.0 (all supported versions). Popular free software standard library implementations do take pains to bypass malloc() in simple cases, but they risk ENOMEM for floating point numbers, positional arguments, large field widths, and large precisions. No specification demands such caution, so this commit regards every call to a printf family function as a potential threat. Injecting the wrappers implicitly is a compromise between patch scope and design goals. I would prefer to edit each call site to name a wrapper explicitly. libpq and the ECPG libraries would, ideally, convey errors to the caller rather than abort(). All that would be painfully invasive for a back-patched security fix, hence this compromise. Security: CVE-2015-3166
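As an illustration of the approach (a minimal sketch only, with hypothetical names pg_fprintf_checked() and fatal_error(); it is not the actual wrapper code), the point is to check the printf-family return value and never return to the caller after a failure:

    #include <errno.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void
    fatal_error(const char *msg)        /* hypothetical error reporter */
    {
        fprintf(stderr, "fatal: %s: %s\n", msg, strerror(errno));
        abort();                        /* do not return to the caller */
    }

    int
    pg_vfprintf_checked(FILE *stream, const char *fmt, va_list args)
    {
        int     ret = vfprintf(stream, fmt, args);

        if (ret < 0)                    /* e.g. ENOMEM inside the C library */
            fatal_error("vfprintf failed");
        return ret;
    }

    int
    pg_fprintf_checked(FILE *stream, const char *fmt, ...)
    {
        va_list args;
        int     ret;

        va_start(args, fmt);
        ret = pg_vfprintf_checked(stream, fmt, args);
        va_end(args);
        return ret;
    }

Callers of such wrappers need not check the return value, which is what makes injecting them implicitly workable as a back-patched fix.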
2015-05-18  Permit use of vsprintf() in PostgreSQL code.  (Noah Misch)
The next commit needs it. Back-patch to 9.0 (all supported versions).
2015-05-18  Prevent a double free by not reentering be_tls_close().  (Noah Misch)
Reentering this function with the right timing caused a double free, typically crashing the backend. By synchronizing a disconnection with the authentication timeout, an unauthenticated attacker could achieve this somewhat consistently. Call be_tls_close() solely from within proc_exit_prepare(). Back-patch to 9.0 (all supported versions). Benkocs Norbert Attila Security: CVE-2015-3165
2015-05-18  Translation updates  (Peter Eisentraut)
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 7da40afeb612deb39a77da4071ea1d5800706863
2015-05-17  Fix typos  (Peter Eisentraut)
2015-05-15  Update time zone data files to tzdata release 2015d.  (Tom Lane)
DST law changes in Egypt, Mongolia, Palestine. Historical corrections for Canada and Chile. Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).
2015-05-13  Fix RBM_ZERO_AND_LOCK mode to not acquire lock on local buffers.  (Heikki Linnakangas)
Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer, which takes a lock on the buffer before zeroing it. However, you cannot take a lock on a local buffer, and you got a segfault instead. The version of that patch committed to master included a check for !isLocalBuf, and therefore didn't crash, but oddly I missed that in the back-patched versions. This patch adds that check to the back-branches too. RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash indexes. WAL replay only deals with shared buffers, so the only way to trigger the bug is with a temporary hash index. Reported by Artem Ignatyev, analysis by Tom Lane.
2015-05-11  Fix incorrect checking of deferred exclusion constraint after a HOT update.  (Tom Lane)
If a row that potentially violates a deferred exclusion constraint is HOT-updated later in the same transaction, the exclusion constraint would be reported as violated when the check finally occurs, even if the row(s) the new row originally conflicted with have since been removed. This happened because the wrong TID was passed to check_exclusion_constraint(), causing the live HOT-updated row to be seen as a conflicting row rather than recognized as the row-under-test. Per bug #13148 from Evan Martin. It's been broken since exclusion constraints were invented, so back-patch to all supported branches.
2015-05-07  Properly send SCM status updates when shutting down service on Windows  (Magnus Hagander)
The Service Control Manager should be notified regularly during a shutdown that takes a long time. Previously we increased the counter but forgot to actually send the notification to the system. The loop counter was also incorrectly initialized in the event that the startup of the system took long enough for it to increase, which could cause the shutdown process not to wait as long as expected. Krystian Bigaj, reviewed by Michael Paquier
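A hedged sketch of the pattern the fix restores, assuming the usual Windows service API (hStatus is a hypothetical handle obtained at service startup): during a slow shutdown, bump dwCheckPoint and actually report it via SetServiceStatus() on every iteration.

    #include <windows.h>

    static SERVICE_STATUS_HANDLE hStatus;   /* assumed set by the service entry point */

    static void
    report_stop_pending(DWORD checkpoint)
    {
        SERVICE_STATUS status;

        ZeroMemory(&status, sizeof(status));
        status.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
        status.dwCurrentState = SERVICE_STOP_PENDING;
        status.dwCheckPoint = checkpoint;   /* must increase between calls */
        status.dwWaitHint = 10000;          /* ms we expect the next step to take */

        SetServiceStatus(hStatus, &status); /* the call that had been omitted */
    }

The shutdown loop would then call report_stop_pending(++counter) each time it finds the server still running.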
2015-05-05  Fix some problems with patch to fsync the data directory.  (Robert Haas)
pg_win32_is_junction() was a typo for pgwin32_is_junction(). open() was used not only in a two-argument form, which breaks on Windows, but also where BasicOpenFile() should have been used. Per reports from Andrew Dunstan and David Rowley.
2015-05-04  Recursively fsync() the data directory after a crash.  (Robert Haas)
Otherwise, if there's another crash, some writes from after the first crash might make it to disk while writes from before the crash fail to make it to disk. This could lead to data corruption. Back-patch to all supported versions. Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised by me.
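The shape of such a recursive fsync can be sketched as follows, assuming a POSIX system; this is a simplified illustration, not the committed code, which also has to deal with symlinks, Windows, and proper error reporting.

    #include <dirent.h>
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void
    fsync_recursively(const char *path)
    {
        int             fd = open(path, O_RDONLY);
        struct stat     st;
        DIR            *dir;
        struct dirent  *de;

        if (fd < 0 || fstat(fd, &st) < 0)
        {
            if (fd >= 0)
                close(fd);
            return;                     /* real code would report the error */
        }

        if (fsync(fd) < 0)              /* flush the file or directory itself */
            fprintf(stderr, "could not fsync \"%s\"\n", path);
        close(fd);

        if (!S_ISDIR(st.st_mode))
            return;

        if ((dir = opendir(path)) == NULL)
            return;
        while ((de = readdir(dir)) != NULL)
        {
            char    sub[PATH_MAX];

            if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
                continue;
            snprintf(sub, sizeof(sub), "%s/%s", path, de->d_name);
            fsync_recursively(sub);     /* descend into children */
        }
        closedir(dir);
    }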
2015-04-26  Build libecpg with -DFRONTEND in all supported versions.  (Noah Misch)
Fix an oversight in commit 151e74719b0cc5c040bd3191b51b95f925773dd1 by back-patching commit 44c5d387eafb4ba1a032f8d7b13d85c553d69181 to 9.0.
2015-04-25  Prevent improper reordering of antijoins vs. outer joins.  (Tom Lane)
An outer join appearing within the RHS of an antijoin can't commute with the antijoin, but somehow I missed teaching make_outerjoininfo() about that. In Teodor Sigaev's recent trouble report, this manifests as a "could not find RelOptInfo for given relids" error within eqjoinsel(); but I think silently wrong query results are possible too, if the planner misorders the joins and doesn't happen to trigger any internal consistency checks. It's broken as far back as we had antijoins, so back-patch to all supported branches.
2015-04-24  Build every ECPG library with -DFRONTEND.  (Noah Misch)
Each of the libraries incorporates src/port files, which often check FRONTEND. Build systems disagreed on whether to build libpgtypes this way. Only libecpg incorporates files that rely on it today. Back-patch to 9.0 (all supported versions) to forestall surprises.
2015-04-24  Fix obsolete comment in set_rel_size().  (Tom Lane)
The cross-reference to set_append_rel_pathlist() was obsoleted by commit e2fa76d80ba571d4de8992de6386536867250474, which split what had been set_rel_pathlist() and child routines into two sets of functions. But I (tgl) evidently missed updating this comment. Back-patch to 9.2 to avoid unnecessary divergence among branches. Amit Langote
2015-04-23  Fix deadlock at startup, if max_prepared_transactions is too small.  (Heikki Linnakangas)
When the startup process recovers transactions by scanning pg_twophase directory, it should clear MyLockedGxact after it's done processing each transaction. Like we do during normal operation, at PREPARE TRANSACTION. Otherwise, if the startup process exits due to an error, it will try to clear the locking_backend field of the last recovered transaction. That's usually harmless, but if the error happens in MarkAsPreparing, while holding TwoPhaseStateLock, the shmem-exit hook will try to acquire TwoPhaseStateLock again, and deadlock with itself. This fixes bug #13128 reported by Grant McAlister. The bug was introduced by commit bb38fb0d, so backpatch to all supported versions like that commit.
2015-04-14  Fix typo in comment  (Alvaro Herrera)
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT I introduced this ancient typo in subtrans.c and later propagated it to multixact.c. I fixed the latter in f741300c, but only back to 9.3; backpatch to all supported branches for consistency.
2015-04-13  Don't archive bogus recycled or preallocated files after timeline switch.  (Heikki Linnakangas)
After a timeline switch, we would leave behind recycled WAL segments that are in the future, but on the old timeline. After promotion, and after they become old enough to be recycled again, we would notice that they don't have a .ready or .done file, create a .ready file for them, and archive them. That's bogus, because the files contain garbage, recycled from an older timeline (or prealloced as zeros). We shouldn't archive such files. This could happen when we're following a timeline switch during replay, or when we switch to new timeline at end-of-recovery. To fix, whenever we switch to a new timeline, scan the data directory for WAL segments on the old timeline, but with a higher segment number, and remove them. Those don't belong to our timeline history, and are most likely bogus recycled or preallocated files. They could also be valid files that we streamed from the primary ahead of time, but in any case, they're not needed to recover to the new timeline.
2015-04-12  Remove duplicated words in comments.  (Heikki Linnakangas)
David Rowley
2015-04-08  Fix autovacuum launcher shutdown sequence  (Alvaro Herrera)
It was previously possible to have the launcher re-execute its main loop before shutting down if some other signal was received or an error occurred after getting SIGTERM, as reported by Qingqing Zhou. While investigating, Tom Lane further noticed that if autovacuum had been disabled in the config file, it would misbehave by trying to start a new worker instead of bailing out immediately -- it would consider itself as invoked in emergency mode. Fix both problems by checking the shutdown flag in a few more places. These problems have existed since autovacuum was introduced, so backpatch all the way back.
2015-04-07  Fix assorted inconsistent function declarations.  (Tom Lane)
While gcc doesn't complain if you declare a function "static" and then define it not-static, other compilers do; and in any case the code is highly misleading this way. Add the missing "static" keywords to a couple of recent patches. Per buildfarm member pademelon.
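For illustration, this is the kind of mismatch being fixed (cleanup_resources is a made-up name):

    /* Declared static here ... */
    static void cleanup_resources(void);

    /* ... but defined without the keyword.  gcc accepts this silently,
     * some other compilers reject it, and either way it misleads readers
     * about the function's linkage. */
    void
    cleanup_resources(void)
    {
    }

The fix is simply to repeat "static" on the definition.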
2015-04-04  Fix incorrect matching of subexpressions in outer-join plan nodes.  (Tom Lane)
Previously we would re-use input subexpressions in all expression trees attached to a Join plan node. However, if it's an outer join and the subexpression appears in the nullable-side input, this is potentially incorrect for apparently-matching subexpressions that came from above the outer join (ie, targetlist and qpqual expressions), because the executor will treat the subexpression value as NULL when maybe it should not be. The case is fairly hard to hit because (a) you need a non-strict subexpression (else NULL is correct), and (b) we don't usually compute expressions in the outputs of non-toplevel plan nodes. But we might do so if the expressions are sort keys for a mergejoin, for example. Probably in the long run we should make a more explicit distinction between Vars appearing above and below an outer join, but that will be a major planner redesign and not at all back-patchable. For the moment, just hack set_join_references so that it will not match any non-Var expressions coming from nullable inputs to expressions that came from above the join. (This is somewhat overkill, in that a strict expression could still be matched, but it doesn't seem worth the effort to check that.) Per report from Qingqing Zhou. The added regression test case is based on his example. This has been broken for a very long time, so back-patch to all active branches.
2015-04-03  Remove unnecessary variables in _hash_splitbucket().  (Tom Lane)
Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass start_nblkno to _hash_splitbucket(), and for that matter unnecessary to have the internal nblkno variable either. My compiler didn't complain about that, but some did. I also rearranged the use of oblkno a bit to make that case more parallel. Report and initial patch by Petr Jelinek, rearranged a bit by me. Back-patch to all branches, like the previous patch.
2015-04-01  psql: fix \connect with URIs and conninfo strings  (Alvaro Herrera)
psql was already accepting conninfo strings as the first parameter in \connect, but the way it worked wasn't sane; some of the other parameters would get the previous connection's values, causing it to connect to a completely unexpected server or, more likely, not finding any server at all because of completely wrong combinations of parameters. Fix by explicitly checking for a conninfo-looking parameter in the dbname position; if one is found, use its complete specification rather than mixing it with the other arguments. Also, change tab-completion to not try to complete conninfo/URI-looking "dbnames" and document that conninfos are accepted as first argument. There was a weak consensus to backpatch this, because while the behavior of using the dbname as a conninfo is nowhere documented for \connect, it is reasonable to expect that it works because it does work in many other contexts. Therefore this is backpatched all the way back to 9.0. To implement this, routines previously private to libpq have been duplicated so that psql can decide what looks like a conninfo/URI string. In back branches, just duplicate the same code all the way back to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version. In master, the routines are moved to src/common and renamed. Author: David Fetter, Andrew Dunstan. Some editorialization by me (probably earning a Gierth's "Sloppy" badge in the process.) Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost, Robert Haas, Andrew Dunstan.
2015-03-31  Remove spurious semicolons.  (Heikki Linnakangas)
Petr Jelinek
2015-03-30  Run pg_upgrade and pg_resetxlog with restricted token on Windows  (Andrew Dunstan)
As with initdb these programs need to run with a restricted token, and if they don't pg_upgrade will fail when run as a user with Administrator privileges. Backpatch to all live branches. On the development branch the code is reorganized so that the restricted token code is now in a single location. On the stable branches a less invasive change is made by simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c. Patches and bug report from Muhammad Asif Naeem, reviewed by Michael Paquier, slightly edited by me.
2015-03-30  Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.  (Tom Lane)
_hash_splitbucket() obtained the base page of the new bucket by calling _hash_getnewbuf(), but it held no exclusive lock that would prevent some other process from calling _hash_getnewbuf() at the same time. This is contrary to _hash_getnewbuf()'s API spec and could in fact cause failures. In practice, we must only call that function while holding write lock on the hash index's metapage. An additional problem was that we'd already modified the metapage's bucket mapping data, meaning that failure to extend the index would leave us with a corrupt index. Fix both issues by moving the _hash_getnewbuf() call to just before we modify the metapage in _hash_expandtable(). Unfortunately there's still a large problem here, which is that we could also incur ENOSPC while trying to get an overflow page for the new bucket. That would leave the index corrupt in a more subtle way, namely that some index tuples that should be in the new bucket might still be in the old one. Fixing that seems substantially more difficult; even preallocating as many pages as we could possibly need wouldn't entirely guarantee that the bucket split would complete successfully. So for today let's just deal with the base case. Per report from Antonin Houska. Back-patch to all active branches.
2015-03-29  Add vacuum_delay_point call in compute_index_stats's per-sample-row loop.  (Tom Lane)
Slow functions in index expressions might cause this loop to take long enough to make it worth being cancellable. Probably it would be enough to call CHECK_FOR_INTERRUPTS here, but for consistency with other per-sample-row loops in this file, let's use vacuum_delay_point. Report and patch by Jeff Janes. Back-patch to all supported branches.
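As a sketch of the shape of the change (self-contained stand-ins only: cancel_or_delay() plays the role of vacuum_delay_point(), and process_sample_row() represents the per-row work):

    #include <stddef.h>

    /* Hypothetical stand-ins for backend facilities. */
    extern void cancel_or_delay(void);      /* like vacuum_delay_point() */
    extern void process_sample_row(size_t rowno);

    void
    compute_stats_sketch(size_t numrows)
    {
        for (size_t rowno = 0; rowno < numrows; rowno++)
        {
            cancel_or_delay();              /* check for cancel once per row */
            process_sample_row(rowno);      /* possibly slow expression evaluation */
        }
    }

The point is only where the check goes: once per sample row, so a slow index expression cannot make the loop uncancellable.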
2015-03-26  Make SyncRepWakeQueue to a static function  (Tatsuo Ishii)
It is only used in src/backend/replication/syncrep.c. Back-patch to all supported branches except 9.1 which declares the function as static.
2015-03-24  Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.  (Tom Lane)
ExecOpenScanRelation assumed that any relation listed in the ExecRowMark list has been locked by InitPlan; but this is not true if the rel's markType is ROW_MARK_COPY, which is possible if it's a foreign table. In most (possibly all) cases, failure to acquire a lock here isn't really problematic because the parser, planner, or plancache would have taken the appropriate lock already. In principle though it might leave us vulnerable to working with a relation that we hold no lock on, and in any case if the executor isn't depending on previously-taken locks otherwise then it should not do so for ROW_MARK_COPY relations. Noted by Etsuro Fujita. Back-patch to all active versions, since the inconsistency has been there a long time. (It's almost certainly irrelevant in 9.0, since that predates foreign tables, but the code's still wrong on its own terms.)
2015-03-14  Remove workaround for ancient incompatibility between readline and libedit.  (Tom Lane)
GNU readline defines the return value of write_history() as "zero if OK, else an errno code". libedit's version of that function used to have a different definition (to wit, "-1 if error, else the number of lines written to the file"). We tried to work around that by checking whether errno had become nonzero, but this method has never been kosher according to the published API of either library. It's reportedly completely broken in recent Ubuntu releases: psql bleats about "No such file or directory" when saving ~/.psql_history, even though the write worked fine. However, libedit has been following the readline definition since somewhere around 2006, so it seems all right to finally break compatibility with ancient libedit releases and trust that the return value is what readline specifies. (I'm not sure when the various Linux distributions incorporated this fix, but I did find that OS X has been shipping fixed versions since 10.5/Leopard.) If anyone is still using such an ancient libedit, they will find that psql complains it can't write ~/.psql_history at exit, even when the file was written correctly. This is no worse than the behavior we're fixing for current releases. Back-patch to all supported branches.
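With the workaround gone, the check reduces to the documented readline convention, which current libedit follows as well; a minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <readline/history.h>

    static void
    save_history_sketch(const char *fname)
    {
        int     rc = write_history(fname);

        if (rc != 0)                    /* rc is an errno value on failure */
            fprintf(stderr, "could not save history to \"%s\": %s\n",
                    fname, strerror(rc));
    }

Note that errno itself is no longer consulted; only the return value matters.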
2015-03-14  Fix integer overflow in debug message of walreceiver  (Tatsuo Ishii)
The message tries to report the replication apply delay, which fails if the first WAL record has not been applied yet. The fix is to show "N/A", indicating that the delay data is not yet available, instead of an overflowed negative number. Problem reported by me and patch by Fabrízio de Royes Mello. Back-patched to the 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not have the debug message).
2015-03-12  Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.  (Tom Lane)
The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting tableoid to InvalidOid, I think on the assumption that the referenced RTE must be a subquery or other case without a meaningful OID. However, foreign tables also use this code path, and they do have meaningful table OIDs; so failure to set the tuple field can lead to user-visible misbehavior. Fix that by fetching the appropriate OID from the range table. There's still an issue about whether CTID can ever have a meaningful value in this case; at least with postgres_fdw foreign tables, it does. But that is a different problem that seems to require a significantly different patch --- it's debatable whether postgres_fdw really wants to use this code path at all. Simplified version of a patch by Etsuro Fujita, who also noted the problem to begin with. The issue can be demonstrated in all versions having FDWs, so back-patch to 9.1.
2015-03-08  Fix documentation for libpq's PQfn().  (Tom Lane)
The SGML docs claimed that 1-byte integers could be sent or received with the "isint" options, but no such behavior has ever been implemented in pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was even less in tune with reality, and the code itself used parameter names matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit of additional wordsmithing on the SGML docs while at it. Since the business about 1-byte integers is a clear documentation bug, back-patch to all supported branches.
2015-03-05  Fix user mapping object description  (Alvaro Herrera)
We were using "user mapping for user XYZ" as description for user mappings, but that's ambiguous because users can have mappings on multiple foreign servers; therefore change it to "for user XYZ on server UVW" instead. Object identities for user mappings are also updated in the same way, in branches 9.3 and above. The incomplete description string was introduced together with the whole SQL/MED infrastructure by commit cae565e503 of 8.4 era, so backpatch all the way back.
2015-03-02  Fix pg_dump handling of extension config tables  (Stephen Frost)
Since 9.1, we've provided extensions with a way to denote "configuration" tables: tables created by an extension which the user may modify. By marking these as "configuration" tables, the extension is asking for the data in these tables to be pg_dump'd (tables which are not marked in this way are assumed to be entirely handled during CREATE EXTENSION and are not included at all in a pg_dump). Unfortunately, pg_dump neglected to consider foreign key relationships between extension configuration tables and therefore could end up trying to reload the data in an order which would cause FK violations. This patch teaches pg_dump about these dependencies, so that the data is dumped out in the best order possible. Note that there's no way to handle circular dependencies, but those have yet to be seen in the wild. The release notes for this should include a caution to users that existing pg_dump-based backups may be invalid due to this issue. The data is all there, but restoring from it will require extracting the data for the configuration tables and then loading them in the correct order by hand. Discussed initially back in bug #6738, more recently brought up by Gilles Darold, who provided an initial patch which was further reworked by Michael Paquier. Further modifications and documentation updates by me. Back-patch to 9.1 where we added the concept of extension configuration tables.
2015-03-01  Unlink static libraries before rebuilding them.  (Noah Misch)
When the library already exists in the build directory, "ar" preserves members not named on its command line. This mattered when, for example, a "configure" rerun dropped a file from $(LIBOBJS). libpgport carried the obsolete member until "make clean". Back-patch to 9.0 (all supported versions).
2015-02-28  Fix planning of star-schema-style queries.  (Tom Lane)
Part of the intent of the parameterized-path mechanism was to handle star-schema queries efficiently, but some overly-restrictive search limiting logic added in commit e2fa76d80ba571d4de8992de6386536867250474 prevented such cases from working as desired. Fix that and add a regression test about it. Per gripe from Marc Cousin. This is arguably a bug rather than a new feature, so back-patch to 9.2 where parameterized paths were introduced.
2015-02-26  Reconsider when to wait for WAL flushes/syncrep during commit.  (Andres Freund)
Up to now RecordTransactionCommit() waited for WAL to be flushed (if synchronous_commit != off) and to be synchronously replicated (if enabled), even if a transaction did not have an xid assigned. The primary reason for that is that a sequence's nextval() did not assign an xid, but its WAL writes are worthwhile to wait for on commit. This can be problematic because sometimes read only transactions do write WAL, e.g. HOT page prune records. That could then lead to read only transactions having to wait during commit, which is not something people expect in a read only transaction. This led to such strange symptoms as backends being seemingly stuck during connection establishment when all synchronous replicas are down. Especially annoying when said stuck connection is the standby trying to reconnect to allow syncrep again... This behavior is also involved in a rather complicated <= 9.4 bug where the transaction started by catchup interrupt processing waited for syncrep using latches, but didn't get the wakeup because it was already running inside the same overloaded signal handler. The fix here doesn't properly solve that issue, merely papers over the problems. In 9.5 catchup interrupts aren't processed out of signal handlers anymore. To fix all this, make nextval() acquire a top level xid, and only wait for transaction commit if a transaction both acquired an xid and emitted WAL records. If only an xid has been assigned we don't uselessly want to wait just because of writes to temporary/unlogged tables; if only WAL has been written we don't want to wait just because of HOT prunes. The xid assignment in nextval() is unlikely to cause overhead in real-world workloads. For one it only happens every SEQ_LOG_VALS/32 values anyway; for another only usage of nextval() without using the result in an insert or similar is affected. Discussion: 20150223165359.GF30784@awork2.anarazel.de, 369698E947874884A77849D8FE3680C2@maumau, 5CF4ABBA67674088B3941894E22A0D25@maumau Per complaint from maumau and Thom Brown Backpatch all the way back; 9.0 doesn't have syncrep, but it seems better to have consistent behavior across all maintained branches.
2015-02-25  Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.  (Noah Misch)
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed memory. Back-patch to 9.0 (all supported versions). Little code ran between the premature free and the read, so non-assert builds are unlikely to witness user-visible consequences.
2015-02-25  Fix dumping of views that are just VALUES(...) but have column aliases.  (Tom Lane)
The "simple" path for printing VALUES clauses doesn't work if we need to attach nondefault column aliases, because there's noplace to do that in the minimal VALUES() syntax. So modify get_simple_values_rte() to detect nondefault aliases and treat that as a non-simple case. This further exposes that the "non-simple" path never actually worked; it didn't produce valid syntax. Fix that too. Per bug #12789 from Curtis McEnroe, and analysis by Andrew Gierth. Back-patch to all supported branches. Before 9.3, this also requires back-patching the part of commit 092d7ded29f36b0539046b23b81b9f0bf2d637f1 that created get_simple_values_rte() to begin with; inserting the extra test into the old factorization of that logic would've been too messy.
2015-02-23  Guard against spurious signals in LockBufferForCleanup.  (Andres Freund)
When LockBufferForCleanup() has to wait to get a cleanup lock on a buffer, it does so by setting a flag in the buffer header and then waiting for other backends to signal it using ProcWaitForSignal(). Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can return for other reasons than the signal it is hoping for. If such a spurious signal arrives, the wait flags on the buffer header will still be set. That then triggers "ERROR: multiple backends attempting to wait for pincount 1". The fix is simple: unset the flag if it is still set when retrying. That implies an additional spinlock acquisition/release, but that's unlikely to matter given the cost of waiting for a cleanup lock. Alternatively it'd have been possible to move responsibility for maintaining the relevant flag to the waiter altogether, but that might have had negative consequences due to possible floods of signals, besides being more invasive. This looks to be a very longstanding bug. The relevant code in LockBufferForCleanup() hasn't changed materially since its introduction and ProcWaitForSignal() has been documented to return for unrelated reasons since 8.2. The master-only patch series removing ImmediateInterruptOK made it much easier to hit though, as ProcSendSignal/ProcWaitForSignal now uses a latch shared with other tasks. Per discussion with Kevin Grittner, Tom Lane and me. Backpatch to all supported branches. Discussion: 11553.1423805224@sss.pgh.pa.us
2015-02-23  Fix potential deadlock with libpq non-blocking mode.  (Heikki Linnakangas)
If libpq output buffer is full, pqSendSome() function tries to drain any incoming data. This avoids deadlock, if the server e.g. sends a lot of NOTICE messages, and blocks until we read them. However, pqSendSome() only did that in blocking mode. In non-blocking mode, the deadlock could still happen. To fix, take a two-pronged approach: 1. Change the documentation to instruct that when PQflush() returns 1, you should wait for both read- and write-ready, and call PQconsumeInput() if it becomes read-ready. That fixes the deadlock, but applications are not going to change overnight. 2. In pqSendSome(), drain the input buffer before returning 1. This alleviates the problem for applications that only wait for write-ready. In particular, a slow but steady stream of NOTICE messages during COPY FROM STDIN will no longer cause a deadlock. The risk remains that the server attempts to send a large burst of data and fills its output buffer, and at the same time the client also sends enough data to fill its output buffer. The application will deadlock if it goes to sleep, waiting for the socket to become write-ready, before the server's data arrives. In practice, NOTICE messages and such that the server might be sending are usually short, so it's highly unlikely that the server would fill its output buffer so quickly. Backpatch to all supported versions.
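The documented pattern for point 1 can be sketched like this (a minimal example using select(); production code would also handle EINTR and use whatever event loop the application already has):

    #include <sys/select.h>
    #include <libpq-fe.h>

    static int
    flush_nonblocking(PGconn *conn)
    {
        int     flush_rc;

        while ((flush_rc = PQflush(conn)) == 1)
        {
            fd_set  rmask, wmask;
            int     sock = PQsocket(conn);

            FD_ZERO(&rmask);
            FD_ZERO(&wmask);
            FD_SET(sock, &rmask);
            FD_SET(sock, &wmask);

            /* wait until the socket is readable or writable */
            if (select(sock + 1, &rmask, &wmask, NULL, NULL) < 0)
                return -1;

            /* absorb any incoming data so the server can't stall us */
            if (FD_ISSET(sock, &rmask) && !PQconsumeInput(conn))
                return -1;              /* see PQerrorMessage(conn) */
            /* if only write-ready, the next PQflush() makes progress */
        }
        return flush_rc;                /* 0 when done, -1 on failure */
    }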
2015-02-21  Fix misparsing of empty value in conninfo_uri_parse_params().  (Tom Lane)
After finding an "=" character, the pointer was advanced twice when it should only advance once. This is harmless as long as the value after "=" has at least one character; but if it doesn't, we'd miss the terminator character and include too much in the value. In principle this could lead to reading off the end of memory. It does not seem worth treating as a security issue though, because it would happen on client side, and besides client logic that's taking conninfo strings from untrusted sources has much worse security problems than this. Report and patch received off-list from Thomas Fanghaenel. Back-patch to 9.2 where the faulty code was introduced.
2015-02-18  Fix failure to honor -Z compression level option in pg_dump -Fd.  (Tom Lane)
cfopen() and cfopen_write() failed to pass the compression level through to zlib, so that you always got the default compression level if you got any at all. In passing, also fix these and related functions so that the correct errno is reliably returned on failure; the original coding supposes that free() cannot change errno, which is untrue on at least some platforms. Per bug #12779 from Christoph Berg. Back-patch to 9.1 where the faulty code was introduced. Michael Paquier
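An illustrative sketch of both points, with hypothetical names (zlib is assumed; this is not the pg_dump code itself): pass the level in the gzopen() mode string, and save errno before any cleanup that might clobber it.

    #include <errno.h>
    #include <stdio.h>
    #include <zlib.h>

    gzFile
    open_compressed_for_write(const char *path, int level)
    {
        char    mode[8];
        gzFile  fp;

        /* "wb1" .. "wb9" rather than plain "wb" with the default level */
        snprintf(mode, sizeof(mode), "wb%d", level);

        fp = gzopen(path, mode);
        if (fp == NULL)
        {
            int     save_errno = errno; /* remember the original cause */

            /* ... cleanup such as free() could run here ... */
            errno = save_errno;         /* so the caller reports the right error */
        }
        return fp;
    }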
2015-02-17  Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses.  (Tom Lane)
In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only looked back as far as commit f3aec2c7f51904e7 where the breakage occurred (which is why I thought the IPv4-in-IPv6 business was undocumented). But actually the logic dates back to commit 3c9bb8886df7d56a and was simply broken by erroneous refactoring in the later commit. A bit of archives excavation shows that we added the whole business in response to a report that some 2003-era Linux kernels would report IPv4 connections as having IPv4-in-IPv6 addresses. The fact that we've had no complaints since 9.0 seems to be sufficient confirmation that no modern kernels do that, so let's just rip it all out rather than trying to fix it. Do this in the back branches too, thus essentially deciding that our effective behavior since 9.0 is correct. If there are any platforms on which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix would have made for a subtle and potentially security-sensitive change in the effective meaning of IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases. So let's let the post-9.0 behavior stand, and change the documentation to match it. In passing, I failed to resist the temptation to wordsmith the description of pg_hba.conf IPv4 and IPv6 address entries a bit. A lot of this text hasn't been touched since we were IPv4-only.
2015-02-17  Improve pg_check_dir's handling of closedir() failures.  (Robert Haas)
Avoid losing errno if readdir() fails and closedir() works. This also avoids leaking the directory handle when readdir() fails. Commit 6f03927fce038096f53ca67eeab9adb24938f8a6 introduced logic to better handle readdir() and closedir() failures, but it missed these cases. Extracted from a larger patch by Marco Nenciarini.
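The errno-preserving pattern looks roughly like this (a POSIX sketch, not the pg_check_dir code itself):

    #include <dirent.h>
    #include <errno.h>
    #include <stdbool.h>

    static bool
    scan_dir_sketch(const char *path)
    {
        DIR            *dir = opendir(path);
        struct dirent  *de;
        int             save_errno;

        if (dir == NULL)
            return false;

        errno = 0;                      /* readdir() reports errors via errno */
        while ((de = readdir(dir)) != NULL)
        {
            /* ... examine de->d_name ... */
            errno = 0;
        }
        save_errno = errno;             /* nonzero only if readdir() failed */

        closedir(dir);                  /* may clobber errno; handle not leaked */
        errno = save_errno;
        return save_errno == 0;
    }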
2015-02-17  Fix wrong merge resolution making pg_receivexlog fail in 9.2.  (Andres Freund)
I bungled resolving a conflict while backpatching 2c0a48589 to 9.2, by passing mark_done = true to ReceiveXlogStream in pg_receivexlog.c (all the other branches are ok). Since pg_receivexlog doesn't use an archive directory, that causes 'could not create archive status file "...": No such file or directory' errors. Until 9.2.11 is released this can be worked around by creating 'archive_status' in pg_receivexlog's target directory. Found by Sergey Konoplev.
2015-02-16  Fix misuse of memcpy() in check_ip().  (Tom Lane)
The previous coding copied garbage into a local variable, pretty much ensuring that the intended test of an IPv6 connection address against a promoted IPv4 address from pg_hba.conf would never match. The lack of field complaints likely indicates that nobody realized this was supposed to work, which is unsurprising considering that no user-facing docs suggest it should work. In principle this could have led to a SIGSEGV due to reading off the end of memory, but since the source address would have pointed to somewhere in the function's stack frame, that's quite unlikely. What led to discovery of the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade, which is probably because he is now running a system in which memcpy raises abort() upon detecting overlapping source and destination areas. (You'd have to additionally suppose some things about the stack frame layout to arrive at this conclusion, but it seems plausible.) This has been broken since the code was added, in commit f3aec2c7f51904e7, so back-patch to all supported branches.
2015-02-15  Fix null-pointer-deref crash while doing COPY IN with check constraints.  (Tom Lane)
In commit bf7ca15875988a88e97302e012d7c4808bef3ea9 I introduced an assumption that an RTE referenced by a whole-row Var must have a valid eref field. This is false for RTEs constructed by DoCopy, and there are other places taking similar shortcuts. Perhaps we should make all those places go through addRangeTableEntryForRelation or its siblings instead of having ad-hoc logic, but the most reliable fix seems to be to make the new code in ExecEvalWholeRowVar cope if there's no eref. We can reasonably assume that there's no need to insert column aliases if no aliases were provided. Add a regression test case covering this, and also verifying that a sane column name is in fact available in this situation. Although the known case only crashes in 9.4 and HEAD, it seems prudent to back-patch the code change to 9.2, since all the ingredients for a similar failure exist in the variant patch applied to 9.3 and 9.2. Per report from Jean-Pierre Pelletier.