| Age | Commit message | Author |
|---|---|---|
|  | This fixes a pg_upgrade bug that could lead to query errors when
clog files are improperly removed.
Backpatch to 8.4, 9.0, 9.1. | 
|  | This patch fixes an oversight in my commit
7f3eba30c9d622d1981b1368f2d79ba0999cdff2 of 2008-10-23.  That patch
accounted for baserel restriction clauses that reduced the number of rows
coming out of a table (and hence the number of possibly-distinct values of
a join variable), but not for join restriction clauses that might have been
applied at a lower level of join.  To account for the latter, look up the
sizes of the min_lefthand and min_righthand inputs of the current join,
and clamp with those in the same way as for the base relations.
Noted while investigating a complaint from Ben Chobot, although this in
itself doesn't seem to explain his report.
Back-patch to 8.4; previous versions used different estimation methods
for which this heuristic isn't relevant. | 
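A minimal sketch of the clamping principle, using a hypothetical helper name (the real planner code applies this with the min_lefthand/min_righthand sizes as well as the base-relation sizes): a join variable cannot take more distinct values than there are rows coming out of the input that supplies it.

```c
#include <math.h>

/* Hypothetical helper: clamp a distinct-values estimate by the row
 * count of the input relation that supplies the join variable. */
static double
clamp_ndistinct(double ndistinct, double input_rows)
{
    return fmin(ndistinct, input_rows);
}
```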
|  | It is possible for VACUUM to scan no pages at all, if the visibility map
shows that all pages are all-visible.  In this situation VACUUM has no new
information to report about the relation's tuple density, so it wasn't
changing pg_class.reltuples ... but it updated pg_class.relpages anyway.
That's wrong in general, since there is no evidence to justify changing the
density ratio reltuples/relpages, but it's particularly bad if the previous
state was relpages=reltuples=0, which means "unknown tuple density".
We just replaced "unknown" with "zero".  ANALYZE would eventually recover
from this, but it could take a lot of repetitions of ANALYZE to do so if
the relation size is much larger than the maximum number of pages ANALYZE
will scan, because of the moving-average behavior introduced by commit
b4b6923e03f4d29636a94f6f4cc2f5cf6298b8c8.
The only known situation where we could have relpages=reltuples=0 and yet
the visibility map asserts everything's visible is immediately following
a pg_upgrade.  It might be advisable for pg_upgrade to try to preserve the
relpages/reltuples statistics; but in any case this code is wrong on its
own terms, so fix it.  Per report from Sergey Koposov.
Back-patch to 8.4, where the visibility map was introduced, same as the
previous change. | 
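A sketch of the corrected rule, with hypothetical names rather than the actual vacuum code: when no pages were scanned there is no new density evidence, so neither relpages nor reltuples should change.

```c
#include <stdint.h>

static void
report_scan_results(int64_t scanned_pages, int64_t total_pages,
                    double live_tuples_seen,
                    double *relpages, double *reltuples)
{
    if (scanned_pages == 0)
        return;                 /* no new information; keep old stats */

    /* extrapolate the observed tuple density to the whole relation */
    *reltuples = (live_tuples_seen / (double) scanned_pages)
                 * (double) total_pages;
    *relpages = (double) total_pages;
}
```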
|  | On closer inspection, whining in restore_toc_entries_parallel is really
much too late for any user-facing error case.  The right place to do it
is at the start of RestoreArchive(), before we've done anything interesting
(such as trying to DROP all the targets ...)
Back-patch to 8.4, where parallel restore was introduced. | 
|  | If we are unable to do a parallel restore because the input file is stdin
or is otherwise unseekable, we should complain and fail immediately, not
after having done some of the restore.  Complaining once per thread isn't
so cool either, and the messages should be worded to make it clear this is
an unsupported case not some weird race-condition bug.  Per complaint from
Lonni Friedman.
Back-patch to 8.4, where parallel restore was introduced. | 
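The fail-fast idea can be sketched like this (a hypothetical wrapper, not pg_restore's actual code): test seekability up front and refuse before any work is done.

```c
#include <stdio.h>
#include <stdlib.h>

static void
check_seekable_or_die(FILE *fp)
{
    /* fseek on a pipe or terminal fails, so probe with a no-op seek */
    if (fseek(fp, 0L, SEEK_CUR) != 0)
    {
        fprintf(stderr, "parallel restore is not supported with a "
                        "non-seekable input file\n");
        exit(1);
    }
}
```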
|  | These days, such a response is far more likely to signify a server-side
problem, such as fork failure.  Reporting "server does not support SSL"
(in sslmode=require) could be quite misleading.  But the results could
be even worse in sslmode=prefer: if the problem was transient and the
next connection attempt succeeds, we'll have silently fallen back to
protocol version 2.0, possibly disabling features the user needs.
Hence, it seems best to just eliminate the assumption that backing off
to non-SSL/2.0 protocol is the way to recover from an "E" response, and
instead treat the server error the same as we would in non-SSL cases.
I tested this change against a pre-7.0 server, and found that there
was a second logic bug in the "prefer" path: the test to decide whether
to make a fallback connection attempt assumed that we must have opened
conn->ssl, which in fact does not happen given an "E" response.  After
fixing that, the code does indeed connect successfully to pre-7.0,
as long as you didn't set sslmode=require.  (If you did, you get
"Unsupported frontend protocol", which isn't completely off base
given the server certainly doesn't support SSL.)
Since there seems no reason to believe that pre-7.0 servers exist anymore
in the wild, back-patch to all supported branches. | 
|  | There are assorted situations wherein PQconnectPoll() will abandon a
connection attempt and try again with different parameters (eg, SSL versus
not SSL).  However, the code forgot to discard any pending data in libpq's
I/O buffers when doing this.  In at least one case (server returns E
message during SSL negotiation), there is unread input data which bollixes
the next connection attempt.  I have not checked to see whether this is
possible in the other cases where we close the socket and retry, but it
seems like a matter of good defensive programming to add explicit
buffer-flushing code to all of them.
This is one of several issues exposed by Daniel Farina's report of
misbehavior after a server-side fork failure.
This has been wrong since forever, so back-patch to all supported branches. | 
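A sketch of the defensive reset, using a hypothetical buffer struct (libpq keeps equivalent cursors inside its PGconn): before retrying with different parameters, drop whatever is still sitting in both I/O buffers so stale bytes cannot leak into the next attempt.

```c
typedef struct IOBuffer
{
    char   *data;
    int     start;              /* first unconsumed byte */
    int     end;                /* first free byte */
} IOBuffer;

static void
reset_buffers_for_retry(IOBuffer *in, IOBuffer *out)
{
    in->start = in->end = 0;    /* discard unread input */
    out->start = out->end = 0;  /* discard unsent output */
}
```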
|  | tsvector_concat() allocated its result workspace using the "conservative"
estimate of the sum of the two input tsvectors' sizes.  Unfortunately that
wasn't so conservative as all that, because it supposed that the number of
pad bytes required could not grow.  Which it can, as per test case from
Jesper Krogh, if there's a mix of lexemes with positions and lexemes
without them in the input data.  The fix is to assume that we might add
a not-previously-present pad byte for each and every lexeme in the two
inputs; which really is conservative, but it doesn't seem worthwhile to
try to be more precise.
This is an aboriginal bug in tsvector_concat, so back-patch to all
versions containing it. | 
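The truly conservative estimate can be sketched as follows (hypothetical names): besides the sum of the inputs' sizes, budget one extra pad byte for every lexeme in either input.

```c
#include <stddef.h>

static size_t
concat_worst_case_size(size_t size1, size_t size2,
                       int nlexemes1, int nlexemes2)
{
    /* each lexeme might gain a pad byte it didn't previously have */
    return size1 + size2 + (size_t) (nlexemes1 + nlexemes2);
}
```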
|  | Per previous experimentation, backtracking slows down lexing performance
significantly (by about a third).  It's usually pretty easy to avoid: you
just need rules that accept an incomplete construct and do whatever the
lexer would have done otherwise.
The backtracking was introduced by the patch that added quoted variable
substitution.  Back-patch to 9.0 where that was added. | 
|  | Shigeru Hanada | 
|  | Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER
ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger
fired for the same update.  Fix by not trying to overload the use of
estate->es_trig_tuple_slot.  Per report from Yoran Heling.
Back-patch to 9.0, when trigger WHEN conditions were introduced. | 
|  | As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can
make building a large tidbitmap into an O(N^2) operation.  To fix, make
sure we remove more than the minimum amount of information per call, and
add a fallback path to behave sanely if we're unable to fit the bitmap
within the requested amount of memory.
This has been wrong since the tidbitmap code was written, so back-patch
to all supported branches. | 
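A sketch of the shape of the fix, with hypothetical types and helpers rather than the actual tidbitmap code: free well below the cap on each pass so the next insertion doesn't immediately trigger another full pass, and raise the cap as a last resort instead of looping.

```c
#include <stdbool.h>

typedef struct BitmapSketch
{
    long    nentries;
    long    maxentries;
    /* hash-table storage elided */
} BitmapSketch;

/* stand-in for converting one exact page entry into a lossy chunk;
 * returns false once nothing further can be lossified */
static bool
lossify_one_page(BitmapSketch *tbm)
{
    if (tbm->nentries <= 1)
        return false;
    tbm->nentries--;
    return true;
}

static void
lossify(BitmapSketch *tbm)
{
    long    target = tbm->maxentries * 9 / 10;  /* aim well under the cap */

    while (tbm->nentries > target && lossify_one_page(tbm))
        ;

    /* fallback: accept a higher ceiling rather than thrashing */
    if (tbm->nentries > tbm->maxentries)
        tbm->maxentries = tbm->nentries;
}
```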
|  | Module initialization functions in Python 3 must have external
linkage, because PyMODINIT_FUNC does dllexport on Windows-like
platforms.  Without this change, the build with Python 3 fails on
Windows. | 
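A minimal example of the required shape ("plpy" used as the module name for illustration): the initializer must not be declared static, because PyMODINIT_FUNC includes the dllexport attribute on Windows-like platforms.

```c
#include <Python.h>

static struct PyModuleDef plpy_module = {
    PyModuleDef_HEAD_INIT, "plpy", NULL, -1, NULL
};

/* external linkage is required: PyMODINIT_FUNC carries dllexport
 * on Windows-like platforms */
PyMODINIT_FUNC
PyInit_plpy(void)
{
    return PyModule_Create(&plpy_module);
}
```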
|  | The TID isn't stable enough: we might queue an sinval event before a VACUUM
FULL, and then process it afterwards, when the target tuple no longer has
the same TID.  So we must invalidate entries on the basis of hash value
only.  The old coding can be shown to result in various bizarre,
hard-to-reproduce errors in the presence of concurrent VACUUM FULLs on
system catalogs, and could easily result in permanent catalog corruption,
up to and including complete loss of tables.
This commit is just a minimal fix that removes the unsafe comparison.
We should remove transmission of the tuple TID from sinval messages
altogether, and then arrange to suppress the extra message in the common
case of a heap_update that doesn't change the key hashvalue.  But that's
going to be much more invasive, and will only produce a probably-marginal
performance gain, so it doesn't seem like material for a back-patch.
Back-patch to 9.0.  Before that, VACUUM FULL refused to do any tuple moving
if it found any INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples (and
CLUSTER would give up altogether), so there was no risk of moving a tuple
that might be the subject of an unsent sinval message. | 
|  | We have to be sure that we have revalidated each nailed-in-cache relcache
entry before we try to use it to load data for some other relcache entry.
The introduction of "mapped relations" in 9.0 broke this, because although
we updated the state kept in relmapper.c early enough, we failed to
propagate that information into relcache entries soon enough; in
particular, we could try to fetch pg_class rows out of pg_class before
we'd updated its relcache entry's rd_node.relNode value from the map.
This bug accounts for Dave Gould's report of failures after "vacuum full
pg_class", and I believe that there is risk for other system catalogs
as well.
The core part of the fix is to copy relmapper data into the relcache
entries during "phase 1" in RelationCacheInvalidate(), before they'll be
used in "phase 2".  To try to future-proof the code against other similar
bugs, I also rearranged the order in which nailed relations are visited
during phase 2: now it's pg_class first, then pg_class_oid_index, then
other nailed relations.  This should ensure that RelationClearRelation can
apply RelationReloadIndexInfo to all nailed indexes without risking use
of not-yet-revalidated relcache entries.
Back-patch to 9.0 where the relation mapper was introduced. | 
|  | This works around the problem that a catalog cache entry might contain a
toast pointer that we try to dereference just as a VACUUM FULL completes
on that catalog.  We will see the sinval message on the cache entry when
we acquire lock on the toast table, but by that point we've already told
tuptoaster.c "here's the pointer to fetch", so it's difficult from a code
structural standpoint to update the pointer before we use it.  Much less
painful to ensure that toast pointers are not invalidated in the first
place.  We have to add a bit of code to deal with the case that a value
that previously wasn't toasted becomes so; but that should be a
seldom-exercised corner case, so the inefficiency shouldn't be significant.
Back-patch to 9.0.  In prior versions, we didn't allow CLUSTER on system
catalogs, and VACUUM FULL didn't result in reassignment of toast OIDs, so
there was no problem. | 
|  | The previous code tried to synchronize by unlinking the init file twice,
but that doesn't actually work: it leaves a window wherein a third process
could read the already-stale init file but miss the SI messages that would
tell it the data is stale.  The result would be bizarre failures in catalog
accesses, typically "could not read block 0 in file ..." later during
startup.
Instead, hold RelCacheInitLock across both the unlink and the sending of
the SI messages.  This is more straightforward, and might even be a bit
faster since only one unlink call is needed.
This has been wrong since it was put in (in 2002!), so back-patch to all
supported releases. | 
|  | When updating or deleting a system catalog tuple, it's necessary to acquire
RowExclusiveLock on the catalog before looking up the tuple; otherwise a
concurrent VACUUM FULL on the catalog might move the tuple to a different
TID before we can apply the update.  Coding patterns that find the tuple
via a table scan aren't at risk here, but when obtaining the tuple from a
catalog cache, correct ordering is important; and several routines in
foreigncmds.c got it wrong.  Noted while running the regression tests in
parallel with VACUUM FULL of assorted system catalogs.
For consistency I moved all the heap_open calls to the starts of their
functions, including a couple for which there was no actual bug.
Back-patch to 8.4 where foreigncmds.c was added. | 
|  | The statement start timestamp was not set before initiating the transaction
that is used to look up client authentication information in pg_authid.
In consequence, enable_sig_alarm computed a wrong value (far in the past)
for statement_fin_time.  That didn't have any immediate effect, because the
timeout alarm was set without reference to statement_fin_time; but if we
subsequently blocked on a lock for a short time, CheckStatementTimeout
would consult the bogus value when we cancelled the lock timeout wait,
and then conclude we'd timed out, leading to immediate failure of the
connection attempt.  Thus an innocent "vacuum full pg_authid" would cause
failures of concurrent connection attempts.  Noted while testing other,
more serious consequences of vacuum full on system catalogs.
We should set the statement timestamp before StartTransactionCommand(),
so that the transaction start timestamp is also valid.  I'm not sure if
there are any non-cosmetic effects of it not being valid, but the xact
timestamp is at least sent to the statistics machinery.
Back-patch to 9.0.  Before that, the client authentication timeout was done
outside any transaction and did not depend on this state to be valid. | 
|  | A PlaceHolderVar's expression might contain another, lower-level
PlaceHolderVar.  If the outer PlaceHolderVar is used, the inner one
certainly will be also, and so we have to make sure that both of them get
into the placeholder_list with correct ph_may_need values during the
initial pre-scan of the query (before deconstruct_jointree starts).
We did this correctly for PlaceHolderVars appearing in the query quals,
but overlooked the issue for those appearing in the top-level targetlist;
with the result that nested placeholders referenced only in the targetlist
did not work correctly, as illustrated in bug #6154.
While at it, add some error checking to find_placeholder_info to ensure
that we don't try to create new placeholders after it's too late to do so;
they have to all be created before deconstruct_jointree starts.
Back-patch to 8.4 where the PlaceHolderVar mechanism was introduced. | 
|  | This kluge was inserted in a spot apparently chosen at random: the lock
manager's state is not yet fully set up for the wait, and in particular
LockWaitCancel hasn't been armed by setting lockAwaited, so the ProcLock
will not get cleaned up if the ereport is thrown.  This seems to not cause
any observable problem in trivial test cases, because LockReleaseAll will
silently clean up the debris; but I was able to cause failures with tests
involving subtransactions.
Fixes breakage induced by commit c85c941470efc44494fd7a5f426ee85fc65c268c.
Back-patch to all affected branches. | 
|  | It was initialized in the wrong place and to the wrong value.  With bad
luck this could result in incorrect query-cancellation failures in hot
standby sessions, should a HS backend be holding pin on buffer number 1
while trying to acquire a lock. | 
|  | This fixes bug #6139 reported by Hitoshi Harada. | 
|  | pg_backup_db.c contained a mini SQL lexer with which it tried to identify
boundaries between SQL commands, but that code was not designed to cope
with standard_conforming_strings, and would get the wrong answer if a
backslash immediately precedes a closing single quote in such a string,
as per report from Julian Mehnle.  The bug only affects direct-to-database
restores from archive files made with standard_conforming_strings = on.
Rather than complicating the code some more to try to fix that, let's just
rip it all out.  The only reason it was needed was to cope with COPY data
embedded into ordinary archive entries, which was a layout that was used
only for about the first three weeks of the archive format's existence,
and never in any production release of pg_dump.  Instead, just rely on the
archive file layout to tell us whether we're printing COPY data or not.
This bug represents a data corruption hazard in all releases in which
standard_conforming_strings can be turned on, ie 8.2 and later, so
back-patch to all supported branches. | 
|  |  | 
|  | On balance, the need to cover this case changes my mind in favor of pushing
all error-message generation duties into the two fe-secure.c routines.
So do it that way. | 
|  | In many cases, pqsecure_read/pqsecure_write set up useful error messages,
which were then overwritten with useless ones by their callers.  Fix this
by defining the responsibility to set an error message to be entirely that
of the lower-level function when using SSL.
Back-patch to 8.3; the code is too different in 8.2 to be worth the
trouble. | 
|  | This disables an entirely unnecessary "sanity check" that causes failures
in nonblocking mode, because OpenSSL complains if we move or compact the
write buffer.  The only actual requirement is that we not modify pending
data once we've attempted to send it, which we don't.  Per testing and
research by Martin Pihlak, though this fix is a lot simpler than his patch.
I put the same change into the backend, although it's less clear whether
it's necessary there.  We do use nonblock mode in some situations in
streaming replication, so seems best to keep the same behavior in the
backend as in libpq.
Back-patch to all supported releases. | 
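The OpenSSL mode flag governing this check is SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER; a minimal sketch of setting it follows (how libpq and the backend wire it into their context setup is elided here).

```c
#include <openssl/ssl.h>

void
allow_moving_write_buffer(SSL_CTX *ctx)
{
    /* don't fail SSL_write retries just because the caller's buffer
     * moved or was compacted in the meantime */
    SSL_CTX_set_mode(ctx, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}
```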
|  | PQsetvalue unnecessarily duplicated the logic in pqAddTuple, and didn't
duplicate it exactly either --- pqAddTuple does not care what is in the
tuple-pointer array positions beyond the last valid entry, whereas the
code in PQsetvalue assumed such positions would contain NULL.  This led
to possible crashes if PQsetvalue was applied to a PGresult that had
previously been enlarged with pqAddTuple, for instance one built from a
server query.  Fix by relying on pqAddTuple instead of duplicating logic,
and not assuming anything about the contents of res->tuples[res->ntups].
Back-patch to 8.4, where PQsetvalue was introduced.
Andrew Chernow | 
|  |  | 
|  | Patch originally by Akira Kurosawa <kurosawa-akira@mxc.nes.nec.co.jp>. | 
|  | This fixes SSPI login failures showing "The function
requested is not supported", often showing up when connecting
to localhost. The reason was not properly updating the SSPI
handle when multiple roundtrips were required to complete the
authentication sequence.
Report and analysis by Ahmed Shinwari, patch by Magnus Hagander | 
|  | First, when following a right-link, we incorrectly marked the current page
as the parent of the right sibling. In reality, the parent of the right page
is the same as the parent of the current page (or some page to the right of
it, gistFindCorrectParent() will sort that out).
Secondly, when we follow a right-link, we must prepend, not append, the right
page to our list of pages to visit. That's because we assume that once we
hit a leaf page in the list, all the rest are leaf pages too, and give up.
To hit these bugs, you need concurrent actions and several unlucky accidents.
Another backend must split the root page, while you're in process of
splitting a lower-level page. Furthermore, while you scan the internal nodes
to re-find the parent, another backend needs to again split some more internal
pages. Even then, the bugs don't necessarily manifest as user-visible errors
or index corruption.
While we're at it, make the error reporting a bit better if gistFindPath()
fails to re-find the parent. It used to be an assertion, but an elog() seems
more appropriate.
Backpatch to all supported branches. | 
|  | There's a heuristic in estimate_rel_size() to clamp the minimum size
estimate for a table to 10 pages, unless we can see that vacuum or analyze
has been run (and set relpages to something nonzero, so this will always
happen for a table that's actually empty).  However, it would be better
not to do this for inheritance parent tables, which very commonly are
really empty and can be expected to stay that way.  Per discussion of a
recent pgsql-performance report from Anish Kejariwal.  Also prevent it
from happening for indexes (although this is more in the nature of
documentation, since CREATE INDEX normally initializes relpages to
something nonzero anyway).
Back-patch to 9.0, because the ability to collect statistics across a
whole inheritance tree has improved the planner's estimates to the point
where this relatively small error makes a significant difference.  In the
referenced report, merge or hash joins were incorrectly estimated as
cheaper than a nestloop with inner indexscan on the inherited table.
That was less likely before 9.0 because the lack of inherited stats would
have resulted in a default (and rather pessimistic) estimate of the cost
of a merge or hash join. | 
|  | We were using GetConfigOption to collect the old value of each setting,
overlooking the possibility that it didn't exist yet.  This does happen
in the case of adding a new entry within a custom variable class, as
exhibited in bug #6097 from Maxim Boguk.
To fix, add a missing_ok parameter to GetConfigOption, but only in 9.1
and HEAD --- it seems possible that some third-party code is using that
function, so changing its API in a minor release would cause problems.
In 9.0, create a near-duplicate function instead. | 
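The calling convention can be sketched like this (a simplified lookup table, not the actual guc.c code): with missing_ok the caller gets NULL for an unknown name instead of an error.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct opt { const char *name; const char *value; };
static struct opt opts[] = {{"search_path", "public"}, {NULL, NULL}};

static const char *
get_config_option(const char *name, bool missing_ok)
{
    for (struct opt *o = opts; o->name != NULL; o++)
        if (strcmp(o->name, name) == 0)
            return o->value;
    if (missing_ok)
        return NULL;    /* caller copes with the absence */
    abort();            /* historical behavior: unknown name is an error */
}
```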
|  | handleCopyIn incremented pset.lineno for each line of COPY data read from
a file.  This is correct when reading from the current script file (i.e.,
we are doing COPY FROM STDIN followed by in-line data), but it's wrong if
the data is coming from some other file.  Per bug #6083 from Steve Haslam.
Back-patch to all supported versions. | 
|  |  | 
|  | It is possible for a NestLoop plan node to pass an OUTER Var into an
"inner indexscan" that is an Append construct (derived from an inheritance
tree or UNION ALL subquery).  The OUTER tuple is then passed down at
runtime to the leaf indexscan node(s) where it will actually be used.
EXPLAIN has to likewise pass the information about the nestloop's outer
subplan down through the Append node, else it will fail to print the
outer-reference Vars (with complaints like "bogus varno: 65001").
However, there was a case missed in all this: we could also have gating
Result nodes that were inserted into the appendrel plan tree to deal with
pseudoconstant qual conditions.  So EXPLAIN has to pass down the outer plan
node to a Result's subplan, too.  Per example from Jon Nelson.
The problem is gone in 9.1 because we replaced the nestloop outer-tuple
kluge with a Param-based data transfer mechanism.  Also, so far as I can
tell, the case can't happen before 8.4 because of restrictions on what
sorts of appendrel members could be pulled up into the parent query.
So this patch is only needed for 8.4 and 9.0. | 
|  | Such a condition is unsatisfiable in combination with any other type of
btree-indexable condition (since we assume btree operators are always
strict).  8.3 and 8.4 had an explicit test for this, which I removed in
commit 29c4ad98293e3c5cb3fcdd413a3f4904efff8762, mistakenly thinking that
the case would be subsumed by the more general handling of IS (NOT) NULL
added in that patch.  Put it back, and improve the comments about it, and
add a regression test case.
Per bug #6079 from Renat Nasyrov, and analysis by Dean Rasheed. | 
|  | Per bug #6082, reported by Steve Haslam | 
|  | WAL records of type XLOG_BTREE_REUSE_PAGE were generated using a
latestRemovedXid one higher than actually needed, because the xid used was
the page's opaque->btpo.xact rather than an actually removed xid.
Noticed on an otherwise quiet system by Noah Misch.
Noah Misch and Simon Riggs | 
|  | When recursing after an optimization in pull_up_sublinks_qual_recurse, the
available_rels value passed down must include only the relations that are
in the righthand side of the new SEMI or ANTI join; it's incorrect to pull
up a sub-select that refers to other relations, as seen in the added test
case.  Per report from BangarRaju Vadapalli.
While at it, rethink the idea of recursing below a NOT EXISTS.  That is
essentially the same situation as pulling up ANY/EXISTS sub-selects that
are in the ON clause of an outer join, and it has the same disadvantage:
we'd force the two joins to be evaluated according to the syntactic nesting
order, because the lower join will most likely not be able to commute with
the ANTI join.  That could result in having to form a rather large join
product, whereas the handling of a correlated subselect is not quite that
dumb.  So until we can handle those cases better, #ifdef NOT_USED that
case.  (I think it's okay to pull up in the EXISTS/ANY cases, because SEMI
joins aren't so inflexible about ordering.)
Back-patch to 8.4, same as for previous patch in this area.  Fortunately
that patch hadn't made it into any shipped releases yet. | 
|  | Applied patch by Christoph Berg <cb@df7cb.de> to replace a placeholder
"%s" with the correct string. | 
|  | I mis-simplified the test where ANALYZE decided if it could get away
without doing anything: under the new regime, that's never allowed.  Per
bug #6068 from Jeff Janes.  Back-patch to 8.4, just like previous patch. | 
|  | For some reason, when we (I) added table lock acquisition to pg_dump,
we didn't think about making it happen as soon as possible after the
start of the transaction.  What with subsequent additions, there was
actually quite a lot going on before we got around to that; which sort
of defeats the purpose.  Rearrange the order of calls in dumpSchema()
to close the risk window as much as we easily can.  Back-patch to all
supported branches. | 
|  | The previous code went into an infinite loop after overflow.  In fact,
an overflow is not really an error; it just means that the current
value is the last one we need to return.  So, just arrange to stop
immediately when overflow is detected.
Back-patch all the way. | 
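The pattern of the fix, as a hypothetical series generator (assuming a positive step): check whether the next increment would overflow, and if so treat the current value as the last one rather than erroring or wrapping around.

```c
#include <stdint.h>
#include <stdio.h>

static void
emit_series(int64_t start, int64_t stop, int64_t step)
{
    int64_t cur = start;

    while (cur <= stop)
    {
        printf("%lld\n", (long long) cur);

        /* would cur + step overflow?  then cur was the last value */
        if (cur > INT64_MAX - step)
            break;
        cur += step;
    }
}
```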
|  | Btree pages were recycled after VACUUM deletes all records on a
page and then a subsequent VACUUM occurs after the RecentXmin
horizon is reached. Using RecentXmin meant that we did not respond
correctly to the user controls provided to avoid Hot Standby
conflicts and so spurious conflicts could be generated in some
workload combinations. We now reuse pages only when we reach
RecentGlobalXmin, which can be much later in the presence of long
running queries and is also controlled by vacuum_defer_cleanup_age.
Noah Misch and Simon Riggs | 
|  | This oversight could result in a tuplestore using much more than the
intended amount of memory.  It would only happen in a code path that loaded
a tuplestore via tuplestore_putvalues(), and many of those won't emit huge
amounts of data; but cases such as holdable cursors and plpgsql's RETURN
NEXT command could have the problem.  The fix ensures that the tuplestore
will switch to write-to-disk mode when it overruns work_mem.
The potential overrun was finite, because we would still count the space
used by the tuple pointer array, so the tuplestore code would eventually
flip into write-to-disk mode anyway.  When storing wide tuples we would
go far past the expected work_mem usage before that happened; but this
may account for the lack of prior reports.
Back-patch to 8.4, where tuplestore_putvalues was introduced.
Per bug #6061 from Yann Delorme. | 
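A sketch of the accounting fix, with a hypothetical structure: charge each tuple's actual size against the budget, not just its pointer-array slot, so wide tuples trip the write-to-disk switch when they should.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct StoreSketch
{
    long    avail_mem;          /* bytes remaining in the work_mem budget */
    bool    on_disk;
} StoreSketch;

static void
charge_tuple(StoreSketch *ts, size_t tuple_bytes, size_t slot_bytes)
{
    ts->avail_mem -= (long) (tuple_bytes + slot_bytes);
    if (ts->avail_mem < 0 && !ts->on_disk)
        ts->on_disk = true;     /* switch to write-to-disk mode */
}
```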
|  | Apparently there is no buildfarm critter exercising this case after all,
because it fails in several places.  With this patch, build, install,
check-world, and installcheck-world pass for me on OS X. | 
|  | Spotted by Jaime Casanova |