|
None of the other bison parsers contains this directive, and it gives
rise to some unfortunate and impenetrable messages, so just remove it.
Backpatch to release 12, where it was introduced.
Per gripe from Erik Rijkers
Discussion: https://postgr.es/m/ba069ce2-a98f-dc70-dc17-2ccf2a9bf7c7@xs4all.nl
|
|
TransactionIdIsInProgress had a fast path to return 'false' if the
single-item CLOG cache said that the transaction was known to be
committed. However, that was wrong, because a transaction is first
marked as committed in the CLOG but doesn't become visible to others
until it has removed its XID from the proc array. That could lead to an
error:
ERROR: t_xmin is uncommitted in tuple to be updated
or for an UPDATE to go ahead without blocking, before the previous
UPDATE on the same row was made visible.
The window is usually very short, but synchronous replication makes it
much wider, because the wait for synchronous replica happens in that
window.
Another thing that makes it hard to hit is that it's hard to get such
a commit-in-progress transaction into the single item CLOG cache.
Normally, if you call TransactionIdIsInProgress on such a transaction,
it determines that the XID is in progress without checking the CLOG
and without populating the cache. One way to prime the cache is to
explicitly call pg_xact_status() on the XID. Another way is to use a
lot of subtransactions, so that the subxid cache in the proc array is
overflowed, making TransactionIdIsInProgress rely on pg_subtrans and
CLOG checks.
This has been broken ever since it was introduced in 2008, but the race
condition is very hard to hit, especially without synchronous
replication. There were a couple of reports of the error starting from
summer 2021, but no one was able to find the root cause then.
TransactionIdIsKnownCompleted() is now unused. Remove it in 'master', but
leave it in place in back-branches in case it's used by extensions.
Also change pg_xact_status() to check TransactionIdIsInProgress().
Previously, it only checked the CLOG, and returned "committed" before
the transaction was actually made visible to other queries. Note that
this also means that you cannot use pg_xact_status() to reproduce the
bug anymore, even if the code wasn't fixed.
Report and analysis by Konstantin Knizhnik. Patch by Simon Riggs, with
the pg_xact_status() change added by me.
Author: Simon Riggs
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/flat/4da7913d-398c-e2ad-d777-f752cf7f0bbb%40garret.ru
|
|
SPI_commit previously left it up to the caller to recover from any error
occurring during commit. Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right. Let's move the responsibility for cleanup into spi.c. Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics. We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op. Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.
Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error. Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.
While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place. Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether. Add the missing logic to catch such
errors and convert them into Python exceptions.
This is a back-patch of commit 2e517818f. That's now aged long enough
to reduce the concerns about whether it will break something, and we
do need to ensure that supported branches will work with Python 3.11.
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
|
|
We hadn't noticed this because (a) few people feed invalid
timezone abbreviation files to the server, and (b) in typical
scenarios guc.c would throw ereport(ERROR) and then transaction
abort handling would silently clean up the leaked file reference.
However, it was possible to observe file leakage warnings if one
breaks an already-active abbreviation file, because guc.c does
not throw ERROR when loading supposedly-validated settings during
session start or SIGHUP processing.
Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)
Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com
|
|
If a short description is specified as NULL in one of the various
DefineCustomXXXVariable() functions available to external modules to
define a custom parameter, SHOW ALL would crash. This change teaches
SHOW ALL, as well as any other code paths that manipulate short
descriptions, to handle NULL values properly. Note that
help_config.c was already able to do that, when describing a set of GUCs
for postgres --describe-config.
Author: Steve Chavez
Reviewed by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
Backpatch-through: 10
|
|
ruleutils.c was coded to suppress the AS label for a SELECT output
expression if the column name is "?column?", which is the parser's
fallback if it can't think of something better. This is fine, and
avoids ugly clutter, so long as (1) nothing further up in the parse
tree relies on that column name or (2) the same fallback would be
assigned when the rule or view definition is reloaded. Unfortunately
(2) is far from certain, both because ruleutils.c might print the
expression in a different form from how it was originally written
and because FigureColname's rules might change in future releases.
So we shouldn't rely on that.
Detecting exactly whether there is any outer-level use of a SELECT
column name would be rather expensive. This patch takes the simpler
approach of just passing down a flag indicating whether there *could*
be any outer use; for example, the output column names of a SubLink
are not referenceable, and we also do not care about the names exposed
by the right-hand side of a setop. This is sufficient to suppress
unwanted clutter in all but one case in the regression tests. That
seems like reasonable evidence that it won't be too much in users'
faces, while still fixing the cases we need to fix.
Per bug #17486 from Nicolas Lutic. This issue is ancient, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org
|
|
This reverts commit eafdf9de06e9b60168f5e47cedcfceecdc6d4b5f
and its back-branch counterparts. Corey Huinker pointed out that
we'd discussed this exact change back in 2016 and rejected it,
on the grounds that there's at least one usage pattern with LIMIT
where an infinite endpoint can usefully be used. Perhaps that
argument needs to be re-litigated, but there's no time left before
our back-branch releases. To keep our options open, restore the
status quo ante; if we do end up deciding to change things, waiting
one more quarter won't hurt anything.
Rather than just doing a straight revert, I added a new test case
demonstrating the usage with LIMIT. That'll at least remind us of
the issue if we forget again.
Discussion: https://postgr.es/m/3603504.1652068977@sss.pgh.pa.us
Discussion: https://postgr.es/m/CADkLM=dzw0Pvdqp5yWKxMd+VmNkAMhG=4ku7GnCZxebWnzmz3Q@mail.gmail.com
|
|
When a feature enumerates relations and runs functions associated with
all found relations, the feature's user shall not need to trust every
user having permission to create objects. BRIN-specific functionality
in autovacuum neglected to account for this, as did pg_amcheck and
CLUSTER. An attacker having permission to create non-temp objects in at
least one schema could execute arbitrary SQL functions under the
identity of the bootstrap superuser. CREATE INDEX (not a
relation-enumerating operation) and REINDEX protected themselves too
late. This change extends to the non-enumerating amcheck interface.
Back-patch to v10 (all supported versions).
Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.
Reported by Alexander Lakhin.
Security: CVE-2022-1552
|
|
CLUSTER sort won't use the datum1 SortTuple field when clustering
against an index whose leading key is an expression. This makes it
unsafe to use the abbreviated keys optimization, which was missed by the
logic that sets up SortSupport state. Affected tuplesorts output tuples
in a completely bogus order as a result (the wrong SortSupport based
comparator was used for the leading attribute).
This issue is similar to the bug fixed on the master branch by recent
commit cc58eecc5d. But it's a far older issue, that dates back to the
introduction of the abbreviated keys optimization by commit 4ea51cdfe8.
Backpatch to all supported versions.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
Backpatch: 10-
|
|
Such cases will lead to infinite loops, so they're of no practical
value. The numeric variant of generate_series() already threw error
for this, so borrow its message wording.
Per report from Richard Wesley. Back-patch to all supported branches.
Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
|
|
Noticed via -fsanitize=undefined. Introduced when a few columns in
GetConfigOptionByNum() / pg_settings started to be translated in 72be8c29a /
PG 12.
Backpatch to all affected branches, for the same reasons as 46ab07ffda9.
Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
Backpatch: 12-
|
|
Noticed via -fsanitize=undefined.
Backpatch to all branches, for the same reasons as 46ab07ffda9.
Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
Backpatch: 10-
|
|
The planner needs to treat GroupingFunc like Aggref for many purposes,
in particular with respect to processing of the argument expressions,
which are not to be evaluated at runtime. A few places hadn't gotten
that memo, notably including subselect.c's processing of outer-level
aggregates. This resulted in assertion failures or wrong plans for
cases in which a GROUPING() construct references an outer aggregation
level.
Also fix missing special cases for GroupingFunc in cost_qual_eval
(resulting in wrong cost estimates for GROUPING(), although it's
not clear that that would affect plan shapes in practice) and in
ruleutils.c (resulting in excess parentheses in pretty-print mode).
Per bug #17088 from Yaoguang Chen. Back-patch to all supported
branches.
Richard Guo, Tom Lane
Discussion: https://postgr.es/m/17088-e33882b387de7f5c@postgresql.org
|
|
The output of table_to_xmlschema() and allied functions includes
a regex describing valid values for these types ... but the regex
was itself invalid, as it failed to escape a literal "+" sign.
Report and fix by Renan Soares Lopes. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
|
|
Most of these are cases where we could call memcpy() or other libc
functions with a NULL pointer and a zero count, which is forbidden
by POSIX even though every production version of libc allows it.
We've fixed such things before in a piecemeal way, but apparently
never made an effort to try to get them all. I don't claim that
this patch does so either, but it gets every failure I observe in
check-world, using clang 12.0.1 on current RHEL8.
numeric.c has a different issue that the sanitizer doesn't like:
"ln(-1.0)" will compute log10(0) and then try to assign the
resulting -Inf to an integer variable. We don't actually use the
result in such a case, so there's no live bug.
Back-patch to all supported branches, with the idea that we might
start running a buildfarm member that tests this case. This includes
back-patching c1132aae3 (Check the size in COPY_POINTER_FIELD),
which previously silenced some of these issues in copyfuncs.c.
Discussion: https://postgr.es/m/CALNJ-vT9r0DSsAOw9OXVJFxLENoVS_68kJ5x0p44atoYH+H4dg@mail.gmail.com
|
|
GCC 12 complains that set_stack_base is storing the address of
a local variable in a long-lived pointer. This is an entirely
reasonable warning (indeed, it just helped us find a bug);
but that behavior is intentional here. We can work around it
by using __builtin_frame_address(0) instead of a specific local
variable; that produces an address a dozen or so bytes different,
in my testing, but we don't care about such a small difference.
Maybe someday a compiler lacking that function will start to issue
a similar warning, but we'll worry about that when it happens.
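Schematically, the workaround amounts to something like the sketch below
(function and variable names are invented for illustration and assume a
GCC-compatible compiler; this is not the actual set_stack_base code):

    static char *stack_base_ptr;

    void
    set_stack_base_sketch(void)
    {
    #if defined(__GNUC__)
        /* any address within the current frame is close enough */
        stack_base_ptr = (char *) __builtin_frame_address(0);
    #else
        /* the old approach: store the address of a local variable, which is
         * exactly what GCC 12 warns about when the pointer outlives the frame */
        char        stack_var;

        stack_base_ptr = &stack_var;
    #endif
    }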
Patch by me, per a suggestion from Andres Freund. Back-patch to
v12, which is as far back as the patch will go without some pain.
(Recently-established project policy would permit a back-patch as
far as 9.2, but I'm disinclined to expend the work until GCC 12
is much more widespread.)
Discussion: https://postgr.es/m/3773792.1645141467@sss.pgh.pa.us
|
|
The back-patch of commit fdd965d074d46765c295223b119ca437dbcac973 broke
CLOBBER_CACHE_ALWAYS for v9.6 through v13. It updated the
InvalidateSystemCaches() call for CLOBBER_CACHE_RECURSIVELY, neglecting
the one for CLOBBER_CACHE_ALWAYS. Back-patch to v13, v12, v11, and v10.
Reviewed by Tomas Vondra. Reported by Tomas Vondra.
Discussion: https://postgr.es/m/df7b4c0b-7d92-f03f-75c4-9e08b269a716@enterprisedb.com
|
|
Commit 8431e296ea reworked ProcArrayApplyRecoveryInfo to sort XIDs
before adding them to KnownAssignedXids. But the XIDs are sorted using
xidComparator, which compares the XIDs simply as uint32 values, not
logically. KnownAssignedXidsAdd() however expects XIDs in logical order,
and calls TransactionIdFollowsOrEquals() to enforce that. If there are
XIDs for which the two orderings disagree, an error is raised and the
recovery fails/restarts.
Hitting this issue is fairly easy - you just need two transactions, one
started before the 4B limit (e.g. XID 4294967290), the other sometime
after it (e.g. XID 1000). Logically (4294967290 <= 1000) but when
compared using xidComparator we try to add them in the opposite order.
Which makes KnownAssignedXidsAdd() fail with an error like this:
ERROR: out-of-order XID insertion in KnownAssignedXids
This only happens during replica startup, while processing RUNNING_XACTS
records to build the snapshot. Once we reach STANDBY_SNAPSHOT_READY, we
skip these records. So this does not affect already running replicas,
but if you restart (or create) a replica while there are transactions
with XIDs for which the two orderings disagree, you may hit this.
Long-running transactions and frequent replica restarts increase the
likelihood of hitting this issue. Once the replica gets into this state,
it can't be started (even if the old transactions are terminated).
Fixed by sorting the XIDs logically - this is fine because we're dealing
with normal XIDs (because it's XIDs assigned to backends) and from the
same wraparound epoch (otherwise the backends could not be running at
the same time on the primary node). So there are no problems with the
triangle inequality, which is why xidComparator compares raw values.
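The difference between the two orderings can be seen in a small
self-contained sketch (simplified; the real xidComparator and
TransactionIdPrecedes have extra details such as special-XID handling):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    /* raw uint32 comparison, in the style of xidComparator */
    static int
    xid_compare_raw(TransactionId a, TransactionId b)
    {
        return (a < b) ? -1 : (a > b) ? 1 : 0;
    }

    /* logical comparison: modulo-2^32 ordering, valid when both XIDs come
     * from the same wraparound epoch (within 2^31 of each other) */
    static int
    xid_precedes_logically(TransactionId a, TransactionId b)
    {
        return (int32_t) (a - b) < 0;
    }

    int
    main(void)
    {
        TransactionId old_xid = 4294967290U;    /* started before the 4B limit */
        TransactionId new_xid = 1000;           /* started after wraparound */

        printf("raw order says old < new: %d\n",
               xid_compare_raw(old_xid, new_xid) < 0);      /* prints 0 */
        printf("logical order says old precedes new: %d\n",
               xid_precedes_logically(old_xid, new_xid));   /* prints 1 */
        return 0;
    }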
Investigation and root cause analysis by Abhijit Menon-Sen. Patch by me.
This issue is present in all releases since 9.4, however releases up to
9.6 are EOL already so backpatch to 10 only.
Reviewed-by: Abhijit Menon-Sen
Reviewed-by: Alvaro Herrera
Backpatch-through: 10
Discussion: https://postgr.es/m/36b8a501-5d73-277c-4972-f58a4dce088a%40enterprisedb.com
|
|
Commit 859b3003de disabled building of extended stats for inheritance
trees, to prevent updating the same catalog row twice. While that
resolved the issue, it also means there are no extended stats for
declaratively partitioned tables, because there are no data in the
non-leaf relations.
That also means declaratively partitioned tables were not affected by
the issue 859b3003de addressed, which means this is a regression
affecting queries that calculate estimates for the whole inheritance
tree as a whole (which includes e.g. GROUP BY queries).
But because partitioned tables are empty, we can invert the condition
and build statistics only for the case with inheritance, without losing
anything. And we can consider them when calculating estimates.
It may be necessary to run ANALYZE on partitioned tables, to collect
proper statistics. For declarative partitioning there should be no prior
statistics, and it might take time before autoanalyze is triggered. For
tables partitioned by inheritance, the statistics may include data from
child relations (if built before 859b3003de), contradicting the current code.
Report and patch by Justin Pryzby, minor fixes and cleanup by me.
Backpatch all the way back to PostgreSQL 10, where extended statistics
were introduced (same as 859b3003de).
Author: Justin Pryzby
Reported-by: Justin Pryzby
Backpatch-through: 10
Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
|
|
Since commit 859b3003de we only build extended statistics for individual
relations, ignoring the child relations. This resolved the issue with
updating catalog tuple twice, but we still tried to use the statistics
when calculating estimates for the whole inheritance tree. When the
relations contain very distinct data, it may produce bogus estimates.
This is roughly the same issue 427c6b5b9 addressed ~15 years ago, and we
fix it the same way - by ignoring extended statistics when calculating
estimates for the inheritance tree as a whole. We still consider
extended statistics when calculating estimates for individual child
relations, of course.
This may result in plan changes due to different estimates, but if the
old statistics were not describing the inheritance tree particularly
well it's quite likely the new plans are actually better.
Report and patch by Justin Pryzby, minor fixes and cleanup by me.
Backpatch all the way back to PostgreSQL 10, where extended statistics
were introduced (same as 859b3003de).
Author: Justin Pryzby
Reported-by: Justin Pryzby
Backpatch-through: 10
Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
|
|
Commit 7745bc352 intended to ensure that whole-row Vars would be
printed with "::type" decoration in all contexts where plain
"var.*" notation would result in star-expansion, notably in
ROW() and VALUES() constructs. However, it missed the case of
INSERT with a single-row VALUES, as reported by Timur Khanjanov.
Nosing around ruleutils.c, I found a second oversight: the
code for RowCompareExpr generates ROW() notation without benefit
of an actual RowExpr, and naturally it wasn't in sync :-(.
(The code for FieldStore also does this, but we don't expect that
to generate strictly parsable SQL anyway, so I left it alone.)
Back-patch to all supported branches.
Discussion: https://postgr.es/m/efaba6f9-4190-56be-8ff2-7a1674f9194f@intrans.baku.az
|
|
The purpose of commit 8a54e12a38d1545d249f1402f66c8cde2837d97c was to
fix this, and it sufficed when the PREPARE TRANSACTION completed before
the CIC looked for lock conflicts. Otherwise, things still broke. As
before, in a cluster having used CIC while having enabled prepared
transactions, queries that use the resulting index can silently fail to
find rows. It may be necessary to reindex to recover from past
occurrences; REINDEX CONCURRENTLY suffices. Fix this for future index
builds by making CIC wait for arbitrarily-recent prepared transactions
and for ordinary transactions that may yet PREPARE TRANSACTION. As part
of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC
before it calls ProcArrayClearTransaction(). Back-patch to 9.6 (all
supported versions).
Andrey Borodin, reviewed (in earlier versions) by Andres Freund.
Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
|
|
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start. That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index. Queries that use the
resulting index can silently fail to find rows. Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation. It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).
Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.
Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
|
|
This fixes a loss of precision that occurs when the first input is
very close to 1, so that its logarithm is very small.
Formerly, during the initial low-precision calculation to estimate the
result weight, the logarithm was computed to a local rscale that was
capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be
as close as 1e-16383 to 1, hence its logarithm may be as small as
1e-16383, and so the local rscale needs to be allowed to exceed 16383,
otherwise all precision is lost, leading to a poor choice of rscale
for the full-precision calculation.
Fix this by removing the cap on the local rscale during the initial
low-precision calculation, as we already do in the full-precision
calculation. This doesn't change the fact that the initial calculation
is a low-precision approximation, computing the logarithm to around 8
significant digits, which is very fast, especially when the base is
very close to 1.
Patch by me, reviewed by Alvaro Herrera.
Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com
|
|
get_variable_range() would incautiously believe that statistics
containing only an MCV list are sufficient to derive a range estimate.
That's okay for an enum-like column that contains only MCVs, but
otherwise the estimate could be pretty bad. Make it report that the
range is indeterminate unless the MCVs plus nullfrac account for
the whole table.
I don't think this needs a dedicated test case, since a quick code
coverage check verifies that the existing regression tests traverse
all the alternatives. There is room to doubt that a future-proof
test case could be built anyway, given that the submitted example
accidentally doesn't fail before v11.
Per bug #17207 from Simon Perepelitsa. Back-patch to v10.
In principle this has been broken all along, but I'm hesitant to
make such changes in 9.6, since if anyone is unhappy with 9.6.24's
behavior there will be no second chance to fix it.
Discussion: https://postgr.es/m/17207-5265aefa79e333b4@postgresql.org
|
|
Commit 84f5c2908 forgot to consider the possibility that
EnsurePortalSnapshotExists could run inside a subtransaction with
lifespan shorter than the Portal's. In that case, the new active
snapshot would be popped at the end of the subtransaction, leaving
a dangling pointer in the Portal, with mayhem ensuing.
To fix, make sure the ActiveSnapshot stack entry is marked with
the same subtransaction nesting level as the associated Portal.
It's certainly safe to do so since we won't be here at all unless
the stack is empty; hence we can't create an out-of-order stack.
Let's also apply this logic in the case where PortalRunUtility
sets portalSnapshot, just to be sure that path can't cause similar
problems. It's slightly less clear that that path can't create
an out-of-order stack, so add an assertion guarding it.
Report and patch by Bertrand Drouvot (with kibitzing by me).
Back-patch to v11, like the previous commit.
Discussion: https://postgr.es/m/ff82b8c5-77f4-3fe7-6028-fcf3303e82dd@amazon.com
|
|
timetz_zone() delivered completely wrong answers if the zone was
specified by a dynamic TZ abbreviation, because it failed to account
for the difference between the POSIX conventions for field values in
struct pg_tm and the conventions used in PG-specific datetime code.
As a stopgap fix, just adjust the tm_year and tm_mon fields to match
PG conventions. This is fixed in a different way in HEAD (388e71af8)
but I don't want to back-patch the change of reference point.
Discussion: https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
|
|
The distance in a phrase operator must be an integer value between zero
and MAXENTRYPOS inclusive. But previously the error message about its
valid range mentioned only the upper limit, not the lower limit (i.e.,
zero). This commit improves the error message so that it includes both
limits.
Back-patch to v9.6 where full-text phrase search was supported.
Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
|
|
Formerly, the numeric code tested whether an integer value of a larger
type would fit in a smaller type by casting it to the smaller type and
then testing if the reverse conversion produced the original value.
That's perfectly fine, except that it caused a test failure on
buildfarm animal castoroides, most likely due to a compiler bug.
Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That
matches existing code in other places, such as int84(), which is more
widely tested, and so is less likely to go wrong.
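For illustration, the two styles of check look roughly like this
(hypothetical helper names; the real code operates on numeric values
rather than plain integers):

    #include <stdbool.h>
    #include <stdint.h>

    /* old style: narrow and test the round trip; the narrowing conversion of
     * an out-of-range value is implementation-defined, which is where a
     * compiler bug can bite */
    static bool
    fits_in_int16_roundtrip(int64_t val)
    {
        int16_t     narrowed = (int16_t) val;

        return (int64_t) narrowed == val;
    }

    /* new style: compare directly against the type's limits */
    static bool
    fits_in_int16_limits(int64_t val)
    {
        return val >= INT16_MIN && val <= INT16_MAX;
    }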
While at it, add regression tests covering the numeric-to-int8/4/2
conversions, and adjust the recently added tests to the style of
434ddfb79a (on the v11 branch) to make failures easier to diagnose.
Per buildfarm via Tom Lane, reviewed by Tom Lane.
Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
|
|
This fixes a long-standing bug when using to_char() to format a
numeric value in scientific notation -- if the value's exponent is
less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
division-by-zero error.
The reason for this error was that get_str_from_var_sci() divides its
input by 10^exp, which it produced using power_var_int(). However, the
underflow test in power_var_int() causes it to return zero if the
result scale is too small. That's not a problem for power_var_int()'s
only other caller, power_var(), since that limits the rscale to 1000,
but in get_str_from_var_sci() the exponent can be much smaller,
requiring a much larger rscale. Fix by introducing a new function to
compute 10^exp directly, with no rscale limit. This also allows 10^exp
to be computed more efficiently, without any numeric multiplication,
division or rounding.
Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
|
|
This fixes a couple of related problems that arise when raising
numbers to very large powers.
Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.
Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:
local_rscale = rscale + (int) val - ln_dweight + 8
The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.
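Schematically, the adjustment looks like this (an illustrative sketch
using the commit's own variable names, not the exact numeric.c code):

    static int
    choose_local_rscale(int rscale, int val, int ln_dweight)
    {
        int     sig_digits = rscale + val;  /* digits required in the result */

        if (sig_digits < 0)
            sig_digits = 0;     /* result weight far below the clipped rscale */

        return sig_digits - ln_dweight + 8; /* plus a few guard digits */
    }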
Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.
Dean Rasheed, reviewed by Yugo Nagata.
Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
|
|
This changes the behavior of examining the pg_file_settings view after
changing a config option that requires restart. The user needs to know
that any change of such options does not take effect until a restart,
and this worked correctly if the line is edited without removing it.
However, for the case where the line is removed altogether, the flag
doesn't get set, because it was only set in set_config_option, which is
not called for removed lines. Repair.
(Ref.: commits 62d16c7fc561 and a486e35706ea)
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202107262302.xsfdfc5sb7sh@alvherre.pgsql
|
|
Error messages using the word "non-negative" are confusing because it's
ambiguous whether they accept zero or not. This commit improves those
error messages by replacing that word with less ambiguous wording such as
"greater than zero" or "greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the error
message style guide, to help with writing new error messages.
When postgres_fdw option fetch_size was set to zero, previously
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy. Therefore
back-patch to all supported versions where such buggy error message
could be thrown.
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
|
|
In postgresql.conf, memory and file size GUCs can be specified with "B"
(bytes) as of b06d8e58b. Likewise, time GUCs can be specified with "us"
(microseconds) as of caf626b2c. Update postgresql.conf.sample to reflect
that fact.
Pavel Luzanov
Backpatch to v12, which is the earliest version that allows both of
these units. A separate commit will document the "B" case for v11.
Discussion: https://www.postgresql.org/message-id/flat/f10d16fc-8fa0-1b3c-7371-cb3a35a13b7a%40postgrespro.ru
|
|
41469253e went to the trouble of removing a theoretical bug from
free_sort_tuple by checking if the tuple was NULL before freeing it. Let's
make this a little more robust by also setting the tuple to NULL so that
should we be called again we won't end up doing a pfree on the already
pfree'd tuple. Per advice from Tom Lane.
Discussion: https://postgr.es/m/3188192.1626136953@sss.pgh.pa.us
Backpatch-through: 9.6, same as 41469253e
|
|
This fixes a theoretical bug in tuplesort.c: if a bounded sort was used
in combination with a byval Datum sort (tuplesort_begin_datum), then when
switching the sort to a bounded heap in make_bounded_heap(), we'd call
free_sort_tuple(). The problem was that when sorting Datums of a byval
type, the tuple is NULL, yet free_sort_tuple() would try to free the
memory for it regardless. This would result in a crash.
Here we fix that simply by adding a check to see if the tuple is NULL
before trying to disassociate and free any memory belonging to it.
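A minimal self-contained sketch of the guard follows (the real
free_sort_tuple() in tuplesort.c also releases the sort's memory
accounting, and the pointer reset comes from the follow-up commit above):

    #include <stdlib.h>

    typedef struct SortTupleSketch
    {
        void       *tuple;      /* NULL for pass-by-value Datum sorts */
    } SortTupleSketch;

    static void
    free_sort_tuple_sketch(SortTupleSketch *stup)
    {
        if (stup->tuple == NULL)
            return;             /* byval Datum sort: nothing to free */
        free(stup->tuple);
        stup->tuple = NULL;     /* defend against a repeated call */
    }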
The reason this bug is only theoretical is that nowhere in the current
code base do we do tuplesort_set_bound() when performing a Datum sort.
However, let's backpatch a fix for this as if any extension uses the code
in this way then it's likely to cause problems.
Author: Ronan Dunklau
Discussion: https://postgr.es/m/CAApHDvpdoqNC5FjDb3KUTSMs5dg6f+XxH4Bg_dVcLi8UYAG3EQ@mail.gmail.com
Backpatch-through: 9.6, oldest supported version
|
|
If an error occurred in the wrong place, it was possible to leave an
uninitialized entry in the hash table, leading to a crash. Fixed.
Also, be more careful about the order of operations so that an
allocation error doesn't leak memory in CacheMemoryContext or
unnecessarily advance NextRecordTypmod.
Backpatch through version 11. Earlier versions (prior to 35ea75632a5)
do not exhibit the problem, because an uninitialized hash entry
contains a valid empty list.
Author: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/HE1PR8303MB009069D476225B9A9E194B8891779@HE1PR8303MB0090.EURPRD83.prod.outlook.com
Backpatch-through: 11
|
|
This fixes an overflow error when using the numeric * operator: if the
result has more than 16383 digits after the decimal point, the result is
now rounded instead of raising an error. Overflow errors should only
occur if the result has too many digits *before* the decimal point.
Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
|
|
I accidentally missed adding this when adjusting 55fe60938 for
back-patching. That adjustment was made for 9.6 through 13; 14 and master
are not affected.
Discussion: https://postgr.es/m/CAApHDvp=twCsGAGQG=A=cqOaj4mpknPBW-EZB-sd+5ZS5gCTtA@mail.gmail.com
|
|
Due to how pg_size_pretty(bigint) was implemented, when given a negative
number of bytes the returned value could fail to match the value returned
for the equivalent positive number of bytes. This was due to two separate
issues.
1. The function used bit shifting to convert the number of bytes into
larger units. The rounding performed by bit shifting is not the same as
dividing. For example -3 >> 1 = -2, but -3 / 2 = -1. These two
operations are only equivalent with positive numbers.
2. The half_rounded() macro rounded towards positive infinity. This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.
Here we fix #1 by dividing the values instead of bit shifting. We fix #2
by adjusting the half_rounded macro always to round away from zero.
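Both differences can be seen in a few lines of standalone C (the macro
names below are made up for illustration; they are not the dbsize.c code):

    #include <stdio.h>

    /* halving that rounds toward +infinity, as the old macro effectively did */
    #define half_rounded_up(x)      (((x) + 1) / 2)
    /* halving that rounds half away from zero, symmetric for negative inputs */
    #define half_rounded_away(x)    (((x) < 0 ? (x) - 1 : (x) + 1) / 2)

    int
    main(void)
    {
        /* shifting vs. dividing disagree for negative values (the shift result
         * is implementation-defined, but -2 on typical platforms) */
        printf("-3 >> 1 = %d, -3 / 2 = %d\n", -3 >> 1, -3 / 2);

        /* the two halving macros disagree only for negative inputs */
        printf("half_rounded_up(-3) = %d, half_rounded_away(-3) = %d\n",
               half_rounded_up(-3), half_rounded_away(-3));
        return 0;
    }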
Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting. A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right. However, that function was
calculating the divisor from the number of bits and performed division.
Here we make that more clear. This change is just cosmetic and does not
affect the return value of the numeric version of the function.
Here we also add a set of regression tests for both versions of
pg_size_pretty(), testing the values directly before and after the
function switches to the next unit.
This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.
Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
|
|
Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled. It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.
There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive. Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.
While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry. (That seems to be my
oversight in 03ffc4d6d.) Also, in >= v13, stop allocating one array
entry too many. That's evidently left over from sloppy reversion in
851b14b0c.
Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.
Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
|
|
In the previous commit, I missed that relmap_redo() was also not acquiring
the
RelationMappingLock. Thanks to Thomas Munro for pointing that out.
Backpatch-through: 9.6, like previous commit.
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGLev%3DPpOSaL3WRZgOvgk217et%2BbxeJcRr4eR-NttP1F6Q%40mail.gmail.com
|
|
Contrary to the comment here, POSIX does not guarantee atomicity of a
read(), if another process calls write() concurrently. Or at least Linux
does not. Add locking to load_relmap_file() to avoid the race condition.
Fixes bug #17064. Thanks to Alexander Lakhin for the report and test case.
Backpatch-through: 9.6, all supported versions.
Discussion: https://www.postgresql.org/message-id/17064-bb0d7904ef72add3@postgresql.org
|
|
The fast default code added in Release 11 omitted to check that the
table a fast default was being added to was a plain table. Thus one
could be added to a foreign table, which predictably blows up. Here we
perform that check.
In addition, on the back branches, since some of these might have
escaped into the wild, if we encounter a missing value for
an attribute of something other than a plain table we ignore it.
Fixes bug #17056
Backpatch to release 11.
Reviewed by: Andres Freund, Álvaro Herrera and Tom Lane
|
|
Previously, a zero value for the relfilenode resulted in
a confusing error message about "unexpected duplicate".
This function returns NULL for other invalid relfilenode
values, so zero should be treated likewise.
It's been like this all along, so back-patch to all supported
branches.
Justin Pryzby
Discussion: https://postgr.es/m/20210612023324.GT16435@telsasoft.com
|
|
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets. That resulted in a bad plan and
a null-pointer dereference in the executor.
Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable(). That'd just lead to
silly estimates, but it should be fixed too.
Per private report from Yaoguang Chen.
Back-patch to all supported branches.
|
|
Parallel query processes that called BlessTupleDesc() for identical
tuple descriptors at the same moment could crash. There was code to
handle that rare case, but it dereferenced a bogus DSA pointer. Repair.
Back-patch to 11, where commit cc5f8136 added support for sharing tuple
descriptors in parallel queries.
Reported-by: Eric Thinnes <e.thinnes@gmx.de>
Discussion: https://postgr.es/m/99aaa2eb-e194-bf07-c29a-1a76b4f2bcf9%40gmx.de
|
|
COMMIT/ROLLBACK necessarily destroys all snapshots within the session.
The original implementation of intra-procedure transactions just
cavalierly did that, ignoring the fact that this left us executing in
a rather different environment than normal. In particular, it turns
out that handling of toasted datums depends rather critically on there
being an outer ActiveSnapshot: otherwise, when SPI or the core
executor pop whatever snapshot they used and return, it's unsafe to
dereference any toasted datums that may appear in the query result.
It's possible to demonstrate "no known snapshots" and "missing chunk
number N for toast value" errors as a result of this oversight.
Historically this outer snapshot has been held by the Portal code,
and that seems like a good plan to preserve. So add infrastructure
to pquery.c to allow re-establishing the Portal-owned snapshot if it's
not there anymore, and add enough bookkeeping support that we can tell
whether it is or not.
We can't, however, just re-establish the Portal snapshot as part of
COMMIT/ROLLBACK. As in normal transaction start, acquiring the first
snapshot should wait until after SET and LOCK commands. Hence, teach
spi.c about doing this at the right time. (Note that this patch
doesn't fix the problem for any PLs that try to run intra-procedure
transactions without using SPI to execute SQL commands.)
This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD,
rename that to allow_nonatomic.
replication/logical/worker.c also needs some fixes, because it wasn't
careful to hold a snapshot open around AFTER trigger execution.
That code doesn't use a Portal, which I suspect someday we're gonna
have to fix. But for now, just rearrange the order of operations.
This includes back-patching the recent addition of finish_estate()
to centralize the cleanup logic there.
This also back-patches commit 2ecfeda3e into v13, to improve the
test coverage for worker.c (it was that test that exposed that
worker.c's snapshot management is wrong).
Per bug #15990 from Andreas Wicht. Back-patch to v11 where
intra-procedure COMMIT was added.
Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
|
|
While we were (mostly) careful about ensuring that the dimensions of
arrays aren't large enough to cause integer overflow, the lower bound
values were generally not checked. This allows situations where
lower_bound + dimension overflows an integer. It seems that that's
harmless so far as array reading is concerned, except that array
elements with subscripts notionally exceeding INT_MAX are inaccessible.
However, it confuses various array-assignment logic, resulting in a
potential for memory stomps.
Fix by adding checks that array lower bounds aren't large enough to
cause lower_bound + dimension to overflow. (Note: this results in
disallowing cases where the last subscript position would be exactly
INT_MAX. In principle we could probably allow that, but there's a lot
of code that computes lower_bound + dimension and would need adjustment.
It seems doubtful that it's worth the trouble/risk to allow it.)
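Conceptually, the added check amounts to something like this (a
hypothetical helper for illustration, not the exact code in this commit):

    #include <limits.h>
    #include <stdbool.h>

    static bool
    array_lower_bound_is_safe(int lb, int dim)
    {
        /* reject lower bounds for which lb + dim would exceed INT_MAX,
         * i.e. the last subscript (lb + dim - 1) would be INT_MAX or more */
        if (dim > 0 && lb > INT_MAX - dim)
            return false;
        return true;
    }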
Somewhat independently of that, array_set_element() was careless
about possible overflow when checking the subscript of a fixed-length
array, creating a different route to memory stomps. Fix that too.
Security: CVE-2021-32027
|