|
For pass-by-reference types, the code added in 0b053e78b, which aimed to
resolve a memory leak, was overly aggressive in resetting the per-tuple
memory context. This could result in pfree'd memory being accessed,
causing lookups to fail to find previously cached results in the hash
table.
What was happening was that prepare_probe_slot() switched to the
per-tuple memory context and called ExecEvalExpr(). ExecEvalExpr() may
have required a memory allocation. Both MemoizeHash_hash() and
MemoizeHash_equal() were aggressively resetting the per-tuple context
and after determining the hash value, the context would have gotten reset
before MemoizeHash_equal() was called. This could have resulted in
MemoizeHash_equal() looking at pfree'd memory.
This is less likely to have caused issues on a production build as some
other allocation would have had to have reused the pfree'd memory to
overwrite it. Otherwise, the original contents would have been intact.
However, this clearly caused issues on MEMORY_CONTEXT_CHECKING builds.
Author: Tender Wang, Andrei Lepikhov
Reported-by: Tender Wang (using SQLancer)
Reviewed-by: Andrei Lepikhov, Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAHewXNnT6N6UJkya0z-jLFzVxcwGfeRQSfhiwA+NyLg-x8iGew@mail.gmail.com
Backpatch-through: 14, where Memoize was added
|
|
Backpatch changes from d57b7cc333, 75bcba6cbd to all supported branches per
proposal of Egor Chindyaskin.
Discussion: https://postgr.es/m/DE5FD776-A8CD-4378-BCFA-3BF30F1F6D60%40mail.ru
|
|
In OpenSSL 3.0.0 and later, ERR_reason_error_string randomly refuses
to provide a string for error codes representing system errno values
(e.g., "No such file or directory"). There is a poorly-documented way
to extract the errno from the SSL error code in this case, so do that
and apply strerror, rather than falling back to reporting the error
code's numeric value as we were previously doing.
Problem reported by David Zhang, although this is not his proposed
patch; it's instead based on a suggestion from Heikki Linnakangas.
Back-patch to all supported branches, since any of them are likely
to be used with recent OpenSSL.
Discussion: https://postgr.es/m/b6fb018b-f05c-4afd-abd3-318c649faf18@highgo.ca
|
|
This reverts commit eae7be600be7, following a discussion with Tom Lane,
due to concerns that it changes the decisions made by the planner about
the number of workers to spawn, which are based on the inlining and
const-folding of index expressions and predicates, for cases that worked
before that commit.
Discussion: https://postgr.es/m/162802.1709746091@sss.pgh.pa.us
Backpatch-through: 12
|
|
In the corner case where a function returning RECORD has been
simplified to a RECORD constant or an inlined ROW() expression,
ExecInitFunctionScan failed to cross-check the function's result
rowtype against the coldeflist provided by the calling query.
That happened because get_expr_result_type is able to extract a
tupdesc from such expressions, which led ExecInitFunctionScan to
ignore the coldeflist. (Instead, it used the extracted tupdesc
to check the function's output, which of course always succeeds.)
I have not been able to demonstrate any really serious consequences
from this, because if some column of the result is of the wrong
type and is directly referenced by a Var of the calling query,
CheckVarSlotCompatibility will catch it. However, we definitely do
fail to report the case where the function returns more columns than
the coldeflist expects, and in the converse case where it returns
fewer columns, we get an assert failure (but, seemingly, no worse
results in non-assert builds).
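A hypothetical sketch of the problematic query shape, assuming the
planner inlines the function body to a ROW() expression (names and the
exact column mismatch are made up for illustration):

    -- record-returning SQL function simple enough to be inlined
    CREATE FUNCTION f() RETURNS record IMMUTABLE LANGUAGE SQL
      AS 'SELECT ROW(1, 2)';
    -- the coldeflist expects more columns than the function returns
    SELECT * FROM f() AS t(a int, b int, c int);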
To fix, always build the expected tupdesc from the coldeflist if there
is one, and consult get_expr_result_type only when there isn't one.
Also remove the failing Assert, even though it is no longer reached
after this fix. It doesn't seem to be adding anything useful, since
later checking will deal with cases with the wrong number of columns.
The only other place I could find that is doing something similar
is inline_set_returning_function. There's no live bug there because
we cannot be looking at a Const or RowExpr, but for consistency
change that code to agree with ExecInitFunctionScan.
Per report from PetSerAl. After some debate I've concluded that
this should be back-patched. There is a small risk that somebody
has been relying on such a case not throwing an error, but I judge
this outweighed by the risk that I've missed some way in which the
failure to cross-check has worse consequences than sketched above.
Discussion: https://postgr.es/m/CAKygsHSerA1eXsJHR9wft3Gn3wfHQ5RfP8XHBzF70_qcrrRvEg@mail.gmail.com
|
|
As coded, the planner logic that calculates the number of parallel
workers to use for a parallel index build uses expressions and
predicates from the relcache, which are flattened for the planner by
eval_const_expressions().
As reported in the bug, an immutable parallel-unsafe function flattened
in the relcache would become a Const, which would be considered as
parallel-safe, even if the predicate or the expressions including the
function are not safe in parallel workers. Depending on the expressions
or predicate used, this could cause the parallel build to fail.
Tests are included that check parallel index builds with parallel-unsafe
predicate and expressions. Two routines are added to lsyscache.h to be
able to retrieve expressions and predicate of an index from its pg_index
data.
Reported-by: Alexander Lakhin
Author: Tender Wang
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAHewXN=UaAaNn9ruHDH3Os8kxLVmtWqbssnf=dZN_s9=evHUFA@mail.gmail.com
Backpatch-through: 12
|
|
The error messages were reporting the stats kind 'f', which is not
correct as that is the kind used for "dependencies" statistics.
Reported-by: Horst Reiterer
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/18375-ba99383eb9062d6a@postgresql.org
Backpatch-through: 12, where MCV extended stats were added.
|
|
dsa_dump would print a large negative number instead of zero for
segment bin 0. Fix by explicitly checking for underflow and adding a
special case for bin 0. Backpatch to all supported versions.
Author: Ian Ilyasov <ianilyasov@outlook.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/GV1P251MB1004E0D09D117D3CECF9256ECD502@GV1P251MB1004.EURP251.PROD.OUTLOOK.COM
Backpatch-through: v12
|
|
In the case where the target timestamp is before the origin timestamp
and their difference is already an exact multiple of the stride, the
code incorrectly subtracted the stride anyway.
Also detect several integer-overflow cases that previously produced
bogus results. (The submitted patch tried to avoid overflow, but
I'm not convinced it's right, and problematic cases are so far out of
the plausibly-useful range that they don't seem worth sweating over.
Let's just use overflow-detecting arithmetic and throw errors.)
timestamp_bin() and timestamptz_bin() are basically identical and
so had identical bugs. Fix both.
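To illustrate the first issue (values here are hypothetical, not taken
from the report):

    -- the target is before the origin and exactly two strides away from it;
    -- the correct bin is the target itself (15:00), but the buggy code
    -- subtracted one extra stride and returned 14:00
    SELECT date_bin('1 hour',
                    timestamp '2024-02-01 15:00:00',
                    timestamp '2024-02-01 17:00:00');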
Report and patch by Moaaz Assali, adjusted some by me. Back-patch
to v14 where date_bin() was introduced.
Discussion: https://postgr.es/m/CALkF+nvtuas-2kydG-WfofbRSJpyODAJWun==W-yO5j2R4meqA@mail.gmail.com
|
|
When this assertion was installed (in commit d2f60a3ab), I thought
it was only for catching server logic errors that caused accesses to
catalogs that were undergoing index rebuilds. However, it will also
fire in case of a user-defined index expression that attempts to
access its own table. We occasionally see reports of people trying
to do that, and typically getting unintelligible low-level errors
as a result. We can provide a more on-point message by making this
a regular runtime check.
While at it, adjust the similar error check in
systable_beginscan_ordered to use the same message text. That one
is (probably) not reachable without a coding bug, but we might as
well use a translatable message if we have one.
Per bug #18363 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18363-e3598a5a572d0699@postgresql.org
|
|
build_child_join_sjinfo creates a derived SpecialJoinInfo in
the short-lived GEQO context, but afterwards the semi_rhs_exprs
from that may be used in a UniquePath for a child base relation.
This breaks the expectation that all base-relation-level structures
are in the planning-lifespan context, leading to use of a dangling
pointer with probable ensuing crash later on in create_unique_plan.
To fix, copy the expression trees when making a UniquePath.
Per bug #18360 from Alexander Lakhin. This has been broken since
partitionwise joins were added, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/18360-a23caf3157f34e62@postgresql.org
|
|
Partition pruning wrongly assumed that, for a table partitioned on a
boolean column, a clause in the form "boolcol IS NOT false" and "boolcol
IS NOT true" could be inverted to correspondingly become "boolcol IS true"
and "boolcol IS false". These are not equivalent as the NOT version
matches the opposite boolean value *and* NULLs. This incorrect assumption
meant that partition pruning pruned away partitions that could contain
NULL values.
Here we fix this by correctly not pruning partitions which could store
NULLs.
To be affected by this, the table must be partitioned by a NULLable boolean
column and queries would have to contain "boolcol IS NOT false" or "boolcol
IS NOT true". This could result in queries filtering out NULL values
with a LIST partitioned table and "ERROR: invalid strategy number 0"
for RANGE and HASH partitioned tables.
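A minimal sketch of the LIST-partitioned case (table names are
hypothetical):

    CREATE TABLE boolpart (b boolean) PARTITION BY LIST (b);
    CREATE TABLE boolpart_true PARTITION OF boolpart FOR VALUES IN (true);
    CREATE TABLE boolpart_null PARTITION OF boolpart FOR VALUES IN (NULL);
    INSERT INTO boolpart VALUES (true), (NULL);

    -- "b IS NOT false" matches both true and NULL, so boolpart_null must not
    -- be pruned; with the bug, the NULL row went missing from the result
    SELECT * FROM boolpart WHERE b IS NOT false;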
Reported-by: Alexander Lakhin
Bug: #18344
Discussion: https://postgr.es/m/18344-8d3f00bada6d09c6@postgresql.org
Backpatch-through: 12
|
|
intoasc(), a wrapper for PGTYPESinterval_to_asc that converts an
interval to its textual representation, used a plain memcpy() when
copying its result. This could miss a zero-termination in the result
string, leading to an incorrect result.
The routines in informix.c do not provide the length of their result
buffer, which would allow replacing strcpy() with safer strlcpy() calls,
but this requires an ABI breakage and that cannot happen in
back-branches.
Author: Oleg Tselebrovskiy
Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/bf47888585149f83b276861a1662f7e4@postgrespro.ru
Backpatch-through: 12
|
|
Clarify comments associated with max_parallel_workers and
related settings.
Per bug #18343 from Christopher Kline.
Discussion: https://postgr.es/m/18343-3a5e903d1d3692ab@postgresql.org
|
|
Fixes bug #18341. Backpatch to all supported versions.
Discussion: https://www.postgresql.org/message-id/18341-ce16599e7fd6228c@postgresql.org
|
|
This reverts commit aeee173d229232f94acc61e7bfe81d40f56e478e.
Per failure reports from the buildfarm.
|
|
The macOS Finder application creates .DS_Store files in directories
when opened, which creates problems for server-side utilities which
expect all files to be PostgreSQL-specific. Skip these files
when encountered in pg_checksums, pg_rewind and pg_basebackup.
This was extracted from a larger patchset for skipping hidden files
and system files, where the consensus was to just skip these. Since
this is equally likely to happen in every version, backpatch to all
supported versions.
Reported-by: Mark Guertin <markguertin@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Tobias Bussmann <t.bussmann@gmx.net>
Discussion: https://postgr.es/m/E258CE50-AB0E-455D-8AAD-BB4FE8F882FB@gmail.com
Backpatch-through: v12
|
|
Since its introduction, pg_get_expr() has intended to silently
return NULL if called with an invalid relation OID, as can happen
when scanning the catalogs concurrently with relation drops.
However, there is a race condition: we check validity of the OID
at the start, but it could get dropped just afterward, leading to
failures. This is the cause of some intermittent instability we're
seeing in a proposed new test case, and presumably it's a hazard in
the field as well.
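For instance, a catalog scan like this can race against a concurrent
DROP TABLE (sketch only; the table name is hypothetical):

    -- session 1: decompile every index predicate; pg_get_expr() is intended
    -- to return NULL rather than fail if a relation disappears mid-scan
    SELECT pg_get_expr(indpred, indrelid) FROM pg_index;

    -- session 2, concurrently
    DROP TABLE some_partially_indexed_table;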
We can fix this by AccessShareLock-ing the target relation for the
duration of pg_get_expr(). Since we don't require any permissions
on the target relation, this is semantically a bit undesirable. But
it turns out that the set_relation_column_names() subroutine already
takes a transient AccessShareLock on that relation, and has done since
commit 2ffa740be in 2012. Given the lack of complaints about that, it
seems like there should be no harm in holding the lock a bit longer.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/31ddcc01-a71b-4e8c-9948-01d1c47293ca@eisentraut.org
|
|
We previously supposed that it was okay for different threads to
call bindtextdomain() concurrently (cf. commit 1f655fdc3).
It now emerges that there's at least one gettext implementation
in which that triggers an abort() crash, so let's stop doing that.
Add mutexes guarding libpq's and ecpglib's calls, which are the
only ones that need worry about multithreaded callers.
Note: in libpq, we could perhaps have piggybacked on
default_threadlock() to avoid defining a new mutex variable.
I judge that not terribly safe though, since libpq_gettext could
be called from code that is holding the default mutex. If that
were the first such call in the process, it'd fail. An extra
mutex is cheap insurance against unforeseen interactions.
Per bug #18312 from Christian Maurer. Back-patch to all
supported versions.
Discussion: https://postgr.es/m/18312-bbbabc8113592b78@postgresql.org
Discussion: https://postgr.es/m/264860.1707163416@sss.pgh.pa.us
|
|
Fix pthread-win32.h and pthread-win32.c to provide a more complete
emulation of POSIX pthread mutexes: define PTHREAD_MUTEX_INITIALIZER
and make sure that pthread_mutex_lock() can operate on a mutex
object that's been initialized that way. Then we don't need the
duplicative platform-specific logic in default_threadlock() and
pgtls_init(), which we'd otherwise need yet a third copy of for
an upcoming bug fix.
Also, since default_threadlock() supposes that pthread_mutex_lock()
cannot fail, try to ensure that that's actually true, by getting
rid of the malloc call that was formerly involved in initializing
an emulated mutex. We can define an extra state for the spinlock
field instead.
Also, replace the similar code in ecpglib/misc.c with this version.
While ecpglib's version at least had a POSIX-compliant API, it
also had the potential of failing during mutex init (but here,
because of CreateMutex failure rather than malloc failure). Since
all of misc.c's callers ignore failures, it seems like a wise idea
to avoid failures here too.
A further improvement in this area could be to unify libpq's and
ecpglib's implementations into a src/port/pthread-win32.c file.
But that doesn't seem like a bug fix, so I'll desist for now.
In preparation for the aforementioned bug fix, back-patch to all
supported branches.
Discussion: https://postgr.es/m/264860.1707163416@sss.pgh.pa.us
|
|
TransactionIdInRecentPast() should return false for all transactions
older than TransamVariables->oldestClogXid. However, the function contained
a bug in the comparison of a FullTransactionId to a TransactionId, allowing
full transaction IDs between nextXid - 2^32 and oldestClogXid - 2^31.
This commit fixes TransactionIdInRecentPast() by turning oldestClogXid into
a FullTransactionId first, then performing the comparison.
Backpatch to all supported versions.
Reported-by: Egor Chindyaskin
Bug: 18212
Discussion: https://postgr.es/m/18212-547307f8adf57262%40postgresql.org
Author: Karina Litskevich
Reviewed-by: Kyotaro Horiguchi
Backpatch-through: 12
|
|
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3d5a5afaec3fca7c19b107ca07a73145373fcd4a
|
|
When assertions are disabled, the built SQL statement is invalid and
you get a "syntax error". So this isn't a serious problem, but let's
avoid the assertion failure.
Backpatch to all supported versions.
Reviewed-by: Noah Misch
|
|
The internal commands in REFRESH MATERIALIZED VIEW CONCURRENTLY are
correctly executed in SECURITY_RESTRICTED_OPERATION mode, except for
creating the temporary "diff" table, because you cannot create
temporary tables in SRO mode. But creating the temporary "diff" table
is a pretty complex CTAS command that selects from another temporary
table created earlier in the command. If you can cajole that CTAS
command to execute code defined by the table owner, the table owner
can run code with the privileges of the user running the REFRESH
command.
The proof-of-concept reported to the security team relied on CREATE
RULE to convert the internally-built temp table to a view. That's not
possible since commit b23cd185fd, and I was not able to find a
different way to turn the SELECT on the temp table into code
execution, so as far as I know this is only exploitable in v15 and
below. That's a fiddly assumption though, so apply this patch to
master and all stable versions.
Thanks to Pedro Gallegos for the report.
Security: CVE-2023-5869
Reviewed-by: Noah Misch
|
|
Previously you got ERRCODE_INTERNAL_ERROR, which seems inappropriate,
especially given that we're trying to avoid emitting that in reachable
cases.
Alexander Kuzmenkov
Discussion: https://postgr.es/m/CALzhyqzgQph0BY8-hFRRGdHhF8CoqmmDHW9S=hMZ-HMzLxRqDQ@mail.gmail.com
|
|
DST law changes in Ittoqqortoormiit, Greenland (America/Scoresbysund),
Kazakhstan (Asia/Almaty and Asia/Qostanay) and Palestine; as well as
updates for the Antarctic stations Casey and Vostok.
Historical corrections for Vietnam, Toronto, and Miquelon.
|
|
Further fallout from commit 6ee26c6a4b. To keep code in sync and avoid
issues on older releases with different package names, simply use the
unqualified name like many other places in our code.
|
|
The path we wish to reparameterize is not a standalone object:
in particular, it implicitly references baserestrictinfo clauses
in the associated RelOptInfo, and if it's a SampleScan path then
there is also the TableSampleClause in the RTE to worry about.
Both of those could contain lateral references to the join partner
relation, which would need to be modified to refer to its child.
Since we aren't doing that, affected queries can give wrong answers,
or odd failures such as "variable not found in subplan target list",
or executor crashes. But we can't just summarily modify those
expressions, because they are shared with other paths for the rel.
We'd break things if we modify them and then end up using some
non-partitioned-join path.
In HEAD, we plan to fix this by postponing reparameterization
until create_plan(), when we know that those other paths are
no longer of interest, and then adjusting those expressions along
with the ones in the path itself. That seems like too big a change
for stable branches however. In the back branches, let's just detect
whether any troublesome lateral references actually exist in those
expressions, and fail reparameterization if so. This will result in
not performing a partitioned join in such cases. Given the lack of
field complaints, nobody's likely to miss the optimization.
Report and patch by Richard Guo. Apply to 12-16 only, since
the intended fix for HEAD looks quite different. We're not quite
ready to push the HEAD fix, but with back-branch releases coming
up soon, it seems wise to get this stopgap fix in place there.
Discussion: https://postgr.es/m/CAMbWs496+N=UAjOc=rcD3P7B6oJe4rZw08e_TZRUsWbPxZW3Tw@mail.gmail.com
|
|
This commit addresses a set of issues when changing token type mappings
in a text search configuration when using duplicated token names:
- ADD MAPPING would fail on insertion because of a constraint failure
after inserting the same mapping.
- ALTER MAPPING with an "overridden" configuration failed with "tuple
already updated by self" when the token mappings are removed.
- DROP MAPPING failed with "tuple already updated by self", like
previously, but in a different code path.
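For instance, commands along these lines, which pass a duplicated token
name, hit the code paths above (the configuration name is hypothetical):

    CREATE TEXT SEARCH CONFIGURATION dup_cfg (PARSER = default);
    -- token name "asciiword" given twice in each command
    ALTER TEXT SEARCH CONFIGURATION dup_cfg
      ADD MAPPING FOR asciiword, asciiword WITH simple;
    ALTER TEXT SEARCH CONFIGURATION dup_cfg
      ALTER MAPPING FOR asciiword, asciiword WITH english_stem;
    ALTER TEXT SEARCH CONFIGURATION dup_cfg
      DROP MAPPING FOR asciiword, asciiword;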
The code is refactored so the token names (with their numbers) are
handled as a List with unique members rather than an array with numbers,
ensuring that no duplicates mess up with the catalog inserts, updates
and deletes. The list is generated by getTokenTypes(), with the same
error handling as previously while duplicated tokens are discarded from
the list used to work on the catalogs.
Regression tests are expanded to cover much more ground for the cases
fixed by this commit, as there was no coverage for the code touched in
this commit. A bit more is done regarding the fact that a token name
not supported by a configuration's parser should result in an error even
if IF EXISTS is used in a DROP MAPPING clause. This is implied in the
code but there was no coverage for that, and it was very easy to miss.
These issues have existed since at least their introduction in core with
140d4ebcb46e, so backpatch all the way down.
Reported-by: Alexander Lakhin
Author: Tender Wang, Michael Paquier
Discussion: https://postgr.es/m/18310-1eb233c5908189c8@postgresql.org
Backpatch-through: 12
|
|
File::Find converts backslashes to slashes in the newer Perl versions.
See: https://github.com/Perl/perl5/commit/414f14df98cb1c9a20f92c5c54948b67c09f072d
So, do the same conversion on Windows before comparing paths. To
support all Perl versions, always convert them on Windows regardless of
the Perl version.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Backpatch to all live branches
|
|
We seem to have only documented that a foreign key can reference the columns
of a primary key or unique constraint. Here we adjust the documentation
to mention that columns in a non-partial unique index can be referenced too.
The header comment for transformFkeyCheckAttrs() also didn't mention
unique indexes, so fix that too. In passing make that header comment
reflect reality in the various other aspects where it deviated from it.
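A minimal sketch of the now-documented case (table names are
hypothetical):

    CREATE TABLE parent (id int NOT NULL);
    -- a plain non-partial unique index, not a unique constraint
    CREATE UNIQUE INDEX parent_id_idx ON parent (id);
    -- the foreign key may reference the indexed column
    CREATE TABLE child (parent_id int REFERENCES parent (id));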
Bug: 18295
Reported-by: Gilles PARC
Author: Laurenz Albe, David Rowley
Discussion: https://www.postgresql.org/message-id/18295-0ed0fac5c9f7b17b%40postgresql.org
Backpatch-through: 12
|
|
libxml2 changed the required signature of error handler callbacks
to make the passed xmlError struct "const". This is causing build
failures on buildfarm member caiman, and no doubt will start showing
up in the field quite soon. Add a version check to adjust the
declaration of xml_errorHandler() according to LIBXML_VERSION.
2.12.x also produces deprecation warnings for contrib/xml2/xpath.c's
assignment to xmlLoadExtDtdDefaultValue. I see no good reason for
that to still be there, seeing that we disabled external DTDs (at a
lower level) years ago for security reasons. Let's just remove it.
Back-patch to all supported branches, since they might all get built
with newer libxml2 once it gets a bit more popular. (The back
branches produce another deprecation warning about xpath.c's use of
xmlSubstituteEntitiesDefault(). We ought to consider whether to
back-patch all or part of commit 65c5864d7 to silence that. It's
less urgent though, since it won't break the buildfarm.)
Discussion: https://postgr.es/m/1389505.1706382262@sss.pgh.pa.us
|
|
ginFinishSplit() expects the caller to hold an exclusive lock on the
buffer, but when finishing an earlier "leftover" incomplete split of
an internal page, the caller held a shared lock. That caused an
assertion failure in MarkBufferDirty(). Without assertions, it could
lead to corruption if two backends tried to complete the split at the
same time.
On master, add a test case using the new injection point facility.
Report and analysis by Fei Changhong. Backpatch the fix to all
supported versions.
Reviewed-by: Fei Changhong, Michael Paquier
Discussion: https://www.postgresql.org/message-id/tencent_A3CE810F59132D8E230475A5F0F7A08C8307@qq.com
|
|
In commit 272248a0c, we fixed the catalog lookup due to the wrong snapshot
for transactions and subtransactions during decoding. We failed to
consider the case where the top-level xact is already marked as containing
a catalog change but its subtransaction is not yet marked as such, even
though it contained such a change.
This can happen when, during decoding, none of the WAL records from the
subtransaction were decoded and the top-level xact contains a DDL.
We fix it by marking the transaction and all its subtransactions as
containing catalog changes if the top-level xact contains any catalog
change and it is present in the initial running xacts array.
This fix is required only for 14 and 15 because in prior branches we
already always mark the transaction and all its subtransactions as
containing catalog changes in the same case. In 16 and above, we preserve
the list of transaction IDs and sub-transaction IDs, that have modified
catalogs and are running during snapshot serialization, to the serialized
snapshot (see commit 7f13ac8123).
Author: Fei Changhong
Reviewed-by: Amit Kapila, Hayato Kuroda, Andy Fan
Discussion: https://postgr.es/m/18280-4c8060178cb41750@postgresql.org
|
|
We perform addition of the days field of an interval via
arithmetic on the Julian-date representation of the timestamp's date.
This step is subject to int32 overflow, and we also should not let
the Julian date become very negative, for fear of weird results from
j2date. (In the timestamptz case, allow a Julian date of -1 to pass,
since it might convert back to zero after timezone rotation.)
The additions of the months and microseconds fields could also
overflow, of course. However, I believe we need no additional
checks there; the existing range checks should catch such cases.
The difficulty here is that j2date's magic modular arithmetic could
produce something that looks like it's in-range.
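For example (hypothetical values), something like this now raises an
overflow error instead of returning a bogus timestamp:

    -- the days field pushes the Julian-date arithmetic past int32 range
    SELECT timestamp '2000-01-01' + interval '2147483647 days';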
Per bug #18313 from Christian Maurer. This has been wrong for
a long time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/18313-64d2c8952d81e84b@postgresql.org
|
|
A function was given a newly standard name from C++20 in LLVM 16. Then
LLVM 18 added a deprecation warning for the old name, and it is about to
ship, so it's time to adjust that.
Back-patch to all supported releases.
Discussion: https://www.postgresql.org/message-id/CA+hUKGLbuVhH6mqS8z+FwAn4=5dHs0bAWmEMZ3B+iYHWKC4-ZA@mail.gmail.com
|
|
This command, when used to add a column to a parent table with a complex
inheritance tree, tried to update the same pg_attribute tuple of a child
table multiple times when incrementing attinhcount, causing failures
with "tuple already updated by self" because of a missing
CommandCounterIncrement() between the updates.
This has existed for a rather long time, so backpatch all the way down.
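A minimal sketch of the kind of inheritance tree involved (names are
hypothetical); the grandchild receives the new column twice, once via
each parent:

    CREATE TABLE gp (a int);
    CREATE TABLE p1 () INHERITS (gp);
    CREATE TABLE p2 () INHERITS (gp);
    CREATE TABLE c () INHERITS (p1, p2);

    -- c's attinhcount for the new column must be bumped once per parent;
    -- without a CommandCounterIncrement() in between, the second update of
    -- the same pg_attribute tuple failed with "tuple already updated by self"
    ALTER TABLE gp ADD COLUMN b int;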
Reported-by: Alexander Lakhin
Author: Tender Wang
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/18297-b04cd83a55b51e35@postgresql.org
Backpatch-through: 12
|
|
When a pipeline is opened with \startpipeline and not closed, pgbench
will either error on the next transaction with an "already in pipeline
mode" error, or end successfully if this was the last transaction,
despite not sending anything that was queued in the pipeline.
Make it an error if the end of the script is reached while there's an
open pipeline.
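A pgbench custom-script sketch of the now-rejected case (file contents
only; run with a query mode that supports pipelining, e.g. -M extended):

    \startpipeline
    SELECT 1;
    SELECT 2;
    -- end of script reached with the pipeline still open: now an error,
    -- previously the queued statements could silently never be sent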
Backpatch to 14, where pgbench got support for pipelines.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Za4IObZkDjrO4TcS@paquier.xyz
|
|
plpgsql fails on new-style CREATE FUNCTION/PROCEDURE commands within
a routine or DO block, because make_execsql_stmt believes that a
semicolon token always terminates a SQL command. Now, that's actually
been wrong since the day it was written, because CREATE RULE has long
allowed multiple rule actions separated by semicolons. But there are
few enough people using multi-action rules that there was never an
attempt to fix it. New-style SQL functions, though, are popular.
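For example, a DO block like this one failed because the semicolons
inside BEGIN ATOMIC were taken as ending the SQL statement (the function
name is made up):

    DO $$
    BEGIN
      CREATE FUNCTION one() RETURNS int
        LANGUAGE SQL
        BEGIN ATOMIC
          SELECT 1;
        END;
    END $$;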
psql has this same problem of "does this semicolon really terminate
the command?". It deals with CREATE RULE by counting parenthesis
nesting depth: a semicolon within parens doesn't end a command.
Commits e717a9a18 and 029c5ac03 created a similar heuristic to count
matching BEGIN/END pairs (but only within CREATEs, so as not to be
fooled by plain BEGIN). That's survived several releases now without
trouble reports, so let's just absorb those heuristics into plpgsql.
Per report from Samuel Dussault. Back-patch to v14 where new-style
SQL function syntax came in.
Discussion: https://postgr.es/m/YT2PR01MB88552C3E9AD40A6C038774A781722@YT2PR01MB8855.CANPRD01.PROD.OUTLOOK.COM
|
|
A REINDEX INDEX done on a partitioned index builds a list of the indexes
to work on before processing its partitions in individual transactions.
When combined with a DROP of the partitioned index, there was a window
where it was possible to see an unexpected "could not open relation
with OID" relation lookup error. The code was robust enough
to handle the case where the parent relation is missing, but not the
case where an index has gone missing.
This is similar to 1d65416661bb.
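The window is roughly of this shape (the index name is hypothetical):

    -- session 1: collects the list of leaf indexes up front, then reindexes
    -- them in individual transactions
    REINDEX INDEX parted_tbl_idx;

    -- session 2, concurrently: drops the partitioned index, taking the leaf
    -- indexes still on session 1's to-do list with it
    DROP INDEX parted_tbl_idx;

    -- session 1 could then fail with "could not open relation with OID ..."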
Support for REINDEX on partitioned relations has been introduced in
a6642b3ae060, so backpatch down to 14.
Author: Fei Changhong
Discussion: https://postgr.es/m/tencent_6A52106095ACDE55333E3AD33F304C0C3909@qq.com
Backpatch-through: 14
|
|
try_index_open() is able to open an index if its relkind fits, except
that it returns NULL instead of generating an error if the relation
does not exist. This new routine will be used by an upcoming patch to
make REINDEX on partitioned relations more robust when an index in a
partition tree is dropped.
Extracted from a larger patch by the same author.
Author: Fei Changhong
Discussion: https://postgr.es/m/tencent_6A52106095ACDE55333E3AD33F304C0C3909@qq.com
Backpatch-through: 14
|
|
Until now LWLockDequeueSelf() sequentially searched the list of waiters to see
if the current proc is still on the list of waiters, or has already been
removed. In extreme workloads, where the wait lists are very long, this leads
to a quadratic behavior. #backends iterating over a list #backends
long. Additionally, the likelihood of needing to call LWLockDequeueSelf() in
the first place also increases with the increased length of the wait queue, as
it becomes more likely that a lock is released while waiting for the wait list
lock, which is held for longer during lock release.
Due to the exponential back-off in perform_spin_delay() this is surprisingly
hard to detect. We should make that easier, e.g. by adding a wait event around
the pg_usleep() - but that's a separate patch.
The fix is simple - track whether a proc is currently waiting in the wait list
or already removed but waiting to be woken up in PGPROC->lwWaiting.
In some workloads with a lot of clients contending for a small number of
lwlocks (e.g. WALWriteLock), the fix can substantially increase throughput.
This was originally fixed for 16~ with a4adc31f6902 without a
backpatch, and we have heard complaints from users impacted by this
quadratic behavior in older versions as well.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de
Discussion: https://postgr.es/m/CALj2ACXktNbG=K8Xi7PSqbofTZozavhaxjatVc14iYaLu4Maag@mail.gmail.com
Backpatch-through: 12
|
|
The comment was copy-pasted from the call to ProcSignalInit() in
AuxiliaryProcessMain(), which uses a similar scheme of having reserved
slots for aux processes after MaxBackends slots for backends. However,
ProcSignalInit() indexing starts from 1, whereas BackendStatusArray
starts from 0. The code is correct, but the comment was wrong.
Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
Backpatch-through: v14
|
|
If configuring the newly created socket as non-blocking fails, we
error out and return INVALID_SOCKET, but the socket that had
been created wasn't closed. Fix by issuing closesocket() in the
error path.
Backpatch to all supported branches.
Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQApmU5CrKefH85VbNYE2y8H=-qqEJbg6RAPU65+vCe+89A@mail.gmail.com
Backpatch-through: v12
|
|
Commit b8ba7344e9eb added in PQgetResult a dereference to a pointer
returned by pqPrepareAsyncResult(), before some other code that was
already testing that pointer for nullness. But since commit
618c16707a6d (in Postgres 15), pqPrepareAsyncResult() doesn't ever
return NULL (a statically-allocated result is returned if OOM). So in
branches 15 and up, we can remove the redundant pointer check with no
harm done.
However, in branch 14, pqPrepareAsyncResult() can indeed return NULL if
it runs out of memory. Fix things there by adding a null pointer check
before dereferencing the pointer. This should hint Coverity that the
preexisting check is not redundant but necessary.
Backpatch to 14, like b8ba7344e9eb.
Per Coverity.
|
|
When ExecBRUpdateTriggers switches to a new target tuple as a result
of the EvalPlanQual logic, it must form a new proposed update tuple.
Since commit 86dc90056, that tuple (the result of
ExecGetUpdateNewTuple) has been a virtual tuple that might contain
pointers to by-ref fields of the new target tuple (in "oldslot").
However, immediately after that we materialize oldslot, causing it to
drop its buffer pin, whereupon the by-ref pointers are unsafe to use.
This is a live bug only when the new target tuple is in a different
page than the original target tuple, since we do still hold a pin on
the original one. (Before 86dc90056, there was no bug because the
EPQ plantree would hold a pin on the new target tuple; but now that's
not assured.) To fix, forcibly materialize the new tuple before we
materialize oldslot. This costs nothing since we would have done that
shortly anyway.
The real-world impact of this is probably minimal. A visible failure
could occur if the new target tuple's buffer were recycled for some
other page in the short interval before we materialize newslot within
the trigger-calling loop; but that's quite unlikely given that we'd
just touched that page. There's a larger hazard that some other
process could prune and repack that page within the window. We have
lock on the new target tuple, but that wouldn't prevent it being moved
on the page.
Alexander Lakhin and Tom Lane, per bug #17798 from Alexander Lakhin.
Back-patch to v14 where 86dc90056 came in.
Discussion: https://postgr.es/m/17798-0907404928dcf0dd@postgresql.org
|
|
Discussion: https://postgr.es/m/1393953.1698353013@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAGjhLkOoBEC9mLsnB42d3CO1vcMx71MLSEuigeABbQ8oRdA6gw@mail.gmail.com
|
|
We've long had a policy that any toasted fields in a catalog tuple
should be pulled in-line before entering the tuple in a catalog cache.
However, that requires access to the catalog's toast table, and we'll
typically do AcceptInvalidationMessages while opening the toast table.
So it's possible that the catalog tuple is outdated by the time we
finish detoasting it. Since no cache entry exists yet, we can't
mark the entry stale during AcceptInvalidationMessages, and instead
we'll press forward and build an apparently-valid cache entry. The
upshot is that we have a race condition whereby an out-of-date entry
could be made in a backend's catalog cache, and persist there
indefinitely causing indeterminate misbehavior.
To fix, use the existing systable_recheck_tuple code to recheck
whether the catalog tuple is still up-to-date after we finish
detoasting it. If not, loop around and restart the process of
searching the catalog and constructing cache entries from the top.
The case is rare enough that this shouldn't create any meaningful
performance penalty, even in the SearchCatCacheList case where
we need to tear down and reconstruct the whole list.
Indeed, the case is so rare that AFAICT it doesn't occur during
our regression tests, and there doesn't seem to be any easy way
to build a test that would exercise it reliably. To allow
testing of the retry code paths, add logic (in USE_ASSERT_CHECKING
builds only) that randomly pretends that the recheck failed about
one time out of a thousand. This is enough to ensure that we'll
pass through the retry paths during most regression test runs.
By adding an extra level of looping, this commit creates a need
to reindent most of SearchCatCacheMiss and SearchCatCacheList.
I'll do that separately, to allow putting those changes in
.git-blame-ignore-revs.
Patch by me; thanks to Alexander Lakhin for having built a test
case to prove the bug is real, and to Xiaoran Wang for review.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/1393953.1698353013@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAGjhLkOoBEC9mLsnB42d3CO1vcMx71MLSEuigeABbQ8oRdA6gw@mail.gmail.com
|
|
This is similar to 9886744a361b, to prevent the execution of other
programs due to autorun configurations which could influence the
postmaster startup.
This was originally applied on HEAD as of 83c75ac7fb69 without a
backpatch, but the patch has survived CI and buildfarm cycles. I have
checked that cmd /d exists down to Windows XP, which should make this
change work correctly in the oldest branches still supported.
Discussion: https://postgr.es/m/20230922.161551.320043332510268554.horikyota.ntt@gmail.com
Backpatch-through: 12
|
|
On Windows, cmd.exe is used to launch the postmaster process to ease its
redirection setup. However, cmd.exe may execute other programs at
startup due to autorun configurations, which could influence the
postmaster startup. This patch adds the /D flag to the launcher cmd.exe
command line to disable autorun settings written in the registry.
This was originally applied on HEAD as of 9886744a361b without a
backpatch, but the patch has survived CI and buildfarm cycles. I have
checked that cmd /d exists down to Windows XP, which should make this
change work correctly in the oldest branches still supported.
Reported-by: Hayato Kuroda
Author: Kyotaro Horiguchi
Reviewed-by: Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/20230922.161551.320043332510268554.horikyota.ntt@gmail.com
Backpatch-through: 12
|