Age | Commit message | Author |
|
Since we introduced unlogged sequences in v15, identity sequences
have defaulted to having the same persistence as their owning table.
However, it is possible to change that with ALTER SEQUENCE, and
pg_dump tries to preserve the logged-ness of a sequence when it doesn't
match its owning table's persistence (as indeed it wouldn't for an
unlogged table from before v15).
The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails
in binary-upgrade mode, because it needs to assign a new relfilenode
which we cannot permit in that mode. Thus, trying to pg_upgrade a
database containing a mismatching identity sequence failed.
To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow
the sequence's persistence to be set correctly at creation, and use
that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump. (I tried to
make SET [UN]LOGGED work without any pg_dump modifications, but that
seems too fragile to be a desirable answer. This way should be
markedly faster anyhow.)
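For illustration, a hedged sketch of the kind of mismatch that tripped
pg_upgrade (object names are hypothetical):
    CREATE UNLOGGED TABLE tab (id int GENERATED ALWAYS AS IDENTITY);
    ALTER SEQUENCE tab_id_seq SET LOGGED;  -- persistence now differs from the table
    -- pg_dump must restore the mismatch; emitting ALTER SEQUENCE ... SET LOGGED
    -- fails in binary-upgrade mode, hence the new column-level syntax.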
In passing, document the previously-undocumented SEQUENCE NAME option
that pg_dump also relies on for identity sequences; I see no value
in trying to pretend it doesn't exist.
Per bug #18618 from Anthony Hsu.
Back-patch to v15 where we invented this stuff.
Discussion: https://postgr.es/m/18618-d4eb26d669ed110a@postgresql.org
|
|
check_agglevels_and_constraints() asserted that if we find an
aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the
expression must be in a LATERAL subquery. Alexander Lakhin found a
case where that's not so: because of the odd scoping rules for NEW/OLD
within a rule, a reference to NEW/OLD could cause an aggregate to be
considered top-level even though it's in an unmarked sub-select.
The error message that would be thrown seems sufficiently on-point,
so just remove the Assert. (Hence, this is not a bug for production
builds.)
This Assert was added by me in commit eaccfded9 (9.3 era). It looks
like I put it in to cross-check that the new logic for detecting
misplaced aggregates (using agglevelsup) caught the same cases that a
previous check on p_lateral_active did. So there might have been some
related misbehavior before eaccfded9 ... but that's very ancient
history by now, so I didn't dig any deeper.
Per bug #18608 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18608-48de0717508ee429@postgresql.org
|
|
Since commit 2549f0661, we reject an identifier immediately following
a numeric literal (without separating whitespace), because that risks
ambiguity with hex/octal/binary integers. However, that patch used
token patterns like "{integer}{ident_start}", which is problematic
because {ident_start} matches only a single byte. If the first
character after the integer is a multibyte character, this ends up
with flex reporting an error message that includes a partial multibyte
character. That can cause assorted bad-encoding problems downstream,
both in the report to the client and in the postmaster log file.
To fix, use {identifier} not {ident_start} in the "junk" token
patterns, so that they will match complete multibyte characters.
This seems generally better user experience quite aside from the
encoding problem: for "123abc" the error message will now say that
the error appeared at or near "123abc" instead of "123a".
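For example (error wording is approximate):
    SELECT 123abc;
    -- previously reported:  ... at or near "123a"
    -- now reported:         ... at or near "123abc"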
While at it, add some commentary about why these patterns exist
and how they work.
Report and patch by Karina Litskevich; review by Pavel Borisov.
Back-patch to v15 where the problem came in.
Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com
|
|
checkWellFormedRecursion would issue "missing recursive reference"
if a WITH RECURSIVE query contained a single self-reference but
that self-reference was inside a top-level WITH, ORDER BY, LIMIT,
etc, rather than inside the second arm of the UNION as expected.
We already intended to throw more-on-point errors for such cases,
but those error checks must be done before examining the UNION arm
in order to have the desired results. So this patch need only
move some code (and improve the comments).
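A hedged sketch of the sort of query affected (the exact error text may
differ):
    WITH RECURSIVE c(x) AS (
        SELECT 1
        UNION ALL
        SELECT 2
        LIMIT (SELECT count(*) FROM c)  -- the only self-reference is in LIMIT
    )
    SELECT * FROM c;
    -- now draws the on-point "LIMIT in a recursive query is not implemented"
    -- style of error rather than "missing recursive reference"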
Per bug #18536 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18536-0a342ec07901203e@postgresql.org
|
|
transformTableLikeClause believed that it could process extended
statistics immediately because "the representation of CreateStatsStmt
doesn't depend on column numbers". That was true when extended stats
were first introduced, but it was falsified by the addition of
extended stats on expressions: the parsed expression tree is fed
forward by the LIKE option, and that will contain Vars. So if the
new table doesn't have attnums identical to the old one's (typically
because there are some dropped columns in the old one), that doesn't
work. The CREATE goes through, but it emits invalid statistics
objects that will cause problems later.
Fortunately, we already have logic that can adapt expression trees
to the possibly-new column numbering. To use it, we have to delay
processing of CREATE_TABLE_LIKE_STATISTICS into expandTableLikeClause,
just as for other LIKE options that involve expressions.
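A hedged sketch of the failure scenario (names are hypothetical):
    CREATE TABLE src (dropme int, a int, b int);
    ALTER TABLE src DROP COLUMN dropme;  -- a and b keep attnums 2 and 3
    CREATE STATISTICS src_stat ON (a + b) FROM src;
    CREATE TABLE dst (LIKE src INCLUDING STATISTICS);
    -- dst numbers a and b as 1 and 2, so the copied expression previously
    -- carried Vars pointing at the wrong (or nonexistent) columns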
Per bug #18468 from Alexander Lakhin. Back-patch to v14 where
extended statistics on expressions were added.
Discussion: https://postgr.es/m/18468-f5add190e3fa5902@postgresql.org
|
|
Commit 2ed8f9a01 intended to institute a policy that if a
RangeTblFunction has a coldeflist, then the function return type is
certainly RECORD, and we should use the coldeflist as the source of
truth about what the columns of the record type are. When the
original function has been folded to a constant, inspection of the
constant might give a different answer. This situation will lead to
a tuple-type-mismatch error at execution, but up until that point we
need to consistently believe the coldeflist, or we'll have problems
from different bits of code reaching different conclusions.
expandRTE didn't get that memo though, and would try to produce a
tupdesc based on the constant in this situation, leading to an
assertion failure. (Desultory testing suggests that non-assert
builds often manage to give the expected error, although I also
saw a "cache lookup failed for type 0" error, and it seems at
least possible that a crash could happen.)
Some other callers of get_expr_result_type and get_expr_result_tupdesc
were also being incautious about this. While none of them seem to
have actual bugs, they're working harder than necessary in this case,
besides which it seems safest to have an explicit policy of not using
those functions on an RTE with a coldeflist. Adjust the code
accordingly, and add commentary to funcapi.c about this policy.
Also fix an obsolete comment that claimed "get_expr_result_type()
doesn't know how to extract type info from a RECORD constant".
That hasn't been true since commit d57534740.
Per bug #18422 from Alexander Lakhin.
As with the previous commit, back-patch to all supported branches.
Discussion: https://postgr.es/m/18422-89ca86c8eac5246d@postgresql.org
|
|
Commit a3c7a993d fixed some cases involving target columns that are
arrays or composites by applying transformAssignedExpr to the VALUES
entries, and then stripping off any assignment ArrayRefs or
FieldStores that the transformation added. But I forgot about domains
over arrays or composites :-(. Such cases would either fail with
surprising complaints about mismatched datatypes, or insert unexpected
coercions that could lead to odd results. To fix, extend the
stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef
or FieldStore.
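A hedged sketch of the previously-failing pattern (names are hypothetical):
    CREATE DOMAIN intarr AS int[] CHECK (array_length(VALUE, 1) <= 4);
    CREATE TABLE t (a intarr);
    INSERT INTO t (a[1], a[2]) VALUES (1, 2), (3, 4);
    -- each row builds the array piecewise; the domain constraint should be
    -- checked once per row, on the fully-assembled value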
While poking at this, I realized that there's a poorly documented and
not-at-all-tested behavior nearby: we coerce each VALUES column to
the domain type separately, and rely on the rewriter to merge those
operations so that the domain constraints are checked only once.
If that merging did not happen, it's entirely possible that we'd get
unexpected domain constraint failures due to checking a
partially-updated container value. There's no bug there, but while
we're here let's improve the commentary about it and add some test
cases that explicitly exercise that behavior.
Per bug #18393 from Pablo Kharo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org
|
|
Verify that a user running MERGE with a DO NOTHING clause has
privileges to read the table, even if no columns are referenced. Such
privileges were already required if the ON clause or any of the WHEN
conditions referenced any column at all, so there's no functional change
in practice.
This change fixes an assertion failure in the case where no column is
referenced by the command and the WHEN clauses are all DO NOTHING.
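A hedged sketch of the now-consistently-checked case (table names are
hypothetical):
    MERGE INTO target t
    USING source s ON true
    WHEN MATCHED THEN DO NOTHING
    WHEN NOT MATCHED THEN DO NOTHING;
    -- no column of either table is referenced, yet SELECT privilege on
    -- "target" is now required, as it already was for column references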
Backpatch to 15, where MERGE was introduced.
Reported-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/4d65a385-7efa-4436-a825-0869f89d9d92@postgrespro.ru
|
|
Fix for 344d62fb9a9: That commit introduced unlogged sequences and
made it so that identity/serial sequences automatically get the
persistence level of their owning table. But this works only for
CREATE TABLE and not for ALTER TABLE / ADD COLUMN. The latter would
always create the sequence as logged (default), independent of the
persistence setting of the table. This is fixed here.
Note: It is allowed to change the persistence of identity sequences
directly using ALTER SEQUENCE. So mistakes in existing databases can
be fixed manually.
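A hedged sketch (the sequence name shown is the default one PostgreSQL
generates):
    CREATE UNLOGGED TABLE t (a int);
    ALTER TABLE t ADD COLUMN id int GENERATED ALWAYS AS IDENTITY;
    -- the new identity sequence is now created unlogged, matching the table;
    -- pre-existing mismatches can be repaired with
    ALTER SEQUENCE t_id_seq SET UNLOGGED;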
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/c4b6e2ed-bcdf-4ea7-965f-e49761094827%40eisentraut.org
|
|
transformAggregateCall() captures the datatypes of the aggregate's
arguments immediately to construct the Aggref.aggargtypes list.
This seems reasonable because the arguments have already been
transformed --- but there is an edge case where they haven't been.
Specifically, if we have an unknown-type literal in an ANY argument
position, nothing will have been done with it earlier. But if we
also have DISTINCT, then addTargetToGroupList() converts the literal
to "text" type, resulting in the aggargtypes list not matching the
actual runtime type of the argument. The end result is that the
aggregate tries to interpret a "text" value as being of type
"unknown", that is a zero-terminated C string. If the text value
contains no zero bytes, this could result in disclosure of server
memory following the text literal value.
To fix, move the collection of the aggargtypes list to the end
of transformAggregateCall(), after DISTINCT has been handled.
This requires slightly more code, but not a great deal.
Our thanks to Jingzhou Fu for reporting this problem.
Security: CVE-2023-5868
|
|
expandRecordVariable() failed to adjust the parse nesting structure
correctly when recursing to inspect an outer-level Var. This could
result in assertion failures or core dumps in corner cases.
Likewise, get_name_for_var_field() failed to adjust the deparse
namespace stack correctly when recursing to inspect an outer-level
Var. In this case the likely result was a "bogus varno" error
while deparsing a view.
Per bug #18077 from Jingzhou Fu. Back-patch to all supported
branches.
Richard Guo, with some adjustments by me
Discussion: https://postgr.es/m/18077-b9db97c6e0ab45d8@postgresql.org
|
|
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot. Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot. We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot. This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL. We can fix it in the same way, by excluding those
command types from revalidation.
However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands. To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree. If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.
stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile. It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.
In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.
Per bug #18059 from Pavel Kulakov. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
|
|
There was some odd wording in corner-case gram.y error messages "some
error ... at or near", which appears to have been modeled after "syntax
error" messages. However, they don't work that way, and they're just
wrong. They're also not covered by any tests. Remove the trailing words,
and also add tests.
They were introduced with 5a2832465fd8; backpatch to 15.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
|
|
CREATE SCHEMA AUTHORIZATION with appended schema elements can lead to
crashes when comparing the schema name of the query with the schemas
used in the qualification of some clauses in the elements' queries.
The origin of the problem is that the transformation routine for the
elements listed in a CREATE SCHEMA query uses as new, expected, schema
name the one listed in CreateSchemaStmt itself. However, depending on
the query, CreateSchemaStmt.schemaname may be NULL, being computed
instead from the role specification of the query given by the
AUTHORIZATION clause, that could be either:
- A user name string, with the new schema name being set to the same
value as the role given.
- Guessed from CURRENT_USER, SESSION_USER or CURRENT_ROLE, with a new
schema name computed from the security context where CREATE SCHEMA is
running.
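A hedged sketch of the kind of statement that could crash (role and object
names are hypothetical):
    CREATE SCHEMA AUTHORIZATION regress_owner
        CREATE TABLE regress_owner.tab (a int)
        CREATE VIEW regress_owner.v AS SELECT a FROM regress_owner.tab;
    -- no schema name is given, so it is derived from the AUTHORIZATION role,
    -- and the schema-qualified elements must be checked against that name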
Regression tests are added for CREATE SCHEMA with some appended elements
(some of them with schema qualifications), covering also some role
specification patterns.
While at it, this simplifies the context structure used during the
transformation of the elements listed in a CREATE SCHEMA query by
removing the fields for the role specification and the role type. They
were not used, and for the role specification this could be confusing as
the schema name may be extracted from it at the beginning of
CreateSchemaCommand().
This issue has existed for a long time, so backpatch down to all
supported versions.
Reported-by: Song Hongyu
Author: Michael Paquier
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17909-f65c12dfc5f0451d@postgresql.org
Backpatch-through: 11
|
|
When calling generateSerialExtraStmts(), we would pass in the
constraint->options. In some cases, generateSerialExtraStmts() would
modify the referenced List to remove elements from it, but doing so is
invalid without assigning the list back to all variables that point to it.
In the particular reported problem case, the List became empty, in which
case it became NIL, but the passed-in constraint->options never found out
about that and was left pointing to freed memory.
To fix this, just perform a list_copy() inside generateSerialExtraStmts().
We could instead do the list_copy() just before deleting from the list;
however, that seems less robust. Let's make sure the generated
CreateSeqStmt gets a completely different copy of the list, to be safe.
Bug: #17879
Reported-by: Fei Changhong
Diagnosed-by: Fei Changhong
Discussion: https://postgr.es/m/17879-b7dfb5debee58ff5@postgresql.org
Backpatch-through: 11, all supported versions
|
|
check_agg_arguments_walker() supposed that it needn't descend into
the arguments of a lower-level aggregate function, but this is
just wrong in the presence of multiple levels of sub-select. The
oversight would lead to executor failures on queries that should
be rejected. (Prior to v11, they actually were rejected, thanks
to a "redundant" execution-time check.)
Per bug #17835 from Anban Company. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17835-4f29f3098b2d0ba4@postgresql.org
|
|
If a view is defined atop another view, and then CREATE OR REPLACE
VIEW is used to add columns to the lower view, then when the upper
view's referencing RTE is expanded by ApplyRetrieveRule we will have
a subquery RTE with fewer eref->colnames than output columns. This
confuses various code that assumes those lists are always in sync,
as they are in plain parser output.
We have seen such problems before (cf commit d5b760ecb), and now
I think the time has come to do what was speculated about in that
commit: let's make ApplyRetrieveRule synthesize some column names to
preserve the invariant that holds in parser output. Otherwise we'll
be chasing this class of bugs indefinitely. Moreover, it appears from
testing that this actually gives us better results in the test case
d5b760ecb added, and likely in other corner cases that we lack
coverage for.
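A hedged sketch of the scenario (object names are hypothetical):
    CREATE TABLE t (a int);
    CREATE VIEW v1 AS SELECT a FROM t;
    CREATE VIEW v2 AS SELECT * FROM v1;
    ALTER TABLE t ADD COLUMN b int;
    CREATE OR REPLACE VIEW v1 AS SELECT a, b FROM t;  -- v1 gains a column
    SELECT * FROM v2;
    -- expanding v2's reference to v1 yields a subquery RTE with more output
    -- columns than eref->colnames; ApplyRetrieveRule now synthesizes names
    -- for the extras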
In HEAD, I replaced d5b760ecb's hack to make expandRTE exit early with
an elog(ERROR) call, since the case is now presumably unreachable.
But it seems like changing that in back branches would bring more risk
than benefit, so there I just updated the comment.
Per bug #17811 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17811-d31686b78f0dffc9@postgresql.org
|
|
The former code would only detect an unreachable WHEN clause if it had
an AND condition. Fix, so that unreachable unconditional WHEN clauses
are also detected.
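For example, this is now rejected (names are hypothetical):
    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    WHEN MATCHED THEN DELETE;  -- unreachable: the preceding clause is unconditional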
Back-patch to v15, where MERGE was added.
Discussion: https://postgr.es/m/CAEZATCVQ=7E2z4cSBB49jjeGGsB6WeoYQY32NDeSvcHiLUZ=ow@mail.gmail.com
|
|
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query. We can just move that processing
to before the recursive parse_sub_analyze call, though.
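A hedged sketch of such a reference, with the cycle-mark column used inside
the recursive term:
    WITH RECURSIVE g(id) AS (
        SELECT 1
        UNION ALL
        SELECT id + 1 FROM g WHERE NOT is_cycle AND id < 10
    ) CYCLE id SET is_cycle USING path
    SELECT * FROM g;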
While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.
Per bug #17723 from Vik Fearing. Back-patch to v14 where
the CYCLE feature was added.
Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
|
|
Use the relation's rd_rules structure to test whether it has rules,
rather than the relhasrules flag, which might be out of date.
Reviewed by Tom Lane.
Backpatch to 15, where MERGE was added.
Discussion: https://postgr.es/m/CAEZATCVkBVZABfw71sYvkcPf6tarcOFST5Bc6AOi-LFT9YdccQ%40mail.gmail.com
|
|
Commit aa0105141 repeated one of the oldest mistakes in our book:
thinking that OID is the same as int32. It isn't of course, and
unsurprisingly the first person who came along with a database
OID above 2 billion broke it. Repair.
Per bug #17677 from Sergey Pankov. Back-patch to v15.
Discussion: https://postgr.es/m/17677-a99fa067d7ed71c9@postgresql.org
|
|
Oversight in 7103ebb7aae8. Backpatch to 15.
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48gnDjZXq3-b56dVpQCNUJ5hD9kdtWN4QFwKCEapspNsA@mail.gmail.com
|
|
In the latest version of Apple's macOS SDK, <sys/socket.h>
fails to compile if "REF" is #define'd as something.
Apple may or may not agree that this is a bug, and even if
they do accept the bug report I filed, they probably won't
fix it very quickly. In the meantime, our back branches will all
fail to compile gram.y. v15 and HEAD currently escape the problem
thanks to the refactoring done in 98e93a1fc, but that's purely
accidental. Moreover, since that patch removed a widely-visible
inclusion of <netdb.h>, back-patching it seems too likely to break
third-party code.
Instead, change the token's code name to REF_P, following our usual
convention for naming parser tokens that are likely to have symbol
conflicts. The effects of that should be localized to the grammar
and immediately surrounding files, so it seems like a safer answer.
Per project policy that we want to keep recently-out-of-support
branches buildable on modern systems, back-patch all the way to 9.2.
Discussion: https://postgr.es/m/1803927.1665938411@sss.pgh.pa.us
|
|
This may be a bit too subtle, but removing that word from there makes
this clause no longer a perfect parallel of the GRANT variant "ALL
TABLES IN SCHEMA": indeed, for publications what we record is the schema
itself, not the tables therein, which means that any tables added to the
schema in the future are also published. This is completely different
from what GRANT does, which affects only the tables that exist when the
command is executed.
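The contrast, sketched (schema and role names are hypothetical):
    -- publications record the schema itself; tables created later are included
    CREATE PUBLICATION pub FOR TABLES IN SCHEMA sales;
    -- GRANT affects only the tables existing at execution time, hence keeps ALL
    GRANT SELECT ON ALL TABLES IN SCHEMA sales TO reader;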
There isn't resounding support for this change, but there are a few
positive votes and no opposition. Because the time to 15 RC1 is very
short, let's get this out now.
Backpatch to 15.
Discussion: https://postgr.es/m/2729c9e2-9aac-8cda-f2f4-34f2bcc18f4e
|
|
clang 15+ will issue a set-but-not-used warning when the only
use of a variable is in autoincrements (e.g., "foo++;").
That's perfectly sensible, but it detects a few more cases that
we'd not noticed before. Silence the warnings with our usual
methods, such as PG_USED_FOR_ASSERTS_ONLY, or in one case by
actually removing a useless variable.
One thing that we can't nicely get rid of is that with %pure-parser,
Bison emits "yynerrs" as a local variable that falls foul of this
warning. To silence those, I inserted "(void) yynerrs;" in the
top-level productions of affected grammars.
Per recently-established project policy, this is a candidate
for back-patching into out-of-support branches: it suppresses
annoying compiler warnings but changes no behavior. Hence,
back-patch to 9.5, which is as far as these patches go without
issues. (A preliminary check shows that the prior branches
need some other set-but-not-used cleanups too, so I'll leave
them for another day.)
Discussion: https://postgr.es/m/514615.1663615243@sss.pgh.pa.us
|
|
This appears to be a merge mistake in 96ef3237bf74. We could put it
back the way it was before JSON_TABLE and it'd be two lines shorter, but
it's likely that JSON_TABLE will be back and will prefer things this
way. It makes no other difference in practice.
Backpatch to 15.
Reported by Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAr4nOcNQskC4oBEZN4S+4heJ=1ch_ZKOxU+_Ef-FQSf-g@mail.gmail.com
|
|
This reverts the following and makes some associated cleanups:
commit f79b803dc: Common SQL/JSON clauses
commit f4fb45d15: SQL/JSON constructors
commit 5f0adec25: Make STRING an unreserved_keyword.
commit 33a377608: IS JSON predicate
commit 1a36bc9db: SQL/JSON query functions
commit 606948b05: SQL JSON functions
commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
commit 4e34747c8: JSON_TABLE
commit fadb48b00: PLAN clauses for JSON_TABLE
commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
commit 14d3f24fa: Further improve jsonb_sqljson parallel test
commit a6baa4bad: Documentation for SQL/JSON features
commit b46bcf7a4: Improve readability of SQL/JSON documentation.
commit 112fdb352: Fix finalization for json_objectagg and friends
commit fcdb35c32: Fix transformJsonBehavior
commit 4cd8717af: Improve a couple of sql/json error messages
commit f7a605f63: Small cleanups in SQL/JSON code
commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
commit a1e7616d6: Rework SQL/JSON documentation
commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
commit 3c633f32b: Only allow returning string types or bytea from json_serialize
commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.
The release notes are also adjusted.
Backpatch to release 15.
Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org
|
|
Compiling with -Wshadow=compatible-local yields quite a few warnings about
local variables being shadowed by compatible local variables in an inner
scope. Of course, this is perfectly valid in C, but we have had bugs in
the past as a result of developers failing to notice this. af7d270dd is a
recent example.
Here we do a cleanup of warnings we receive from -Wshadow=compatible-local
for code which is new to PostgreSQL 15. We've yet to have the discussion
about whether we actually ever want to run that as a standard compilation flag.
We'll need to at least get the number of warnings down to something easier
to manage before we can realistically consider if we want this or not.
This commit is the first step towards reducing the warnings.
The changes being made here are all fairly trivial. Because of that, and
the fact that v15 is still in beta, this is being back-patched into 15.
It seems more risky not to do this as the risk of future bugs is increased
by the additional conflicts that this commit could cause for any future
bug fixes touching the same areas as this commit.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220817145434.GC26426%40telsasoft.com
Backpatch-through: 15
|
|
Most parts of the parser can expect that the stack overflow check
in transformExprRecurse() will trigger before things get desperate.
However, transformFromClauseItem() can recurse directly to self
without having analyzed any expressions, so it's possible to drive
it to a stack-overrun crash. Add a check to prevent that.
Per bug #17583 from Egor Chindyaskin. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org
|
|
The grammar added for MERGE inadvertently made it accepted syntax in
places that were not prepared to deal with it -- namely COPY and inside
CTEs. Invoking these things with MERGE currently causes assertion
failures or weird misbehavior in non-assertion builds. Protect those
places by checking for it explicitly until somebody decides to implement
it.
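Hedged sketches of the shapes now rejected with an explicit error (names are
hypothetical):
    COPY (MERGE INTO target t USING source s ON t.id = s.id
          WHEN MATCHED THEN DO NOTHING) TO STDOUT;

    WITH m AS (
        MERGE INTO target t USING source s ON t.id = s.id
        WHEN MATCHED THEN DO NOTHING
    )
    SELECT 1;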
Reported-by: Alexey Borzov <borz_off@cs.msu.su>
Discussion: https://postgr.es/m/17579-82482cd7b267b862@postgresql.org
|
|
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
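A hedged sketch of the attack shape (names are hypothetical):
    -- attacker pre-creates an object the extension's script is known to create
    CREATE FUNCTION ext_helper() RETURNS int LANGUAGE sql AS 'SELECT 1';
    -- later, CREATE EXTENSION runs a script containing
    --   CREATE OR REPLACE FUNCTION ext_helper() ...
    -- which now fails, since ext_helper() does not already belong to the extension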
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
|
|
I thought commit fd96d14d9 had plugged all the holes of this sort,
but no, function RTEs could produce oversize tuples too, either
via long coldeflists or just from multiple functions in one RTE.
(I'm pretty sure the other variants of base RTEs aren't a problem,
because they ultimately refer to either a table or a sub-SELECT,
whose widths are enforced elsewhere. But we explicitly allow join
RTEs to be overwidth, as long as you don't try to form their
tuple result.)
Per further discussion of bug #17561. As before, patch all branches.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
|
|
A RowExpr with more than MaxTupleAttributeNumber columns would fail at
execution anyway, since we cannot form a tuple datum with more than that
many columns. While heap_form_tuple() has a check for too many columns,
it emerges that there are some intermediate bits of code that don't
check and can be driven to failure with sufficiently many columns.
Checking this at parse time seems like the most appropriate place to
install a defense, since we already check SELECT list length there.
While at it, make the SELECT-list-length error use the same errcode
(TOO_MANY_COLUMNS) as heap_form_tuple does, rather than the generic
PROGRAM_LIMIT_EXCEEDED.
Per bug #17561 from Egor Chindyaskin. The given test case crashes
in all supported branches (and probably a lot further back),
so patch all.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
|
|
Due to lack of concern for the case in the dependency code, it's
possible to drop a column of a composite type even though stored
queries have references to the dropped column via functions-in-FROM
that return the composite type. There are "soft" references,
namely FROM-clause aliases for such columns, and "hard" references,
that is actual Vars referring to them. The right fix for hard
references is to add dependencies preventing the drop; something
we've known for many years and not done (and this commit still doesn't
address it). A "soft" reference shouldn't prevent a drop though.
We've been around on this before (cf. 9b35ddce9, 2c4debbd0), but
nobody had noticed that the current behavior can result in dump/reload
failures, because ruleutils.c can print more column aliases than the
underlying composite type now has. So we need to rejigger the
column-alias-handling code to treat such columns as dropped and not
print aliases for them.
Rather than writing new code for this, I used expandRTE() which already
knows how to figure out which function result columns are dropped.
I'd initially thought maybe we could use expandRTE() in all cases, but
that fails for EXPLAIN's purposes, because the planner strips a lot of
RTE infrastructure that expandRTE() needs. So this patch just uses it
for unplanned function RTEs and otherwise does things the old way.
If there is a hard reference (Var), then removing the column alias
causes us to fail to print the Var, since there's no longer a name
to print. Failing seems less desirable than printing a made-up
name, so I made it print "?dropped?column?" instead.
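A hedged sketch of the dump/reload hazard (names are hypothetical):
    CREATE TYPE comp AS (a int, b int);
    CREATE FUNCTION f() RETURNS SETOF comp LANGUAGE sql AS 'SELECT 1, 2';
    CREATE VIEW v AS SELECT t.a, t.b FROM f() AS t;
    ALTER TYPE comp DROP ATTRIBUTE b;  -- nothing prevents this today
    -- ruleutils no longer prints an alias for the dropped column, and the
    -- orphaned Var for t.b is printed as "?dropped?column?"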
Per report from Timo Stolz. Back-patch to all supported branches.
Discussion: https://postgr.es/m/5c91267e-3b6d-5795-189c-d15a55d61dbb@nullachtvierzehn.de
|
|
Detail and hint messages should be full sentences and should end with a
period, but some of the messages newly-introduced in v15 did not follow
that.
Author: Justin Pryzby
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/20220719120948.GF12702@telsasoft.com
Backpatch-through: 15
|
|
The error message introduced in 3c633f3 can share the same format string
with an existing message used for JSON(), reducing the translation
effort.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20220708.154135.2123613118233840495.horikyota.ntt@gmail.com
Backpatch-through: 15
|
|
These are documented to be the allowed types for the RETURNING clause,
but the restriction was not being enforced, which caused a segfault if
another type was specified. Add some testing for this.
Per report from a.kozhemyakin
Backpatch to release 15.
|
|
When locking a specific named relation for a FOR [KEY] UPDATE/SHARE
clause, transformLockingClause() finds the relation to lock by
scanning the rangetable for an RTE with a matching eref->aliasname.
However, it failed to account for the visibility rules of a join RTE.
If a join RTE doesn't have a user-supplied alias, it will have a
generated eref->aliasname of "unnamed_join" that is not visible as a
relation name in the parse namespace. Such an RTE needs to be skipped,
otherwise it might be found in preference to a regular base relation
with a user-supplied alias of "unnamed_join", preventing it from being
locked.
In addition, if a join RTE doesn't have a user-supplied alias, but
does have a join_using_alias, then the RTE needs to be matched using
that alias rather than the generated eref->aliasname, otherwise a
misleading "relation not found" error will be reported rather than a
"join cannot be locked" error.
Backpatch all the way, except for the second part which only goes back
to 14, where JOIN USING aliases were added.
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCUY_KOBnqxbTSPf=7fz9HWPnZ5Xgb9SwYzZ8rFXe7nb=w@mail.gmail.com
|
|
The output columns of JSON_TABLE should have the collations of their
data type. The existing implementation sets the default collation if
the type is collatable.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/9d75ce67-0121-5050-5bec-bf5009db55ce%40enterprisedb.com
|
|
Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a
type_func_name_keyword, thereby breaking applications that use
"string" as a table name, column name, function parameter name, etc.
That seems like a pretty bad thing, not least because the SQL spec
says that STRING is an unreserved keyword.
This is easy enough to fix so far as the core grammar is concerned.
However, doing so causes some ECPG test cases to fail, specifically
those that use "string" as a typedef name. It turns out this is
because portions of the ECPG grammar allow type_func_name_keywords
but not unreserved_keywords as typedef names. That's pretty horrid,
and it's mildly astonishing that we've not heard complaints about it
before. We can fix two of those uses trivially, but the ones in the
var_type production are less easy. As a stopgap, hard-code STRING as
an allowed alternative in var_type.
Per report from Alastair McKinley.
Discussion: https://postgr.es/m/3661437.1653855582@sss.pgh.pa.us
|
|
9d9c02ccd added code to allow the executor to take shortcuts when quals
on monotonic window functions guaranteed that once the qual became false
it could never become true again. When possible, baserestrictinfo quals
are converted to become these quals, which we call run conditions.
Unfortunately, in 9d9c02ccd, I forgot to update
remove_unused_subquery_outputs to teach it about these run conditions.
This could cause a WindowFunc column which was unused in the target list
but referenced by an upper-level WHERE clause to be removed from the
subquery when the qual in the WHERE clause was converted into a window run
condition. Because of this, the entire WindowClause would be removed from
the query resulting in additional rows making it into the resultset when
they should have been filtered out by the WHERE clause.
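A hedged sketch of the affected query shape (names are hypothetical):
    SELECT id
    FROM (SELECT id, row_number() OVER (ORDER BY id) AS rn
          FROM tbl) sub
    WHERE rn <= 10;
    -- "rn" is unused in the outer target list but its qual becomes a run
    -- condition; the subquery's window computation must not be pruned away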
Here we fix this by recording which target list items in the subquery have
run conditions. That gets passed along to remove_unused_subquery_outputs
to tell it not to remove these items from the target list.
Bug: #17495
Reported-by: Jeremy Evans
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17495-7ffe2fa0b261b9fa@postgresql.org
|
|
We weren't checking the length of the column list in the alias clause of
an XMLTABLE or JSON_TABLE function (a "tablefunc" RTE), and it was
possible to make the server crash by passing an overly long one. Fix it
by throwing an error in that case, like the other places that deal with
alias lists.
In passing, modify the equivalent test used for join RTEs to look like
the other ones, which was different for no apparent reason.
This bug came in when XMLTABLE was born in version 10; backpatch to all
stable versions.
Reported-by: Wang Ke <krking@zju.edu.cn>
Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org
|
|
I started out with the intention to rename value_type to item_type to
avoid a collision with a typedef name that appears on some platforms.
Along the way, I noticed that the adjacent field "format" was not being
correctly handled by the backend/nodes/ infrastructure functions:
copyfuncs.c erroneously treated it as a scalar, while equalfuncs,
outfuncs, and readfuncs omitted handling it at all. This looks like
it might be cosmetic at the moment because the field is always NULL
after parse analysis; but that's likely a bug in itself, and the code's
certainly not very future-proof. Let's fix it while we can still do so
without forcing an initdb on beta testers.
Further study found a few other inconsistencies in the backend/nodes/
infrastructure for the recently-added JSON node types, so fix those too.
catversion bumped because of potential change in stored rules.
Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
|
|
In the style of pgindent, done semi-manually.
Discussion: https://www.postgresql.org/message-id/flat/7d062ecc-7444-23ec-a159-acd8adf9b586%40enterprisedb.com
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
|
|
This fixes the grammar of some comments in a couple of tests (SQL and
TAP), and in some C files.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220511020334.GH19626@telsasoft.com
|
|
The parser code that transformed VALUES from row-oriented to
column-oriented lists failed if there were zero columns.
You can't write that straightforwardly (though probably you
should be able to), but the case can be reached by expanding
a "tab.*" reference to a zero-column table.
Per bug #17477 from Wang Ke. Back-patch to all supported branches.
Discussion: https://postgr.es/m/17477-0af3c6ac6b0a6ae0@postgresql.org
|
|
It doesn't seem very useful, and it's a bit in the way of the planned
node support automation.
Discussion: https://www.postgresql.org/message-id/202204191140.3wsbevfhqmu3@alvherre.pgsql
|
|
Don't try to look at the attidentity field of system attributes,
because they're not there in the TupleDescAttr array. Sometimes
this is harmless because we accidentally pick up a zero, but
otherwise we'll report "no owned sequence found" from an attempt
to alter a system attribute. (It seems possible that a SIGSEGV
could occur, too, though I've not seen it in testing.)
It's not in this function's charter to complain that you can't
alter a system column, so instead just hard-wire an assumption
that system attributes aren't identities. I didn't bother with
a regression test because the appearance of the bug is very
erratic.
Per bug #17465 from Roman Zharkov. Back-patch to all supported
branches. (There's not actually a live bug before v12, because
before that get_attidentity() did the right thing anyway.
But for consistency I changed the test in the older branches too.)
Discussion: https://postgr.es/m/17465-f2a554a6cb5740d3@postgresql.org
|
|
These are to keep Coverity happy. In one case remove a redundant NULL
check, and in another explicitly ignore a function result that is already
known.
|