|
There are some problems with the new way to handle these constraints
that were detected at the last minute, and require fixes that appear too
invasive to be doing this late in the cycle. Revert this (again) for
now; we'll try again with these problems fixed.
The following commits are reverted:
b0e96f311985 Catalog not-null constraints
9b581c534186 Disallow changing NO INHERIT status of a not-null constraint
d0ec2ddbe088 Fix not-null constraint test
ac22a9545ca9 Move privilege check to the right place
b0f7dd915bca Check stack depth in new recursive functions
3af721794272 Update information_schema definition for not-null constraints
c3709100be73 Fix propagating attnotnull in multiple inheritance
d9f686a72ee9 Fix restore of not-null constraints with inheritance
d72d32f52d26 Don't try to assign smart names to constraints
0cd711271d42 Better handle indirect constraint drops
13daa33fa5a6 Disallow NO INHERIT not-null constraints on partitioned tables
d45597f72fe5 Disallow direct change of NO INHERIT of not-null constraints
21ac38f498b3 Fix inconsistencies in error messages
Discussion: https://postgr.es/m/202405110940.joxlqcx4dogd@alvherre.pgsql
|
|
It turns out that we broke this in commit e5bc9454e, because
the code was assuming that no dependent types would appear
among the extension's direct dependencies, and now they do.
This isn't terribly hard to fix: just skip dependent types,
expecting that we will recurse to them when we process the parent
object (which should also be among the direct dependencies).
But a little bit of refactoring is needed so that we can avoid
duplicating logic about what is a dependent type.
Although there is some testing of ALTER EXTENSION SET SCHEMA,
it failed to cover interesting cases, so add more tests.
Discussion: https://postgr.es/m/930191.1715205151@sss.pgh.pa.us
|
|
Reported by Kyotaro Horiguchi
Also some comments mentioning wrong version numbers, spotted by Justin
Pryzby.
Discussion: https://postgr.es/m/20240507.171724.750916195320223609.horikyota.ntt@gmail.com
Discussion: https://postgr.es/m/Zh0aAH7tbZb-9HbC@pryzbyj2023
|
|
When changing the data type of a column of a partitioned table, craft
the ALTER SEQUENCE command only once. Partitions do not have identity
sequences of their own and thus do not need an ALTER SEQUENCE command
for each partition.
Fix getIdentitySequence() to fetch the identity sequence associated
with the top-level partitioned table when a Relation of a partition is
passed to it. While doing so, translate the attribute number of the
partition into the attribute number of the partitioned table.
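For illustration, a minimal sketch of the kind of scenario affected (table and
column names are hypothetical):

    CREATE TABLE measurements (
        id  int GENERATED ALWAYS AS IDENTITY,
        val int
    ) PARTITION BY RANGE (val);
    CREATE TABLE measurements_low PARTITION OF measurements
        FOR VALUES FROM (0) TO (100);
    -- Only the partitioned table owns the identity sequence, so this should
    -- emit a single ALTER SEQUENCE, not one per partition:
    ALTER TABLE measurements ALTER COLUMN id TYPE bigint;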
Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Discussion: https://www.postgresql.org/message-id/3b8a9dc1-bbc7-0ef5-6863-c432afac7d59@gmail.com
|
|
While converting a pg_attribute tuple into a ColumnDef,
ColumnDef::compression remains NULL if there is no compression method
set for the attribute. Calling strcmp() with a NULL
ColumnDef::compression, when comparing compression methods of parents,
causes a segmentation fault in MergeInheritedAttribute(). Skip
comparing compression methods if either of them is NULL.
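A minimal sketch of the kind of multiple inheritance that exercises this code
path (table names are hypothetical):

    CREATE TABLE parent1 (a text COMPRESSION pglz);
    CREATE TABLE parent2 (a text);   -- no compression method set
    -- Merging column "a" from both parents previously dereferenced the NULL
    -- compression of parent2's ColumnDef:
    CREATE TABLE child () INHERITS (parent1, parent2);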
Author: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://www.postgresql.org/message-id/b22a6834-aacb-7b18-0424-a3f5fe889667%40gmail.com
|
|
Same as 42b041243, except that the trouble case is a publication
WHERE clause that depends on a column.
Again reported by Alexander Lakhin. Back-patch to v15 where
we added publication WHERE clauses.
Discussion: https://postgr.es/m/548a47bc-87ae-b3df-c6a2-60b9966f808b@gmail.com
|
|
We support changing NO INHERIT constraint to INHERIT for constraints in
child relations when adding a constraint to some ancestor relation, and
also during pg_upgrade's schema restore; but other than those special
cases, command ALTER TABLE ADD CONSTRAINT should not be allowed to
change an existing constraint from NO INHERIT to INHERIT, as that would
require processing child relations so that they also acquire an
appropriate constraint, which we may not be in a position to do. (It'd
also be surprising behavior.)
It is conceivable that we want to allow ALTER TABLE SET NOT NULL to make
such a change; but in that case some more code is needed to implement it
correctly, so for now I've made that throw the same error message.
Also, during the prep phase of ALTER TABLE ADD CONSTRAINT, acquire locks
on all descendant tables; otherwise we might operate on child tables on
which no locks are held, particularly in the mode where a primary key
causes not-null constraints to be created on children.
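As a hedged sketch, relying on the not-null constraint syntax from
b0e96f311985 (names hypothetical), the now-rejected case looks like this:

    CREATE TABLE parent (a int NOT NULL NO INHERIT);
    -- Adding an inheritable not-null constraint on the same column no longer
    -- silently flips the existing constraint to INHERIT; it raises an error:
    ALTER TABLE parent ADD CONSTRAINT parent_a_not_null NOT NULL a;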
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/7d923a66-55f0-3395-cd40-81c142b5448b@gmail.com
|
|
Discussion: https://postgr.es/m/4ea13583-7305-40b0-8525-58381533e2b1@eisentraut.org
Reported-by: Peter Eisentraut
|
|
This commit makes new partitions created by the ALTER TABLE ... SPLIT PARTITION
and ALTER TABLE ... MERGE PARTITIONS commands inherit the parent table's access
method.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com
Reviewed-by: Pavel Borisov
|
|
The createPartitionTable() function is responsible for creating new partitions
for ALTER TABLE ... MERGE PARTITIONS, and ALTER TABLE ... SPLIT PARTITION
commands. It emulates the behaviour of CREATE TABLE ... (LIKE ...), where
the new table's persistence should be specified by the user. In table
partitioning, the persistence of a partition and its parent must match. So, this
commit makes createPartitionTable() copy the persistence of the parent
partition.
Also, this commit makes createPartitionTable() recheck the persistence after
the new table creation. This is needed because persistence might be affected
by pg_temp in search_path.
This commit also changes the signature of createPartitionTable(), making it
take the parent's Relation itself instead of the name of the parent relation,
and return the Relation of the new partition. That doesn't lead to
complications, because both callers have the parent table open and need to
open the new partition.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/dbc8b96c-3cf0-d1ee-860d-0e491da20485%40gmail.com
Author: Dmitry Koval
Reviewed-by: Alexander Korotkov, Robert Haas, Justin Pryzby, Pavel Borisov
|
|
The name collision happens when the name of the new partition is the same as
the name of one of the merging partitions. Currently, ATExecMergePartitions()
first gives the new partition a temporary name and then renames it when old
partitions are deleted. That negatively influences the naming of related
objects like indexes and constraints, which could inherit a temporary name.
This commit changes the implementation in the following way. A merging
partition gets renamed first, then the new partition is created with the
right name immediately. This resolves the issue of the naming of related
objects.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/edfbd846-dcc1-42d1-ac26-715691b687d3%40postgrespro.ru
Author: Dmitry Koval, Alexander Korotkov
Reviewed-by: Robert Haas, Justin Pryzby, Pavel Borisov
|
|
Repeating loads of inplace-updated fields tends to cause bugs like the
one from the previous commit. While there's no bug to fix in these code
sites, adopt the load-once style. This improves the chance of future
copy/paste finding the safe style.
Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com
|
|
vac_update_datfrozenxid() did multiple loads of relfrozenxid and
relminmxid from buffer memory, and it assumed each would get the same
value. Not so if a concurrent vac_update_relstats() did an inplace
update. Commit 2d2e40e3befd8b9e0d2757554537345b15fa6ea2 fixed the same
kind of bug in vac_truncate_clog(). Today's bug could cause the
rel-level field and XIDs in the rel's rows to precede the db-level
field. A cluster having such values should VACUUM affected tables.
Back-patch to v12 (all supported versions).
Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com
|
|
ALTER COLUMN TYPE wasn't expecting to find any pg_proc objects
depending on the column whose type is to be altered. That indeed
wasn't possible when this code was written, but it is possible
since we introduced new-style SQL function bodies.
It's about as difficult to fix this case as it is to fix dependent
views, and we've been punting on those for years, so I don't feel
too awful about punting for functions too. (I sure wouldn't risk
back-patching such code.) So just throw a more user-facing error.
Also, adjust some of the existing comments to reflect that these
are all pretty much the same issue.
(This patch also fixes it so we will tolerate finding such a
dependency during ALTER COLUMN SET EXPRESSION; in that, we need
not do anything to the function, so no error is wanted. That
problem is new in HEAD.)
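A hedged sketch of the newly-reported case (object names are hypothetical):

    CREATE TABLE t (a int);
    CREATE FUNCTION f() RETURNS int
        LANGUAGE SQL
        RETURN (SELECT a FROM t);   -- new-style body, records a dependency on t.a
    -- Rather than failing in an obscure way, this now raises a user-facing
    -- error about the dependent function:
    ALTER TABLE t ALTER COLUMN a TYPE bigint;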
Per bug #18449 from Alexander Lakhin. Back-patch to v14 where
we added new-style SQL functions.
Discussion: https://postgr.es/m/18449-f8248467aaa294d5@postgresql.org
|
|
Do not allow ALTER SUBSCRIPTION ... SET (failover = on|off) in a
transaction block as the changed failover option of the slot can't be
rolled back. For the same reason, we refrain from altering the replication
slot's failover property if the subscription is created with a valid
slot_name and create_slot=false.
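For example (subscription name is hypothetical), this is now rejected:

    BEGIN;
    ALTER SUBSCRIPTION mysub SET (failover = true);  -- error: disallowed in a transaction block
    ROLLBACK;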
Reported-by: Kuroda Hayato
Author: Hou Zhijie
Reviewed-by: Shveta Malik, Bertrand Drouvot, Kuroda Hayato
Discussion: https://postgr.es/m/OS0PR01MB57165542B09DFA4943830BF294082@OS0PR01MB5716.jpnprd01.prod.outlook.com
|
|
When specifying the createdb strategy, the documentation suggests valid
options are FILE_COPY and WAL_LOG, but the code does case-sensitive
comparison and accepts only "file_copy" and "wal_log" as valid.
Fixed by doing a case-insensitive comparison using pg_strcasecmp(), same
as for other string parameters nearby.
While at it, apply fmtId() to a nearby "locale_provider". This already
did the comparison in a case-insensitive way, but the value would not be
double-quoted, confusing the parser and the error message.
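For reference, roughly the SQL that createdb ends up issuing (database name is
hypothetical); the documented upper-case spelling now works from the client too:

    -- createdb --strategy=FILE_COPY mydb
    CREATE DATABASE mydb STRATEGY = FILE_COPY;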
Backpatch to 15, where the strategy was introduced.
Backpatch-through: 15
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/90c6913a-1dd2-42b4-8365-ce3b09c39b17@enterprisedb.com
|
|
The optimization for inserts into BRIN indexes added by c1ec02be1d79
relies on a cache that needs to be explicitly released after calling
index_insert(). The commit however failed to invoke the cleanup in
validate_index(), which calls index_insert() indirectly through
table_index_validate_scan().
After inspecting index_insert() callers, it seems unique_key_recheck()
is missing the call too.
Fixed by adding the two missing index_insert_cleanup() calls.
The commit makes two additional improvements. The aminsertcleanup()
signature is modified to have the index as the first argument, to make
it more like the other AM callbacks. And the aminsertcleanup() callback
is invoked even if the ii_AmCache is NULL, so that it can decide if the
cleanup is necessary.
Author: Alvaro Herrera, Tomas Vondra
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/202401091043.e3nrqiad6gb7@alvherre.pgsql
|
|
In certain cases it is possible to remove not-null constraints without
maintaining attnotnull in its correct state; for example if you drop
a column that's part of the primary key, and the other columns of the PK don't
have not-null constraints, then we should reset the attnotnull flags for
those other columns; up to this commit, we didn't. Handle those cases
better by doing the attnotnull reset in RemoveConstraintById() instead
of in dropconstraint_internal().
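A minimal sketch of the first case (names hypothetical):

    CREATE TABLE t (a int, b int, PRIMARY KEY (a, b));
    -- Dropping "b" also drops the PK; with this fix, attnotnull on "a" is
    -- reset, since no not-null constraint remains to support it:
    ALTER TABLE t DROP COLUMN b;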
However, there are some cases where we must not do so. For example if
those other columns are in replica identity indexes or are generated
identity columns, we must keep attnotnull set, even though that results in
the catalog inconsistency of the flag being set with no not-null constraint
to support it.
Because the attnotnull reset now happens in more places than before, for
instance when a column of the primary key changes type, we need an
additional trick to reinstate it as necessary. Introduce a new
alter-table pass that does this, which simply needs to reschedule some
AT_SetAttNotNull subcommands that were already being generated and
ignored.
Because of the exceptions in which attnotnull is not reset noted above,
we also include a pg_dump hack to include a not-null constraint when the
attnotnull flag is set even if no pg_constraint row exists. This part
is undesirable but necessary, because failing to handle the case can
result in unrestorable dumps.
Reported-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CAHewXN=hMbNa3d43NOR=OCgdgpTt18S-1fmueCoEGesyeK4bqw@mail.gmail.com
|
|
This fixes various typos, duplicated words, and tiny bits of whitespace
mainly in code comments but also in docs.
Author: Daniel Gustafsson <daniel@yesql.se>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Alexander Lakhin <exclusion@gmail.com>
Author: David Rowley <dgrowleyml@gmail.com>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/3F577953-A29E-4722-98AD-2DA9EFF2CBB8@yesql.se
|
|
For tables with primary keys, pg_dump initially dumps them with throw-away
not-null constraints (marked "no inherit" so that they don't create problems
elsewhere), to be dropped later once the primary key is restored. Because of
an unrelated
consideration, on tables with children we add not-null constraints to
all columns of the primary key when it is created.
If both a table and its child have primary keys, and pg_dump happens to
emit the child table first (and its throw-away not-null) and later its
parent table, the creation of the parent's PK will fail because the
throw-away not-null constraint collides with the permanent not-null
constraint that the PK wants to add, so the dump fails to restore.
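The problematic shape is roughly this (names hypothetical):

    CREATE TABLE parent (a int PRIMARY KEY);
    CREATE TABLE child (a int PRIMARY KEY) INHERITS (parent);
    -- A dump that happens to emit "child" (and its throw-away not-null)
    -- before "parent" previously failed to restore.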
We can work around this problem by letting the primary key "take over"
the child's not-null. This requires no changes to pg_dump, just two
changes to ALTER TABLE: first, the ability to convert a no-inherit
not-null constraint into a regular inheritable one (including recursing
down to children, if there are any); second, the ability to "drop" a
constraint that is defined both directly in the table and inherited from
a parent (which simply means to mark it as no longer having a local
definition).
Secondarily, change ATPrepAddPrimaryKey() to acquire locks all the way
down the inheritance hierarchy, in case we need to recurse when
propagating constraints.
These two changes allow pg_dump to reproduce more cases involving
inheritance from versions 16 and older.
Lastly, make two changes to pg_dump: 1) do not try to drop a not-null
constraint that's marked as inherited; this allows a dump to restore
with no errors if a table with a PK inherits from another which also has
a PK; 2) avoid giving inherited constraints throwaway names, for the
rare cases where such a constraint survives after the restore.
Reported-by: Andrew Bille <andrewbille@gmail.com>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/CAJnzarwkfRu76_yi3dqVF_WL-MpvT54zMwAxFwJceXdHB76bOA@mail.gmail.com
Discussion: https://postgr.es/m/Zh0aAH7tbZb-9HbC@pryzbyj2023
|
|
Databases exist, contrary to datatabases.
Oversight in e83d1b0c40cc.
Author: Japin Li
Discussion: https://postgr.es/m/ME3P282MB316611A2F7BF43919F695228B6082@ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM
|
|
The ON_ERROR option of the COPY command previously allowed omitting
its value, which was inconsistent with the syntax synopsis in the
documentation and the behavior of other non-boolean COPY options.
This change enforces providing a value for the ON_ERROR option,
ensuring consistency across other non-boolean options and aligning
with the documented syntax.
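For example (table and file names are hypothetical):

    COPY t FROM '/tmp/data.csv' WITH (FORMAT csv, ON_ERROR ignore);  -- value required
    -- COPY t FROM '/tmp/data.csv' WITH (ON_ERROR);                  -- now rejected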
Author: Atsushi Torikoshi
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/a9770bf57646d90dedc3d54cf32634b2%40oss.nttdata.com
|
|
This commit reverts 27bc1772fc and dd1f6b0c17. Per review by Andres Freund.
Discussion: https://postgr.es/m/20240415201057.khoyxbwwxfgzomeo%40awork3.anarazel.de
|
|
When matching constraints in AttachPartitionEnsureIndexes() we weren't
testing the constraint type, which could make a UNIQUE key lacking a
not-null constraint incorrectly satisfy a primary key requirement. Fix
this by testing that the constraint types match. (Other possible
mismatches are verified by comparing index properties.)
Discussion: https://postgr.es/m/202402051447.wimb4xmtiiyb@alvherre.pgsql
|
|
The fixes relate to comments, error messages, and corresponding expected output
of regression tests.
Discussion: https://postgr.es/m/CAMbWs49DDsknxyoycBqiE72VxzL_sYHF6zqL8dSeNehKPJhkKg%40mail.gmail.com
Discussion: https://postgr.es/m/86bfd241-a58c-479a-9a72-2c67a02becf8%40postgrespro.ru
Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
Author: Richard Guo, Dmitry Koval, Tender Wang
|
|
In one of the many strange corner cases of multiple inheritance being
used, commit b0e96f311985 missed a CommandCounterIncrement() call after
updating the attnotnull flag during ALTER TABLE ADD COLUMN, which caused
an update of the same catalog tuple to be attempted twice in the same command,
giving rise to a "tuple already updated by self" error. Add the missing call
to solve that, and a test case that reproduces the scenario.
As a (perhaps surprising) secondary effect, this CCI addition triggers
another behavior change: when a primary key is added to a parent
partitioned table and the column in an existing partition does not have
a not-null constraint, we no longer error out. This will probably be a
welcome change by some users, and I think it's unlikely that anybody
will miss the old behavior.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: http://postgr.es/m/045dec3f-9b3d-aa44-0c99-85f6992306c7@gmail.com
|
|
This commit reverts 06c418e163, e37662f221, bf1e650806, 25f42429e2,
ee79928441, and 74eaf66f98 per review by Heikki Linnakangas.
Discussion: https://postgr.es/m/b155606b-e744-4218-bda5-29379779da1a%40iki.fi
|
|
This commit reverts c35a3fb5e0 per review by Andres Freund.
Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
|
|
This commit reverts 87985cc925 and 818861eb57 per review by Andres Freund.
Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
|
|
This commit reverts b1484a3f19 per review by Andres Freund.
Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
|
|
This commit reverts 9bd99f4c26 and 422041542f per review by Andres Freund.
Discussion: https://postgr.es/m/20240410165236.rwyrny7ihi4ddxw4%40awork3.anarazel.de
|
|
Check that the target partition actually belongs to the parent table.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/cd842601-cf1a-9806-f7b7-d2509b93ba61%40gmail.com
Author: Dmitry Koval
|
|
|
|
While keeping the API the same, this commit provides a way for block-level table
AMs to re-use the existing acquire_sample_rows() by providing custom callbacks
for getting the next block and the next tuple.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240407214001.jgpg5q3yv33ve6y3%40awork3.anarazel.de
Reviewed-by: Pavel Borisov
|
|
Discussion: https://postgr.es/m/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com
Author: Tender Wang
|
|
Let table AM define custom reloptions for its tables. This allows specifying
AM-specific parameters by the WITH clause when creating a table.
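A hedged sketch, assuming a table AM named "my_tam" that defines a custom
reloption "my_option" (both names hypothetical):

    CREATE TABLE t (a int) USING my_tam WITH (my_option = 42);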
The reloptions that can be used outside of the table AM are now extracted
into the CommonRdOptions data structure. At the table AM's discretion, these
options can either be specified directly by the user or calculated in some way.
The new test module test_tam_options evaluates the ability to set up custom
reloptions and to calculate the fields of CommonRdOptions based on them.
The code may use some parts from prior work by Hao Wu.
Discussion: https://postgr.es/m/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com
Discussion: https://postgr.es/m/AMUA1wBBBxfc3tKRLLdU64rb.1.1683276279979.Hmail.wuhao%40hashdata.cn
Reviewed-by: Pavel Borisov, Matthias van de Meent, Jess Davis
|
|
The ANALYZE command prefetches and reads sample blocks chosen by a
BlockSampler algorithm. Instead of calling [Prefetch|Read]Buffer() for
each block, ANALYZE now uses the streaming API introduced in b5a9b18cd0.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM%2B6M%2BmJg%40mail.gmail.com
|
|
This new DDL command splits a single partition into several partitions.
Just like the ALTER TABLE ... MERGE PARTITIONS ... command, new partitions are
created using the createPartitionTable() function with the parent partition as
the template.
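A sketch of the new syntax (table and partition names are hypothetical):

    ALTER TABLE sales_range SPLIT PARTITION sales_q1_2022 INTO
        (PARTITION sales_jan2022 FOR VALUES FROM ('2022-01-01') TO ('2022-02-01'),
         PARTITION sales_feb2022 FOR VALUES FROM ('2022-02-01') TO ('2022-03-01'),
         PARTITION sales_mar2022 FOR VALUES FROM ('2022-03-01') TO ('2022-04-01'));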
This commit comprises a quite naive implementation which works in a single
process and holds an ACCESS EXCLUSIVE lock on the parent table during all the
operations, including the tuple routing. This is why this new DDL command
can't be recommended for large partitioned tables under high load. However,
this implementation comes in handy in certain cases even as is.
Also, it could be used as a foundation for future implementations with less
locking and possibly parallelism.
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval
Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires
|
|
This new DDL command merges several partitions into one partition of the
target table. The target partition is created using the new
createPartitionTable() function with the parent partition as the template.
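A sketch of the new syntax (table and partition names are hypothetical):

    ALTER TABLE sales_range MERGE PARTITIONS (sales_jan2022, sales_feb2022, sales_mar2022)
        INTO sales_q1_2022;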
This commit comprises a quite naive implementation which works in a single
process and holds an ACCESS EXCLUSIVE lock on the parent table during all the
operations, including the tuple routing. This is why this new DDL command
can't be recommended for large partitioned tables under high load. However,
this implementation comes in handy in certain cases even as is.
Also, it could be used as a foundation for future implementations with less
locking and possibly parallelism.
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval
Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires
|
|
Not just WaitLSNState.waitersHeap, but also WaitLSNState.procInfos and the
updating of WaitLSNState.minWaitedLSN are protected by WaitLSNLock. There
is one exception, now documented: the fast-path check of the
WaitLSNProcInfo.inHeap flag.
Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql
|
|
This should prevent busy-waiting when the number of waiting processes is high.
Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql
Author: Alvaro Herrera
|
|
When this bit of commentary was written, it was alluding to the
fact that we looked for newlines and EOD markers in the raw
(not yet encoding-converted) input data. We don't do that anymore,
preferring to batch the conversion of larger chunks of input and
split it into lines later. Hence there's no longer any need for
assumptions about the relevant characters being encoding-invariant,
and we should remove this comment saying we assume that.
Discussion: https://postgr.es/m/1461688.1712347668@sss.pgh.pa.us
|
|
JSON_TABLE() allows JSON data to be converted into a relational view
and thus used, for example, in a FROM clause, like other tabular
data. Data to show in the view is selected from a source JSON object
using a JSON path expression to get a sequence of JSON objects that's
called a "row pattern", which becomes the source to compute the
SQL/JSON values that populate the view's output columns. Column
values themselves are computed using JSON path expressions applied to
each of the JSON objects comprising the "row pattern", for which the
SQL/JSON query functions added in 6185c9737cf4 are used.
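A minimal usage sketch:

    SELECT jt.*
      FROM JSON_TABLE(
             '[{"a": 10, "b": "x"}, {"a": 20, "b": "y"}]',
             '$[*]'
             COLUMNS (
               id FOR ORDINALITY,
               a  int  PATH '$.a',
               b  text PATH '$.b')) AS jt;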
To implement JSON_TABLE() as a table function, this augments the
TableFunc and TableFuncScanState nodes that are currently used to
support XMLTABLE() with some JSON_TABLE()-specific fields.
Note that the JSON_TABLE() spec includes NESTED COLUMNS and PLAN
clauses, which are required to provide more flexibility to extract
data out of nested JSON objects, but they are not implemented here
to keep this commit of manageable size.
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewers have included (in no particular order):
Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup,
Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby, Álvaro Herrera, Jian He
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
|
|
EXPLAIN (ANALYZE, SERIALIZE) allows collection of statistics about
the volume of data emitted by a query, as well as the time taken
to convert the data to the on-the-wire format. Previously there
was no way to investigate this without actually sending the data
to the client, in which case network transmission costs might
swamp what you wanted to see. In particular this feature allows
investigating the costs of de-TOASTing compressed or out-of-line
data during formatting.
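For example (table name is hypothetical):

    EXPLAIN (ANALYZE, SERIALIZE) SELECT * FROM documents;
    -- or, to measure conversion to the binary wire format:
    EXPLAIN (ANALYZE, SERIALIZE BINARY) SELECT * FROM documents;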
Stepan Rutz and Matthias van de Meent,
reviewed by Tomas Vondra and myself
Discussion: https://postgr.es/m/ca0adb0e-fa4e-c37e-1cd7-91170b18cae1@gmx.de
|
|
06c418e163 introduced the pg_wal_replay_wait() procedure, allowing one to wait
for a particular LSN to be replayed on standby. The waiters were stored in a
flat array. Even though scanning small arrays is fast, that might become a
problem at scale (a lot of waiting processes).
This commit replaces the flat shared memory array with a pairing heap,
which holds the waiter with the least LSN at the top. This gives us O(log N)
complexity for both inserting and removing waiters.
Reported-by: Alvaro Herrera
Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql
|
|
* Remove extra includes
* Fill 'cur' in addLSNWaiter() before taking the spinlock
* Initialize 'endtime' with zero in WaitForLSN() to avoid compiler warning
Reported-by: Alvaro Herrera, Masahiko Sawada, Daniel Gustafsson
Discussion: https://postgr.es/m/202404030658.hhj3vfxeyhft%40alvherre.pgsql
Discussion: https://postgr.es/m/CAD21AoAx7irptnPH1OkkkNh9E0M6X-phfX7sYZfwoMsc1qV1sQ%40mail.gmail.com
|
|
pg_wal_replay_wait() is to be used on a standby and waits for a specific WAL
location to be replayed before starting the transaction. It is useful when
the user makes some data changes on the primary and needs a guarantee to see
these changes on the standby.
The queue of waiters is stored in a shared memory array sorted by LSN. During
WAL replay, waiters whose LSNs have already been replayed are deleted from the
array and woken up by setting their latches.
pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise, the
snapshot could prevent the replay of WAL records, implying a kind of
self-deadlock. This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working in a non-atomic context, not as a
function.
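A hedged usage sketch (the LSN value is hypothetical; it would normally be
obtained on the primary, e.g. via pg_current_wal_insert_lsn()):

    -- On the standby, before running the transaction that must see the changes:
    CALL pg_wal_replay_wait('0/306EE20');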
Catversion is bumped.
Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
|
|
This reverts commit c95c25f9af4bc77f2f66a587735c50da08c12b37 due to multiple
design issues spotted after commit.
Reported-by: Jeff Davis
Discussion: https://postgr.es/m/11550b536211d5748bb2865ed6cb3502ff073bf7.camel%40j-davis.com
|
|
Previously, we used a simple array for storing dead tuple IDs during
lazy vacuum, which had a number of problems:
* The array used a single allocation and so was limited to 1GB.
* The allocation was pessimistically sized according to table size.
* Lookup with binary search was slow because of poor CPU cache and
branch prediction behavior.
This commit replaces that array with the TID store from commit
30e144287a.
Since the backing radix tree makes small allocations as needed, the
1GB limit is now gone. Further, the total memory used is now often
smaller by an order of magnitude or more, depending on the
distribution of blocks and offsets. These two features should make
multiple rounds of heap scanning and index cleanup an extremely rare
event. TID lookup during index cleanup is also several times faster,
even more so when index order is correlated with heap tuple order.
Since there is no longer a predictable relationship between the number
of dead tuples vacuumed and the space taken up by their TIDs, the
number of tuples no longer provides any meaningful insights for users,
nor is the maximum number predictable. For that reason this commit
also changes to byte-based progress reporting, with the relevant
columns of pg_stat_progress_vacuum renamed accordingly to
max_dead_tuple_bytes and dead_tuple_bytes.
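The renamed columns can be observed like this:

    SELECT relid::regclass AS relname, phase,
           max_dead_tuple_bytes, dead_tuple_bytes
      FROM pg_stat_progress_vacuum;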
For parallel vacuum, both the TID store and supplemental information
specific to vacuum are shared among the parallel vacuum workers. As
with the previous array, we don't take any locks on TidStore during
parallel vacuum since writes are still only done by the leader
process.
Bump catalog version.
Reviewed-by: John Naylor, (in an earlier version) Dilip Kumar
Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com
|
|
This commit adds a new COPY option LOG_VERBOSITY, which controls the
amount of messages emitted during processing. Valid values are
'default' and 'verbose'.
This is currently used in COPY FROM when the ON_ERROR option is set to
ignore. If 'verbose' is specified, a NOTICE message is emitted for
each discarded row, providing additional information such as line
number, column name, and the malformed value. This helps users to
identify problematic rows that failed to load.
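For example (table and file names are hypothetical):

    COPY t FROM '/tmp/data.csv'
        WITH (FORMAT csv, ON_ERROR ignore, LOG_VERBOSITY verbose);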
Author: Bharath Rupireddy
Reviewed-by: Michael Paquier, Atsushi Torikoshi, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/CALj2ACUk700cYhx1ATRQyRw-fBM%2BaRo6auRAitKGff7XNmYfqQ%40mail.gmail.com
|