|
Previously, pg_dump --filter could misinterpret invalid object types
in the filter file as valid ones. For example, the invalid object type
"table-data" (likely a typo for the valid "table_data") could be
mistakenly recognized as "table", causing pg_dump to succeed
when it should have failed.
This happened because pg_dump identified keywords as sequences of
ASCII alphabetic characters, treating non-alphabetic characters
(like hyphens) as keyword boundaries. As a result, "table-data" was
parsed as "table".
To fix this, pg_dump --filter now treats keywords as strings of
non-whitespace characters, ensuring invalid types like "table-data"
are correctly rejected.
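For illustration (object names are hypothetical), a filter file along
these lines now fails cleanly on the typo in its last line instead of
being parsed as an "exclude table" directive:
# the valid keyword uses an underscore
exclude table_data measurements
# previously parsed as "exclude table"; now rejected
exclude table-data measurements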
Back-patch to v17, where the --filter option was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Srinath Reddy <srinath2133@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwFzPKUwiV5C-NLBqz1oK1+z9K8cgrF+LcxFem-p3_Ftug@mail.gmail.com
Backpatch-through: 17
|
|
Remove redundant options --with-data and --with-schema, and rename
--with-statistics to just --statistics.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/f379d0aeefe8effe13302a436bc28f549f09e924.camel@j-davis.com
Backpatch-through: 18
|
|
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/8ce896d1a05040905cc1a3afbc04e94d8e95669a.camel@j-davis.com
Backpatch-through: 18
|
|
pg_dump sorts objects by their logical names, e.g. (nspname, relname,
tgname), before dependency-driven reordering. That removes one source
of logically-identical databases differing in their schema-only dumps.
In other words, it helps with schema diffing. The logical name sort
ignored essential sort keys for constraints, operators, PUBLICATION
... FOR TABLE, PUBLICATION ... FOR TABLES IN SCHEMA, operator classes,
and operator families. pg_dump's sort then depended on object OID,
yielding spurious schema diffs. After this change, OIDs affect dump
order only in the event of catalog corruption. While pg_dump also
wrongly ignored pg_collation.collencoding, CREATE COLLATION restrictions
have been keeping that imperceptible in practical use.
Use the same techniques we already use for object types that have full
sort key coverage. Where the pertinent queries weren't fetching the
ignored sort keys, this adds columns to those queries and stores those
keys in memory for the long term.
The omission of these sort keys became more problematic when commit
172259afb563d35001410dc6daad78b250924038 added a schema diff test
sensitive to it. Buildfarm member hippopotamus witnessed that.
However, dump order stability isn't a new goal, and this might avoid
other dump comparison failures. Hence, back-patch to v13 (all supported
versions).
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com
Backpatch-through: 13
|
|
Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try
to deal with those concerns at a very late stage of the release cycle,
the conclusion is to revert these features and work on them for the
next release.
This reverts parts or all of the following commits:
1495eff7bdb Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391d Clean up from commit 1495eff7bdb
289f74d0cb2 Add more TAP tests for pg_dumpall
2ef57908067 Fix a couple of error messages and tests for them
b52a4a5f285 Clean up error messages from 1495eff7bdb
4170298b6ec Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d28950 Fix memory leak in pg_restore.c
928394b664b Improve various new-to-v18 appendStringInfo calls
39729ec01d2 Fix fat fingering in 22cb6d28950
5822bf21d50 Add missing space in pg_restore documentation.
f09088a01d3 Free memory properly in pg_restore.c
40b9c27014d pg_restore cleanups
4aad2cb7707 Portability fix: isdigit() must be passed an unsigned char.
88e947136b4 Fix typos and grammar in the code
f60420cff66 doc: Alphabetize long options for pg_dump[all].
bc35adee8d7 doc: Put new options in consistent order on man pages
a876464abc7 Message style improvements
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
0ebd2425558 Run pgperltidy
Discussion: https://postgr.es/m/20250708212819.09.nmisch@google.com
Backpatch-to: 18
Reviewed-by: Noah Misch <noah@leadboat.com>
|
|
Logical replication requires reliable conflict detection to maintain data
consistency across nodes. To achieve this, we must prevent premature
removal of tuples deleted by other origins and their associated commit_ts
data by VACUUM, which could otherwise lead to incorrect conflict reporting
and resolution.
This patch introduces a mechanism to retain deleted tuples on the
subscriber during the application of concurrent transactions from remote
nodes. Retaining these tuples allows us to correctly ignore concurrent
updates to the same tuple. Without this, an UPDATE might be misinterpreted
as an INSERT during resolutions due to the absence of the original tuple.
Additionally, we ensure that origin metadata is not prematurely removed by
vacuum freeze, which is essential for detecting update_origin_differs and
delete_origin_differs conflicts.
To support this, a new replication slot named pg_conflict_detection is
created and maintained by the launcher on the subscriber. Each apply
worker tracks its own non-removable transaction ID, which the launcher
aggregates to determine the appropriate xmin for the slot, thereby
retaining necessary tuples.
Conflict information retention (deleted tuples and commit_ts) can be
enabled per subscription via the retain_conflict_info option. This is
disabled by default to avoid unnecessary overhead for configurations that
do not require conflict resolution or logging.
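As a sketch of how this might look (subscription, connection, and
publication names are invented; the option follows the usual WITH-list
form for subscription options):
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=app'
    PUBLICATION mypub
    WITH (retain_conflict_info = true);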
During upgrades, if any subscription on the old cluster has
retain_conflict_info enabled, a conflict detection slot will be created to
protect relevant tuples from deletion when the new cluster starts.
This is a foundational work to correctly detect update_deleted conflict
which will be done in a follow-up patch.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
|
|
Commit e5da0fe3c22b introduced catalog entries for not-null constraints
on domains; but because commit b0e96f311985 (the original work for
catalogued not-null constraints on tables) forgot to teach pg_dump to
process the comments for them, this one also forgot. Add that now.
We also need to teach repairDependencyLoop() about the new type of
constraints being possible for domains.
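For example (names invented), this is the kind of comment that was
previously dropped from dumps:
CREATE DOMAIN positive_int AS integer
    CONSTRAINT positive_int_nn NOT NULL;
COMMENT ON CONSTRAINT positive_int_nn ON DOMAIN positive_int
    IS 'NULLs are not sensible for this domain';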
Backpatch-through: 17
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF-0bqVR=j4jonS6N2Ka6hHUpFyu3_3TWKNhOW_4yFSSg@mail.gmail.com
|
|
Presently, pg_dump generates commands like
SELECT pg_catalog.lo_create('5432');
ALTER LARGE OBJECT 5432 OWNER TO alice;
GRANT SELECT ON LARGE OBJECT 5432 TO bob;
for each large object. This is particularly slow at restore time,
especially when there are tens or hundreds of millions of large
objects. From reports and personal experience, such slow restores
seem to be most painful when encountered during pg_upgrade. This
commit teaches pg_dump to instead dump pg_largeobject_metadata and
the corresponding pg_shdepend rows when in binary upgrade mode,
i.e., pg_dump now generates commands like
COPY pg_catalog.pg_largeobject_metadata (oid, lomowner, lomacl) FROM stdin;
5432 16384 {alice=rw/alice,bob=r/alice}
\.
COPY pg_catalog.pg_shdepend (dbid, classid, objid, objsubid, refclassid, refobjid, deptype) FROM stdin;
5 2613 5432 0 1260 16384 o
5 2613 5432 0 1260 16385 a
\.
Testing indicates the COPY approach can be significantly faster.
To do any better, we'd probably need to find a way to copy/link
pg_largeobject_metadata's files during pg_upgrade, which would be
limited to upgrades from >= v16 (since commit 7b378237aa changed
the storage format for aclitem, which is used for
pg_largeobject_metadata.lomacl).
Note that this change only applies to binary upgrade mode (i.e.,
dumps initiated by pg_upgrade) since it inserts rows directly into
catalogs. Also, this optimization can only be used for upgrades
from >= v12 because pg_largeobject_metadata was created WITH OIDS
in older versions, which prevents pg_dump from handling
pg_largeobject_metadata.oid properly. With some extra effort, it
might be possible to support upgrades from older versions, but the
added complexity didn't seem worth it to support versions that will
have been out-of-support for nearly 3 years by the time this change
is released.
Experienced hackers may remember that prior to v12, pg_upgrade
copied/linked pg_largeobject_metadata's files (see commit
12a53c732c). Besides the aforementioned storage format issues,
this approach failed to transfer the relevant pg_shdepend rows, and
pg_dump still had to generate an lo_create() command per large
object so that creating the dependent comments and security labels
worked. We could perhaps adopt a hybrid approach for upgrades from
v16 and newer (i.e., generate lo_create() commands for each large
object, copy/link pg_largeobject_metadata's files, and COPY the
relevant pg_shdepend rows), but further testing is needed.
Reported-by: Hannu Krosing <hannuk@google.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Hannu Krosing <hannuk@google.com>
Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMT0RQSS-6qLH%2BzYsOeUbAYhop3wmQTkNmQpo5--QRDUR%2BqYmQ%40mail.gmail.com
|
|
We skip dumping constraints together with domains if they are invalid
('separate') so that they appear after data -- but their comments were
dumped together with the domain definition, which in effect leads to the
comment being dumped when the constraint does not yet exist. Delay
them in the same way.
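A sketch of the affected pattern (names invented): the constraint
below is dumped separately, after data, so its comment must be delayed
along with it:
CREATE DOMAIN posint AS integer;
ALTER DOMAIN posint ADD CONSTRAINT posint_positive
    CHECK (VALUE > 0) NOT VALID;
COMMENT ON CONSTRAINT posint_positive ON DOMAIN posint
    IS 'not yet validated against existing data';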
Oversight in 7eca575d1c28; backpatch all the way back.
Author: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF_C2pe6J_+nPr6C5jf5rQnbYP8XOKr4HM8yHZtp2aQqQ@mail.gmail.com
|
|
Previously, pg_dumpall would still dump global objects such as roles
and tablespaces even when --statistics-only or --no-schema was specified.
Since these global objects count as schema rather than data or
statistics, they should be skipped in these cases.
This commit fixes the issue by ensuring that global objects are not
dumped when either --statistics-only or --no-schema is used.
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/08129593-6f3c-4fb9-94b7-5aa2eefb99b0@oss.nttdata.com
Backpatch-through: 18
|
|
Commit a45c78e328 moved large object metadata from SECTION_PRE_DATA
to SECTION_DATA but neglected to move PRIO_LARGE_OBJECT in
dbObjectTypePriorities accordingly. While this hasn't produced any
known live bugs, it causes problems for a proposed patch that
optimizes upgrades with many large objects. Fixing the priority
might also make the topological sort step marginally faster by
reducing the number of ordering violations that have to be fixed.
Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBkQLSkx1zUJ-LwJ%40nathan
Discussion: https://postgr.es/m/aG_5DBCjdDX6KAoD%40nathan
Backpatch-through: 17
|
|
The previous minimum was chosen to maintain support for Python 3.5,
but we now require Python 3.6 anyway (commit 45363fca637), so that
reason is obsolete. A small bump to Meson 0.57 allows getting rid of a
fair amount of version conditionals and silences some
future-deprecated warnings.
With the version bump, the following deprecation warnings appeared and
are fixed:
WARNING: Project targets '>=0.57' but uses feature deprecated since '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead
WARNING: Project targets '>=0.57' but uses feature deprecated since '0.56.0': meson.build_root. use meson.project_build_root() or meson.global_build_root() instead.
It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so
the minimum is actually set to 0.57.2. This is specific to this
version series; in the future we won't necessarily need to be this
precise.
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
|
|
This is required before the creation of a new branch. pgindent is
clean, as is reformat-dat-files.
The perltidy version is v20230309, as documented in pgindent's README.
|
|
We were missing collecting comments for not-null constraints that are
dumped inline with the table definition (i.e., valid ones), because they
aren't represented by a separately dumpable object. Fix by creating
separate TocEntries for the comments.
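For instance (names invented), a comment like this one attaches to a
constraint that is dumped inline with the table, so it needs its own
TocEntry:
CREATE TABLE orders (
    id integer CONSTRAINT orders_id_nn NOT NULL
);
COMMENT ON CONSTRAINT orders_id_nn ON orders
    IS 'order identifier must always be set';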
Co-Authored-By: Jian He <jian.universality@gmail.com>
Co-Authored-By: Álvaro Herrera <alvherre@kurilemu.de>
Reported-By: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-By: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/d50ff977-c728-4e9e-8488-fc2688e08754@oss.nttdata.com
|
|
Increase consistency of --help and man page synopses between pg_dump
and pg_dumpall. These should now be very similar, since pg_dumpall can
now also produce non-text dump output, but they had actually drifted
further apart.
- Use verb "export" consistently, instead of "dump" or "extract".
- Use "SQL script" instead of just "script" or "text file".
- Maintain consistent distinction between SQL script and other
formats/archives (which is relevant for pg_restore).
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://www.postgresql.org/message-id/flat/3f71d8a7-095b-4829-9b0b-fce09e9866b3%40eisentraut.org
|
|
Commit 1fd1bd87101 introduced support for dumping statistics with
pg_dump and pg_dumpall, covering tables, materialized views, and indexes.
However, it overlooked foreign tables, even though functions like
pg_restore_relation_stats() support them.
This commit fixes that oversight by allowing pg_dump and pg_dumpall
to include statistics for foreign tables.
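A sketch of the kind of call a dump would emit for a foreign table,
assuming the name/value-pair calling convention of
pg_restore_relation_stats() (table name and figures invented, argument
list abbreviated):
SELECT pg_restore_relation_stats(
    'schemaname', 'public',
    'relname', 'remote_orders',
    'relpages', 0::integer,
    'reltuples', 400::real,
    'relallvisible', 0::integer);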
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/3772e4e4-ef39-4deb-bb76-aa8165f33fb6@oss.nttdata.com
|
|
Some message style improvements in new code, and some small
refactorings to make translations easier.
|
|
Commit bde2fb797aa added the --with-schema, --with-data, and --with-statistics
options to pg_restore. These options control whether to restore schema, data,
or statistics if present in the archive. However, the help message and
documentation incorrectly described them as affecting what gets dumped.
This commit corrects those descriptions to clarify that the options control
restoration, not dumping.
Bug: #18952
Reported-by: TAKATSUKA Haruka <harukat@sraoss.co.jp>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: TAKATSUKA Haruka <harukat@sraoss.co.jp>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/18952-be40a620f8b1e755@postgresql.org
|
|
pg_restore failed to restore large objects (blobs) out of
directory-format dumps made by versions before PG v12.
That's because, due to a bug fixed in commit 548e50976, those
old versions put the wrong filename into the BLOBS TOC entry.
Said bug was harmless before v17, because we ignored the
incorrect filename field --- but commit a45c78e32 assumed it
would be correct.
Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
Author: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFj8pRCrZ=_e1Rv1N+6vDaH+6gf=9A2mE2J4RvnvKA1bLiXvXA@mail.gmail.com
Backpatch-through: 17
|
|
POSIX allows such platforms. Given the lack of complaints, we may not
currently test on such a platform. This is new in v18 (commit
7d5c83b4e90c7156655f98b7312a30ae5eeb4d27), so no back-patch.
|
|
Set the default behavior of pg_dump and pg_dumpall to be
--no-statistics.
Leave the default for pg_restore and pg_upgrade to be
--with-statistics.
Discussion: https://postgr.es/m/CA+TgmoZ9=RnWcCOZiKYYjZs_AW1P4QXCw--h4dOLLHuf1Omung@mail.gmail.com
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
|
|
Before v14, a reltuples value of 0 was ambiguous: it could either
mean the relation is empty, or it could mean that it hadn't yet
been vacuumed or analyzed. (Commit 3d351d916b taught v14 and newer
to use -1 for the latter case.) This ambiguity allegedly can cause
the planner to choose inefficient plans after restoring to v18 or
newer. To fix, let's just dump reltuples as -1 in that case. This
will cause some truly empty tables to be seen as not-yet-processed,
but that seems unlikely to cause too much trouble in practice.
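Conceptually, the mapping applied when dumping from a pre-v14 server
looks like this (a sketch, not the actual pg_dump query):
SELECT c.oid::regclass AS relation,
       CASE WHEN c.reltuples = 0 THEN -1
            ELSE c.reltuples END AS reltuples_to_dump
FROM pg_class c
WHERE c.relkind = 'r';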
Note that we could alternatively teach pg_restore_relation_stats()
to translate reltuples based on the version argument, but since
that function doesn't exist until v18, there's no particular
advantage to that approach. That is, there's no chance of
restoring stats dumped from a pre-v14 server to another pre-v14
server. Per discussion, the current policy is to fix pre-v18
behavior differences during export and everything else during
import.
Commit 9879105024 fixed a similar problem for vacuumdb by removing
the check for reltuples != 0. Presumably we could reinstate that
check now, but I've chosen to leave it in place in case reltuples
isn't accurate. As before, processing some empty tables seems
relatively harmless.
Author: Hari Krishna Sunder <hari.db.pg@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/CAAeiqZ0o2p4SX5_xPcuAbbsmXjg6MJLNuPYSLUjC%3DWh-VeW64A%40mail.gmail.com
|
|
Presently, fetchAttributeStats() builds array literals by treating
the elements as SQL identifiers. This is incorrect for a couple of
reasons:
* Array literal content must match the external text representation
of the array, i.e., what array_out() would return. One notable
problem is that double quotes are escaped with "" in identifiers
but with \" in array literals. To fix, build the array content
using the pre-existing appendPGArray() function.
* Array literals must be written as string constants. A notable
problem here is that single quotes are escaped via '' in strings
but are not escaped in the text representation of an array. To
fix, append the aforementioned array literal content to the query
with appendStringLiteralAH().
While at it, modify a test case to use an identifier that would
cause the test to fail without this change.
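To see the two quoting regimes side by side (value invented): a column
named weird"name is written as "weird""name" when quoted as an
identifier, but inside an array literal the embedded quote takes a
backslash, and the literal itself must then be a string constant:
SELECT '{"weird\"name"}'::text[];  -- one element: weird"name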
Oversight in commit 9c02e3a986.
Reported-by: Philippe Beaudoin <pbh.emaj@free.fr>
Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Nathan Bossart <nathandbossart@gmail.com>
Co-authored-by: Stepan Neretin <slpmcf@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Bug: #18923
Discussion: https://postgr.es/m/18923-e79273f87c6bed69%40postgresql.org
|
|
I recently added this option to pg_dump, but I forgot to add it to
pg_dumpall, too. There's probably little use for it at the moment,
but we will need it if/when we teach pg_upgrade to use pg_dumpall
to dump the database schemas.
Oversight in commit 9c49f0e8cd.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aBE8rHFo922xQUwh%40nathan
|
|
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: f90ee4803c30491e5c49996b973b8a30de47bfb2
|
|
The --help synopsis should only be one line. This rephrases the first
line a bit to reflect the new functionality of restoring multiple
databases from pg_dumpall output. Additional explanations are better
kept in the man page.
|
|
Also make the description a bit more consistent with similar options.
|
|
When a child constraint is validated and the parent constraint it
derives from isn't, pg_dump must be coerced into printing the child
constraint; failing to do so would result in a dump that restores the
constraint as not valid, which would be incorrect.
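A sketch of the scenario with plain inheritance (names invented): the
child's copy of the constraint is validated while the parent's stays
NOT VALID, so pg_dump must print the child constraint explicitly:
CREATE TABLE parent (a integer);
CREATE TABLE child () INHERITS (parent);
ALTER TABLE parent ADD CONSTRAINT a_positive
    CHECK (a > 0) NOT VALID;
ALTER TABLE child VALIDATE CONSTRAINT a_positive;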
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Message-id: https://postgr.es/m/CACJufxGHNNMc0E2JphUqJMzD3=bwRSuAEVBF5ekgkG8uY0Q3hg@mail.gmail.com
|
|
This cleans up some loose ends left by commit e8ca9ed1d. I hadn't
looked closely enough at these places before, but now I have.
The use of double-quoted #includes for Perl headers in plperl_system.h
seems to be simply a mistake introduced in 6c944bf3c and faithfully
copied forward since then. (I had thought possibly it was required
by some weird Windows build setup, but there's no evidence of that in
our history.)
The occurrences in SectionMemoryManager.h and SectionMemoryManager.cpp
evidently stem from those files' origin as LLVM code. It's
understandable that LLVM would treat their own files as needing
double-quoted #includes; but they're still system headers to us.
I also applied the same check to *.c files, and found a few other
random incorrect usages in both directions.
Our ECPG headers and test files routinely use angle brackets to refer
to ECPG headers. I left those usages alone, since it seems reasonable
for an ECPG user to regard those headers as system headers.
|
|
Adjust the way the increment operators are placed to be consistent
throughout the function. Fixup for commit c1da7281060.
|
|
The large majority of these have been introduced by recent commits done
in the v18 development cycle.
Author: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/9a7763ab-5252-429d-a943-b28941e0e28b@gmail.com
|
|
Use appendPQExpBufferStr when there are no parameters and
appendPQExpBufferChar when the string length is 1.
Unlike 3fae25cbb, which fixed this issue for code that was new to v18,
this one fixes up instances which exist in the backbranches. We've
historically tried to maintain this standard and if we're going to
continue doing that, then we won't be doing that selectively based on
when the code was introduced. Now seems like a good time to flush out the
existing misuses. Waiting until v19 just prolongs their existence in
terms of released versions that the misuses exist in.
Author: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvoARMvPeXTTC0HnpARBHn-WgVstc8XFCyMGOzvgu_1HvQ@mail.gmail.com
|
|
Oversight in commit 40b9c2701, per buildfarm member mamba.
|
|
The end callback for ZStandard compression frees the private_data
but didn't set the pointer to NULL after freeing. This is not a bug
as the code stands, since nothing dereferences the pointer upon
returning from the callback, but it is good practice to do so.
Author: Alexander Kuznetsov <kuznetsovam@altlinux.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/efaee52b-9550-44ca-8633-ea86076b3283@altlinux.org
|
|
In pg_dump and pg_restore, _allocAH() calls _discoverArchiveFormat() to
determine the archive format when the input format is unknown.
If the input or discovered format is unrecognized, an error is
reported that includes the archive format number.
If the discovered format is unrecognized, its number should be shown
in the error message. But previously the error message mistakenly
showed the originally requested format number (i.e., the unknown one)
instead of the discovered one, due to referencing the wrong variable.
This commit corrects the issue by using the appropriate variable in
the error message.
This fix has no practical impact since _discoverArchiveFormat() never
returns an unrecognized format, so that error message is never
actually emitted. Therefore, while the issue exists in back branches,
it's not worth the trouble and buildfarm cycles to back-patch.
So this fix is applied only to the master branch.
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAKYtNAqu+N-Ab2Fq6wzNSOm_-0N-BMneanYNV1+6kFDXjva1Eg@mail.gmail.com
|
|
Use appendPQExpBufferStr when there are no parameters and
appendPQExpBufferChar when the string length is 1.
Author: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvoARMvPeXTTC0HnpARBHn-WgVstc8XFCyMGOzvgu_1HvQ@mail.gmail.com
|
|
We'd try to drop the partitions of a partitioned index separately,
which is disallowed by the backend, leading to an error during
restore. While the error is harmless, it causes problems if you
try to use --single-transaction mode.
Fortunately, there seems no need to do a DROP at all, since the
partition will go away silently when we drop either the parent index
or the partition's table. So just make the DROP conditional on not
being a partition.
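The disallowed statement looks like this (names invented; the index on
the partition is created automatically when the parent index is
built):
CREATE TABLE measurements (logdate date) PARTITION BY RANGE (logdate);
CREATE TABLE measurements_2025 PARTITION OF measurements
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
CREATE INDEX measurements_logdate_idx ON measurements (logdate);
DROP INDEX measurements_2025_logdate_idx;  -- rejected by the backend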
Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACJufxF0QSdkjFKF4di-JGWN6CSdQYEAhGPmQJJCdkSZtd=oLg@mail.gmail.com
Backpatch-through: 13
|
|
- remove unnecessary oid_string list stuff
- use pg_get_line_buf() instead of open-coding it
- cleaner parsing of map.dat lines
Reverts 2b69afbe50d (add new list type simple_oid_string_list to
fe-utils/simple_list).
Author: Álvaro Herrera <alvherre@kurilemu.de>
Author: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/202504141220.343fmoxfsbj4@alvherre.pgsql
|
|
Thinko in commit 39729ec01d2. Mea maxima culpa.
Per Mahendra Singh Thalor <mahi6run@gmail.com>
|
|
added by commit c1da7281060
|
|
Per Ranier Vilela
|
|
Similar to 8461424fd, here we adjust a few new locations which were not
using the most suitable appendStringInfo* function for the intended
purpose.
Author: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvqJnNjueb=Eoj8K+8n0g7nj_AcPWSiCj5RNV4fDejAfqA@mail.gmail.com
|
|
Oversight in 1495eff7bdb
Author: Ranier Vilela <ranier.vf@gmail.com>
|
|
Instead of two separate (and different) implementations, refactor to use
a single common routine.
Along the way, remove use of a hardcoded file permissions constant in
favor of the common project setting for directory creation.
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Discussion: https://postgr.es/m/CAKYtNApihL8X1h7XO-zOjznc8Ca66Aevgvhc9zOTh6DBh2iaeA@mail.gmail.com
|
|
for commits 8f427187db7, 6ee3b91bad2
|
|
This allows them to be added without scanning the table, and
validated afterwards without holding an access exclusive lock on the
table, after any violating rows have been deleted or fixed.
Doing ALTER TABLE ... SET NOT NULL for a column that has an invalid
not-null constraint validates that constraint. ALTER TABLE .. VALIDATE
CONSTRAINT is also supported. There are various checks on whether an
invalid constraint is allowed in a child table when the parent table has
a valid constraint; this should match what we do for enforced/not
enforced constraints.
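In SQL terms, the new capability looks like this (table and constraint
names invented):
ALTER TABLE accounts
    ADD CONSTRAINT accounts_balance_nn NOT NULL balance NOT VALID;
-- either of these validates the constraint later:
ALTER TABLE accounts VALIDATE CONSTRAINT accounts_balance_nn;
ALTER TABLE accounts ALTER COLUMN balance SET NOT NULL;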
pg_attribute.attnotnull is now only an indicator for whether a not-null
constraint exists for the column; whether it's valid or invalid must be
queried in pg_constraint. Applications can continue to query
pg_attribute.attnotnull as before, but now it's possible that NULL rows
are present in the column even when that's set to true.
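To check validity, applications can consult pg_constraint directly; a
sketch (table name invented; contype 'n' denotes not-null
constraints):
SELECT conname, convalidated
FROM pg_constraint
WHERE conrelid = 'accounts'::regclass
  AND contype = 'n';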
For backend internal purposes, we cache the nullability status in
CompactAttribute->attnullability that each tuple descriptor carries
(replacing CompactAttribute.attnotnull, which was a mirror of
Form_pg_attribute.attnotnull). During the initial tuple descriptor
creation, based on the pg_attribute scan, we set this to UNRESTRICTED if
pg_attribute.attnotnull is false, or to UNKNOWN if it's true; then we
update the latter to VALID or INVALID depending on the pg_constraint
scan. This flag is also copied when tupledescs are copied.
Comparing tuple descs for equality must also compare the
CompactAttribute.attnullability flag and return false in case of a
mismatch.
pg_dump deals with these constraints by storing the OIDs of invalid
not-null constraints in a separate array, and running a query to obtain
their properties. The regular table creation SQL omits them entirely.
They are then dealt with in the same way as "separate" CHECK
constraints, and dumped after the data has been loaded. Because no
additional pg_dump infrastructure was required, we don't bump its
version number.
I decided not to bump catversion either, because the old catalog state
works perfectly in the new world. (Trying to run with new catalog state
and the old server version would likely run into issues, however.)
System catalogs do not support invalid not-null constraints (because
commit 14e87ffa5c54 didn't allow them to have pg_constraint rows
anyway).
Author: Rushabh Lathia <rushabh.lathia@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Tested-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAGPqQf0KitkNack4F5CFkFi-9Dqvp29Ro=EpcWt=4_hs-Rt+bQ@mail.gmail.com
|
|
Quote file names, and mostly avoid hard-coded file names. Along the
way, make a few other minor improvements.
Discussion: https://postgr.es/m/20250407.152721.1397761902317499205.horikyota.ntt@gmail.com
|
|
Introduced by 1495eff7bdb0.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20250407.151359.72428746612514925.horikyota.ntt@gmail.com
|
|
Coverity objected to the original code, and in any case this is much
cleaner, using the existing routine pg_check_dir() instead of rolling
its own test.
Per suggestion from Tom Lane.
|
|
Coverity complained about a possible overrun in the copy, but there is
no actual need to copy the string at all.
|