|
This adds pg_dump support for table AMs in a similar manner to how
tablespaces are handled. That is, instead of specifying the AM for
every CREATE TABLE etc, emit SET default_table_access_method
statements. That makes it easier to change the AM for all/most tables
in a dump, and allows restore to succeed even if some AM is not
available.
This increases the dump archive version, as a table's or matview's AM
needs to be tracked therein.
Author: Dmitry Dolgov, Andres Freund
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
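
For illustration, a minimal C sketch of the "emit SET only on change"
approach described above; the function and variable names are
illustrative, not the actual pg_dump internals:

/*
 * Emit SET default_table_access_method only when the requested AM
 * differs from the one currently in effect, mirroring how tablespaces
 * are selected during dump/restore.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *currTableAm = NULL;    /* AM currently selected in the output */

static void
selectTableAccessMethod(FILE *out, const char *wantAm)
{
    if (wantAm == NULL || *wantAm == '\0')
        return;                     /* object carries no AM */
    if (currTableAm != NULL && strcmp(currTableAm, wantAm) == 0)
        return;                     /* already in effect, emit nothing */

    /* the real code quotes the identifier properly; omitted for brevity */
    fprintf(out, "SET default_table_access_method = %s;\n", wantAm);

    free(currTableAm);
    currTableAm = strdup(wantAm);
}

Emitting only changes keeps the dump small and makes it easy to redirect
all or most tables to a different AM by adjusting a single statement.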
|
|
This introduces the concept of table access methods, i.e. CREATE
ACCESS METHOD ... TYPE TABLE and
CREATE TABLE ... USING (storage-engine).
No table access functionality is delegated to table AMs as of this
commit; that will happen in subsequent commits, which will incrementally
abstract table access functionality to be routed through table access
methods. That change is too large to be reviewed and committed at once,
hence the incremental approach.
Docs will be updated at the end, as adding them incrementally would
likely make them less coherent, and would definitely be a lot more work
for little benefit.
Table access methods are specified similarly to index access methods:
pg_am.amhandler returns, as INTERNAL, a pointer to a struct with
callbacks. In contrast to index AMs, that struct needs to live as long
as the backend; typically that's achieved by just returning a pointer to
a constant struct.
Psql's \d+ now displays a table's access method. That can be disabled
with HIDE_TABLEAM=true, which is mainly useful so regression tests can
be run against different AMs. It's quite possible that this behaviour
still needs to be fine tuned.
For now it's not allowed to set a table AM for a partitioned table, as
we've not resolved how partitions would inherit that. Disallowing it now
lets us introduce such a behaviour later, if we decide that's the way
forward, without a compatibility break.
Catversion bumped, to add the heap table AM and references to it.
Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dmitry Dolgov and others
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de
https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de
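
For illustration, a minimal sketch of the handler pattern described
above; the AM name is a placeholder and the callback assignments are
elided, so this is a skeleton rather than a working AM:

/*
 * A table AM handler: pg_am.amhandler returns, as INTERNAL, a pointer to
 * a struct of callbacks.  Since the struct must live as long as the
 * backend, a pointer to a constant struct is returned.
 */
#include "postgres.h"

#include "access/tableam.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

static const TableAmRoutine my_tableam_methods = {
    .type = T_TableAmRoutine,
    /* scan, tuple, and DDL callbacks would be filled in here */
};

PG_FUNCTION_INFO_V1(my_tableam_handler);

Datum
my_tableam_handler(PG_FUNCTION_ARGS)
{
    PG_RETURN_POINTER(&my_tableam_methods);
}

Such a handler would then be registered with something like
CREATE ACCESS METHOD my_am TYPE TABLE HANDLER my_tableam_handler
and selected per table with CREATE TABLE ... USING my_am.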
|
|
This has been missing since 803b130, which introduced the option for the
user-facing VACUUM and ANALYZE.
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoD2TMdTxRhZ7WSp940V82_OAyPmgHnbi25UbbArLgA92Q@mail.gmail.com
|
|
This test fails if the containing directory contains a funny character
such as a space or some perl metacharacter. To avoid that, we check for
file names using readdir and a regex, rather than using a glob pattern.
Discussion: https://postgr.es/m/CAM6_UM6dGdU39PKAC24T+HD9ouy0jLN9vH6163K8QEEzr__iZw@mail.gmail.com
Author: Fabien COELHO
Reviewed-by: Raúl Marín Rodríguez
|
|
The original commit appears to have accidentally introduced a
duplicate definition. Keep only one of them.
|
|
For some reason the dump test with names with high bits set fails on
Msys2 (although not Msys1). Disable the tests for now, so that other
tests can run.
|
|
|
|
The test crashes and burns quite badly, for some reason, but even if it
didn't it wouldn't work, since Windows doesn't let you rename a file
held by a running process.
|
|
Commit f092de05 added a test for pg_dumpall --exclude-database including
the wildcard pattern '*dump*', which matches some files in the source
directory. The test library on msys uses the shell, which expands this,
and thus the program gets incorrect arguments. This doesn't happen if
the pattern doesn't match any files, so the pattern is changed to
'*dump_test*', which matches none.
Per buildfarm animal jacana.
|
|
It turns out that different getopt implementations spell the error for
missing arguments in different ways. This test is of fairly marginal
value, so instead of trying to keep up with the different error
messages, just remove the test.
|
|
Headings are added for the User Configurations and Databases sections,
and for each user configuration and database in the output.
Author: Fabien Coelho
Discussion: https://postgr.es/m/alpine.DEB.2.21.1812272222130.32444@lancre
|
|
This option functions similarly to pg_dump's --exclude-table option, but
for database names. The option can be given once, and the argument can
be a pattern including wildcard characters.
Author: Andrew Dunstan.
Reviewed-by: Fabien Coelho and Michael Paquier
Discussion: https://postgr.es/m/43a54a47-4aa7-c70e-9ca6-648f436dd6e6@2ndQuadrant.com
|
|
Commit f831d4acc changed what pg_dump emits for some empty fields: they
were output as empty strings before, but as NULL pointers afterwards.
That makes old pg_restore unable to work with such files (it crashes),
which is unacceptable. Return to the original representation by
explicitly setting those struct members to "" where needed; remove some
no-longer-needed checks for NULL input.
We can declutter the code a little by returning to NULLs when we next
update the archive version, so add a note to remind us later.
Discussion: https://postgr.es/m/20190225074539.az6j3u464cvsoxh6@depesz.com
Reported-by: hubert depesz lubaczewski
Author: Dmitry Dolgov
|
|
9a4059d simplified the flush of the target data folder when finishing
processing, and could have done a bit more.
Discussion: https://postgr.es/m/20190131064759.GA13429@paquier.xyz
|
|
The check in create_help.pl for a null end tag (</>) has been obsolete
since the conversion from SGML to XML, since XML does not allow that
anymore.
|
|
Changes made by commit 02ddd49 mean that dumps made against pre-version-12
instances are no longer comparable with those made against version 12
or later instances. This makes cross-version upgrade testing fail in the
buildfarm. Experimentation has shown that the error is cured if the
dumps are made with extra_float_digits set to 0. Hence this patch
allows it to be explicitly set rather than relying on pg_dump's
built-in default (3 in almost all cases). This feature might have other
uses, but should not normally be used.
Discussion: https://postgr.es/m/c76f7051-8fd3-ec10-7579-1f8842305b85@2ndQuadrant.com
|
|
ee9e145 fixed the tests of pg_basebackup for checksums a first time,
but one seek() call was still missed. Also, the data written to files
to emulate corruption did not actually consist of zeros, as the quoting
style was incorrect.
Backpatch the pg_basebackup portion to v11, where these tests were
introduced. The tests of pg_verify_checksums are new as of v12.
Author: Michael Banck
Discussion: https://postgr.es/m/1550153276.796.35.camel@credativ.de
Backpatch-through: 11
|
|
Replace casts whose only purpose is to cast away const with the
unconstify() macro.
Discussion: https://www.postgresql.org/message-id/flat/53a28052-f9f3-1808-fed9-460fd43035ab%402ndquadrant.com
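
To make the intent concrete, here is a simplified stand-in for the
macro; the real definition in c.h additionally uses compiler builtins to
verify that dropping the qualifier is the only change the cast performs,
so this stripped-down version is for illustration only:

/*
 * Make the act of casting away const explicit and greppable, instead of
 * hiding it in an ordinary cast.
 */
#include <stdio.h>

#define unconstify(underlying_type, expr) ((underlying_type) (expr))

static void
set_program_name(char *name)        /* legacy API that wants plain char * */
{
    printf("program: %s\n", name);
}

int
main(void)
{
    const char *progname = "pg_dump";

    /* Before: set_program_name((char *) progname);  -- intent unclear */
    set_program_name(unconstify(char *, progname));
    return 0;
}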
|
|
Since its introduction, max_wal_senders has been counted as part of
max_connections when it comes to defining how many connection slots can
be used for replication connections with a WAL sender context. This can
confuse users, as a base backup or replication can be blocked because
the available connection slots are already taken for other purposes by
an application, and superuser-only connection slots are not a correct
solution to handle that case.
This commit makes max_wal_senders independent of max_connections for its
handling of PGPROC entries in ProcGlobal, meaning that connection slots
for WAL senders are handled using their own free queue, like autovacuum
workers and bgworkers.
One compatibility issue that this change creates is that a standby now
requires a value of max_wal_senders at least equal to that of its
primary; if a standby is created with a lower value, failovers could
break. Normally this should not be an issue, as a standby's settings
are inherited from its primary: postgresql.conf is normally copied as
part of a base backup, so the parameters stay consistent.
Author: Alexander Kukushkin
Reviewed-by: Kyotaro Horiguchi, Petr Jelínek, Masahiko Sawada, Oleksii
Kliukin
Discussion: https://postgr.es/m/CAFh8B=nBzHQeYAu0b8fjK-AF1X4+_p6GRtwG+cCgs6Vci2uRuQ@mail.gmail.com
|
|
Renaming varchar_transform to varchar_support had a side effect
I hadn't foreseen: the core regression tests leave around a
transform object that relies on that function, so the name
change breaks cross-version upgrade tests, because the name
used in the older branches doesn't match.
Since the dependency on varchar_transform was chosen with the
aid of a dartboard anyway (it would surely not work as a
language transform support function), fix by just choosing
a different random builtin function with the right signature.
Also add some comments explaining why this isn't horribly unsafe.
I chose to make the same substitution in a couple of other
copied-and-pasted test cases, for consistency, though those
aren't directly contributing to the testing problem.
Per buildfarm. Back-patch, else it doesn't fix the problem.
|
|
warn_or_exit_horribly() was blithely passing a potentially-NULL
string pointer to a %s format specifier. That works (at least
to the extent of not crashing) on some platforms, but not all,
and since we switched to our own snprintf.c it doesn't work
for us anywhere.
Of the three string fields being handled this way here, I think
that only "owner" is supposed to be nullable ... but considering
that this is error-reporting code, it has very little business
assuming anything, so put in defenses for all three.
Per a crash observed on buildfarm member crake and then
reproduced here. Because of the portability aspect,
back-patch to all supported versions.
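
A hedged sketch of the defensive pattern, with illustrative struct,
field, and message names rather than the exact pg_dump code:

/*
 * Never pass a possibly-NULL pointer to a %s specifier; substitute a
 * visible placeholder instead.
 */
#include <stdio.h>

#define NULLSTR(s) ((s) != NULL ? (s) : "(null)")

typedef struct TocEntryExample
{
    const char *desc;               /* normally non-NULL */
    const char *tag;                /* normally non-NULL */
    const char *owner;              /* genuinely nullable */
} TocEntryExample;

static void
report_error(const TocEntryExample *te)
{
    /* error-reporting code should assume as little as possible */
    fprintf(stderr, "Error from TOC entry %s %s %s\n",
            NULLSTR(te->desc), NULLSTR(te->tag), NULLSTR(te->owner));
}

int
main(void)
{
    TocEntryExample te = {"TABLE DATA", "mytable", NULL};

    report_error(&te);
    return 0;
}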
|
|
Rename/repurpose pg_proc.protransform as "prosupport". The idea is
still that it names an internal function that provides knowledge to
the planner about the behavior of the function it's attached to;
but redesign the API specification so that it's not limited to doing
just one thing, but can support an extensible set of requests.
The original purpose of simplifying a function call is handled by
the first request type to be invented, SupportRequestSimplify.
Adjust all the existing transform functions to handle this API,
and rename them from "xxx_transform" to "xxx_support" to reflect
the potential generalization of what they do. (Since we never
previously provided any way for extensions to add transform functions,
this change doesn't create an API break for them.)
Also add DDL and pg_dump support for attaching a support function to a
user-defined function. Unfortunately, DDL access has to be restricted
to superusers, at least for now; but seeing that support functions
will pretty much have to be written in C, that limitation is just
theoretical. (This support is untested in this patch, but a follow-on
patch will add cases that exercise it.)
Discussion: https://postgr.es/m/15193.1548028093@sss.pgh.pa.us
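
As an illustration of the new API, a hedged sketch of a support function
that answers only SupportRequestSimplify and declines everything else;
the simplification logic itself is only hinted at:

#include "postgres.h"

#include "fmgr.h"
#include "nodes/supportnodes.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_func_support);

Datum
my_func_support(PG_FUNCTION_ARGS)
{
    Node   *rawreq = (Node *) PG_GETARG_POINTER(0);
    Node   *ret = NULL;

    if (IsA(rawreq, SupportRequestSimplify))
    {
        SupportRequestSimplify *req = (SupportRequestSimplify *) rawreq;
        FuncExpr   *fexpr = req->fcall;

        /*
         * Inspect fexpr->args here; if the call can be proven to be a
         * no-op, build and return a replacement expression.  Leaving
         * ret as NULL declines the request.
         */
        (void) fexpr;
    }

    PG_RETURN_POINTER(ret);
}

The DDL mentioned above attaches such a function to a user-defined
function (restricted to superusers for now).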
|
|
The modules RewindTest.pm and ServerSetup.pm are really only useful for
TAP tests, so they belong in the TAP test directories. In addition,
ServerSetup.pm is renamed to SSLServer.pm.
The test scripts have their own directories added to the search path so
that the relocated modules will be found, regardless of where the tests
are run from, even on modern perl where "." is no longer in the
search path.
Discussion: https://postgr.es/m/e4b0f366-269c-73c3-9c90-d9cb0f4db1f9@2ndQuadrant.com
Backpatch as appropriate to 9.5
|
|
Change pg_dump and ruleutils.c to use the FUNCTION keyword instead of
PROCEDURE in trigger and event trigger definitions.
This completes the pieces of the transition started in
0a63f996e018ac508c858e87fa39cc254a5db49f that were kept out of
PostgreSQL 11 because of the required catversion change.
Discussion: https://www.postgresql.org/message-id/381bef53-f7be-29c8-d977-948e389161d6@2ndquadrant.com
|
|
This enforces one-or-more character matches in the regular expressions
for pg_dump testing on SQL syntax output where zero-or-more matches
implies a syntax error.
Author: Daniel Gustafsson
Reviewed-by: David G. Johnston, Michael Paquier
Discussion: https://postgr.es/m/B313C32C-0E24-4AFB-95FF-6DA0C4E18A89@yesql.se
|
|
Some tests have been using regular expressions which have been lax in
escaping dots, which may cause tests to pass when they should not. This
makes the whole set of tests more robust where needed.
Author: David Rowley
Reviewed-by: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/CAKJS1f9jD8aVo1BTH+Vgwd=f-ynbuRVrS90XbWMT6UigaOQJTA@mail.gmail.com
|
|
Commit 62215de29 turns out to have been not quite on-the-mark.
When we are forced to postpone dumping of a materialized view into
the dump's post-data section (because it depends on a unique index
that isn't created till that section), we may also have to postpone
dumping other matviews that depend on said matview. The previous fix
didn't reliably work for such cases: it'd break the dependency loops
properly, producing a workable object ordering, but it didn't
necessarily mark all the matviews as "postponed_def". This led to
harmless bleating about "archive items not in correct section order",
as reported by Tom Cassidy in bug #15602. Less harmlessly,
selective-restore options such as --section might misbehave due to
the matview dump objects not being properly labeled.
The right way to fix it is to consider that each pre-data dependency
we break amounts to moving the no-longer-dependent object into
post-data, and hence we should mark that object if it's a matview.
Back-patch to all supported versions, since the issue's been there
since matviews were introduced.
Discussion: https://postgr.es/m/15602-e895445f73dc450b@postgresql.org
|
|
The ArchiveEntry function has a number of arguments that can be
considered optional. Split them out into a separate struct, to make the
API more flexible for changes.
Author: Dmitry Dolgov
Discussion: https://postgr.es/m/CA+q6zcXRxPE+qp6oerQWJ3zS061WPOhdxeMrdc-Yf-2V5vsrEw@mail.gmail.com
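
To illustrate the refactoring pattern (the struct and field names here
are illustrative, not the exact pg_dump definitions):

/*
 * Optional arguments move into a struct that callers fill with
 * designated initializers, so new fields can be added later without
 * touching every call site.
 */
#include <stdio.h>

typedef struct ArchiveOpts
{
    const char *tag;
    const char *namespace;
    const char *owner;
    const char *description;
    const char *createStmt;
    const char *dropStmt;
} ArchiveOpts;

static void
archive_entry(int dumpId, const ArchiveOpts *opts)
{
    printf("entry %d: %s (%s)\n", dumpId,
           opts->tag ? opts->tag : "",
           opts->description ? opts->description : "");
}

int
main(void)
{
    /* unspecified members default to NULL, i.e. "not provided" */
    archive_entry(1, &(ArchiveOpts){.tag = "mytable",
                                    .description = "TABLE",
                                    .owner = "postgres"});
    return 0;
}

Callers name only the fields they care about, which is what makes the
API flexible for later changes.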
|
|
These two new options can be used to improve the selectivity of
relations to vacuum or analyze even further, depending on the age of
their transaction ID or multixact ID respectively, so that it is
possible to prioritize tables to prevent wraparound of one or the other.
Combined with --table, it is possible to restrict the processing to a
chosen subset of tables.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
|
|
If a user specifies a relation name which cannot be processed, then the
backend can warn directly about what is wrong with it. This fixes an
oversight from e0c2933.
Author: Nathan Bossart
Discussion: https://postgr.es/m/32049A78-C429-4742-AEC1-941C9ABDE7B8@amazon.com
|
|
vacuumdb would use a catalog query only when the command caller does not
define a list of tables. Switching to a catalog query in all cases has
two advantages:
- Relation existence can be checked before running any VACUUM or
ANALYZE query. Before this change, if multiple relations were defined
using --table, the utility would fail only after processing the
previously-listed ones, which may take a long time depending on the size
of the relations. This adds checks for the relation names, and does
nothing, at least yet, for the attribute names.
- More filtering options can become available for the utility user.
These options, which may be introduced later on, are based on the
relation size or the relation age, and need to be made available even if
the user does not list any specific table with --table.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
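
A hedged sketch of the first advantage, using plain libpq; the
connection string, table list, and query are illustrative and the real
vacuumdb code differs in detail:

/*
 * Resolve every requested relation against the catalogs up front, so a
 * misspelled --table argument fails before any (possibly long) VACUUM
 * starts, and the backend reports directly what is wrong with the name.
 */
#include <stdio.h>
#include <stdlib.h>

#include <libpq-fe.h>

int
main(void)
{
    const char *tables[] = {"public.accounts", "public.no_such_table"};
    int         ntables = (int) (sizeof(tables) / sizeof(tables[0]));
    PGconn     *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Pass 1: existence check for all requested relations */
    for (int i = 0; i < ntables; i++)
    {
        const char *params[1] = {tables[i]};
        PGresult   *res = PQexecParams(conn,
                                       "SELECT $1::pg_catalog.regclass",
                                       1, NULL, params, NULL, NULL, 0);

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            /* the backend's own error explains the problem */
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;           /* fail before doing any real work */
        }
        PQclear(res);
    }

    /* Pass 2: only now run the actual maintenance commands */
    for (int i = 0; i < ntables; i++)
    {
        char        sql[256];

        snprintf(sql, sizeof(sql), "VACUUM %s", tables[i]);
        PQclear(PQexec(conn, sql));
    }

    PQfinish(conn);
    return 0;
}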
|
|
This was used for the old CLUSTER syntax, and has been unused since
e55c8e36ae44677dca4420bed07ad09d191fdf6c.
|
|
The completion here consists of attribute numbers, which is specific to
this grammar.
Author: Tatsuro Yamada
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/b58a78fa-81ce-186f-f0bc-c1aa93c46cbf@lab.ntt.co.jp
|
|
vacuumdb generates SQL queries by itself to run ANALYZE or VACUUM on the
backend, but the query patterns with column lists defined were never
actually checked.
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
|
|
Previously, \g would successfully execute the COPY command, but
the target specification, if any, was ignored, so that the data was
always dumped to the regular query output target. This seems like
a clear bug, so let's not just fix it but back-patch it.
While at it, adjust the documentation for \copy to recommend
"COPY ... TO STDOUT \g foo" as a plausible alternative.
Back-patch to 9.5. The problem exists much further back, but the
code associated with \g was refactored enough in 9.5 that we'd
need a significantly different patch for 9.4, and it doesn't
seem worth the trouble.
Daniel Vérité, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/15dadc39-e050-4d46-956b-dcc4ed098753@manitou-mail.org
|
|
|
|
The pgbench regression test supposed that srandom() with a specific value
would result in deterministic output from random(), as required by POSIX.
It emerges however that OpenBSD is too smart to be constrained by mere
standards, so their random() emits nondeterministic output anyway.
While a workaround does exist, what seems like a better fix is to stop
relying on the platform's srandom()/random() altogether, so that what
you get from --random-seed=N is not merely deterministic but platform
independent. Hence, use a separate pg_jrand48() random sequence in
place of random().
Also adjust the regression test case that's supposed to detect
nondeterminism so that it's more likely to detect it; the original
choice of random_zipfian parameter tended to produce the same output
all the time even if the underlying behavior wasn't deterministic.
In passing, improve pgbench's docs about random_zipfian().
Back-patch to v11 where this code was introduced.
Fabien Coelho and Tom Lane
Discussion: https://postgr.es/m/4615.1547792324@sss.pgh.pa.us
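
For flavor, a self-contained sketch of the idea using POSIX jrand48();
pgbench itself uses PostgreSQL's own pg_jrand48() from src/port, and the
seeding shown here is illustrative:

/*
 * Seed an explicit 48-bit state and draw all values from it, so the
 * sequence depends only on the seed, not on the platform's
 * srandom()/random().
 */
#define _XOPEN_SOURCE 600
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    uint64_t        seed = 12345;       /* e.g. from --random-seed=N */
    unsigned short  xseed[3];

    /* spread the seed over the 48-bit state */
    xseed[0] = (unsigned short) (seed & 0xFFFF);
    xseed[1] = (unsigned short) ((seed >> 16) & 0xFFFF);
    xseed[2] = (unsigned short) ((seed >> 32) & 0xFFFF);

    /* the *rand48 generator is fully specified, so this is portable */
    for (int i = 0; i < 3; i++)
        printf("%ld\n", jrand48(xseed));
    return 0;
}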
|
|
Spotted mostly by Fabien Coelho.
Discussion: https://www.postgresql.org/message-id/alpine.DEB.2.21.1901230947050.16643@lancre
|
|
Author: Moon, Insung
Discussion: https://postgr.es/m/008001d4b2db$1f772170$5e656450$@lab.ntt.co.jp
|
|
This is in preparation for always using a catalog query to discover
tables, where the ANALYZE and VACUUM queries get completed with relation
names.
Author: Nathan Bossart
Discussion: https://postgr.es/m/20190122060730.GD8719@paquier.xyz
|
|
This reverts commit 458a1244f1fcf407874482a93b7631ecf5303d6e.
It has portability problems on Windows, which will require
a little bit of research to fix.
Discussion: https://postgr.es/m/20202.1548035461@sss.pgh.pa.us
|
|
Commit c0d0e54084 replaced the ones in the documentation, but missed
the ones in the code. Replace those as well, but unlike c0d0e54084,
don't backpatch the code changes to avoid breaking translations.
|
|
Avoids issues if the build directory's pathname contains regex
metacharacters.
Raúl Marín Rodríguez
Discussion: https://postgr.es/m/CAM6_UM6dGdU39PKAC24T+HD9ouy0jLN9vH6163K8QEEzr__iZw@mail.gmail.com
|
|
I've had enough of "fixing" this test case. Whatever value it has
is limited to verifying that pgbench fails for an unrecognized switch,
and we don't need to assume anything about what getopt_long prints in
order to do that.
Discussion: https://postgr.es/m/9427.1547701450@sss.pgh.pa.us
|
|
This reverts commit c203d6cf8 and some follow-on fixes, completing the
task begun in commit 5d28c9bd7. If that feature is ever resurrected,
the code will look quite a bit different from this, so it seems best
to start from a clean slate.
The v11 branch is not touched; in that branch, the recheck_on_update
storage option remains present, but nonfunctional and undocumented.
Discussion: https://postgr.es/m/20190114223409.3tcvejfhlvbucrv5@alap3.anarazel.de
|
|
pg_ctl is supposed to daemonize the postmaster process, so that it's not
affected by signals to the launching process group. Before this patch, if
you had a shell script that used "pg_ctl start", and you interrupted the
shell script after postmaster had been launched, postmaster was also
killed. To fix, call setsid() after forking the postmaster process.
A long time ago, we had a 'silent_mode' option, which daemonized the
postmaster process by calling setsid(), but that was removed back in 2011
(commit f7ea6beaf4). We discussed bringing that back in some form, but
pg_ctl is the documented way of launching postmaster to the background, so
putting the setsid() call in pg_ctl itself seems appropriate.
Just putting postmaster in a separate session would change the behavior
when you interrupt "pg_ctl -w start", e.g. with CTRL-C, while it's waiting
for postmaster to start. The historical behavior has been that
interrupting pg_ctl aborts the server launch, which is handy if the server
is stuck in recovery, for example, and won't fully start up. To keep that
behavior, install a signal handler in pg_ctl, to explicitly kill
postmaster, if pg_ctl is interrupted while it's waiting for the server to
start up. This isn't 100% watertight; there is a small window after
forking the postmaster process where the signal handler doesn't know the
postmaster's PID yet, but it seems good enough.
Arguably this is a long-standing bug, but I refrained from back-patching,
out of fear of breaking someone's scripts that depended on the old
behavior.
Reviewed by Tom Lane. Report and original patch by Paul Guo, with
feedback from Michael Paquier.
Discussion: https://www.postgresql.org/message-id/CAEET0ZH5Bf7dhZB3mYy8zZQttJrdZg_0Wwaj0o1PuuBny1JkEw%40mail.gmail.com
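
For illustration, a minimal sketch of the daemonization described above;
the signal choice, paths, and error handling are placeholders and the
real pg_ctl plumbing is omitted:

/*
 * Fork the server, detach it from the launcher's session with setsid(),
 * and let the launcher forward an interrupt to the child while it is
 * still waiting for startup.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static volatile pid_t child_pid = 0;

static void
trap_sigint(int signo)
{
    (void) signo;
    if (child_pid > 0)
        kill(child_pid, SIGQUIT);   /* abort the launch, as before */
    _exit(1);
}

int
main(void)
{
    pid_t       pid;

    signal(SIGINT, trap_sigint);    /* installed before forking */

    pid = fork();
    if (pid == 0)
    {
        setsid();                   /* new session: shell signals no longer reach us */
        execlp("postgres", "postgres", "-D", "/path/to/data", (char *) NULL);
        _exit(1);                   /* exec failed */
    }
    child_pid = pid;                /* note the small window before this is set */

    /* ... wait for the server to report readiness ... */
    return 0;
}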
|
|
This is what one usually wants for recovery and almost always wants
for a standby.
Discussion: https://www.postgresql.org/message-id/flat/6dd2c23a-4162-8469-410f-bfe146e28c0c@2ndquadrant.com/
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
|
|
Per buildfarm
|
|
These commands allow assignment of values produced by queries to pgbench
variables, where they can be used by further commands. \gset terminates
a command sequence (just like a bare semicolon); \cset separates
multiple queries in a compound command, like an escaped semicolon (\;).
A prefix can be provided to the \-command and is prepended to the name
of each output column to produce the final variable name.
This feature allows pgbench scripts to react meaningfully to the actual
database contents, allowing more powerful benchmarks to be written.
Authors: Fabien Coelho, Álvaro Herrera
Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Reviewed-by: Stephen Frost <sfrost@snowman.net>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tatsuo Ishii <ishii@sraoss.co.jp>
Reviewed-by: Rafia Sabih <rafia.sabih@enterprisedb.com>
Discussion: https://postgr.es/m/alpine.DEB.2.20.1607091005330.3412@sto
|
|
DISABLE_PAGE_SKIPPING is available since v9.6, and SKIP_LOCKED since
v12. They lacked equivalents for vacuumdb, so this closes the gap.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
|