| Age | Commit message | Author |
|
Error messages using the word "non-negative" are confusing
because it's ambiguous whether zero is accepted or not.
This commit improves those error messages by replacing that word
with less ambiguous wording like "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to
the error message style guide, to help with writing new error messages.
Previously, when the postgres_fdw option fetch_size was set to zero,
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy, since zero is
non-negative. Therefore, back-patch to all supported versions where
such a buggy error message could be thrown.
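For instance, a minimal way to trip the postgres_fdw case (server name
hypothetical):

    -- fetch_size must be strictly positive, so zero is rejected
    ALTER SERVER myserver OPTIONS (ADD fetch_size '0');
    -- old message: "fetch_size requires a non-negative integer value",
    -- which wrongly implies that zero is acceptable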
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
|
|
Previously, isolationtester displayed SQL query results using some
ad-hoc code that clearly hadn't had much effort expended on it.
Field values longer than 14 characters weren't separated from
the next field, and usually caused misalignment of the columns
too. Also there was no visual separation of a query's result
from subsequent isolationtester output. This made test result
files confusing and hard to read.
To improve matters, let's use libpq's PQprint() function. Although
that's long since unused by psql, it's still plenty good enough
for the purpose here.
Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.
Discussion: https://postgr.es/m/582362.1623798221@sss.pgh.pa.us
|
|
The context is an object that no longer bears some aclitem that it bore
initially. (A user issued REVOKE or GRANT statements upon the object.)
pg_dump is forming SQL to reproduce the object ACL. Since initdb
creates no ACL bearing GRANT OPTION, reaching this bug requires an
extension where the creation script establishes such an ACL. No PGXN
extension does that. If an installation did reach the bug, pg_dump
would have omitted a semicolon, causing a REVOKE and the next SQL
statement to fail. Separately, since the affected code exists to
eliminate an entire aclitem, it wants plain REVOKE, not REVOKE GRANT
OPTION FOR. Back-patch to 9.6, where commit
23f34fa4ba358671adab16773e79c17c92cbc870 first appeared.
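For illustration, the two forms differ like this (table and role names
are hypothetical):

    GRANT SELECT ON TABLE t TO alice WITH GRANT OPTION;
    -- strips only the grant option, leaving alice's SELECT privilege:
    REVOKE GRANT OPTION FOR SELECT ON TABLE t FROM alice;
    -- removes the aclitem entirely, which is what this code wants:
    REVOKE SELECT ON TABLE t FROM alice;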
Discussion: https://postgr.es/m/20210109102423.GA160022@rfd.leadboat.com
|
|
Commit 3eb3d3e78 was a few bricks shy of a load: while it correctly
set the table's "interesting" flag when deciding to dump the data of
an extension config table, it was not correct to clear that flag
if we concluded we shouldn't dump the data. This led to the crash
reported in bug #16655, because in fact we'll traverse dumpTableSchema
anyway for all extension tables (to see if they have user-added
seclabels or RLS policies).
The right thing to do is to force "interesting" true in makeTableDataInfo,
and otherwise leave the flag alone. (Doing it there is more future-proof
in case additional calls are added, and it also avoids setting the flag
unnecessarily if that function decides the table is non-dumpable.)
This investigation also showed that while only the --inserts code path
had an obvious failure in the case considered by 3eb3d3e78, the COPY
code path also has a problem with not having loaded table subsidiary
data. That causes fmtCopyColumnList to silently return an empty string
instead of the correct column list. That accidentally mostly works,
which perhaps is why we didn't notice this before. It would only fail
if the restore column order is different from the dump column order,
which only happens in weird inheritance cases, so it's not surprising
nobody had hit the case with an extension config table. Nonetheless,
it's a bug, and it goes a long way back, not just to v12 where the
--inserts code path started to have a problem with this.
In hopes of catching such cases a bit sooner in future, add some
Asserts that "interesting" has been set in both dumpTableData and
dumpTableSchema. Adjust the test case added by 3eb3d3e78 so that it
checks the COPY rather than INSERT form of that bug, allowing it to
detect the longer-standing symptom.
Per bug #16655 from Cameron Daniel. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/16655-5c92d6b3a9438137@postgresql.org
Discussion: https://postgr.es/m/18048b44-3414-b983-8c7c-9165b177900d@2ndQuadrant.com
|
|
pg_dump is oblivious to this kind of dependency, so such dependencies
are lost on dump/restore (and pg_upgrade). Have pg_dump emit ALTER
lines so that they're preserved. Also add some pg_dump tests for the
whole thing.
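A sketch of the kind of line pg_dump now emits (object and extension
names hypothetical):

    ALTER INDEX public.my_idx DEPENDS ON EXTENSION my_ext;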
Reviewed-by: Tom Lane (offlist)
Reviewed-by: Ibrar Ahmed
Reviewed-by: Ahsan Hadi (who also reviewed commit 899a04f5ed61)
Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql
|
|
If the command is attempted for an extension that the object already
depends on, silently do nothing.
In particular, this means that if a database containing multiple such
entries is dumped, the restore will silently do the right thing and
record just the first one. (At least, in a world where pg_dump does
dump such entries -- which it doesn't currently, but it will.)
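A sketch of the now-idempotent behavior (names hypothetical):

    ALTER FUNCTION my_func() DEPENDS ON EXTENSION my_ext;
    ALTER FUNCTION my_func() DEPENDS ON EXTENSION my_ext;  -- silently does nothing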
Backpatch to 9.6, where this kind of dependency was introduced.
Reviewed-by: Ibrar Ahmed, Tom Lane (offlist)
Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql
|
|
Reported-by: Tham Nguyen
Discussion: https://postgr.es/m/157851402876.29175.12977878383183540468@wrigleys.postgresql.org
Backpatch-through: 9.4
|
|
Renaming varchar_transform to varchar_support had a side effect
I hadn't foreseen: the core regression tests leave around a
transform object that relies on that function, so the name
change breaks cross-version upgrade tests, because the name
used in the older branches doesn't match.
Since the dependency on varchar_transform was chosen with the
aid of a dartboard anyway (it would surely not work as a
language transform support function), fix by just choosing
a different random builtin function with the right signature.
Also add some comments explaining why this isn't horribly unsafe.
I chose to make the same substitution in a couple of other
copied-and-pasted test cases, for consistency, though those
aren't directly contributing to the testing problem.
Per buildfarm. Back-patch, else it doesn't fix the problem.
|
|
This module overlooked this necessary fixup step on the results of
transformWhereClause(). It accidentally worked anyway, because the
constructed expression involved type "name" which is not collatable,
but it fell over while I was experimenting with changing "name" to
be collatable.
Back-patch, not because there's any live bug here in back branches,
but because somebody might use this code as a model for some real
application and then not understand why it doesn't work.
|
|
Should have done this in e69f5e0efca ...
Per note from Tom Lane.
|
|
Doing so causes a pg_upgrade of a database containing these objects to
fail whenever pg_class changes. And it's pointless anyway: we have more
interesting tables anyhow.
Discussion: https://postgr.es/m/CAD5tBc+s8pW9WvH2+_z=B4x95FD4QuzZKcaMpff_9H4rS0VU1A@mail.gmail.com
|
|
This omission seems to be what is causing buildfarm failures on crake.
Security: CVE-2018-1058
|
|
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object. This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run. That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.
This patch changes pg_dump so that it does not change the search_path
dynamically. The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter. Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.
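A sketch of the emitted convention (schema, table, and role names
hypothetical):

    SET search_path = foo, pg_catalog;  -- set once, then left alone
    CREATE TABLE foo.t (id integer);    -- schema-qualified, so the object
    ALTER TABLE foo.t OWNER TO alice;   -- lands in the right schema
                                        -- regardless of the active path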
Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.
Security: CVE-2018-1058
|
|
Makefile.global assigns this prerequisite to every target named "check",
but similar targets must mention it explicitly. Affected targets
failed, tested $PATH binaries, or tested a stale temporary installation.
The src/test/modules examples worked properly when called as "make -C
src/test/modules/$FOO check", but "make -j" allowed the test to start
before the temporary installation was in place. Back-patch to 9.5,
where commit dcae5faccab64776376d354decda0017c648bb53 introduced the
shared temp-install.
|
|
Otherwise, the script output has a lot of pointless warnings.
This was forgotten in 9def031bd2821f35b5f506260d922482648a8bb0
|
|
Due to a missing CommandCounterIncrement() call, parsing of a non-utility
command in an extension script would not see the effects of the immediately
preceding DDL command, unless that command's execution ends with
CommandCounterIncrement() internally ... which some do but many don't.
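The shape of script affected is as simple as this (names hypothetical;
whether a given pair of commands fails depends on whether the first one
performs an internal CommandCounterIncrement):

    CREATE TABLE my_cfg (key text PRIMARY KEY, val text);
    INSERT INTO my_cfg VALUES ('mode', 'default');  -- must see my_cfg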
Report by Philippe Beaudoin, diagnosis by Julien Rouhaud.
Rather remarkably, this bug has evaded detection since extensions were
invented, so back-patch to all supported branches.
Discussion: https://postgr.es/m/2cf7941e-4e41-7714-3de8-37b1a8f74dff@free.fr
|
|
On EXEC_BACKEND builds, this can fail if ASLR is in use.
Backpatch to 9.5. On master, remove the bgw_main field completely,
since there is no situation in which it is safe for an EXEC_BACKEND
build. On 9.6 and 9.5, leave the field intact to avoid
breaking things for third-party code that doesn't care about working
under EXEC_BACKEND. Prior to 9.5, there are no in-core bgworker
entrypoints.
Petr Jelinek, reviewed by me.
Discussion: http://postgr.es/m/09d8ad33-4287-a09b-a77f-77f8761adb5e@2ndquadrant.com
|
|
As pointed out by Alvaro, we actually use perltidy on the perl scripts
in the source tree, so go back to the results of a perltidy run for the
test_pg_dump TAP script.
To make it look slightly less tragic, I changed most of the independent
arguments into long-form single arguments (e.g., -f file.sql became
--file=file.sql) to avoid having them confusingly split across lines due
to perltidy.
Back-patch to 9.6, as the last patch was.
|
|
In commit 6c268df, pg_init_privs was added to track the initial
privileges of catalog objects and extensions. Unfortunately, that
commit didn't include understanding of ALTER EXTENSION ADD/DROP, which
allows the objects associated with an extension to be changed after the
initial CREATE EXTENSION script has been run.
The result was that ACLs for objects added through ALTER EXTENSION ADD
were not recorded in pg_init_privs, and we would end up including those
ACLs in pg_dump output when we shouldn't have.
This commit corrects that by making sure to have pg_init_privs updated
when ALTER EXTENSION ADD/DROP is run, recording the permissions as they
are at ALTER EXTENSION ADD time, and removing any if/when ALTER
EXTENSION DROP is called.
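A sketch of the sequence now handled correctly (names hypothetical):

    GRANT EXECUTE ON FUNCTION my_func() TO PUBLIC;
    ALTER EXTENSION my_ext ADD FUNCTION my_func();   -- ACL recorded in pg_init_privs
    ALTER EXTENSION my_ext DROP FUNCTION my_func();  -- recorded entry removed again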
This issue was pointed out by Moshe Jacobson as commentary on bug #14456
(which was actually a bug about versions prior to 9.6 not handling
custom ACLs on extensions correctly, an issue now addressed with
pg_init_privs in 9.6).
Back-patch to 9.6 where pg_init_privs was introduced.
|
|
The formatting of the perl hashes used in the TAP tests for test_pg_dump
was rather horribly inconsistent and made it more difficult than it
really should have been to add new tests or adjust what tests are for
what runs, etc.
Reformat to clean that all up.
Whitespace-only changes.
|
|
Our current DDL only allows a database name to be specified in COMMENT
ON DATABASE, which Andrew Dunstan reports makes this test fail on the
buildfarm. Remove the line until we gain a DDL command that allows the
current database to be operated on without having to specify it by
name.
Backpatch to 9.5, where these tests appeared.
Discussion: https://postgr.es/m/e6084b89-07a7-7e57-51ee-d7b8fc9ec864@2ndQuadrant.com
|
|
When considering a sequence's Data entry in dumpSequenceData, we were
actually looking at the sequence definition's dump flag to decide if we
should dump the data or not. That's generally fine, except for when the
sequence data entry was created by processExtensionTables() because it's
a config sequence. In that case, the sequence itself won't be marked as
dumping data because it's part of an extension, leading to the need for
processExtensionTables() to create the sequence data entry.
This leads to extension config sequence data not being included in the
dump when it should be. Fix this by looking at the sequence data's dump
flag instead, just as dumpTableData() was doing for tables (which is why
config tables were correctly being handled), and add a regression test
to make sure we don't break it moving forward.
All of this is a bit round-about since we can now represent which
components of a given dump item should be dumped out through the dump
flag. A future improvement might be to change checkExtensionMembership()
to check for config sequences/tables and set the dump flag based on that
directly, possibly removing the need for processExtensionTables().
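For reference, a config sequence is one that an extension's script
flags for data dumping, along these lines (names hypothetical):

    CREATE SEQUENCE my_seq;
    SELECT pg_catalog.pg_extension_config_dump('my_seq', '');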
Bug found by Daniele Varrazzo.
Discussion: https://postgr.es/m/CA+mi_8ZmxQM7+nZ7pJ8uyfxc9V3o=UAG14dVqvftdmvw8OJ3gQ@mail.gmail.com
Patch by Michael Paquier, with some tweaking of the regression tests by
me.
Back-patch to 9.6 where the bug was introduced.
|
|
When we are altering a text search configuration, we are getting the
tuple from pg_ts_config and using its OID, so use TSConfigRelationId
when invoking any post-alter hooks and setting the object address.
Further, in the functions called from AlterTSConfiguration(), we're
saving information about the command via
EventTriggerCollectAlterTSConfig(), so we should be setting
commandCollected to true. Also add a regression test to
test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION.
Author: Artur Zakirov, a few additional comments by me
Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru
Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where
it was introduced, and the fix for the ObjectAddressSet() call and
setting commandCollected to true to 9.5 where those changes to
ProcessUtilitySlow() were introduced.
|
|
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts. However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects. Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one. (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)
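The serial-sequence shape, roughly (names hypothetical; imagine this
inside an extension update script):

    CREATE TABLE my_tbl (id serial, val text);
    ALTER TABLE my_tbl DROP COLUMN id;  -- recurses to the column's owned
                                        -- sequence, an extension member
                                        -- reached below the top level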
To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.
Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level. However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check). Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.
Back-patch to all active branches. In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.
Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>
|
|
Previously, requesting commit timestamp for transactions
FrozenTransactionId and BootstrapTransactionId resulted in an error.
But since those values can validly appear in committed tuples' Xmin,
this behavior is unhelpful and error prone: each caller would have to
special-case those values before requesting timestamp data for an Xid.
We already have a perfectly good interface for returning "the Xid you
requested is too old for us to have commit TS data for it", so let's use
that instead.
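For example, with track_commit_timestamp enabled (xid 2 is
FrozenTransactionId):

    -- previously an error; now takes the "too old" path, which is
    -- expected to return NULL rather than raise
    SELECT pg_xact_commit_timestamp('2'::xid);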
Backpatch to 9.5, where commit timestamps appeared.
Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
|
|
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug. Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version. Commit
e7293e3271bf618eeb2d4779a15fc516a69fe463 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code. As that
commit had shown, slight increases in pattern complexity can suppress
the bug. This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes. Back-patch to 9.6, where test_pg_dump was introduced.
As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8. The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
|
|
An oversight in setting the boundaries of known commit timestamps during
startup caused old commit timestamps to become inaccessible after a
server restart.
Author and reporter: Julien Rouhaud
Review, test code: Craig Ringer
|
|
Without this, statistics changes accumulated by the worker never get
reported to the stats collector, which is bad.
Julien Rouhaud
|
|
commit_ts and test_pg_dump were declaring targets before including the
PGXS stanza, which meant that the "all" target, customarily defined as
the first (and therefore default) target, was not the default anymore.
Fix that by moving those target definitions to after PGXS.
commit_ts was initially good, but I broke it in commit 9def031bd2;
test_pg_dump was born broken, probably copying from commit_ts' mistake.
In passing, fix a comment mistake in test_pg_dump/Makefile.
Backpatch to 9.6.
Noted by Tom Lane.
|
|
Previously, if a schema was created by an extension, a normal pg_dump run
(not --binary-upgrade) would summarily skip every object in that schema.
In a case where an extension creates a schema and then users create other
objects within that schema, this does the wrong thing: we want pg_dump
to skip the schema but still create the non-extension-owned objects.
There's no easy way to fix this pre-9.6, because in earlier versions the
"dump" status for a schema is just a bool and there's no way to distinguish
"dump me" from "dump my members". However, as of 9.6 we do have enough
state to represent that, so this is a simple correction of the logic in
selectDumpableNamespace.
In passing, make some cosmetic fixes in nearby code.
Martín Marqués, reviewed by Michael Paquier
Discussion: <99581032-71de-6466-c325-069861f1947d@2ndquadrant.com>
|
|
|
This coding pattern creates a race condition, because if an interesting
interrupt happens after we've checked InterruptPending but before we reset
our latch, the latch-setting done by the signal handler would get lost,
and then we might block at WaitLatch in the next iteration without ever
noticing the interrupt condition. You can put the CHECK_FOR_INTERRUPTS
before WaitLatch or after ResetLatch, but not between them.
Aside from fixing the bugs, add some explanatory comments to latch.h
to perhaps forestall the next person from making the same mistake.
In HEAD, also replace gather_readnext's direct call of
HandleParallelMessages with CHECK_FOR_INTERRUPTS. It does not seem clean
or useful for this one caller to bypass ProcessInterrupts and go straight
to HandleParallelMessages; not least because that fails to consider the
InterruptPending flag, resulting in useless work both here
(if InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call
(if it is).
This thinko seems to have been introduced in the initial coding of
storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all
the subsequent parallel-query support logic. Back-patch relevant hunks
to 9.4 to extirpate the error everywhere.
Discussion: <1661.1469996911@sss.pgh.pa.us>
|
|
With the refactoring of pg_dump to handle components, getOwnedSeqs needs
to be a bit more intelligent regarding which components to dump when.
Specifically, we can't simply use the owning table's components as the
set of components to dump as the table might only be including certain
components while all components of the sequence should be dumped, for
example, when the table is a member of an extension while the sequence
is not.
Handle this by combining the set of components to be dumped for the
sequence explicitly and those to be dumped for the table when setting
the components to be dumped for the sequence.
Also add a number of regression tests around this to, hopefully, catch
any future changes which break the expected behavior.
Discovered by: Philippe BEAUDOIN
Reviewed by: Michael Paquier
|
|
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects. There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.
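For example, names following the convention (illustrative):

    CREATE DATABASE regression_foo;                           -- contains "regression"
    CREATE ROLE regress_alice;                                -- starts with "regress_"
    CREATE TABLESPACE regress_ts LOCATION '/tmp/regress_ts';  -- ditto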
This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_". It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user". That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.
There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again. Again, that's not such a cosmetic change and it will require
discussion. (But I did use a quick-hack enforcement patch to find these
cases.)
Discussion: <16638.1468620817@sss.pgh.pa.us>
|
|
As of 9.6, pg_regress doesn't build unless storage/lwlocknames.h has been
created; but there was nothing forcing that to happen if you just went into
src/test/regress/ and built there. We previously had a similar complaint
about plpython.
To fix in a way that won't break next time we invent a generated header,
make src/backend/Makefile expose a phony target for updating all the
include files it builds, and invoke that before building pg_regress or
plpython. In principle, maybe we ought to invoke that everywhere; but
it would add a lot of usually-useless make cycles, so let's just do it
in the places where people have complained.
I made a couple of cosmetic adjustments in src/backend/Makefile as well,
to deal with the generated headers in consistent orders.
Michael Paquier and Tom Lane
Report: <31398.1467036827@sss.pgh.pa.us>
Report: <20150916200959.GB32090@msg.df7cb.de>
|
|
dumpAccessMethod() didn't get the memo that we now have a bitfield for
the components which should be dumped instead of a simple boolean.
Correct that by checking if the relevant bit is set for each component
being dumped out (and not dumping it out if it isn't set).
This corrects an issue where CREATE ACCESS METHOD commands were being
included in non-binary-upgrade dumps when an extension included an
access method (as the bloom extension does).
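For context, bloom's creation script issues roughly this command, which
a plain dump of a database containing the extension should not emit:

    CREATE ACCESS METHOD bloom TYPE INDEX HANDLER blhandler;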
Also add a regression test to make sure that, when access methods are
part of an extension, the ACCESS METHOD commands are only dumped out
when doing a binary upgrade.
Pointed out by Thom Brown.
|
|
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching. I relented from some of the most
common like "thru" for that reason. The rest don't seem numerous
enough to cause problems.
Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
|
|
All of the other tables used in the query in dumpTable(), which
collects column-level ACLs, are qualified, so we should also qualify
the reference to pg_init_privs, the related sub-select against
pg_class, and the other queries added by the pg_dump catalog ACLs work.
Also, use ::regclass (or ::pg_catalog.regclass, where appropriate)
instead of using a poorly constructed query to get the OID for various
catalog tables.
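That is, roughly (using pg_init_privs for illustration):

    -- poorly constructed: a hand-rolled OID lookup
    SELECT oid FROM pg_class WHERE relname = 'pg_init_privs';
    -- preferred: a qualified regclass cast resolves the OID directly
    SELECT 'pg_catalog.pg_init_privs'::pg_catalog.regclass;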
Issues identified by Noah and Alvaro, patch by me.
|
|
The test_pg_dump extension doesn't have a C component, so we need
to exclude it from the MSVC build system's attempt to figure out how
to build it.
Also add a "MODULES" line to the Makefile, as test_extensions has.
Might not be necessary, but seems good to keep things consistent.
Lastly, remove the 'installcheck' line from test_pg_dump, as that
was causing redefinition errors, at least on my box. This also
makes test_pg_dump consistent with how commit_ts is set up.
|
|
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension. Hopefully
this will fix the Windows buildfarm members which were complaining.
In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
|
|
This TAP test suite will create a new cluster, populate it based on
the 'create_sql' values in the '%tests' hash, run all of the runs
defined in the '%pgdump_runs' hash, and then for each test in the
'%tests' hash, compare each run's output to the regular expression
defined for the test under the 'like' and 'unlike' keys, as
appropriate.
While this test suite covers a fair bit of ground (67% of pg_dump.c
and quite a bit of the other files in src/bin/pg_dump), there is
still quite a bit which remains to be added to provide better code
coverage. Still, this is quite a bit better than we had, and has
found a few bugs already (note that the CREATE TRANSFORM test is
commented out, as it is currently failing).
Idea for using the TAP system from Tom, though all of the code is mine.
|
|
This reverts commits f07d18b6e94d, 82c83b337202, 3a3b309041b0, and
24c5f1a103ce.
This feature has shown enough immaturity that it was deemed better to
rip it out before rushing some more fixes at the last minute. There are
discussions on larger changes in this area for the next release.
|
|
For reasons of sheer brain fade, we (I) was calling systable_endscan()
immediately after systable_getnext() and expecting the tuple returned
by systable_getnext() to still be valid.
That's clearly wrong. Move the systable_endscan() down below the tuple
usage.
Discovered initially by Pavel Stehule and then also by Alvaro.
Add a regression test based on Alvaro's testing.
|
|
This feature is controlled by a new old_snapshot_threshold GUC. A
value of -1 disables the feature, and that is the default. The
value of 0 is just intended for testing. Above that, it is the
number of minutes a snapshot's age can reach before pruning and vacuum
are allowed to remove dead tuples which the snapshot would
otherwise protect. The xmin associated with a transaction ID does
still protect dead tuples. A connection which is using an "old"
snapshot does not get an error unless it accesses a page modified
recently enough that it might not be able to produce accurate
results.
This is similar to the Oracle feature, and we use the same SQLSTATE
and error message for compatibility.
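A configuration sketch (the GUC is measured in minutes and requires a
server restart):

    -- in postgresql.conf:  old_snapshot_threshold = 60
    SHOW old_snapshot_threshold;  -- -1 disables (default); 0 is for testing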
|
|
This introduces a new dependency type which marks an object as depending
on an extension, such that if the extension is dropped, the object
automatically goes away; and also, if the database is dumped, the object
is included in the dump output. Currently the grammar supports this for
indexes, triggers, materialized views and functions only, although the
utility code is generic so adding support for more object types is a
matter of touching the parser rules only.
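The supported forms, sketched with hypothetical names:

    ALTER INDEX my_idx DEPENDS ON EXTENSION my_ext;
    ALTER TRIGGER my_trg ON my_tbl DEPENDS ON EXTENSION my_ext;
    ALTER MATERIALIZED VIEW my_mv DEPENDS ON EXTENSION my_ext;
    ALTER FUNCTION my_func(integer) DEPENDS ON EXTENSION my_ext;
    DROP EXTENSION my_ext;  -- all four objects are dropped automatically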
Author: Abhijit Menon-Sen
Reviewed-by: Alexander Korotkov, Álvaro Herrera
Discussion: http://www.postgresql.org/message-id/20160115062649.GA5068@toroid.org
|