|
There's no need to export this function, so don't. Michaël didn't
actually write the patch, but we list him as first author because with a
trivial one like this, intellectual authorship is as important as (if not
more important than) the bit shovelling.
Author: Michaël Paquier, Amit Langote
Discussion: https://postgr.es/m/c91299c4-199b-0f16-339b-a29d6d2a39ee@lab.ntt.co.jp
|
|
If a continuation record is split so that its first half has already been
removed from the master, and is only present in pg_wal, and there is a
recycled WAL segment in the standby server that looks like it would
contain the second half, recovery would get stuck. The code in
XLogPageRead() incorrectly started streaming at the beginning of the
WAL record, even if we had already read the first page.
Backpatch to 9.4. In principle, older versions have the same problem, but
without replication slots, there was no straightforward mechanism to
prevent the master from recycling old WAL that was still needed by the
standby. Without such a mechanism, I think it's reasonable to assume that
there's enough slack in how many old segments are kept around to avoid
running into this, or that you have a WAL archive.
Reported by Jonathon Nelson. Analysis and patch by Kyotaro HORIGUCHI, with
some extra comments by me.
Discussion: https://www.postgresql.org/message-id/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com
|
|
The principle behind the locking was not very well thought-out, and not
documented. Add a section in the README to explain how it's supposed to
work, and change the code so that it actually works that way.
This fixes two bugs:
1. If fast update was turned on concurrently, subsequent inserts to the
pending list would not conflict with predicate locks that were acquired
earlier, on entry pages. The included 'predicate-gin-fastupdate' test
demonstrates that. To fix, make all scans acquire a predicate lock on
the metapage. That lock represents a scan of the pending list, whether
or not there is a pending list at the moment. Forget about the
optimization to skip locking/checking for locks, when fastupdate=off.
2. If a scan finds no match, it still needs to lock the entry page. The
point of predicate locks is to lock the gaps between values, whether
or not there is a match. The included 'predicate-gin-nomatch' test
tests that case.
In addition to those two bug fixes, this removes some unnecessary locking,
following the principle laid out in the README. Because all items in
a posting tree have the same key value, a lock on the posting tree root is
enough to cover all the items. (With a very large posting tree, it would
possibly be better to lock the posting tree leaf pages instead, so that with
a "skip scan" of a query like "A & B", you could avoid an unnecessary
conflict if a new tuple is inserted with A but !B. But let's keep this
simple.)
Also, some spelling fixes.
Author: Heikki Linnakangas with some editorization by me
Review: Andrey Borodin, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/0b3ad2c2-2692-62a9-3a04-5724f2af9114@iki.fi
|
|
Make these rules look more like the ones associated with genbki.pl,
to wit:
* Use a stamp file to record when we last ran the script, instead of
relying on the timestamps of the individual output files.
* Take the knowledge out of backend/Makefile and put it in utils/Makefile
where it belongs. I moved down the handling of errcodes.h and probes.h
too, although those continue to be built by separate processes.
In itself, this is just much-needed cleanup with little practical effect.
However, by decoupling these makefile rules from the timestamps of the
generated header files, we open the door to not advancing those timestamps
unnecessarily, which will be taken advantage of by the next commit.
msvc/Solution.pm should be taught to do things similarly, but I'll leave
that for another commit.
Discussion: https://postgr.es/m/16925.1525376229@sss.pgh.pa.us
|
|
Since the SPI stack has been moved from TopTransactionContext to
TopMemoryContext, setting _SPI_stack to NULL in AtEOXact_SPI() leaks
memory. In fact, we don't need to do that anymore: We just leave the
allocated stack around for the next SPI use.
Also, refactor the SPI cleanup so that it is run both at transaction end
and when returning to the main loop on an exception. The latter is
necessary when a procedure calls a COMMIT or ROLLBACK command that
itself causes an error.
|
|
Instead of depending on glibc's getauxval() function, just try to execute
the CRC code, and trap SIGILL if the instructions turn out not to be
available.
Thomas Munro
Discussion: https://postgr.es/m/HE1PR0801MB1323D171938EABC04FFE7FA9E3110@HE1PR0801MB1323.eurprd08.prod.outlook.com
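For illustration only, here is a minimal standalone sketch of the trap-SIGILL
technique (not the actual PostgreSQL code). It assumes an ARMv8 toolchain
where <arm_acle.h> and __crc32cb() are available and the file is built with
something like -march=armv8-a+crc:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <arm_acle.h>            /* __crc32cb(); assumes -march=armv8-a+crc */

    static sigjmp_buf illegal_instruction_jump;

    static void
    illegal_instruction_handler(int signo)
    {
        siglongjmp(illegal_instruction_jump, 1);
    }

    /* Return true if the CRC32C instructions can actually be executed. */
    static bool
    crc32c_runtime_check(void)
    {
        bool ok;

        signal(SIGILL, illegal_instruction_handler);
        if (sigsetjmp(illegal_instruction_jump, 1) == 0)
        {
            (void) __crc32cb(0, 0);  /* raises SIGILL if unsupported */
            ok = true;
        }
        else
            ok = false;              /* we landed here via siglongjmp() */
        signal(SIGILL, SIG_DFL);

        return ok;
    }

    int
    main(void)
    {
        printf("hardware CRC32C %s\n",
               crc32c_runtime_check() ? "available" : "not available");
        return 0;
    }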
|
|
There were three related issues:
* BufFileAppend() incorrectly reset the seek position on the 'source' file.
As a result, if you had called BufFileRead() on the file before calling
BufFileAppend(), it got confused, and subsequent calls would read/write
at the wrong position.
* BufFileSize() did not work with files opened with BufFileOpenShared().
* FileGetSize() only worked on temporary files.
To fix, change the way BufFileSize() works so that it works on shared
files. Remove FileGetSize() altogether, as it's no longer needed. Remove
buffilesize from TapeShare struct, as the leader process can simply call
BufFileSize() to get the tape's size, there's no need to pass it through
shared memory anymore.
Discussion: https://www.postgresql.org/message-id/CAH2-WznEDYe_NZXxmnOfsoV54oFkTdMy7YLE2NPBLuttO96vTQ@mail.gmail.com
|
|
For some reason it was previously defined as 0, which is silly. The only
effect was to disable use of <stdbool.h>, which commit b2328bf62 intended
to make possible.
Thomas Munro
Discussion: https://postgr.es/m/CAEepm%3D3%3DTDYEXUEcHpEx%2BTwc31wo7PA0oBAiNt6sWmq93MW02A%40mail.gmail.com
|
|
Don't overflow the result type of constant expressions. Don't negate
unsigned types. Define HAVE_STDBOOL_H for Visual C++ 2013 and later.
Thomas Munro
Reviewed-By: Michael Paquier and Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D3%3DTDYEXUEcHpEx%2BTwc31wo7PA0oBAiNt6sWmq93MW02A%40mail.gmail.com
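As a generic illustration (not the macros actually touched by this commit),
the sketch below describes the two pitfalls in comments and shows one
well-defined alternative: the shifts are done in an unsigned 32-bit type so
the constant expression cannot overflow, and no unsigned value is negated:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * 1. "0xFF << 24" is evaluated in type int, so the shift can overflow
     *    the result type of the constant expression; casting to an unsigned
     *    32-bit type first keeps the whole expression well defined.
     *
     * 2. Negating an unsigned value, as in "-(uint32_t) x", wraps around
     *    rather than producing a negative number; MSVC warns about this
     *    (C4146), so spell the intent out instead of negating.
     */
    #define BSWAP32(x) \
        ((((uint32_t) (x) & UINT32_C(0x000000FF)) << 24) | \
         (((uint32_t) (x) & UINT32_C(0x0000FF00)) << 8)  | \
         (((uint32_t) (x) & UINT32_C(0x00FF0000)) >> 8)  | \
         (((uint32_t) (x) & UINT32_C(0xFF000000)) >> 24))

    int
    main(void)
    {
        printf("%08x\n", BSWAP32(UINT32_C(0x11223344)));   /* 44332211 */
        return 0;
    }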
|
|
Previously, when a tuple was moved to a different partition (see
f16241bef7c), the block number on the old tuple was set to an invalid
value to indicate that fact, but the tuple offset was left
untouched. That turned out to trigger a wal_consistency_checking
failure as reported by Peter Geoghegan, as the offset wasn't
always overwritten during WAL replay.
Heikki observed that we were wasting valuable information by not also
using the offset. Thus set that to
MovedPartitionsOffsetNumber when a tuple indicates it has moved.
We continue to set the block number to MovedPartitionsBlockNumber, as
that seems more likely to cause problems for code not updated to know
about moved tuples.
As t_ctid's offset number is now always set, this refinement also
fixes the wal_consistency_checking issue.
This technically is a minor disk-format break, with previously created
moved tuples not being recognized anymore. But since there has not even
been a beta release since f16241bef7c...
Reported-By: Peter Geoghegan
Author: Heikki Linnakangas, Amul Sul
Discussion: https://postgr.es/m/CAH2-Wzm9ty+1BX7-GMNJ=xPRg67oJTVeDNdA9LSyJJtMgRiCMA@mail.gmail.com
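A simplified, self-contained sketch of the idea; the struct and the exact
sentinel values here are stand-ins for illustration, not the backend's real
ItemPointerData or constants:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct ItemPointerSketch
    {
        uint32_t block;              /* block number */
        uint16_t offset;             /* offset number within the block */
    } ItemPointerSketch;

    #define MOVED_PARTITIONS_BLOCK_SKETCH   UINT32_MAX          /* "invalid" block */
    #define MOVED_PARTITIONS_OFFSET_SKETCH  ((uint16_t) 0xfffd) /* assumed sentinel */

    /* Mark an old tuple's t_ctid as "row moved to another partition". */
    static void
    mark_moved_partitions(ItemPointerSketch *tid)
    {
        tid->block = MOVED_PARTITIONS_BLOCK_SKETCH;
        tid->offset = MOVED_PARTITIONS_OFFSET_SKETCH;   /* now always set too */
    }

    /* Both fields must match, which also keeps WAL replay deterministic. */
    static bool
    indicates_moved_partitions(const ItemPointerSketch *tid)
    {
        return tid->block == MOVED_PARTITIONS_BLOCK_SKETCH &&
               tid->offset == MOVED_PARTITIONS_OFFSET_SKETCH;
    }

    int
    main(void)
    {
        ItemPointerSketch tid = { 42, 7 };

        mark_moved_partitions(&tid);
        printf("moved: %d\n", indicates_moved_partitions(&tid));  /* 1 */
        return 0;
    }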
|
|
The point of this is not really to remove redundancy in pg_language.dat;
with only three entries, it's hardly worth it. Rather, it is to get
to a point where there are exactly zero hard-coded numeric pg_proc OID
references in the catalog .dat files. The lanvalidator column was the
only remaining location of such references, and it seems like a good
thing for future-proofing reasons to make it not be a special case.
There are still a few places in the .dat files with numeric OID references
to other catalogs, but after review I don't see any that seem worth
changing at present. In each case there are just too few entries to make
it worth the trouble to create lookup infrastructure.
This doesn't change the emitted postgres.bki file, so no catversion bump.
|
|
This change makes this module act more like most of our other low-level
resource management modules. It's a caller error if something is not
explicitly closed by the end of a successful transaction, so issue
a WARNING about it. This would not actually have caught the file leak
bug fixed in commit 231bcd080, because that was in a transaction-abort
path; but it still seems like a good, and pretty cheap, cross-check.
Discussion: https://postgr.es/m/152056616579.4966.583293218357089052@wrigleys.postgresql.org
|
|
Use the same code we already applied in duplicate_oids and unused_oids
to let this script find Catalog.pm without help. This removes the need
to supply a -I switch in most cases.
Also, mark the script executable, again to follow the precedent of
duplicate_oids and unused_oids. Now you can just do
"./reformat_dat_file.pl pg_proc.dat"
if you want to reformat only one or a few .dat files rather than all.
It'd be possible to remove the -I switches in the Makefile's convenience
targets, but I chose to leave them: they don't hurt anything, and it's
possible that in weird VPATH situations they might be of value.
|
|
I (tgl) originally coded the special case for pg_proc.pronargs as
though it were a kind of default value. It seems better though to
treat computable columns as an independent concern: this makes the
code clearer, and probably a bit faster too since we needn't do
work inside the per-column loop.
Improve related comments, as well, in the expectation that there
might be more cases like this in future.
John Naylor, some additional comment-hacking by me
Discussion: https://postgr.es/m/CAJVSVGW-D7OobzU=dybVT2JqZAx-4X1yvBJdavBmqQL05Q6CLw@mail.gmail.com
|
|
Identify pg_replication_origin as a shared catalog in catalogs.sgml,
using the same boilerplate wording used for most other shared catalogs
(and tweak another place where someone had randomly deviated from
that boilerplate).
Make an example in mmgr/README more consistent with surrounding text.
Update an obsolete cross-reference in a comment in storage/block.h.
Zhuo Ql
Discussion: https://postgr.es/m/44296255.1819230.1524889719001@mail.yahoo.com
|
|
Reported-by: Amit Langote
Discussion: https://postgr.es/m/d2e6674c-c5c6-fe89-1d0b-3534b9db0476@lab.ntt.co.jp
|
|
Discussion: https://postgr.es/m/15719.1523984266@sss.pgh.pa.us
|
|
Update typedefs.list from current buildfarm results. Adjust pgindent's
typedef blacklist to block some more unfortunate typedef names that have
snuck in since last time. Manually tweak a few places where I didn't
like the initial results of pgindent'ing.
|
|
Previously, you had to cd into src/include/catalog before running either
of these scripts. That's a bit tedious, so let's make the scripts do it
for you.
In passing, improve the initial comments in both scripts. Also remove
unused_oids' code to complain about duplicate oids. That was added in
yesterday's commit 5602265f7, but on second thought we shouldn't be
randomly redefining the script's behavior that way.
John Naylor and Tom Lane
Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
|
|
unused_oids was previously a shell script, which of course didn't work at
all on Windows. Also, commit 372728b0d introduced some other portability
problems, as complained of by Stas Kelvich. We can improve matters by
converting it to Perl.
While we're at it, let's future-proof both this script and duplicate_oids
to use Catalog.pm rather than having a bunch of ad-hoc logic for parsing
catalog headers and .dat files. These scripts are thereby a bit slower,
which doesn't seem like a problem for typical manual use. It is a little
annoying for buildfarm purposes, but we should be able to fix that case
by having genbki.pl make the check instead of parsing the headers twice.
(That's not done in this commit, though.)
Stas Kelvich, adjusted a bit by me
Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
|
|
Instead of immediately constructing the string we need to emit into the
.BKI file, preserve the items we extracted from the header file in a hash.
This eases using the info for other purposes.
John Naylor (with cosmetic adjustments by me)
Discussion: https://postgr.es/m/37D774E4-FE1F-437E-B3D2-593F314B7505@postgrespro.ru
|
|
Discussion: https://postgr.es/m/CABUevEzK3cNiHZQ18f5tK0guoT+cN_jWeVzhYYxY=r+1Q3SmoA@mail.gmail.com
|
|
Instead of doing ExecInitExpr every time a Param needs to be evaluated
in run-time partition pruning, do it once during run-time pruning
set-up and cache the exprstate in PartitionPruneContext, saving a lot of
work.
Author: David Rowley
Reviewed-by: Amit Langote, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f8-x+q-90QAPDu_okhQBV4DPEtPz8CJ=m0940GyT4DA4w@mail.gmail.com
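A toy sketch of the shape of the change, with made-up names; the "compile"
step merely stands in for ExecInitExpr()-style work, and the point is only
that it runs once at set-up rather than at every evaluation:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct CompiledExpr
    {
        int param_id;                /* which parameter this expression reads */
    } CompiledExpr;

    typedef struct PruneContextSketch
    {
        int           nexprs;
        CompiledExpr *exprstates;    /* cached at set-up, reused per evaluation */
    } PruneContextSketch;

    static CompiledExpr
    expensive_compile(int param_id)
    {
        CompiledExpr e = { param_id };
        return e;                    /* imagine real expression set-up here */
    }

    /* Set-up: initialize every expression exactly once. */
    static void
    prune_context_setup(PruneContextSketch *ctx, const int *param_ids, int n)
    {
        ctx->nexprs = n;
        ctx->exprstates = malloc(sizeof(CompiledExpr) * n);
        for (int i = 0; i < n; i++)
            ctx->exprstates[i] = expensive_compile(param_ids[i]);
    }

    /* Evaluation: no re-initialization, just use the cached state. */
    static int
    prune_eval(const PruneContextSketch *ctx, int i, const int *param_values)
    {
        return param_values[ctx->exprstates[i].param_id];
    }

    int
    main(void)
    {
        int params[] = { 10, 20, 30 };
        int ids[] = { 2, 0 };
        PruneContextSketch ctx;

        prune_context_setup(&ctx, ids, 2);
        printf("%d %d\n", prune_eval(&ctx, 0, params),
               prune_eval(&ctx, 1, params));            /* 30 10 */
        free(ctx.exprstates);
        return 0;
    }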
|
|
This controls both plan-time and execution-time new-style partition
pruning. While finer-grained control is possible (maybe using an enum GUC
instead of a boolean), there doesn't seem to be much need for that.
This new parameter controls partition pruning for all queries:
trivially, SELECT queries that affect partitioned tables are naturally
under its control since they are using the new technology. However,
while UPDATE/DELETE queries do not use the new code, we make the new GUC
control their behavior also (stealing control from
constraint_exclusion), because it is more natural, and it leads to a
more natural transition to the future in which those queries will also
use the new pruning code.
Constraint exclusion still controls pruning for regular inheritance
situations (those not involving partitioned tables).
Author: David Rowley
Review: Amit Langote, Ashutosh Bapat, Justin Pryzby, David G. Johnston
Discussion: https://postgr.es/m/CAKJS1f_0HwsxJG9m+nzU+CizxSdGtfe6iF_ykPYBiYft302DCw@mail.gmail.com
|
|
Since the t_tid of inner tuples and page high keys is now used to store the
number of attributes in the tuple, validating a tuple's ItemPointer with
ItemPointerIsValid has become incorrect; only the block number of the
ItemPointer should be validated. Missing this caused incorrect page deletion;
fix that and add a test.
Note that current contrib/amcheck doesn't fail on an index corrupted this way.
Also introduce the BTreeTupleGetTopParent/BTreeTupleSetTopParent macros to
improve code readability and to avoid possible confusion with the page high
key: the high key is used to store the top-parent link for the branch to
remove.
The bug was found by Michael Paquier, but it doesn't exist in previous
versions because t_tid was set to P_HIKEY.
Author: Teodor Sigaev
Reviewer: Peter Geoghegan
Discussion: https://www.postgresql.org/message-id/flat/20180419052436.GA16000%40paquier.xyz
|
|
Author: David Rowley
Discussion: https://www.postgresql.org/message-id/CAKJS1f8Ge2y0sDs6RQEJFH-vjb-bWhs86rCX4Fp4FZ+TmxtRkw@mail.gmail.com
|
|
Suggested by Michael Paquier
|
|
Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
|
|
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids. Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.
Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.
In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
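A self-contained sketch of the intended check, using simplified stand-ins
for RestrictInfo and Relids; the real macro performs the required_relids
subset test described above with the planner's Bitmapset routines:

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned int RelidsSketch;       /* bitmask of base relations */

    typedef struct RestrictInfoSketch
    {
        bool         is_pushed_down;         /* clause pushed down into a join? */
        RelidsSketch required_relids;        /* relations the clause references */
    } RestrictInfoSketch;

    /* True iff every relation in 'sub' is also in 'super'. */
    static bool
    relids_is_subset(RelidsSketch sub, RelidsSketch super)
    {
        return (sub & ~super) == 0;
    }

    /*
     * Treat a clause as pushed down if the flag says so, or if it references
     * relations outside the join under consideration (the parameterized-path
     * case the commit is about).
     */
    #define RINFO_IS_PUSHED_DOWN_SKETCH(rinfo, joinrelids) \
        ((rinfo)->is_pushed_down || \
         !relids_is_subset((rinfo)->required_relids, (joinrelids)))

    int
    main(void)
    {
        RestrictInfoSketch ri = { false, 0x5 };  /* references rels at bits 0 and 2 */

        /* join covers bits 0..2: genuine join clause -> prints 0 */
        printf("%d\n", RINFO_IS_PUSHED_DOWN_SKETCH(&ri, 0x7u));
        /* join covers only bits 0..1: bit 2 is outside -> treat as pushed down, 1 */
        printf("%d\n", RINFO_IS_PUSHED_DOWN_SKETCH(&ri, 0x3u));
        return 0;
    }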
|
|
Use the term "system catalog" rather than "system relation" in assorted
places where it's clearly referring to a table rather than, say, an
index. Use more natural word order in the header boilerplate, improve
some of the one-liner catalog descriptions, and fix assorted random
deviations from the normal boilerplate. All purely neatnik-ism, but
why not.
John Naylor, some additional cleanup by me
Discussion: https://postgr.es/m/CAJVSVGUeJmFB3h-NJ18P32NPa+kzC165nm7GSoGHfPaN80Wxcw@mail.gmail.com
|
|
In some cases a clause attached to an outer join can be pushed down into
the outer join's RHS even though the clause is not degenerate --- this
can happen if we choose to make a parameterized path for the RHS. If
the clause ends up attached to a lower outer join, we'd misclassify it
as being a "join filter" not a plain "filter" condition at that node,
leading to wrong query results.
To fix, teach extract_actual_join_clauses to examine each join clause's
required_relids, not just its is_pushed_down flag. (The latter now
seems vestigial, or at least in need of rethinking, but we won't do
anything so invasive as redefining it in a bug-fix patch.)
This has been wrong since we introduced parameterized paths in 9.2,
though it's evidently hard to hit given the lack of previous reports.
The test case used here involves a lateral function call, and I think
that a lateral reference may be required to get the planner to select
a broken plan; though I wouldn't swear to that. In any case, even if
LATERAL is needed to trigger the bug, it still affects all supported
branches, so back-patch to all.
Per report from Andreas Karlsson. Thanks to Andrew Gierth for
preliminary investigation.
Discussion: https://postgr.es/m/f8128b11-c5bf-3539-48cd-234178b2314d@proxel.se
|
|
It appears that new fields introduced in 857f9c36 have inconsistent datatypes:
BTMetaPageData.btm_last_cleanup_num_heap_tuples is of float4 type,
while xl_btree_metadata.last_cleanup_num_heap_tuples is of double type.
IndexVacuumInfo.num_heap_tuples, which is the source of values for
both of those fields, is of double type. So, make both those fields in
BTMetaPageData and xl_btree_metadata use the float8 type in order to match
the precision of the source. They shouldn't be of double type, because we
always use types with an explicit width in WAL.
The patch introduces an on-disk format incompatibility since commit 857f9c36,
but that version was never released, so just bump the catalog version to
avoid possible confusion.
Author: Alexander Korotkov
|
|
Add several assertions that ensure that we're dealing with a pivot tuple
without non-key attributes where that's expected. Also, remove the
assertion within _bt_isequal(), restoring the v10 function signature. A
similar check will be performed for the page highkey within
_bt_moveright() in most cases. Also avoid dropping all objects within
regression tests, to increase pg_dump test coverage for INCLUDE indexes.
Rather than using infrastructure that's generally intended to be used
with reference counted heap tuple descriptors during truncation, use the
same function that was introduced to store flat TupleDescs in shared
memory (we use a temp palloc'd buffer). This isn't strictly necessary,
but seems more future-proof than the old approach. It also lets us
avoid including rel.h within indextuple.c, which was arguably a
modularity violation. Also, we now call index_deform_tuple() with the
truncated TupleDesc, not the source TupleDesc, since that's more robust,
and saves a few cycles.
In passing, fix a memory leak by pfree'ing truncated pivot tuple memory
during CREATE INDEX. Also pfree during a page split, just to be
consistent.
Refactor _bt_check_natts() to be more readable.
Author: Peter Geoghegan with some editorization by me
Reviewed by: Alexander Korotkov, Teodor Sigaev
Discussion: https://www.postgresql.org/message-id/CAH2-Wz%3DkCWuXeMrBCopC-tFs3FbiVxQNjjgNKdG2sHxZ5k2y3w%40mail.gmail.com
|
|
Change things around so that proper quoting of values interpolated into
the BKI data by initdb is the responsibility of initdb, not something
we half-heartedly handle by putting double quotes into the raw BKI data.
(Note: experimentation shows that it still doesn't work to put a double
quote into the initial superuser username, but that's the fault of
inadequate quoting while interpolating the name into SQL scripts;
the BKI aspect of it works fine now.)
Having done that, we can remove the special-case handling of values
that look like "something" from genbki.pl, and instead teach it to
escape double --- and single --- quotes properly. This removes the
nowhere-documented need to treat those specially in the BKI source
data; whatever you write will be passed through unchanged into the
inserted data value, modulo Perl's rules about single-quoted strings.
Add documentation explaining the (pre-existing) handling of backslashes
in the BKI data.
Per an earlier discussion with John Naylor.
Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
|
|
Formerly, Catalog.pm turned a C array type declaration in the catalog
header files into a SQL type, e.g., 'foo[]'. Along the way, genbki.pl
turned this into '_foo' for the purpose of type lookups, but wrote 'foo[]'
to postgres.bki. During bootstrap, bootscanner.l had to have a special
case rule to tokenize this, and then MapArrayTypeName() would turn 'foo[]'
into '_foo' one more time.
This seems unnecessarily complicated, especially since nobody cares that
much about the readability of postgres.bki. Instead, make Catalog.pm
convert the C declaration into '_foo' to start with, and preserve that
representation of the type name throughout bootstrap data processing.
Then rip out the special-case code in bootscanner.l and bootstrap.c.
This changes postgres.bki to the extent that array fields are now
declared like
proconfig = _text ,
rather than
proconfig = text[] ,
No documentation update, since the SGML docs didn't mention any of this
in the first place, and it's all pretty transparent to writers of
catalog header files anyway.
John Naylor
Discussion: https://postgr.es/m/CAJVSVGUNao=-Q2-vAN3PYcdF5tnL5JAHwGwzZGuYHtq+Mk_9ng@mail.gmail.com
|
|
Review of commit 1eb6d652: It's pointless to add padding to the GID fields,
when the code that follows assumes that there is no alignment, and uses
memcpy(). Remove the pointless padding.
Update comments to note the new fields in the WAL records.
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/33b787bf-dc20-1161-54e9-3f3b607bf59d%40iki.fi
|
|
It turns out that after runtime partition pruning, Append's
first_partial_plan does not accurately represent partial plans to run,
if any of those got pruned. This could limit participation of workers
in some partial subplans, if other subplans got pruned. Fix it by
keeping an index of the first valid partial subplan in the state node,
determined at execnode Init time.
Author: David Rowley, with cosmetic changes by me.
Discussion: https://postgr.es/m/CAKJS1f8o2Yd=rOP=Et3A0FWgF+gSAOkFSU6eNhnGzTPV7nN8sQ@mail.gmail.com
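A toy sketch of the idea, with made-up names: scan forward from the
planner's first_partial_plan at Init time and remember the first subplan
that both survived pruning and is partial, instead of trusting the original
index:

    #include <stdbool.h>
    #include <stdio.h>

    static int
    first_valid_partial_plan(const bool *valid, int nplans, int first_partial_plan)
    {
        for (int i = first_partial_plan; i < nplans; i++)
        {
            if (valid[i])
                return i;            /* first surviving partial subplan */
        }
        return nplans;               /* no partial subplans remain */
    }

    int
    main(void)
    {
        /* 5 subplans; partial ones start at index 2; index 2 was pruned. */
        bool valid[5] = { true, false, false, true, true };

        printf("%d\n", first_valid_partial_plan(valid, 5, 2));  /* prints 3 */
        return 0;
    }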
|
|
This omission prevented partitioning header files from being installed.
Per buildfarm member crake.
|
|
There's been a massive addition of partitioning code in PostgreSQL 11,
with little oversight on its placement, resulting in a
catalog/partition.c with poorly defined boundaries and responsibilities.
This commit tries to set a couple of distinct modules to separate things
a little bit. There are no code changes here, only code movement.
There are three new files:
src/backend/utils/cache/partcache.c
src/include/partitioning/partdefs.h
src/include/utils/partcache.h
The previous arrangement of #including catalog/partition.h almost
everywhere is no more.
Authors: Amit Langote and Álvaro Herrera
Discussion: https://postgr.es/m/98e8d509-790a-128c-be7f-e48a5b2d8d97@lab.ntt.co.jp
https://postgr.es/m/11aa0c50-316b-18bb-722d-c23814f39059@lab.ntt.co.jp
https://postgr.es/m/143ed9a4-6038-76d4-9a55-502035815e68@lab.ntt.co.jp
https://postgr.es/m/20180413193503.nynq7bnmgh6vs5vm@alvherre.pgsql
|
|
MaxIndexTuplesPerPage ignores the fact that btree indexes sometimes
store tuples with no data payload. But it also ignores the possibility
of "special space" on index pages, which offsets that, so that the
result isn't an underestimate. This all seems worth documenting, though.
In passing, remove #define MinIndexTupleSize, which was added by
commit 2c03216d8 but not used in that commit nor later ones.
Comment text by me; issue noticed by Peter Geoghegan.
Discussion: https://postgr.es/m/CAH2-WzkQmb54Kbx-YHXstRKXcNc+_87jwV3DRb54xcybLR7Oig@mail.gmail.com
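For reference, a hedged sketch of the arithmetic behind the macro, assuming
the stock 8 kB block size, 8-byte MAXALIGN, and typical header sizes (the
real definition is in access/itup.h):

    #include <stdio.h>

    /* Assumed constants for a stock build: 8 kB pages, 8-byte alignment. */
    #define BLCKSZ_SKETCH               8192
    #define SIZE_OF_PAGE_HEADER_SKETCH  24    /* fixed page header */
    #define SIZE_OF_ITEM_ID_SKETCH      4     /* per-tuple line pointer */
    #define SIZE_OF_INDEX_TUPLE_SKETCH  8     /* IndexTupleData header */
    #define MAXALIGN_SKETCH(len)        (((len) + 7) & ~((size_t) 7))

    /*
     * The formula assumes the smallest index tuple is a header plus at least
     * one byte of data, MAXALIGN'd.  It ignores zero-payload tuples (which
     * would raise the real maximum) and any "special space" (which would
     * lower it); as the commit notes, the two omissions roughly offset each
     * other, so the result isn't an underestimate.
     */
    #define MAX_INDEX_TUPLES_PER_PAGE_SKETCH \
        ((int) ((BLCKSZ_SKETCH - SIZE_OF_PAGE_HEADER_SKETCH) / \
                (MAXALIGN_SKETCH(SIZE_OF_INDEX_TUPLE_SKETCH + 1) + \
                 SIZE_OF_ITEM_ID_SKETCH)))

    int
    main(void)
    {
        printf("%d\n", MAX_INDEX_TUPLES_PER_PAGE_SKETCH);  /* 408 with these numbers */
        return 0;
    }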
|
|
We need to call expand_function_arguments() to expand named and default
arguments.
In PL/pgSQL, we also need to deal with named and default INOUT arguments
when receiving the output values into variables.
Author: Pavel Stehule <pavel.stehule@gmail.com>
|
|
This reverts commits d204ef63776b8a00ca220adec23979091564e465,
83454e3c2b28141c0db01c7d2027e01040df5249 and a few more commits thereafter
(complete list at the end) related to MERGE feature.
While the feature was fully functional, with sufficient test coverage and
necessary documentation, it was felt that some parts of the executor and
parse-analyzer can use a different design and it wasn't possible to do that in
the available time. So it was decided to revert the patch for PG11 and retry
again in the future.
Thanks again to all reviewers and bug reporters.
List of commits reverted, in reverse chronological order:
f1464c5380 Improve parse representation for MERGE
ddb4158579 MERGE syntax diagram correction
530e69e59b Allow cpluspluscheck to pass by renaming variable
01b88b4df5 MERGE minor errata
3af7b2b0d4 MERGE fix variable warning in non-assert builds
a5d86181ec MERGE INSERT allows only one VALUES clause
4b2d44031f MERGE post-commit review
4923550c20 Tab completion for MERGE
aa3faa3c7a WITH support in MERGE
83454e3c2b New files for MERGE
d204ef6377 MERGE SQL Command following SQL:2016
Author: Pavan Deolasee
Reviewed-by: Michael Paquier
|
|
Rename ii_KeyAttrNumbers to ii_IndexAttrNumbers to prevent confusion with
ii_NumIndexAttrs/ii_NumIndexKeyAttrs. ii_IndexAttrNumbers contains
all attributes, including "including" columns, not only the key attributes.
Discussion: https://www.postgresql.org/message-id/13123421-1d52-d0e4-c95c-6d69011e0595%40sigaev.ru
|
|
The HeapFetches counter was using a simple value in IndexOnlyScanState,
which fails to propagate values from parallel workers; so the counts are
wrong when IndexOnlyScan runs in parallel. Move it to Instrumentation,
like all the other counters.
While at it, change INSERT ON CONFLICT conflicting tuple counter to use
the new ntuples2 instead of nfiltered2, which is a blatant misuse.
Discussion: https://postgr.es/m/20180409215851.idwc75ct2bzi6tea@alvherre.pgsql
|
|
In 499be013de support for pruning unneeded Append subnodes was added.
The logic in that commit was not correctly checking if the next subplan
was in fact a valid subplan. This could cause parallel worker processes
to be given a subplan to work on which didn't require any work.
Per code review following an otherwise unexplained regression failure in
buildfarm member Pademelon. (We haven't been able to reproduce the
failure, so this is a bit of a blind fix in terms of whether it'll
actually fix it; but it is a clear bug nonetheless).
In passing, also add a comment to explain what first_partial_plan means.
Author: David Rowley
Discussion: https://postgr.es/m/CAKJS1f_E5r05hHUVG3UmCQJ49DGKKHtN=SHybD44LdzBn+CJng@mail.gmail.com
|
|
In its original form, reformat_dat_file.pl smashed consecutive blank
lines to a single blank line, which was helpful for mopping up excess
whitespace during the bootstrap data format conversion. But going
forward, there seems little reason to do that; if developers want to
put in multiple blank lines, let 'em. This makes it conform to the
documentation I (tgl) wrote, too.
In passing, clean up some sloppy markup choices in bki.sgml.
John Naylor
Discussion: https://postgr.es/m/28827.1523039259@sss.pgh.pa.us
|
|
In commit 9c0a0de4c, I'd failed to notice that catalog/catalog.h
should also be considered a frontend-unsafe header, because it includes
(and needs) the full form of pg_class.h, not to mention relcache.h.
However, various frontend code was depending on it to get
TABLESPACE_VERSION_DIRECTORY, so refactoring of some sort is called for.
The cleanest answer seems to be to move TABLESPACE_VERSION_DIRECTORY,
as well as the OIDCHARS symbol, to common/relpath.h. Do that, and mop up
inclusions as necessary. (I found that quite a few current users of
catalog/catalog.h don't seem to need it at all anymore, apparently as a
result of the refactorings that created common/relpath.[hc]. And
initdb.c needed it only as a route to pg_class_d.h.)
Discussion: https://postgr.es/m/6629.1523294509@sss.pgh.pa.us
|
|
Lack thereof pointed out by Tom Lane.
|
|
This reverts the backend sides of commit 1fde38beaa0c3e66c340efc7cc0dc272d6254bb0.
I have, at least for now, left the pg_verify_checksums tool in place, as
this tool can be very valuable without the rest of the patch as well,
and since it's a read-only tool that only runs when the cluster is down
it should be a lot safer.
|
|
Fix a couple of typos, and update a comment about why we set a BMS to
NULL.
Author: David Rowley
Discussion: http://postgr.es/m/CAKJS1f-tux=KdUz6ENJ9GHM_V2qgxysadYiOyQS9Ko9PTteVhQ@mail.gmail.com
|