|
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
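For illustration, the new reporting can be watched from another session
while the build runs; a minimal sketch, assuming the view and columns this
adds are pg_stat_progress_create_index with phase, blocks_done and
blocks_total:
    -- session 1: build an index on a hypothetical table
    CREATE INDEX CONCURRENTLY big_table_col_idx ON big_table (col);
    -- session 2: poll the progress view
    SELECT pid, phase, blocks_done, blocks_total
    FROM pg_stat_progress_create_index;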
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
|
|
When asked for a slice of a TOAST entry, decompress enough to return the
slice instead of decompressing the entire object.
For use cases where the slice is at, or near, the beginning of the entry,
this avoids a lot of unnecessary decompression work.
This changes the signature of pglz_decompress() by adding a boolean to
indicate if it's ok for the call to finish before consuming all of the
source or destination buffers.
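For example, a prefix slice like the following only needs the beginning of
the compressed datum to be decompressed; a sketch with a hypothetical table
whose text column is large and compressible enough to be toasted:
    CREATE TABLE docs (id int, body text);
    INSERT INTO docs VALUES (1, repeat('abcdefghij', 200000));
    -- only enough of the compressed datum to cover the first 100 characters
    -- needs to be decompressed here
    SELECT substr(body, 1, 100) FROM docs WHERE id = 1;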
Author: Paul Ramsey
Reviewed-By: Rafia Sabih, Darafei Praliaskouski, Regina Obe
Discussion: https://postgr.es/m/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com
|
|
On at least ZFS, it can be beneficial to create new WAL files every
time and not to bother zero-filling them. Since it's not clear which
other filesystems might benefit from one or both of those things,
add individual GUCs to control those two behaviors independently and
make only very general statements in the docs.
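For example, on a copy-on-write filesystem such as ZFS one might try the
following; a sketch, assuming the two GUCs are named wal_recycle and
wal_init_zero:
    ALTER SYSTEM SET wal_recycle = off;      -- always create new WAL files
    ALTER SYSTEM SET wal_init_zero = off;    -- skip zero-filling new WAL files
    SELECT pg_reload_conf();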
Author: Jerry Jelinek, with some adjustments by Thomas Munro
Reviewed-by: Alvaro Herrera, Andres Freund, Tomas Vondra, Robert Haas and others
Discussion: https://postgr.es/m/CACPQ5Fo00QR7LNAcd1ZjgoBi4y97%2BK760YABs0vQHH5dLdkkMA%40mail.gmail.com
|
|
Noted by Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRAaGO5FX7bnP3E=mRssoK8y5T78x7jKy-vDiyS68L888Q@mail.gmail.com
|
|
This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.
Features:
- Program name is automatically prefixed.
- Message string does not end with newline. This removes a common
source of inconsistencies and omissions.
- Additionally, a final newline is automatically stripped, simplifying
use of PQerrorMessage() etc., another common source of mistakes.
- I converted error message strings to use %m where possible.
- As a result of the above several points, more translatable message
strings can be shared between different components and between
frontends and backend, without gratuitous punctuation or whitespace
differences.
- There is support for setting a "log level". This is not meant to be
user-facing, but can be used internally to implement debug or
verbose modes.
- Lazy argument evaluation, so no significant overhead if logging at
some level is disabled.
- Some color in the messages, similar to gcc and clang. Set
PG_COLOR=auto to try it out. Some colors are predefined, but can be
customized by setting PG_COLORS.
- Common files (common/, fe_utils/, etc.) can handle logging much more
simply by just using one API without worrying too much about the
context of the calling program, requiring callbacks, or having to
pass "progname" around everywhere.
- Some programs called setvbuf() to make sure that stderr is
unbuffered, even on Windows. But not all programs did that. This
is now done centrally.
Soft goals:
- Reduces vertical space use and visual complexity of error reporting
in the source code.
- Encourages more deliberate classification of messages. For example,
in some cases it wasn't clear without analyzing the surrounding code
whether a message was meant as an error or just an info.
- Concepts and terms are vaguely aligned with popular logging
frameworks such as log4j and Python logging.
This is all just about printing stuff out. Nothing affects program
flow (e.g., fatal exits). The uses are just too varied to do that.
Some existing code had wrappers that do some kind of print-and-exit,
and I adapted those.
I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I
might not always have succeeded. One significant change is that
pg_rewind used to write all error messages to stdout. That is now
changed to stderr.
Reviewed-by: Donald Dong <xdong@csumb.edu>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
|
|
jsonb_path_match() checks whether a jsonb document matches a jsonpath query,
so the jsonpath query should return a single boolean. Currently, if the
result of the jsonpath is not a single boolean, NULL is returned regardless
of whether silent mode is on or off. That appears to be wrong when silent
mode is off. This commit makes jsonb_path_match() throw an error in that
case.
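A sketch of the intended behavior, using the positional
jsonb_path_match(target, path, vars, silent) arguments:
    -- result is not a single boolean: with silent off this now errors out
    SELECT jsonb_path_match('{"a": 1}', '$.a');
    -- with silent on, NULL is still returned
    SELECT jsonb_path_match('{"a": 1}', '$.a', '{}', true);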
Author: Nikita Glukhov
|
|
Jsonpath currently accepts integers with leading zeroes and floats starting
with a dot. However, the SQL standard requires following the JSON
specification, which allows neither of these. Our json[b] datatypes also
restrict that. So, restrict it in jsonpath as well.
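A sketch of the expected effect:
    SELECT '0.1'::jsonpath;   -- still accepted
    SELECT '.1'::jsonpath;    -- now rejected: float starting with a dot
    SELECT '01'::jsonpath;    -- now rejected: integer with a leading zero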
Author: Nikita Glukhov
|
|
This commit makes the existing GIN operator classes jsonb_ops and
jsonb_path_ops support the "jsonb @@ jsonpath" and "jsonb @? jsonpath"
operators. The basic idea is to extract statements of the following form out
of the jsonpath:
key1.key2. ... .keyN = const
The rest of the jsonpath is rechecked against the heap tuple.
Catversion is bumped.
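For illustration, a sketch with a hypothetical table:
    CREATE TABLE events (payload jsonb);
    CREATE INDEX ON events USING gin (payload jsonb_path_ops);
    -- extractable as payload.user.id == 42; the rest is rechecked from heap
    SELECT * FROM events WHERE payload @@ '$.user.id == 42';
    SELECT * FROM events WHERE payload @? '$.user ? (@.id == 42)';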
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov
Reviewed-by: Jonathan Katz, Pavel Stehule
|
|
One should almost always terminate an old process, not use a manual
removal tool like ipcrm. Removal of the ipcclean script eleven years
ago (39627b1ae680cba44f6e56ca5facec4fdbfe9495) and its non-replacement
corroborate that manual shm removal is now a niche goal. Back-patch to
9.4 (all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20180812064815.GB2301738@rfd.leadboat.com
|
|
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.
This implements one kind of generated column: stored (computed on
write). Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.
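A sketch of the new syntax:
    CREATE TABLE rectangles (
        w    numeric,
        h    numeric,
        area numeric GENERATED ALWAYS AS (w * h) STORED
    );
    INSERT INTO rectangles (w, h) VALUES (3, 4);
    SELECT area FROM rectangles;   -- 12, computed when the row was written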
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
|
|
This moves the responsibility for:
- creating the storage necessary for a relation, including creating a
new relfilenode for a relation with existing storage
- non-transactional truncation of a relation
- VACUUM FULL / CLUSTER's rewrite of a table
below tableam.
This is fairly straightforward, with a bit of complexity smattered in to
move the computation of xid / multixid horizons below the AM, as they don't
make sense for every table AM.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
|
|
Instead of inferring epoch progress from xids and checkpoints,
introduce a 64 bit FullTransactionId type and use it to track xid
generation. This fixes an unlikely bug where the epoch is reported
incorrectly if the range of active xids wraps around more than once
between checkpoints.
The only user-visible effect of this commit is to correct the epoch
used by txid_current() and txid_status(), also visible with
pg_controldata, in those rare circumstances. It also creates some
basic infrastructure so that later patches can use 64 bit
transaction IDs in more places.
The new type is a struct that we pass by value, as a form of strong
typedef. This prevents the sort of accidental confusion between
TransactionId and FullTransactionId that would be possible if we
were to use a plain old uint64.
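At the SQL level nothing changes; the epoch simply becomes reliable in the
values already exposed there, e.g. (a sketch):
    SELECT txid_current();               -- 64-bit value, epoch in the high bits
    SELECT txid_status(txid_current());  -- 'in progress'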
Author: Thomas Munro
Reported-by: Amit Kapila
Reviewed-by: Andres Freund, Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
|
|
Introduce a third extended statistic type, supported by the CREATE
STATISTICS command - MCV lists, a generalization of the statistic
already built and used for individual columns.
Compared to the already supported types (n-distinct coefficients and
functional dependencies), MCV lists are more complex, include column values,
and allow estimation of a much wider range of common clauses (equality and
inequality conditions, IS NULL, IS NOT NULL, etc.).
As with the other types, a new pseudo-type (pg_mcv_list) is used.
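For example, a sketch with a hypothetical table of strongly correlated
columns:
    CREATE TABLE addresses (city text, zip text);
    -- per-column statistics would multiply the two selectivities and
    -- underestimate the row count for correlated conditions
    CREATE STATISTICS addresses_mcv (mcv) ON city, zip FROM addresses;
    ANALYZE addresses;
    EXPLAIN SELECT * FROM addresses WHERE city = 'Prague' AND zip = '11000';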
Author: Tomas Vondra
Reviewed-by: Dean Rasheed, David Rowley, Mark Dilger, Alvaro Herrera
Discussion: https://postgr.es/m/dfdac334-9cf2-2597-fb27-f0fb3753f435@2ndquadrant.com
|
|
The transaction ID returned by GetNextXidAndEpoch() is in the future,
so we can't attempt to access its status or we might try to read a
CLOG page that doesn't exist. The > vs >= confusion probably stemmed
from the choice of a variable name containing the word "last" instead
of "next", so fix that too.
Back-patch to 10 where the function arrived.
Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKG%2Buua_BV5cyfsioKVN2d61Lukg28ECsWTXKvh%3DBtN2DPA%40mail.gmail.com
|
|
These instances were apparently trying to carry the const qualifier
from the arguments through the complex casts, but for that the const
qualifier was misplaced.
|
|
This primarily is to allow WHERE CURRENT OF to continue to work as it
currently does. It's not clear to me that these semantics make sense for
every AM, but they work for the in-core heap and the out-of-core zheap. We
can refine this further at a later point if necessary.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
|
|
We were getting just DEFAULT_INEQ_SEL for comparisons such as
"ctid >= constant", but it's possible to do a lot better if we don't
mind some assumptions about the table's tuple density being reasonably
uniform. There are already assumptions much like that elsewhere in
the planner, so that hardly seems like much of an objection.
Extracted from a patch set that also proposes to introduce a special
executor node type for such queries. Not sure if that's going to make
it into v12, but improving the selectivity estimate is useful
independently of that.
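For example (a sketch; "orders" is a hypothetical table), the row estimate
now reflects how far into the table the constant points instead of the fixed
default selectivity:
    EXPLAIN SELECT * FROM orders WHERE ctid >= '(1000,1)';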
Edmund Horner, reviewed by David Rowley
Discussion: https://postgr.es/m/CAMyN-kB-nFTkF=VA_JPwFNo08S0d-Yk0F741S2B7LDmYAi8eyA@mail.gmail.com
|
|
This uses the same progress reporting infrastructure added in commit
c16dc1aca5e01e6acaadfcf38f5fc964a381dc62 and extends it to these
additional cases. We lack the ability to track the internal progress
of sorts and index builds so the information reported is
coarse-grained for some parts of the operation, but it still seems
like a significant improvement over having nothing at all.
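For illustration, the progress of a long-running CLUSTER or VACUUM FULL can
be watched from another session; a sketch, assuming the view and columns are
pg_stat_progress_cluster with phase, heap_blks_scanned and heap_blks_total:
    SELECT pid, command, phase, heap_blks_scanned, heap_blks_total
    FROM pg_stat_progress_cluster;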
Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me. A fair
amount of polishing also by me.
Discussion: http://postgr.es/m/59A77072.3090401@lab.ntt.co.jp
|
|
Non-backtracking flex scanners are faster than backtracking ones. So, this
commit gets rid of backtracking in jsonpath_scan.l. That required explicit
handling of some cases, as well as manual backtracking in a few places. More
regression tests for numerics are added.
Discussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063
Author: John Naylor, Nikita Gluknov, Alexander Korotkov
|
|
This commit includes formatting improvements, renamings, and comments.
Also, it makes jsonpath_scan.l more uniform with our other lexers. First,
state names are renamed to shorter alternatives. Second, the <INITIAL>
prefix is removed from the rules; the corresponding rules are moved to the
tail, so they would only apply in the initial state anyway.
Author: Alexander Korotkov
Reviewed-by: John Naylor
|
|
I failed to think about PIs starting with "xml". We don't really
need this check at all, so just take it out. Oversight in
commit 8d1dadb25 et al.
|
|
Previously we were using the SQL:2003 definition, which doesn't allow
this, but that creates a serious dump/restore gotcha: there is no
setting of xmloption that will allow all valid XML data. Hence,
switch to the 2006 definition.
Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
in the input and switch to DOCUMENT parsing mode. This should not
cost much, because <!DOCTYPE> should be close to the front of the
input if it's there at all. It's possible that this causes the
error messages for malformed input to be slightly different than
they were before, if said input includes <!DOCTYPE>; but that does
not seem like a big problem.
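A sketch of input that is expected to be accepted after this change:
    SET xmloption = content;   -- the default
    SELECT '<!DOCTYPE book><book></book>'::xml;   -- previously an error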
In passing, buy back a few cycles in parsing of large XML documents
by not doing strlen() of the whole input in parse_xml_decl().
Back-patch because dump/restore failures are not nice. This change
shouldn't break any cases that worked before, so it seems safe to
back-patch.
Chapman Flack (revised a bit by me)
Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
|
|
This reverts commit 4e274a043fc8310ce1148190ef674beca06e990c.
These files aren't actually built anymore since 550b9d26f.
|
|
|
|
This adds a flag "deterministic" to collations. If that is false,
such a collation disables various optimizations that assume that
strings are equal only if they are byte-wise equal. That then allows
use cases such as case-insensitive or accent-insensitive comparisons
or handling of strings with different Unicode normal forms.
This functionality is only supported with the ICU provider. At least
glibc doesn't appear to have any locales that work in a
nondeterministic way, so it's not worth supporting this for the libc
provider.
The term "deterministic comparison" in this context is from Unicode
Technical Standard #10
(https://unicode.org/reports/tr10/#Deterministic_Comparison).
This patch makes changes in three areas:
- CREATE COLLATION DDL changes and system catalog changes to support
this new flag.
- Many executor nodes and auxiliary code are extended to track
collations. Previously, this code would just throw away collation
information, because the eventually-called user-defined functions
didn't use it since they only cared about equality, which didn't
need collation information.
- String data type functions that do equality comparisons and hashing
are changed to take the (non-)deterministic flag into account. For
comparison, this just means skipping various shortcuts and tie
breakers that use byte-wise comparison. For hashing, we first need
to convert the input string to a canonical "sort key" using the ICU
analogue of strxfrm().
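For example (a sketch; the ICU locale shown requests case-insensitive
comparison strength):
    CREATE COLLATION case_insensitive
        (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
    SELECT 'ABC' = 'abc' COLLATE case_insensitive;   -- true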
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7@2ndquadrant.com
|
|
Trying to call the function with the top-most parent of a partition tree
was leading to a crash. In this case the correct result is to return
the top-most parent itself.
Reported-by: Álvaro Herrera
Author: Michael Paquier
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/20190322032612.GA323@alvherre.pgsql
|
|
I unnecessarily removed this check in 3de241dba86f because I
misunderstood what the final representation of constraints across a
partitioning hierarchy was to be. Put it back (in both branches).
Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
|
|
Make nbtree treat all index tuples as having a heap TID attribute.
Index searches can distinguish duplicates by heap TID, since heap TID is
always guaranteed to be unique. This general approach has numerous
benefits for performance, and is prerequisite to teaching VACUUM to
perform "retail index tuple deletion".
Naively adding a new attribute to every pivot tuple has unacceptable
overhead (it bloats internal pages), so suffix truncation of pivot
tuples is added. This will usually truncate away the "extra" heap TID
attribute from pivot tuples during a leaf page split, and may also
truncate away additional user attributes. This can increase fan-out,
especially in a multi-column index. Truncation can only occur at the
attribute granularity, which isn't particularly effective, but works
well enough for now. A future patch may add support for truncating
"within" text attributes by generating truncated key values using new
opclass infrastructure.
Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
treat heap TID as a tiebreaker attribute, or will have pivot tuples
undergo suffix truncation during a leaf page split (on-disk
compatibility with versions 2 and 3 is preserved). Upgrades to version
4 cannot be performed on-the-fly, unlike upgrades from version 2 to
version 3. contrib/amcheck continues to work with version 2 and 3
indexes, while also enforcing stricter invariants when verifying version
4 indexes. These stricter invariants are the same invariants described
by "3.1.12 Sequencing" from the Lehman and Yao paper.
A later patch will enhance the logic used by nbtree to pick a split
point. This patch is likely to negatively impact performance without
smarter choices around the precise point to split leaf pages at. Making
these two mostly-distinct sets of enhancements into distinct commits
seems like it might clarify their design, even though neither commit is
particularly useful on its own.
The maximum allowed size of new tuples is reduced by an amount equal to
the space required to store an extra MAXALIGN()'d TID in a new high key
during leaf page splits. The user-facing definition of the "1/3 of a
page" restriction is already imprecise, and so does not need to be
revised. However, there should be a compatibility note in the v12
release notes.
Author: Peter Geoghegan
Reviewed-By: Heikki Linnakangas, Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
|
|
Use dedicated struct to represent nbtree insertion scan keys. Having a
dedicated struct makes the difference between search type scankeys and
insertion scankeys a lot clearer, and simplifies the signature of
several related functions. This is based on a suggestion by Andrey
Lepikhov.
Streamline how unique index insertions cache binary search progress.
Cache the state of in-progress binary searches within _bt_check_unique()
for later instead of having callers avoid repeating the binary search in
an ad-hoc manner. This makes it easy to add a new optimization:
_bt_check_unique() now falls out of its loop immediately in the common
case where it's already clear that there couldn't possibly be a
duplicate.
The new _bt_check_unique() scheme makes it a lot easier to manage cached
binary search effort afterwards, from within _bt_findinsertloc(). This
is needed for the upcoming patch to make nbtree tuples unique by
treating heap TID as a final tiebreaker column. Unique key binary
searches need to restore lower and upper bounds. They cannot simply
continue to use the >= lower bound as the offset to insert at, because
the heap TID tiebreaker column must be used in comparisons for the
restored binary search (unlike the original _bt_check_unique() binary
search, where scankey's heap TID column must be omitted).
Author: Peter Geoghegan, Heikki Linnakangas
Reviewed-By: Heikki Linnakangas, Andrey Lepikhov
Discussion: https://postgr.es/m/CAH2-WzmE6AhUdk9NdWBf4K3HjWXZBX3+umC7mH7+WDrKcRtsOw@mail.gmail.com
|
|
The jsonpath grammar and scanner are both quite small, so it isn't worth the
complexity of compiling them separately. This commit makes the grammar and
scanner be compiled together. Therefore, jsonpath_gram.h and
jsonpath_scanner.h are no longer needed.
This commit also does some reorganization of the code in jsonpath_gram.y.
Discussion: https://postgr.es/m/d47b2023-3ecb-5f04-d253-d557547cf74f%402ndQuadrant.com
|
|
The reason is the same as in 75c57058b0.
|
|
Typedef names should be unique and should not collide with variable names
across all the sources. That makes both pgindent and debuggers happy.
Discussion: https://postgr.es/m/23865.1552936099%40sss.pgh.pa.us
|
|
|
|
Unrecognized attribute names are supposed to be ignored. But the code
would error out on an unrecognized attribute value even if it did not
recognize the attribute name. So unrecognized attributes wouldn't
really be ignored unless the value happened to be one that matched a
recognized value. This would break some important cases where the
attribute would be processed by ucol_open() directly. Fix that and
add a test case.
The restructured code should also avoid compiler warnings about
initializing a UColAttribute value to -1, because the type might be an
unsigned enum. (reported by Andres Freund)
|
|
In RI_FKey_pk_upd_check_required(), we check among other things
whether the old and new key are equal, so that we don't need to run
cascade actions when nothing has actually changed. This was using the
equality operator. But the effect of this is that if a value in the
primary key is changed to one that "looks" different but compares as
equal, the update is not propagated. (Examples are float -0 and 0 and
case-insensitive text.) This appears to violate the SQL standard, and it
also behaves inconsistently if, in a multicolumn key, another column is also
updated in a way that would cause the row to compare as not equal.
To fix, if we are looking at the PK table in ri_KeysEqual(), then do a
bytewise comparison similar to record_image_eq() instead of using the
equality operators. This only makes a difference for ON UPDATE
CASCADE, but for consistency we treat all changes to the PK the same. For
the FK table, we continue to use the equality operators.
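For illustration, a case whose behavior changes; a sketch using float8 zero
and minus zero, which compare as equal but have different representations:
    CREATE TABLE pk (f float8 PRIMARY KEY);
    CREATE TABLE fk (f float8 REFERENCES pk ON UPDATE CASCADE);
    INSERT INTO pk VALUES ('0');
    INSERT INTO fk VALUES ('0');
    UPDATE pk SET f = '-0';    -- "looks" different, compares as equal
    SELECT f::text FROM fk;    -- now expected to show -0: the change cascaded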
Discussion: https://www.postgresql.org/message-id/flat/3326fc2e-bc02-d4c5-e3e5-e54da466e89a@2ndquadrant.com
|
|
It was agreed that this worsens the code readability.
Discussion: https://postgr.es/m/ecfcfb5f-3233-eaa9-0c83-07056fb49a83%402ndquadrant.com
|
|
Instead of a tricky assignment from {0}, introduce special macros that
explicitly initialize every field.
|
|
Discussion: https://postgr.es/m/CAEeOP_a-Pfy%3DU9-f%3DgQ0AsB8FrxrC8xCTVq%2BeO71-2VoWP5cag%40mail.gmail.com
Author: Mark G
|
|
Because they cause the build of jsonpath_scan.c to fail.
|
|
Reported-by: Jeff Janes, Tom Lane
Discussion: https://postgr.es/m/CAMkU%3D1w1qBvoW82ZTFpAKae027R-2OHw-m6ALe0VQRNAFueBVA%40mail.gmail.com
|
|
Starting in ICU 54, collation customization attributes can be
specified in the locale string, for example
"@colStrength=primary;colCaseLevel=yes". Add support for this for
older ICU versions as well, by adding some minimal parsing of the
attributes in the locale string and calling ucol_setAttribute() on
them. This is essentially what newer ICU versions do internally in
ucol_open(). This way we can offer this functionality in a consistent way in
all ICU versions supported by PostgreSQL.
Also add some tests for ICU collation customization.
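A sketch of what should now behave the same on older ICU versions as on
ICU 54 and later (colNumeric is assumed to be among the parsed attributes):
    CREATE COLLATION numeric_sort
        (provider = icu, locale = 'und@colNumeric=yes');
    SELECT x FROM (VALUES ('A-21'), ('A-123')) AS v(x)
    ORDER BY x COLLATE numeric_sort;   -- A-21 before A-123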
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Discussion: https://www.postgresql.org/message-id/0270ebd4-f67c-8774-1a5a-91adfb9bb41f@2ndquadrant.com
|
|
The warning was observed with gcc 4.4.6, gcc 4.4.7, and probably others.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/25151.1552751426%40sss.pgh.pa.us
|
|
Per buildfarm. See commit 41c912cad for precedent.
|
|
Add support for numeric error suppression to jsonpath, as required by the
standard. This commit doesn't use PG_TRY()/PG_CATCH() to implement that.
Instead, it provides internal versions of the numeric functions used, which
support error suppression.
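A sketch of the expected effect:
    -- division by zero inside the filter is suppressed; only 1 is returned
    SELECT jsonb_path_query('[1, 0]', '$[*] ? (1 / @ > 0)');
    -- with the silent argument, the error is suppressed entirely: no rows
    SELECT jsonb_path_query('0', '1 / $', '{}', true);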
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Alexander Korotkov, Nikita Glukhov
Reviewed-by: Tomas Vondra
|
|
The SQL:2016 standard, among other things, contains a set of SQL/JSON
features for JSON processing inside the relational database. The core of
SQL/JSON is the JSON path language, which allows accessing parts of JSON
documents and making computations over them. This commit implements partial
support for the JSON path language as a separate datatype called "jsonpath".
The implementation is partial because it lacks datetime support and
suppression of numeric errors. The missing features will be added later by
separate commits.
Support for the SQL/JSON features proper requires implementation of separate
nodes and will be considered in subsequent patches. This commit includes the
following set of plain functions, which allow executing jsonpath over jsonb
values:
* jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).
This commit also implements the "jsonb @? jsonpath" and "jsonb @@ jsonpath"
operators, which are wrappers over jsonb_path_exists(jsonb, jsonpath) and
jsonb_path_match(jsonb, jsonpath), respectively. These operators will get
index support (implemented in subsequent patches).
Catversion bumped, to add new functions and operators.
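For example:
    SELECT jsonb_path_exists('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 2)');      -- true
    SELECT jsonb_path_query('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 1)');        -- rows 2, 3
    SELECT jsonb_path_query_array('{"a": [1, 2, 3]}', '$.a[*] ? (@ > 1)');  -- [2, 3]
    SELECT '{"a": [1, 2, 3]}'::jsonb @? '$.a[*] ? (@ > 2)';                 -- true
    SELECT '{"a": [1, 2, 3]}'::jsonb @@ '$.a[*] > 2';                       -- true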
Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova. The work
was inspired by Oleg Bartunov.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Teodor Sigaev, Alexander Korotkov, Oleg Bartunov, Liudmila Mantrova
Reviewed-by: Tomas Vondra, Andrew Dunstan, Pavel Stehule, Alexander Korotkov
|
|
|
|
Some buildfarm members using CLOBBER_CACHE_ALWAYS have been having OOM
problems of late. Commit 2455ab488 addressed this problem by recovering
space transiently used within RelationBuildPartitionDesc, but it turns
out that leaves quite a lot on the table, because other subroutines of
RelationBuildDesc also leak memory like mad. Let's move the temp-context
management into RelationBuildDesc so that leakage from the other
subroutines is also recovered.
I examined this issue by arranging for postgres.c to dump the size of
MessageContext just before resetting it in each command cycle, and
then running the update.sql regression test (which is one of the two
that are seeing buildfarm OOMs) with and without CLOBBER_CACHE_ALWAYS.
Before 2455ab488, the peak space usage with CCA was as much as 250MB.
That patch got it down to ~80MB, but with this patch it's about 0.5MB,
and indeed the space usage now seems nearly indistinguishable from a
non-CCA build.
RelationBuildDesc's traditional behavior of not worrying about leaking
transient data is of many years' standing, so I'm pretty hesitant to
change that without more evidence that it'd be useful in a normal build.
(So far as I can see, non-CCA memory consumption is about the same with
or without this change, which if anything suggests that it isn't useful.)
Hence, configure the patch so that we recover space only when
CLOBBER_CACHE_ALWAYS or CLOBBER_CACHE_RECURSIVELY is defined. However,
that choice can be overridden at compile time, in case somebody would
like to do some performance testing and try to develop evidence for
changing that decision.
It's possible that we ought to back-patch this change, but in the
absence of back-branch OOM problems in the buildfarm, I'm not in
a hurry to do that.
Discussion: https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
|
|
Previously, the SERIALIZABLE isolation level prevented parallel query
from being used. Allow the two features to be used together by
sharing the leader's SERIALIZABLEXACT with parallel workers.
An extra per-SERIALIZABLEXACT LWLock is introduced to make it safe to
share, and new logic is introduced to coordinate the early release
of the SERIALIZABLEXACT required for the SXACT_FLAG_RO_SAFE
optimization, as follows:
The first backend to observe the SXACT_FLAG_RO_SAFE flag (set by
some other transaction) will 'partially release' the SERIALIZABLEXACT,
meaning that the conflicts and locks it holds are released, but the
SERIALIZABLEXACT itself will remain active because other backends
might still have a pointer to it.
Whenever any backend notices the SXACT_FLAG_RO_SAFE flag, it clears
its own MySerializableXact variable and frees local resources so that
it can skip SSI checks for the rest of the transaction. In the
special case of the leader process, it transfers the SERIALIZABLEXACT
to a new variable SavedSerializableXact, so that it can be completely
released at the end of the transaction after all workers have exited.
Remove the serializable_okay flag added to CreateParallelContext() by
commit 9da0cc35, because it's now redundant.
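The user-visible effect is simply that a query can now use a parallel plan
under SERIALIZABLE; a sketch with a hypothetical table large enough to be
scanned in parallel:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    EXPLAIN (COSTS OFF) SELECT count(*) FROM big_table;  -- may show a Gather node
    COMMIT;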
Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Robert Haas, Masahiko Sawada, Kevin Grittner
Discussion: https://postgr.es/m/CAEepm=0gXGYhtrVDWOTHS8SQQy_=S9xo+8oCxGLWZAOoeJ=yzQ@mail.gmail.com
|
|
The SQL:2016 standard adds support for the hyperbolic functions
sinh(), cosh(), and tanh(). POSIX has long required libm to
provide those functions as well as their inverses asinh(),
acosh(), atanh(). Hence, let's just expose the libm functions
to the SQL level. As with the trig functions, we only implement
versions for float8, not numeric.
For the moment, we'll assume that all platforms actually do have
these functions; if experience teaches otherwise, some autoconf
effort may be needed.
SQL:2016 also adds support for base-10 logarithm, but with the
function name log10(), whereas the name we've long used is log().
Add aliases named log10() for the float8 and numeric versions.
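For example:
    SELECT sinh(1), cosh(1), tanh(1);
    SELECT asinh(1), acosh(1), atanh(0.5);
    SELECT log10(1000.0);   -- 3, same as log(1000.0)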
Lætitia Avrot
Discussion: https://postgr.es/m/CAB_COdguG22LO=rnxDQ2DW1uzv8aQoUzyDQNJjrR4k00XSgm5w@mail.gmail.com
|
|
Historically guc.c has just refused examples like set work_mem = '30.1GB',
but it seems more useful for it to take that and round off the value to
some reasonable approximation of what the user said. Just rounding to
the parameter's native unit would work, but it would lead to rather
silly-looking settings, such as 31562138kB for this example. Instead
let's round to the nearest multiple of the next smaller unit (if any),
producing 30822MB.
Also, do the units conversion math in floating point and round to integer
(if needed) only at the end. This produces saner results for inputs that
aren't exact multiples of the parameter's native unit, and removes another
difference in the behavior for integer vs. float parameters.
In passing, document the ability to use hex or octal input where it
ought to be documented.
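For example, after this change:
    SET work_mem = '30.1GB';
    SHOW work_mem;   -- 30822MB, rounded to the next smaller unit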
Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
|