|
I accidentally missed adding this when adjusting 55fe60938 for
back-patching. This adjustment was made for 9.6 through 13; 14 and master
are not affected.
Discussion: https://postgr.es/m/CAApHDvp=twCsGAGQG=A=cqOaj4mpknPBW-EZB-sd+5ZS5gCTtA@mail.gmail.com
|
|
Due to how pg_size_pretty(bigint) was implemented, the value returned for
a negative number of bytes could fail to match the value returned for the
equivalent positive number of bytes. This was due to two separate issues.
1. The function used bit shifting to convert the number of bytes into
larger units. The rounding performed by bit shifting is not the same as
dividing. For example -3 >> 1 = -2, but -3 / 2 = -1. These two
operations are only equivalent with positive numbers.
2. The half_rounded() macro rounded towards positive infinity. This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.
Here we fix #1 by dividing the values instead of bit shifting. We fix #2
by adjusting the half_rounded macro so that it always rounds away from zero.
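To see the difference concretely, here is a minimal standalone
demonstration (not the actual dbsize.c code; the macro shown matches the
fixed behavior described above):

    #include <stdio.h>

    /*
     * Round half away from zero for both signs.  The old macro effectively
     * rounded towards positive infinity, so negative inputs rounded towards
     * zero instead of away from it.
     */
    #define half_rounded(x) (((x) + ((x) < 0 ? -1 : 1)) / 2)

    int
    main(void)
    {
        long long bytes = -3;

        printf("-3 >> 1 = %lld\n", bytes >> 1);   /* -2: shift rounds down */
        printf("-3 / 2  = %lld\n", bytes / 2);    /* -1: division rounds to zero */
        printf("half_rounded(-3) = %lld\n",
               (long long) half_rounded(bytes));  /* -2: away from zero */
        return 0;
    }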
Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting. A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right. However, that function was
calculating the divisor from the number of bits and performed division.
Here we make that more clear. This change is just cosmetic and does not
affect the return value of the numeric version of the function.
Here we also add a set of regression tests for both versions of
pg_size_pretty() which test the values directly before and after the
function switches to the next unit.
This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.
Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
|
|
Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled. It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.
There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive. Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.
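In code form, the gate now looks roughly like this (a hedged sketch; the
exact spelling differs between branches):

    /*
     * Only force a rebuild of a valid opclass cache entry at the highest
     * debug level; at levels 1 and 2, trust the cached entry.
     */
    if (opcentry->valid && debug_invalidate_system_caches_always < 3)
        return opcentry;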
While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry. (That seems to be my
oversight in 03ffc4d6d.) Also, in >= v13, stop allocating one array
entry too many. That's evidently left over from sloppy reversion in
851b14b0c.
Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.
Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
|
|
In the previous commit, I missed that relmap_redo() was also not acquiring the
RelationMappingLock. Thanks to Thomas Munro for pointing that out.
Backpatch-through: 9.6, like previous commit.
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGLev%3DPpOSaL3WRZgOvgk217et%2BbxeJcRr4eR-NttP1F6Q%40mail.gmail.com
|
|
Contrary to the comment here, POSIX does not guarantee atomicity of a
read(), if another process calls write() concurrently. Or at least Linux
does not. Add locking to load_relmap_file() to avoid the race condition.
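A hedged sketch of the shape of the fix, reusing the existing
RelationMappingLock (the real load_relmap_file() differs in detail):

    /* Keep a concurrent write_relmap_file() from being seen half-done. */
    LWLockAcquire(RelationMappingLock, LW_SHARED);
    r = read(fd, map, sizeof(RelMapFile));
    LWLockRelease(RelationMappingLock);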
Fixes bug #17064. Thanks to Alexander Lakhin for the report and test case.
Backpatch-through: 9.6, all supported versions.
Discussion: https://www.postgresql.org/message-id/17064-bb0d7904ef72add3@postgresql.org
|
|
Previously, a zero value for the relfilenode resulted in
a confusing error message about "unexpected duplicate".
This function returns NULL for other invalid relfilenode
values, so zero should be treated likewise.
It's been like this all along, so back-patch to all supported
branches.
Justin Pryzby
Discussion: https://postgr.es/m/20210612023324.GT16435@telsasoft.com
|
|
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets. That resulted in a bad plan and
a null-pointer dereference in the executor.
Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable(). That'd just lead to
silly estimates, but it should be fixed too.
Per private report from Yaoguang Chen.
Back-patch to all supported branches.
|
|
While we were (mostly) careful about ensuring that the dimensions of
arrays aren't large enough to cause integer overflow, the lower bound
values were generally not checked. This allows situations where
lower_bound + dimension overflows an integer. It seems that that's
harmless so far as array reading is concerned, except that array
elements with subscripts notionally exceeding INT_MAX are inaccessible.
However, it confuses various array-assignment logic, resulting in a
potential for memory stomps.
Fix by adding checks that array lower bounds aren't large enough to
cause lower_bound + dimension to overflow. (Note: this results in
disallowing cases where the last subscript position would be exactly
INT_MAX. In principle we could probably allow that, but there's a lot
of code that computes lower_bound + dimension and would need adjustment.
It seems doubtful that it's worth the trouble/risk to allow it.)
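A hedged sketch of the added check (the committed helper may be shaped
differently, and older branches would open-code the overflow test rather
than rely on pg_add_s32_overflow() from common/int.h):

    /*
     * Reject lower bounds for which lb + dim would overflow int32.  This
     * also rejects a last subscript of exactly INT_MAX, as discussed.
     */
    for (i = 0; i < ndim; i++)
    {
        int32       sum;

        if (dims[i] != 0 && pg_add_s32_overflow(lb[i], dims[i], &sum))
            ereport(ERROR,
                    (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
                     errmsg("array lower bound is too large: %d", lb[i])));
    }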
Somewhat independently of that, array_set_element() was careless
about possible overflow when checking the subscript of a fixed-length
array, creating a different route to memory stomps. Fix that too.
Security: CVE-2021-32027
|
|
Most GUC check hooks that inspect database state have special checks
that prevent them from throwing hard errors for state-dependent issues
when source == PGC_S_TEST. This allows, for example,
"ALTER DATABASE d SET default_text_search_config = foo" when the "foo"
configuration hasn't been created yet. Without this, we have problems
during dump/reload or pg_upgrade, because pg_dump has no idea about
possible dependencies of GUC values and can't ensure a safe restore
ordering.
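The standard shape of such a hook looks roughly like this (a hedged sketch
with a hypothetical object_exists() helper, not the exact guc.c code):

    static bool
    check_example_guc(char **newval, void **extra, GucSource source)
    {
        if (!object_exists(*newval))    /* hypothetical state-dependent test */
        {
            /*
             * When merely testing the value, as during dump/reload or
             * pg_upgrade, warn but accept it anyway.
             */
            if (source == PGC_S_TEST)
            {
                ereport(NOTICE,
                        (errmsg("\"%s\" does not exist yet", *newval)));
                return true;
            }
            GUC_check_errmsg("\"%s\" does not exist", *newval);
            return false;
        }
        return true;
    }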
However, check_role() and check_session_authorization() hadn't gotten
the memo about that, and would throw hard errors anyway. It's not
entirely clear what the use-case is for "ALTER ROLE x SET role = y",
but we've now heard two independent complaints about that bollixing
an upgrade, so apparently some people are doing it.
Hence, fix these two functions to act more like other check hooks
with similar needs. (But I did not change their insistence on
being inside a transaction, as it's still not apparent that setting
either GUC from the configuration file would be wise.)
Also fix check_temp_buffers, which had a different form of the disease
of making state-dependent checks without any exception for PGC_S_TEST.
A cursory survey of other GUC check hooks did not find any more issues
of this ilk. (There are a lot of interdependencies among
PGC_POSTMASTER and PGC_SIGHUP GUCs, which may be a bad idea, but
they're not relevant to the immediate concern because they can't be
set via ALTER ROLE/DATABASE.)
Per reports from Charlie Hornsby and Nathan Bossart. Back-patch
to all supported branches.
Discussion: https://postgr.es/m/HE1P189MB0523B31598B0C772C908088DB7709@HE1P189MB0523.EURP189.PROD.OUTLOOK.COM
Discussion: https://postgr.es/m/20160711223641.1426.86096@wrigleys.postgresql.org
|
|
Using Roman numerals (via "RM" or "rm") in a conversion to calculate a
number of months never considered the case of negative numbers, where a
conversion could easily cause out-of-bounds memory accesses. The
conversions themselves were not completely consistent either: specifying
12 would result in NULL when it should mean XII.
This commit reworks the conversion calculation to have a more consistent
behavior (see the sketch after this list):
- If the number of months and years is 0, return NULL.
- If the number of months is positive, return the exact month number.
- If the number of months is negative, do a backward calculation, with
-1 meaning December, -2 November, etc.
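In code, the mapping might look like this (a hedged sketch with a
hypothetical helper name; formatting.c does the equivalent inline):

    /* Map a month count onto the 1..12 index used to pick a Roman numeral. */
    static int
    rm_month_index(int months)
    {
        if (months == 0)
            return 0;                       /* caller emits NULL */
        if (months > 0)
            return months;                  /* exact month number */
        return 12 - ((-months - 1) % 12);   /* -1 => XII, -2 => XI, ... */
    }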
Reported-by: Theodor Arsenij Larionov-Trichkin
Author: Julien Rouhaud
Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
Backpatch-through: 9.6
|
|
Author: Daniel Westermann
Backpatch-through: 9.6
Discussion: https://postgr.es/m/GV0P278MB0483A7AA85BAFCC06D90F453D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM
|
|
Because guc.c prefers to keep all its string values in malloc'd
not palloc'd storage, it has to be more careful than usual to
avoid leaks. Error exits out of string GUC hook checks failed
to clear the proposed value string, and error exits out of
ProcessGUCArray() failed to clear the malloc'd results of
ParseLongOption().
Found via valgrind testing.
This problem is ancient, so back-patch to all supported branches.
Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
|
|
Back-patch a tiny bit of commit fbb2e9a0 into 9.6 and 10, to silence an
uninitialized variable warning from GCC 10.2. Seen on buildfarm member
handfish, and my own development workflow where I like to use -Werror.
Discussion: https://postgr.es/m/CA%2BhUKGJRcwvK86Uf5t-FrTekZjqHtpv3u%3D3MuBg8Zw8R933Mqg%40mail.gmail.com
|
|
FreeBSD 13 gained O_DSYNC, which would normally cause wal_sync_method to
choose open_datasync as its default value. That may not be a good
choice for all systems, and performs worse than fdatasync in some
scenarios. Let's preserve the existing default behavior for now.
Like commit 576477e73c4, which did the same for Linux, back-patch to all
supported releases.
Discussion: https://postgr.es/m/CA%2BhUKGLsAMXBQrCxCXoW-JsUYmdOL8ALYvaX%3DCrHqWxm-nWbGA%40mail.gmail.com
|
|
Given a regex pattern with a very long fixed prefix (approaching 500
characters), the result of pow(FIXED_CHAR_SEL, fixed_prefix_len) can
underflow to zero. Typically the preceding selectivity calculation
would have underflowed as well, so that we compute 0/0 and get NaN.
In released branches this leads to an assertion failure later on.
That doesn't happen in HEAD, for reasons I've not explored yet,
but it's surely still a bug.
To fix, just skip the division when the pow() result is zero, so
that we'll (most likely) return a zero selectivity estimate. In
the edge cases where "sel" didn't yet underflow, perhaps this
isn't desirable, but I'm not sure that the case is worth spending
a lot of effort on. The results of regex_selectivity_sub() are
barely worth the electrons they're written on anyway :-(
Per report from Alexander Lakhin. Back-patch to all supported versions.
Discussion: https://postgr.es/m/6de0a0c3-ada9-cd0c-3e4e-2fa9964b41e3@gmail.com
|
|
I chanced to notice that this dumped core due to a faulty Assert.
To add insult to injury, the output has been misformatted since v11.
Obviously we need some regression testing here.
Discussion: https://postgr.es/m/d1cc628c-3953-4209-957b-29427acc38c8@www.fastmail.com
|
|
Commit bc43b7c2c0 used fabs() directly on an int variable, which
apparently requires an explicit cast on some platforms.
Per buildfarm.
|
|
In power_var_int(), the computation of the number of significant
digits to use in the computation used log(Abs(exp)), which isn't safe
because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs()
instead of Abs(), so that the exponent is cast to a double before the
absolute value is taken.
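Schematically (a hedged sketch, not the full power_var_int() code):

    int     exp = INT_MIN;
    double  f;

    /*
     * Abs() negates in integer arithmetic, and -INT_MIN overflows back to
     * INT_MIN on two's-complement machines, so log() would see a negative
     * argument and return NaN.  Cast to double first, then take fabs().
     */
    f = log(fabs((double) exp));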
Back-patch to 9.6, where this was introduced (by 7d9a4737c2).
Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xC6cgWrWA@mail.gmail.com
|
|
The code in charge of processing a single invalidation message has, since
568d413, been using the structure for relation mapping messages. This
fortunately had no consequence, as both structures locate the database ID
at the same offset, but it could become a problem in the future if this
area of the code changes.
Author: Konstantin Knizhnik
Discussion: https://postgr.es/m/8044c223-4d3a-2cdb-42bf-29940840ce94@postgrespro.ru
Backpatch-through: 9.5
|
|
This makes existing sessions reflect "ALTER ROLE ... [NO]INHERIT" as
quickly as they have been reflecting "GRANT role_name". Back-patch to
9.5 (all supported versions).
Reviewed by Nathan Bossart.
Discussion: https://postgr.es/m/20201221095028.GB3777719@rfd.leadboat.com
|
|
The jsonb || jsonb operator arbitrarily rejected certain combinations
of scalar and non-scalar inputs, while being willing to concatenate
other combinations. This was of course quite undocumented. Rather
than trying to document it, let's just remove the restriction,
creating a uniform rule that unless we are handling an object-to-object
concatenation, non-array inputs are converted to one-element arrays,
resulting in an array-to-array concatenation. (This does not change
the behavior for any case that didn't throw an error before.)
Per complaint from Joel Jacobson. Back-patch to all supported branches.
Discussion: https://postgr.es/m/163099.1608312033@sss.pgh.pa.us
|
|
A narrow reading of the C standard says that memcpy(x,x,n) is undefined,
although it's hard to envision an implementation that would really
misbehave. However, analysis tools such as valgrind might whine about
this; accordingly, let's band-aid relmapper.c to not do it.
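The band-aid is as simple as it sounds (hedged sketch):

    /* memcpy() with identical pointers is technically undefined; skip it. */
    if (dst != src)
        memcpy(dst, src, n);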
See also 5b630501e, d3f4e8a8a, ad7b48ea0, and other similar fixes.
Apparently, none of those folk tried valgrinding initdb? This has been
like this for long enough that I'm surprised it hasn't been reported
before.
Back-patch, just in case anybody wants to use a back branch on a platform
that complains about this; we back-patched those earlier fixes too.
Discussion: https://postgr.es/m/161790.1608310142@sss.pgh.pa.us
|
|
Introduce TimestampDifferenceMilliseconds() to simplify callers
that would rather have the difference in milliseconds, instead of
the select()-oriented seconds-and-microseconds format. This gets
rid of at least one integer division per call, and it eliminates
some apparently-easy-to-mess-up arithmetic.
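Roughly, the new helper looks like this (a hedged sketch; the committed
version may differ, e.g. in clamping or overflow handling):

    long
    TimestampDifferenceMilliseconds(TimestampTz start_time, TimestampTz stop_time)
    {
        TimestampTz diff = stop_time - start_time;  /* in microseconds */

        if (diff <= 0)
            return 0;
        return (long) ((diff + 999) / 1000);        /* round up to whole ms */
    }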
Two of these call sites were in fact wrong:
* pg_prewarm's autoprewarm_main() forgot to multiply the seconds
by 1000, thus ending up with a delay 1000X shorter than intended.
That doesn't quite make it a busy-wait, but close.
* postgres_fdw's pgfdw_get_cleanup_result() thought it needed to compute
microseconds not milliseconds, thus ending up with a delay 1000X longer
than intended. Somebody along the way had noticed this problem but
misdiagnosed the cause, and imposed an ad-hoc 60-second limit rather
than fixing the units. This was relatively harmless in context, because
we don't care that much about exactly how long this delay is; still,
it's wrong.
There are a few more callers of TimestampDifference() that don't
have a direct need for seconds-and-microseconds, but can't use
TimestampDifferenceMilliseconds() either because they do need
microsecond precision or because they might possibly deal with
intervals long enough to overflow 32-bit milliseconds. It might be
worth inventing another API to improve that, but that seems outside
the scope of this patch; so those callers are untouched here.
Given the fact that we are fixing some bugs, and the likelihood
that future patches might want to back-patch code that uses this
new API, back-patch to all supported branches.
Alexey Kondratov and Tom Lane
Discussion: https://postgr.es/m/3b1c053a21c07c1ed5e00be3b2b855ef@postgrespro.ru
|
|
The prohibitValueChange code paths in set_config_option(), which
are executed whenever we re-read a PGC_POSTMASTER variable from
postgresql.conf, neglected to free anything before exiting. Thus
we'd leak the proposed new value of a PGC_STRING variable, as noted
by BoChen in bug #16666. For all variable types, if the check hook
creates an "extra" chunk, we'd also leak that.
These are malloc not palloc chunks, so there is no mechanism for
recovering the leaks before process exit. Fortunately, the values
are typically not very large, meaning you'd have to go through an
awful lot of SIGHUP configuration-reload cycles to make the leakage
amount to anything. Still, for a long-lived postmaster process it
could potentially be a problem.
Oversight in commit 2594cf0e8. Back-patch to all supported branches.
Discussion: https://postgr.es/m/16666-2c41a4eec61b03e1@postgresql.org
|
|
Previously, a conversion such as
to_date('-44-02-01','YYYY-MM-DD')
would result in '0045-02-01 BC', as the code attempted to interpret
the negative year as BC, but failed to apply the correction needed
for our internal handling of BC years. Fix the off-by-one problem.
Also, arrange for the combination of a negative year and an
explicit "BC" marker to cancel out and produce AD. This is how
the negative-century case works, so it seems sane to do likewise.
Continue to read "year 0000" as 1 BC. Oracle would throw an error,
but we've accepted that case for a long time so I'm hesitant to
change it in a back-patch.
Per bug #16419 from Saeed Hubaishan. Back-patch to all supported
branches.
Dar Alathar-Yemen and Tom Lane
Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
|
|
Several PGXN modules reference LockTagType values; renumbering would
force a recompile of those modules. Oversight in back-patch of today's
commit 566372b3d6435639e4cc4476d79b8505a0297c87. Back-patch to released
branches, v12 through 9.5.
Reported by Tom Lane.
Discussion: https://postgr.es/m/921383.1597523945@sss.pgh.pa.us
|
|
The SimpleLruTruncate() header comment states the new coding rule. To
achieve this, add locktype "frozenid" and two LWLocks. This closes a
rare opportunity for data loss, which manifested as "apparent
wraparound" or "could not access status of transaction" errors. Data
loss is more likely in pg_multixact, due to released branches' thin
margin between multiStopLimit and multiWrapLimit. If a user's physical
replication primary logged ": apparent wraparound" messages, the user
should rebuild standbys of that primary regardless of symptoms. At less
risk is a cluster having emitted "not accepting commands" errors or
"must be vacuumed" warnings at some point. One can test a cluster for
this data loss by running VACUUM FREEZE in every database. Back-patch
to 9.5 (all supported versions).
Discussion: https://postgr.es/m/20190218073103.GA1434723@rfd.leadboat.com
|
|
Up to now, upon receipt of a SIGTERM ("smart shutdown" command), the
postmaster has immediately killed all "optional" background processes,
and subsequently refused to launch new ones while it's waiting for
foreground client processes to exit. No doubt this seemed like an OK
policy at some point; but it's a pretty bad one now, because it makes
for a seriously degraded environment for the remaining clients:
* Parallel queries are killed, and new ones fail to launch. (And our
parallel-query infrastructure utterly fails to deal with the case
in a reasonable way --- it just hangs waiting for workers that are
not going to arrive. There is more work needed in that area IMO.)
* Autovacuum ceases to function. We can tolerate that for a while,
but if bulk-update queries continue to run in the surviving client
sessions, there's eventually going to be a mess. In the worst case
the system could reach a forced shutdown to prevent XID wraparound.
* The bgwriter and walwriter are also stopped immediately, likely
resulting in performance degradation.
Hence, let's rearrange things so that the only immediate change in
behavior is refusing to let in new normal connections. Once the last
normal connection is gone, shut everything down as though we'd received
a "fast" shutdown. To implement this, remove the PM_WAIT_BACKUP and
PM_WAIT_READONLY states, instead staying in PM_RUN or PM_HOT_STANDBY
while normal connections remain. A subsidiary state variable tracks
whether or not we're letting in new connections in those states.
This also allows having just one copy of the logic for killing child
processes in smart and fast shutdown modes. I moved that logic into
PostmasterStateMachine() by inventing a new state PM_STOP_BACKENDS.
Back-patch to 9.6 where parallel query was added. In principle
this'd be a good idea in 9.5 as well, but the risk/reward ratio
is not as good there, since lack of autovacuum is not a problem
during typical uses of smart shutdown.
Per report from Bharath Rupireddy.
Patch by me, reviewed by Thomas Munro
Discussion: https://postgr.es/m/CALj2ACXAZ5vKxT9P7P89D87i3MDO9bfS+_bjMHgnWJs8uwUOOw@mail.gmail.com
|
|
The new hlCover() algorithm that I introduced in commit c9b0c678d
turns out to potentially take O(N^2) or worse time on long documents,
if there are many occurrences of individual query words but few or no
substrings that actually satisfy the query. (One way to hit this
behavior is with a "common_word & rare_word" type of query.) This
seems unavoidable given the original goal of checking every substring
of the document, so we have to back off that idea. Fortunately, it
seems unlikely that anyone would really want headlines spanning all of
a long document, so we can avoid the worse-than-linear behavior by
imposing a maximum length of substring that we'll consider.
For now, just hard-wire that maximum length as a multiple of max_words
times max_fragments. Perhaps at some point somebody will argue for
exposing it as a ts_headline parameter, but I'm hesitant to make such
a feature addition in a back-patched bug fix.
I also noted that the hlFirstIndex() function I'd added in that
commit was unnecessarily stupid: it really only needs to check whether
a HeadlineWordEntry's item pointer is null or not. This wouldn't make
all that much difference in typical cases with queries having just
a few terms, but a cycle shaved is a cycle earned.
In addition, add a CHECK_FOR_INTERRUPTS call in TS_execute_recurse.
This ensures that hlCover's loop is cancellable if it manages to take
a long time, and it may protect some other TS_execute callers as well.
Back-patch to 9.6 as the previous commit was. I also chose to add the
CHECK_FOR_INTERRUPTS call to 9.5. The old hlCover() algorithm seems
to avoid the O(N^2) behavior, at least on the test case I tried, but
nonetheless it's not very quick on a long document.
Per report from Stephen Frost.
Discussion: https://postgr.es/m/20200724160535.GW12375@tamriel.snowman.net
|
|
Backpatch an assertion (that was originally added to Postgres 12 by
commit dd299df8189) that seems broadly useful. The assertion can detect
violations of the HOT invariant (i.e. no two index tuples can point to
the same heap TID) when CREATE INDEX somehow incorrectly allows that to
take place.
For example, an IndexBuildHeapScan/heapam_index_build_range_scan bug
might result in two tuples that both point to the same heap TID. If
these two tuples also happen to be duplicates, the assertion will fail.
Discussion: https://postgr.es/m/CAH2-WzmBxu4o=pMsniur+bwHqCGCmV_AOLkuK6BuU7ngA6evqw@mail.gmail.com
Backpatch: 9.5-11 only
|
|
Convert buffile.c error handling to use ereport. This fixes cases where
I/O errors were indistinguishable from EOF or not reported. Also remove
"%m" from error messages where errno would be bogus. While we're
modifying those strings, add block numbers and short read byte counts
where appropriate.
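In the new style, a short read is reported without %m, since errno says
nothing useful in that case (a hedged sketch, not the committed buffile.c
code; nread is an ssize_t):

    nread = read(file->fd, buffer, BLCKSZ);
    if (nread < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not read block %ld of temporary file: %m",
                        blocknum)));
    if (nread != BLCKSZ)
        ereport(ERROR,
                (errcode(ERRCODE_DATA_CORRUPTED),
                 errmsg("could not read block %ld of temporary file: read only %zd of %d bytes",
                        blocknum, nread, BLCKSZ)));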
Back-patch to all supported releases.
Reported-by: Amit Khandekar <amitdkhan.pg@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Ibrar Ahmed <ibrar.ahmad@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGJE04G%3D8TLK0DLypT_27D9dR8F1RQgNp0jK6qR0tZGWOw%40mail.gmail.com
|
|
When merging two NumericAggStates, the code missed adding the new
state's NaNcount unless its N was also nonzero; since those counts
are independent, this is wrong.
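The buggy shape versus the fix, sketched (hedged):

    if (state2->N > 0)
    {
        /* merge N, sumX, sumX2, ... */
    }

    /* NaNcount is independent of N, so it must be merged unconditionally. */
    state1->NaNcount += state2->NaNcount;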
This would only have visible effect if some partial aggregate scans
found only NaNs while earlier ones found only non-NaNs; then we could
end up falsely deciding that there were no NaNs and fail to return a
NaN final result as expected. That's pretty improbable, so it's no
surprise this hasn't been reported from the field. Still, it's a bug.
I didn't try to produce a regression test that would show the bug,
but I did notice that these functions weren't being reached at all
in our regression tests, so I improved the tests to at least
exercise them. With these additions, I see pretty complete code
coverage on the aggregation-related functions in numeric.c.
Back-patch to 9.6 where this code was introduced. (I only added
the improved test case as far back as v10, though, since the
relevant part of aggregates.sql isn't there at all in 9.6.)
|
|
This issue has been present since the introduction of this code in a3519a2,
back in 2002, and was found by buildfarm member prion, which uses
RELCACHE_FORCE_RELEASE, via the tests introduced recently in e786be5.
Discussion: https://postgr.es/m/20200601022055.GB4121@paquier.xyz
Backpatch-through: 9.5
|
|
The repeat() function loops for potentially a long time without
ever checking for interrupts. This prevents, for example, a query
cancel from interrupting until the work is all done. Fix by
inserting a CHECK_FOR_INTERRUPTS() into the loop.
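A hedged sketch of the fixed loop (repeat()'s surrounding code omitted):

    for (i = 0; i < count; i++)
    {
        CHECK_FOR_INTERRUPTS();     /* let query cancel break the loop */
        memcpy(destptr, sp, slen);
        destptr += slen;
    }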
Backpatch to all supported versions.
Discussion: https://www.postgresql.org/message-id/flat/8692553c-7fe8-17d9-cbc1-7cddb758f4c6%40joeconway.com
|
|
Visual Studio 2015 and later versions should still be able to do the same
as Visual Studio 2012, but the declaration of locale_name is missing from
_locale_t, causing compilation to fail. Hence, this instead falls back to
enumerating all system locales with EnumSystemLocalesEx to find the
required locale name. If the input argument is in Unix style, we can get
the ISO locale name directly by using GetLocaleInfoEx() with LCType set to
LOCALE_SNAME.
In passing, update the documentation references that had become obsolete
links.
Note that this problem occurs only in NLS-enabled builds.
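For the Unix-style input case, the direct lookup looks roughly like this
(hedged sketch; input_locale_wide is a hypothetical wide-string form of
the argument):

    WCHAR   isoname[LOCALE_NAME_MAX_LENGTH];

    if (GetLocaleInfoEx(input_locale_wide, LOCALE_SNAME,
                        isoname, LOCALE_NAME_MAX_LENGTH) > 0)
    {
        /* isoname now holds the ISO locale name, e.g. L"de-DE" */
    }
    else
    {
        /* fall back to scanning everything with EnumSystemLocalesEx() */
    }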
Author: Juan José Santamaría Flecha, Davinder Singh and Amit Kapila
Reviewed-by: Ranier Vilela and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAHzhFSFoJEWezR96um4-rg5W6m2Rj9Ud2CNZvV4NWc9tXV7aXQ@mail.gmail.com
|
|
checkcondition_str() failed to report multiple matches for a prefix
pattern correctly: it would dutifully merge the match positions, but
then after exiting that loop, if the last prefix-matching word had
had no suitable positions, it would report there were no matches.
The upshot would be failing to recognize matches that the query should
find.
It looks like you need all of these conditions to see the bug:
* a phrase search (else we don't ask for match position details)
* a prefix search item (else we don't get to this code)
* a weight restriction (else checkclass_str won't fail)
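A hedged sketch of the corrected control flow (helper names are
hypothetical; checkcondition_str() is organized differently):

    bool    any_match = false;

    for (i = 0; i < nwords; i++)
    {
        if (!word_matches_prefix(&words[i]))
            continue;
        if (word_has_acceptable_positions(&words[i]))   /* weight check */
        {
            merge_match_positions(data, &words[i]);
            any_match = true;
        }
    }
    /* Previously the result effectively reflected only the last word. */
    return any_match;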
Noted while investigating a problem report from Pavel Borisov,
though this is distinct from the issue he was on about.
Back-patch to 9.6 where phrase search was added.
|
|
Writing a trailing semicolon in a macro is almost never the right thing,
because you almost always want to write a semicolon after each macro
call instead. (Even if there was some reason to prefer not to, pgindent
would probably make a hash of code formatted that way; so within PG the
rule should basically be "don't do it".) Thus, if we have a semi inside
the macro, the compiler sees "something;;". Much of the time the extra
empty statement is harmless, but it could lead to mysterious syntax
errors at call sites. In perhaps an overabundance of neatnik-ism, let's
run around and get rid of the excess semicolons wherever possible.
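For illustration, the failure mode looks like this (a hedged example, not
code from the tree):

    /* Wrong: the macro supplies its own semicolon ... */
    #define FREE_AND_NULL_BAD(p) do { free(p); (p) = NULL; } while (0);

    /* Right: every call site supplies the semicolon. */
    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

    if (cond)
        FREE_AND_NULL_BAD(ptr);     /* expands to "...;;": the extra empty
                                     * statement ends the if, so the else
                                     * below no longer parses */
    else
        do_something();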
The only thing worse than a mysterious syntax error is a mysterious
syntax error that only happens in the back branches; therefore,
backpatch these changes where relevant, which is most of them because
most of these mistakes are old. (The lack of reported problems shows
that this is largely a hypothetical issue, but still, it could bite
us in some future patch.)
John Naylor and Tom Lane
Discussion: https://postgr.es/m/CACPNZCs0qWTqJ2QUSGJ07B7uvAvzMb-KbG2q+oo+J3tsWN5cqw@mail.gmail.com
|
|
Queries such as '!(foo<->bar)' failed to find matching rows when
implemented as a GiST or GIN index search. That's because of
failing to handle phrase searches as tri-valued when considering
a query without any position information for the target tsvector.
We can only say that the phrase operator might match, not that it
does match; and therefore its NOT also might match. The previous
coding incorrectly inverted the approximate phrase result to
decide that there was certainly no match.
To fix, we need to make TS_phrase_execute return a real ternary result,
and then bubble that up accurately in TS_execute. As long as we have
to do that anyway, we can simplify the baroque things TS_phrase_execute
was doing internally to manage tri-valued searching with only a bool
as explicit result.
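Sketched (hedged; the committed enum is TSTernaryValue, but ternary_not()
is a hypothetical illustration):

    typedef enum
    {
        TS_NO,          /* definitely no match */
        TS_YES,         /* definitely a match */
        TS_MAYBE        /* no position data available, so we can't tell */
    } TSTernaryValue;

    /* NOT must preserve uncertainty rather than invert it. */
    static TSTernaryValue
    ternary_not(TSTernaryValue r)
    {
        if (r == TS_MAYBE)
            return TS_MAYBE;
        return (r == TS_YES) ? TS_NO : TS_YES;
    }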
For now, I left the externally-visible result of TS_execute as a plain
bool. There do not appear to be any outside callers that need to
distinguish a three-way result, given that they passed in a flag
saying what to do in the absence of position data. This might need
to change someday, but we wouldn't want to back-patch such a change.
Although tsginidx.c has its own TS_execute_ternary implementation for
use at upper index levels, that sadly managed to get this case wrong
as well :-(. Fixing it is a lot easier fortunately.
Per bug #16388 from Charles Offenbacher. Back-patch to 9.6 where
phrase search was introduced.
Discussion: https://postgr.es/m/16388-98cffba38d0b7e6e@postgresql.org
|
|
Our documentation describes four allowed input syntaxes for circles,
but the regression tests tried only three ... with predictable
consequences. Remarkably, this has been wrong since the circle
datatype was added in 1997, but nobody noticed till now.
David Zhang, with some help from me
Discussion: https://postgr.es/m/332c47fa-d951-7574-b5cc-a8f7f7201202@highgo.ca
|
|
Since the existing bit number argument can't exceed INT32_MAX, it's
not possible for these functions to manipulate bits beyond the first
256MB of a bytea value. However, it'd be good if they could do at
least that much, and not fall over entirely for longer bytea values.
Adjust the comparisons to be done in int64 arithmetic so that works.
Also tweak the error reports to show sane values in case of overflow.
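The comparison change, sketched (hedged):

    /* Compute the bit length in int64 so "size * 8" can't overflow int32. */
    int64   len = (int64) VARSIZE_ANY_EXHDR(v) * 8;

    if (n < 0 || n >= len)
        ereport(ERROR,
                (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
                 errmsg("index %d out of valid range, 0..%lld",
                        n, (long long) (len - 1))));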
Also add some test cases to improve the miserable code coverage
of these functions.
Apply patch to back branches only; HEAD has a better solution
as of commit 26a944cf2.
Extracted from a much larger patch by Movead Li
Discussion: https://postgr.es/m/20200312115135445367128@highgo.ca
|
|
A table rewritten by ALTER TABLE would lose tracking of an index usable
for CLUSTER. This setting is tracked by pg_index.indisclustered and is
controlled by ALTER TABLE, so some extra work was needed to restore it
properly. Note that ALTER TABLE only marks the index that can be used
for clustering, and does not do the actual operation.
Author: Amit Langote, Justin Pryzby
Reviewed-by: Ibrar Ahmed, Michael Paquier
Discussion: https://postgr.es/m/20200202161718.GI13621@telsasoft.com
Backpatch-through: 9.5
|
|
In 9.4 I added support to use a historical snapshot in
ScanPgRelation(), while adding logical decoding. Unfortunately a
conflict with the concurrent removal of SnapshotNow was incorrectly
resolved, leading to an unregistered snapshot being used.
It is not correct to use an unregistered (or non-active) snapshot for
anything non-trivial, because catalog invalidations can cause the
snapshot to be invalidated.
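The fix's shape, sketched (hedged; ScanPgRelation()'s actual scan setup is
abbreviated):

    /* Register the snapshot so invalidations can't yank it away mid-scan. */
    snapshot = RegisterSnapshot(GetCatalogSnapshot(RelationRelationId));
    pg_class_scan = systable_beginscan(pg_class_desc, indexoid, true,
                                       snapshot, 1, key);
    /* ... fetch the single expected tuple ... */
    systable_endscan(pg_class_scan);
    UnregisterSnapshot(snapshot);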
Luckily it seems unlikely to actively cause problems in practice, as
ScanPgRelation() requires that we already have a lock on the relation,
we only look for a single row, and we don't appear to rely on the
result's tid to be correct. It however is clearly wrong and potential
negative consequences would likely be hard to find. So it seems worth
backpatching the fix, even without a concrete hazard.
Discussion: https://postgr.es/m/20200229052459.wzhqnbhrriezg4v2@alap3.anarazel.de
Backpatch: 9.5-
|
|
This reverts commit cb2fd7eac285b1b0a24eeb2b8ed4456b66c5a09f. Per
numerous buildfarm members, it was incompatible with parallel query, and
a test case assumed LP64. Back-patch to 9.5 (all supported versions).
Discussion: https://postgr.es/m/20200321224920.GB1763544@rfd.leadboat.com
|
|
Until now, only selected bulk operations (e.g. COPY) did this. If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY. See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules. Maintainers of table access
methods should examine that section.
To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL. A new GUC,
wal_skip_threshold, guides that choice. If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold. Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.
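At commit time, the choice between the two durability strategies is driven
by the new GUC, roughly like this (a hedged sketch, not the exact committed
code; "srel" and "rel" stand for the smgr and relcache handles):

    if ((uint64) nblocks * BLCKSZ >= (uint64) wal_skip_threshold * 1024)
        smgrimmedsync(srel, MAIN_FORKNUM);      /* large: just fsync it */
    else
        log_newpage_range(rel, MAIN_FORKNUM, 0, nblocks, true);
                                                /* small: copy it into WAL */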
Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node. Make relcache.c retain entries for certain
dropped relations until end of transaction.
Back-patch to 9.5 (all supported versions). This introduces a new WAL
record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC. As
always, update standby systems before master systems. This changes
sizeof(RelationData) and sizeof(IndexStmt), breaking binary
compatibility for affected extensions. (The most recent commit to
affect the same class of extensions was
089e4d405d0f3b94c74a2c6a54357a84a681754b.)
Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas. Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem. Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.
Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
|
|
Remove an obsolete comment from AtEOXact_cleanup(). Restore formatting
of a comment in struct RelationData, mangled by the pgindent run in
commit 9af4159fce6654aa0e081b00d02bca40b978745c. Back-patch to 9.5 (all
supported versions), because another fix stacks on this.
|
|
This extends the fixes made in commit 085b6b667 to other SRFs with the
same bug, namely pg_logdir_ls(), pgrowlocks(), pg_timezone_names(),
pg_ls_dir(), and pg_tablespace_databases().
Also adjust various comments and documentation to warn against
expecting to clean up resources during a ValuePerCall SRF's final
call.
Back-patch to all supported branches, since these functions were
all born broken.
Justin Pryzby, with cosmetic tweaks by me
Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com
|
|
resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes() failed to
cover the case of having to resolve anyarray given only an anyrange input.
The bug was masked if anyelement was also used (as either input or
output), which probably helps account for our not having noticed.
While looking at this I noticed that resolve_generic_type() would produce
the wrong answer if asked to make that same resolution. ISTM that
resolve_generic_type() is confusingly defined and overly complex, so
rather than fix it, let's just make funcapi.c do the specific lookups
it requires for itself.
With this change, resolve_generic_type() is not used anywhere, so remove
it in HEAD. In the back branches, leave it alone (complete with bug)
just in case any external code is using it.
While we're here, make some other refactoring adjustments in funcapi.c
with an eye to upcoming future expansion of the set of polymorphic types:
* Simplify quick-exit tests by adding an overall have_polymorphic_result
flag. This is about a wash now but will be a win when there are more
flags.
* Reduce duplication of code between resolve_polymorphic_tupdesc() and
resolve_polymorphic_argtypes().
* Don't bother to validate correct matching of anynonarray or anyenum;
the parser should have done that, and even if it didn't, just doing
"return false" here would lead to a very confusing, off-point error
message. (Really, "return false" in these two functions should only
occur if the call_expr isn't supplied or we can't obtain data type
info from it.)
* For the same reason, throw an elog rather than "return false" if
we fail to resolve a polymorphic type.
The bug's been there since we added anyrange, so back-patch to
all supported branches.
Discussion: https://postgr.es/m/6093.1584202130@sss.pgh.pa.us
|
|
If an index was explicitly set as replica identity index, this setting
was lost when a table was rewritten by ALTER TABLE. Because this
setting is part of pg_index but actually controlled by ALTER
TABLE (not part of CREATE INDEX, say), we have to do some extra work
to restore it.
Based-on-patch-by: Quan Zongliang <quanzongliang@gmail.com>
Reviewed-by: Euler Taveira <euler.taveira@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c70fcab2-4866-0d9f-1d01-e75e189db342@gmail.com
|
|
Commit fc7695891 changed CheckAttributeType to recurse into ranges,
but made it pass down the wrong collation (always InvalidOid, since
ranges as such have no collation). This would result in guaranteed
failure when considering a range type whose subtype is collatable.
Embarrassingly, we lack any regression tests that would expose such
a problem (but fortunately, somebody noticed before we shipped this
bug in any release).
Fix it to pass down the range's subtype collation property instead,
and add some regression test cases to exercise collatable-subtype
ranges a bit more. Back-patch to all supported branches, as the
previous patch was.
Report and patch by Julien Rouhaud, test cases tweaked by me
Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
|
|
I had supposed that the from_char_seq_search() call sites were
all passing the constant arrays you'd expect them to pass ...
but on looking closer, the one for DY format was passing the
days[] array not days_short[]. This accidentally worked because
the day abbreviations in English are all the same as the first
three letters of the full day names. However, once we took out
the "maximum comparison length" logic, it stopped working.
As penance for that oversight, add regression test cases covering
this, as well as every other switch case in DCH_from_char() that
was not reached according to the code coverage report.
Also, fold the DCH_RM and DCH_rm cases into one --- now that
seq_search is case independent, there's no need to pass different
comparison arrays for those cases.
Back-patch, as the previous commit was.
|