Commit log for path: src/backend/utils/adt
2022-04-20  Disallow infinite endpoints in generate_series() for timestamps. (Tom Lane)
Such cases will lead to infinite loops, so they're of no practical value. The numeric variant of generate_series() already threw error for this, so borrow its message wording. Per report from Richard Wesley. Back-patch to all supported branches. Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
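For illustration, a call of this shape now fails up front instead of looping forever (the endpoint values here are arbitrary; exact error wording not quoted):

    SELECT * FROM generate_series('2022-01-01'::timestamptz,
                                  'infinity'::timestamptz,
                                  '1 day'::interval);
    -- now raises an error, with wording borrowed from the numeric variant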
2022-04-19  Fix extract epoch from interval calculation (Peter Eisentraut)
The new numeric code for extract epoch from interval accidentally truncated the DAYS_PER_YEAR value to an integer, leading to results that mismatched the floating-point interval_part calculations. The commit a2da77cdb4661826482ebf2ddba1f953bc74afe4 that introduced this actually contains the regression test change that this reverts. I suppose this was missed at the time. Reported-by: Joseph Koshakow <koshy44@gmail.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/CAAvxfHd5n%3D13NYA2q_tUq%3D3%3DSuWU-CufmTf-Ozj%3DfrEgt7pXwQ%40mail.gmail.com
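As a worked example, assuming the stock DAYS_PER_YEAR of 365.25: extract(epoch FROM interval '1 year') should yield 365.25 * 86400 = 31557600, whereas truncating DAYS_PER_YEAR to 365 gives 31536000.

    SELECT extract(epoch FROM interval '1 year');  -- expected: 31557600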
2022-03-27  Fix NULL input behaviour of pg_stat_get_replication_slot(). (Andres Freund)
pg_stat_get_replication_slot() accidentally was marked as non-strict, crashing when called with NULL input. As it's already released, introduce an explicit NULL check in 14, fix the catalog in HEAD. Bumps catversion in HEAD. Discussion: https://postgr.es/m/20220326212432.s5n2maw6kugnpyxw@alap3.anarazel.de Backpatch: 14-, where replication slot stats were introduced
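A sketch of the call that previously crashed (the slot name argument is simply NULL):

    SELECT pg_stat_get_replication_slot(NULL);  -- now returns NULL instead of crashing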
2022-03-21  Fix assorted missing logic for GroupingFunc nodes. (Tom Lane)
The planner needs to treat GroupingFunc like Aggref for many purposes, in particular with respect to processing of the argument expressions, which are not to be evaluated at runtime. A few places hadn't gotten that memo, notably including subselect.c's processing of outer-level aggregates. This resulted in assertion failures or wrong plans for cases in which a GROUPING() construct references an outer aggregation level. Also fix missing special cases for GroupingFunc in cost_qual_eval (resulting in wrong cost estimates for GROUPING(), although it's not clear that that would affect plan shapes in practice) and in ruleutils.c (resulting in excess parentheses in pretty-print mode). Per bug #17088 from Yaoguang Chen. Back-patch to all supported branches. Richard Guo, Tom Lane Discussion: https://postgr.es/m/17088-e33882b387de7f5c@postgresql.org
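A hedged example of the failing shape - GROUPING() referencing an outer aggregation level from a subselect (table and column names are hypothetical):

    SELECT a, (SELECT GROUPING(a)) AS g
    FROM t
    GROUP BY ROLLUP (a);   -- formerly an assertion failure or a wrong plan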
2022-03-18  Fix incorrect xmlschema output for types timetz and timestamptz. (Tom Lane)
The output of table_to_xmlschema() and allied functions includes a regex describing valid values for these types ... but the regex was itself invalid, as it failed to escape a literal "+" sign. Report and fix by Renan Soares Lopes. Back-patch to all supported branches. Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
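To inspect the corrected output (table name hypothetical; the emitted xs:pattern for these types should now contain an escaped \+):

    CREATE TABLE tztab (t timetz, ts timestamptz);
    SELECT table_to_xmlschema('tztab', true, false, '');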
2022-03-03  Clean up assorted failures under clang's -fsanitize=undefined checks. (Tom Lane)
Most of these are cases where we could call memcpy() or other libc functions with a NULL pointer and a zero count, which is forbidden by POSIX even though every production version of libc allows it. We've fixed such things before in a piecemeal way, but apparently never made an effort to try to get them all. I don't claim that this patch does so either, but it gets every failure I observe in check-world, using clang 12.0.1 on current RHEL8. numeric.c has a different issue that the sanitizer doesn't like: "ln(-1.0)" will compute log10(0) and then try to assign the resulting -Inf to an integer variable. We don't actually use the result in such a case, so there's no live bug. Back-patch to all supported branches, with the idea that we might start running a buildfarm member that tests this case. This includes back-patching c1132aae3 (Check the size in COPY_POINTER_FIELD), which previously silenced some of these issues in copyfuncs.c. Discussion: https://postgr.es/m/CALNJ-vT9r0DSsAOw9OXVJFxLENoVS_68kJ5x0p44atoYH+H4dg@mail.gmail.com
2022-01-27  Fix ordering of XIDs in ProcArrayApplyRecoveryInfo (Tomas Vondra)
Commit 8431e296ea reworked ProcArrayApplyRecoveryInfo to sort XIDs before adding them to KnownAssignedXids. But the XIDs are sorted using xidComparator, which compares the XIDs simply as uint32 values, not logically. KnownAssignedXidsAdd() however expects XIDs in logical order, and calls TransactionIdFollowsOrEquals() to enforce that. If there are XIDs for which the two orderings disagree, an error is raised and the recovery fails/restarts. Hitting this issue is fairly easy - you just need two transactions, one started before the 4B limit (e.g. XID 4294967290), the other sometime after it (e.g. XID 1000). Logically (4294967290 <= 1000) but when compared using xidComparator we try to add them in the opposite order. Which makes KnownAssignedXidsAdd() fail with an error like this: ERROR: out-of-order XID insertion in KnownAssignedXids This only happens during replica startup, while processing RUNNING_XACTS records to build the snapshot. Once we reach STANDBY_SNAPSHOT_READY, we skip these records. So this does not affect already running replicas, but if you restart (or create) a replica while there are transactions with XIDs for which the two orderings disagree, you may hit this. Long-running transactions and frequent replica restarts increase the likelihood of hitting this issue. Once the replica gets into this state, it can't be started (even if the old transactions are terminated). Fixed by sorting the XIDs logically - this is fine because we're dealing with normal XIDs (because it's XIDs assigned to backends) and from the same wraparound epoch (otherwise the backends could not be running at the same time on the primary node). So there are no problems with the triangle inequality, which is why xidComparator compares raw values. Investigation and root cause analysis by Abhijit Menon-Sen. Patch by me. This issue is present in all releases since 9.4, however releases up to 9.6 are EOL already so backpatch to 10 only. Reviewed-by: Abhijit Menon-Sen Reviewed-by: Alvaro Herrera Backpatch-through: 10 Discussion: https://postgr.es/m/36b8a501-5d73-277c-4972-f58a4dce088a%40enterprisedb.com
2022-01-15  Build inherited extended stats on partitioned tables (Tomas Vondra)
Commit 859b3003de disabled building of extended stats for inheritance trees, to prevent updating the same catalog row twice. While that resolved the issue, it also means there are no extended stats for declaratively partitioned tables, because the non-leaf relations contain no data. That also means declaratively partitioned tables were not affected by the issue 859b3003de addressed, so this is a regression for queries that calculate estimates for the inheritance tree as a whole (which includes e.g. GROUP BY queries). But because partitioned tables are empty, we can invert the condition and build statistics only for the case with inheritance, without losing anything. And we can consider them when calculating estimates. It may be necessary to run ANALYZE on partitioned tables, to collect proper statistics. For declarative partitioning there should be no prior statistics, and it might take time before autoanalyze is triggered. For tables partitioned by inheritance the statistics may include data from child relations (if built before 859b3003de), contradicting the current code. Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de). Author: Justin Pryzby Reported-by: Justin Pryzby Backpatch-through: 10 Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
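A minimal sketch of the now-supported case (object names hypothetical):

    CREATE TABLE parted (a int, b int) PARTITION BY RANGE (a);
    CREATE TABLE parted_1 PARTITION OF parted FOR VALUES FROM (0) TO (100);
    CREATE STATISTICS parted_stats ON a, b FROM parted;
    ANALYZE parted;  -- populates the inherited extended statistics, per the note above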
2022-01-15  Ignore extended statistics for inheritance trees (Tomas Vondra)
Since commit 859b3003de we only build extended statistics for individual relations, ignoring the child relations. This resolved the issue with updating the catalog tuple twice, but we still tried to use the statistics when calculating estimates for the whole inheritance tree. When the relations contain very distinct data, this may produce bogus estimates. This is roughly the same issue 427c6b5b9 addressed ~15 years ago, and we fix it the same way - by ignoring extended statistics when calculating estimates for the inheritance tree as a whole. We still consider extended statistics when calculating estimates for individual child relations, of course. This may result in plan changes due to different estimates, but if the old statistics were not describing the inheritance tree particularly well, it's quite likely the new plans are actually better. Report and patch by Justin Pryzby, minor fixes and cleanup by me. Backpatch all the way back to PostgreSQL 10, where extended statistics were introduced (same as 859b3003de). Author: Justin Pryzby Reported-by: Justin Pryzby Backpatch-through: 10 Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
2022-01-14  Revert error handling improvements for cryptohashes (Michael Paquier)
This reverts commits ab27df2, af8d530 and 3a0cced, which introduced pg_cryptohash_error(). In order to make the core code able to pass down the new error types this introduced, some of the MD5-related routines had to be reworked, causing an ABI breakage, but we found that some external extensions rely on them. Maintaining compatibility outweighs the error report benefits, so just revert the change in v14. Reported-by: Laurenz Albe Discussion: https://postgr.es/m/9f0c0a96d28cf14fc87296bbe67061c14eb53ae8.camel@cybertec.at
2022-01-13  Fix ruleutils.c's dumping of whole-row Vars in more contexts. (Tom Lane)
Commit 7745bc352 intended to ensure that whole-row Vars would be printed with "::type" decoration in all contexts where plain "var.*" notation would result in star-expansion, notably in ROW() and VALUES() constructs. However, it missed the case of INSERT with a single-row VALUES, as reported by Timur Khanjanov. Nosing around ruleutils.c, I found a second oversight: the code for RowCompareExpr generates ROW() notation without benefit of an actual RowExpr, and naturally it wasn't in sync :-(. (The code for FieldStore also does this, but we don't expect that to generate strictly parsable SQL anyway, so I left it alone.) Back-patch to all supported branches. Discussion: https://postgr.es/m/efaba6f9-4190-56be-8ff2-7a1674f9194f@intrans.baku.az
2022-01-11  Improve error handling of cryptohash computations (Michael Paquier)
The existing cryptohash facility was causing problems in some code paths related to MD5 (frontend and backend) that relied on the fact that the only type of error that could happen would be an OOM, as the MD5 implementation used in PostgreSQL ~13 (the in-core implementation is used when compiling with or without OpenSSL in those older versions) could fail only under that circumstance. The new cryptohash facilities can fail for reasons other than OOMs, like attempting MD5 when FIPS is enabled (upstream OpenSSL allows that up to 1.0.2; Fedora and Photon patch OpenSSL 1.1.1 to allow it), so this would cause incorrect reports to show up. This commit extends the cryptohash APIs so that callers of those routines can fetch more context when an error happens, by using a new routine called pg_cryptohash_error(). The error states are stored within each implementation's internal context data, so it is possible to extend the logic depending on what suits an implementation. The default implementation requires few error states, but OpenSSL can report various issues depending on its internal state, so more is needed in cryptohash_openssl.c, and the code is shaped so that we are always able to grab the necessary information. The core code is changed to adapt to the new error routine, painting more "const" across the call stack where the static errors are stored, particularly in authentication code paths on variables that provide log details. This way, any future changes would warn if attempting to free these strings. The MD5 authentication code was also a bit blurry about the handling of "logdetail" (LOG sent to the postmaster), so improve the comments related to that, while on it. The origin of the problem is 87ae969, which introduced the centralized cryptohash facility. Extra changes are done for pgcrypto in v14 for the non-OpenSSL code path to cope with the improvements done by this commit. Reported-by: Michael Mühlbeyer Author: Michael Paquier Reviewed-by: Tom Lane Discussion: https://postgr.es/m/89B7F072-5BBE-4C92-903E-D83E865D9367@trivadis.com Backpatch-through: 14
2021-12-13  Fix alignment in multirange_get_range() function (Alexander Korotkov)
The multirange_get_range() function fails when two boundaries of the same range have different alignments. Fix that by adding proper pointer alignment. Reported-by: Alexander Lakhin Discussion: https://postgr.es/m/17300-dced2d01ddeb1f2f%40postgresql.org Backpatch-through: 14
2021-11-24  Allow Memoize to operate in binary comparison mode (David Rowley)
Memoize would always use the hash equality operator for the cache key types to determine if the current set of parameters was the same as some previously cached set. Certain types, such as floating point, where -0.0 and +0.0 differ in their binary representation but are classed as equal by the hash equality operator, may cause problems: unless the join uses that same operator, the join operator in use may be able to distinguish the two values, in which case we could accidentally return incorrect rows out of the cache. To fix this, we add a binary mode to Memoize that matches the current set of parameters to previously cached values by comparing them bit-by-bit rather than logically with the hash equality operator. This binary mode is always used for LATERAL joins, and it's used for normal joins when any of the join operators are not hashable. Reported-by: Tom Lane Author: David Rowley Discussion: https://postgr.es/m/3004308.1632952496@sss.pgh.pa.us Backpatch-through: 14, where Memoize was added
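The floating-point case described above is easy to see directly; the two zeros are equal to the hash equality operator yet differ bit-by-bit:

    SELECT '-0.0'::float8 = '0.0'::float8;          -- true
    SELECT float8send('-0.0'), float8send('0.0');   -- different binary representations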
2021-11-17  Fix display of SQL-standard function's arguments in INSERT/SELECT. (Tom Lane)
If a SQL-standard function body contains an INSERT ... SELECT statement, any function parameters referenced within the SELECT were always printed in $N style, rather than using the parameter name if any. While not strictly incorrect, this wasn't the intention, and it's inconsistent with the way that such parameters would be printed in any other kind of statement. The cause is that the recursion to get_query_def from get_insert_query_def neglected to pass down the context->namespaces list, passing constant NIL instead. This is a very ancient oversight, but AFAICT it had no visible consequences before commit e717a9a18 added an outermost namespace with function parameters. We don't allow INSERT ... SELECT as a sub-query, except in a top-level WITH clause, where it couldn't contain any outer references that might need to access upper namespaces. So although that's arguably a bug, I don't see any point in changing it before v14. In passing, harden the code added to get_parameter by e717a9a18 so that it won't crash if a PARAM_EXTERN Param appears in an unexpected place. Per report from Erki Eessaar. Code fix by me, regression test case by Masahiko Sawada. Discussion: https://postgr.es/m/AM9PR01MB8268347BED344848555167FAFE949@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
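A sketch of the corrected deparsing, using the v14 BEGIN ATOMIC syntax (function and table names hypothetical):

    CREATE TABLE logtab (v int);
    CREATE FUNCTION add_entry(x int) RETURNS void
    LANGUAGE sql
    BEGIN ATOMIC
        INSERT INTO logtab SELECT x;
    END;
    SELECT pg_get_functiondef('add_entry(int)'::regprocedure);
    -- the INSERT ... SELECT should now show the parameter as "x" rather than "$1"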
2021-11-12  Fix memory overrun when querying pg_stat_slru (Michael Paquier)
pg_stat_get_slru() in pgstatfuncs.c would point to one element after the end of the array PgStat_SLRUStats once it had finished scanning its entries. This had no direct consequences, as no data from the extra memory area was read, but static analyzers would rightfully complain here. So let's be clean. While on it, this adds one regression test in the area reserved for system views. Reported-by: Alexander Kozhemyakin, via AddressSanitizer Author: Kyotaro Horiguchi Discussion: https://postgr.es/m/17280-37da556e86032070@postgresql.org Backpatch-through: 13
2021-10-11  Fix EXPLAIN of SEARCH BREADTH FIRST queries some more. (Tom Lane)
Commit 3f50b8263 had an oversight: formerly, to deparse expressions attached to a plan node, it was only necessary to update the deparse_namespace ancestors list alongside calling set_deparse_plan. Now it's necessary to update the ancestors list *first*, because set_deparse_plan consults it, and one call site got that wrong. This error was masked in most cases because explain.c uses just one List object for the ancestors list, updating it in-place as the plan is scanned, so that we accidentally had the right List assigned to dpns->ancestors before it was needed. It would fail only if a WorkTableScan node were the first one that we tried to deparse a subexpression of. Per report from Markus Winand. Like the previous patch, back-patch to v14. Discussion: https://postgr.es/m/648B0505-AA57-42C2-A2DA-E551DE46FA15@winand.at
2021-10-06  Fix corner-case loss of precision in numeric_power(). (Dean Rasheed)
This fixes a loss of precision that occurs when the first input is very close to 1, so that its logarithm is very small. Formerly, during the initial low-precision calculation to estimate the result weight, the logarithm was computed to a local rscale that was capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be as close as 1e-16383 to 1, hence its logarithm may be as small as 1e-16383, and so the local rscale needs to be allowed to exceed 16383, otherwise all precision is lost, leading to a poor choice of rscale for the full-precision calculation. Fix this by removing the cap on the local rscale during the initial low-precision calculation, as we already do in the full-precision calculation. This doesn't change the fact that the initial calculation is a low-precision approximation, computing the logarithm to around 8 significant digits, which is very fast, especially when the base is very close to 1. Patch by me, reviewed by Alvaro Herrera. Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com
2021-10-01  Avoid believing incomplete MCV-only stats in get_variable_range(). (Tom Lane)
get_variable_range() would incautiously believe that statistics containing only an MCV list are sufficient to derive a range estimate. That's okay for an enum-like column that contains only MCVs, but otherwise the estimate could be pretty bad. Make it report that the range is indeterminate unless the MCVs plus nullfrac account for the whole table. I don't think this needs a dedicated test case, since a quick code coverage check verifies that the existing regression tests traverse all the alternatives. There is room to doubt that a future-proof test case could be built anyway, given that the submitted example accidentally doesn't fail before v11. Per bug #17207 from Simon Perepelitsa. Back-patch to v10. In principle this has been broken all along, but I'm hesitant to make such changes in 9.6, since if anyone is unhappy with 9.6.24's behavior there will be no second chance to fix it. Discussion: https://postgr.es/m/17207-5265aefa79e333b4@postgresql.org
2021-09-29  Clarify use of "statistics objects" in the code (Michael Paquier)
The code inconsistently used "statistic object" or "statistics" where the correct term, as discussed, is actually "statistics object". This improves the state of the code to be more consistent. While on it, fix an incorrect error message introduced in a4d75c8. This error should never happen, as the code states, but it would be misleading. Author: Justin Pryzby Reviewed-by: Álvaro Herrera, Michael Paquier Discussion: https://postgr.es/m/20210924215827.GS831@telsasoft.com Backpatch-through: 14
2021-09-16  Fix EXPLAIN to handle SEARCH BREADTH FIRST queries. (Tom Lane)
The rewriter transformation for SEARCH BREADTH FIRST produces a FieldSelect on a Var of type RECORD, where the Var references the recursive union's worktable output. EXPLAIN VERBOSE failed to handle this case, because it only expected such Vars to appear in CteScans not WorkTableScans. Fix that, and add some test cases exercising EXPLAIN on SEARCH and CYCLE queries. In principle this oversight is an old bug, but it seems that the case is unreachable without SEARCH BREADTH FIRST, because the parser fails when attempting to create such a reference manually. So for today I'll just patch HEAD/v14. Someday we might find that the code portion of this patch needs to be back-patched further. Per report from Atsushi Torikoshi. Discussion: https://postgr.es/m/5bafa66ad529e11860339565c9e7c166@oss.nttdata.com
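A hedged example of a query shape that EXPLAIN VERBOSE can now handle (the nodes(id, parent) table is hypothetical):

    EXPLAIN (VERBOSE, COSTS OFF)
    WITH RECURSIVE tree(id, parent) AS (
        SELECT id, parent FROM nodes WHERE parent IS NULL
      UNION ALL
        SELECT n.id, n.parent FROM nodes n JOIN tree t ON n.parent = t.id
    ) SEARCH BREADTH FIRST BY id SET ord
    SELECT * FROM tree;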
2021-09-16  Message style improvements (Peter Eisentraut)
2021-09-15  Fix hash_array (Peter Eisentraut)
Commit 054adca641ac1279dc8d9b74fda41948ac35e9a9 neglected to initialize the type_id field of the synthesized type cache entry, so it would make a new one on every call. Also, better use the per-function memory context for this; otherwise it leaks memory. Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
2021-09-08  Disable anonymous record hash support except in special cases (Peter Eisentraut)
Commit 01e658fa74 added hash support for row types. This also added support for hashing anonymous record types, using the same approach that the type cache uses for comparison support for record types: it just reports that it works, but it might fail at run time if a component type doesn't actually support the operation. We get away with that for comparison because most types support comparison. But some types don't support hashing, so the current state can result in failures at run time where the planner chooses hashing over sorting, whereas that previously worked if only sorting was an option. We do, however, want the record hashing support for path tracking in recursive unions, and the SEARCH and CYCLE clauses built on that. In that case, hashing is the only plan option. To enable that, this commit implements the following approach: the type cache does not report that hashing is available for the record type (undoing that part of 01e658fa74); instead, callers that require hashing no matter what can override that result themselves. This patch only touches the callers needed to make the aforementioned recursive query cases work, namely the parse analysis of unions, as well as the hash_array() function. Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com> Bug: #17158 Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
2021-09-06  Fix bogus timetz_zone() results for DYNTZ abbreviations. (Tom Lane)
timetz_zone() delivered completely wrong answers if the zone was specified by a dynamic TZ abbreviation, because it failed to account for the difference between the POSIX conventions for field values in struct pg_tm and the conventions used in PG-specific datetime code. As a stopgap fix, just adjust the tm_year and tm_mon fields to match PG conventions. This is fixed in a different way in HEAD (388e71af8) but I don't want to back-patch the change of reference point. Discussion: https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
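Assuming 'MSK' is configured as a dynamic abbreviation in timezone_abbreviations (as in the stock Default file), the stopgap fix affects calls such as:

    SELECT timezone('MSK', '12:00:00+00'::timetz);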
2021-09-01  Don't print extra parens around expressions in extended stats (Tomas Vondra)
The code printing expressions for extended statistics doubled the parens, producing results like ((a+1)), which is unnecessary and not consistent with how we print expressions elsewhere. Fixed by tweaking the code to produce just a single set of parens. Reported by Mark Dilger, fix by me. Backpatch to 14, where support for extended statistics on expressions was added. Reported-by: Mark Dilger Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
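A quick way to see the corrected output (object names hypothetical):

    CREATE TABLE tab (a int);
    CREATE STATISTICS tab_stats ON (a + 1), a FROM tab;
    -- \d tab (or pg_get_statisticsobjdef) should now print (a + 1), not ((a + 1))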
2021-08-31  Fix missed lock acquisition while inlining new-style SQL functions. (Tom Lane)
When starting to use a query parsetree loaded from the catalogs, we must begin by applying AcquireRewriteLocks(), to obtain the same relation locks that the parser would have gotten if the query were entered interactively, and to do some other cleanup such as dealing with later-dropped columns. New-style SQL functions are just as subject to this rule as other stored parsetrees; however, of the places dealing with such functions, only init_sql_fcache had gotten the memo. In particular, if we successfully inlined a new-style set-returning SQL function that contained any relation references, we'd either get an assertion failure or attempt to use those relation(s) sans locks. I also added AcquireRewriteLocks calls to fmgr_sql_validator and print_function_sqlbody. Desultory experiments didn't demonstrate any failures in those, but I suspect that I just didn't try hard enough. Certainly we don't expect nearby code paths to operate without locks. On the same logic of it-ought-to-have-the-same-effects-as-the-old-code, call pg_rewrite_query() in fmgr_sql_validator, too. It's possible that neither code path there needs to bother with rewriting, but doing the analysis to prove that is beyond my goals for today. Per bug #17161 from Alexander Lakhin. Discussion: https://postgr.es/m/17161-048a1cdff8422800@postgresql.org
2021-08-25  Remove redundant test. (Tom Lane)
The condition "context_start < context_end" is strictly weaker than "context_end - context_start >= 50", so we don't need both. Oversight in commit ffd3944ab, noted by tanghy.fnst. In passing, line-wrap a nearby test to make it more readable. Discussion: https://postgr.es/m/OS0PR01MB61137C4054774F44E3A9DC89FBC69@OS0PR01MB6113.jpnprd01.prod.outlook.com
2021-08-25  Improve error message about valid value for distance in phrase operator. (Fujii Masao)
The distance in the phrase operator must be an integer value between zero and MAXENTRYPOS inclusive. But previously the error message about its valid value included the information about the upper limit but not the lower limit (i.e., zero). This commit improves the error message so that it also includes the information about the lower limit. Back-patch to v9.6 where full-text phrase search was introduced. Author: Kyotaro Horiguchi Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
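Assuming MAXENTRYPOS is 16384, an out-of-range distance now draws a message naming both bounds:

    SELECT 'fat <100000> cat'::tsquery;  -- the error should mention both zero and 16384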
2021-08-19  Revert refactoring of hex code to src/common/ (Michael Paquier)
This is a combined revert of the following commits:
- c3826f8, a refactoring piece that moved the hex decoding code to src/common/. This code was cleaned up by aef8948, as it originally included no overflow checks in the same way as the base64 routines in src/common/ used by SCRAM, making it unsafe for its purpose.
- aef8948, a more advanced refactoring of the hex encoding/decoding code to src/common/ that added sanity checks on the result buffer for hex decoding and encoding. As reported by Hans Buschmann, those overflow checks are expensive, and it is possible to see a performance drop in the decoding/encoding of bytea or LOs the longer they are. Simple SQL queries working on large bytea values show a clear difference in the perf profile.
- ccf4e27, a cleanup made possible by aef8948.
The reverts of all those commits bring the performance of hex decoding and encoding back to what it was in ~13. For now and post-beta3, this is the simplest option. Reported-by: Hans Buschmann Discussion: https://postgr.es/m/1629039545467.80333@nidsa.net Backpatch-through: 14
2021-08-06  Adjust the integer overflow tests in the numeric code. (Dean Rasheed)
Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug. Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong. While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose. Per buildfarm via Tom Lane, reviewed by Tom Lane. Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
2021-08-06  Fix wording (Peter Eisentraut)
2021-08-05  Fix division-by-zero error in to_char() with 'EEEE' format. (Dean Rasheed)
This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error. The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale. Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding. Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
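The formerly failing case can be reproduced with any exponent below -1001 (output shown is the expected shape, not verified here):

    SELECT to_char(1e-1005::numeric, '9.99EEEE');  -- previously: division by zero; now roughly 1.00e-1005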
2021-07-31  Fix corner-case errors and loss of precision in numeric_power(). (Dean Rasheed)
This fixes a couple of related problems that arise when raising numbers to very large powers. Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there. Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:
    local_rscale = rscale + (int) val - ln_dweight + 8
The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), while the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result. Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors. Dean Rasheed, reviewed by Yugo Nagata. Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
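Two hedged illustrations of the corrected behaviour, per the description above (outputs not reproduced here):

    SELECT power(-0.5::numeric, 1e10::numeric);  -- negative base, huge integer power: no more internal error
    SELECT power(0.9999::numeric, 1e8::numeric); -- |base| just under 1: now underflows to zero instead of returning 1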
2021-07-28  Disallow negative strides in date_bin() (John Naylor)
It's not clear what the semantics of negative strides would be, so throw an error instead. Per report from Bauyrzhan Sakhariyev Reviewed-by: Tom Lane, Michael Paquier Discussion: https://www.postgresql.org/message-id/CAKpL73vZmLuFVuwF26FJ%2BNk11PVHhAnQRoREFcA03x7znRoFvA%40mail.gmail.com Backpatch to v14
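The rejected case now looks like this (timestamp values arbitrary):

    SELECT date_bin('-1 hour'::interval,
                    timestamp '2021-07-28 13:21:00',
                    timestamp '2001-01-01');  -- now raises an error rather than using a negative stride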
2021-07-28  Avoid using ambiguous word "non-negative" in error messages. (Fujii Masao)
The error messages using the word "non-negative" are confusing because it's ambiguous whether zero is accepted or not. This commit improves those error messages by replacing the word with less ambiguous wording like "greater than zero" or "greater than or equal to zero". This commit also adds a note about the word "non-negative" to the error message style guide, to help with writing new error messages. When the postgres_fdw option fetch_size was set to zero, the error message "fetch_size requires a non-negative integer value" was previously reported, which was outright buggy. Therefore, back-patch to all supported versions where such a buggy error message could be thrown. Reported-by: Hou Zhijie Author: Bharath Rupireddy Reviewed-by: Kyotaro Horiguchi, Fujii Masao Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-07-22  Fix division by zero error in date_bin (John Naylor)
Bauyrzhan Sakhariyev, via GitHub. Backpatch to v14.
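The zero-stride companion case fixed here (timestamp values arbitrary):

    SELECT date_bin('0 minutes'::interval,
                    timestamp '2021-07-22 12:00:00',
                    timestamp '2001-01-01');  -- now reports a proper error instead of dividing by zero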
2021-07-18  Support for unnest(multirange) (Alexander Korotkov)
It has been spotted that multiranges lack the ability to be decomposed into individual ranges. Subscripting and proper expanded object representation require substantial work, and it's too late for v14. This commit provides the implementation of unnest(multirange), which is quite trivial. unnest(multirange) is defined as a polymorphic procedure. Catversion is bumped. Reported-by: Jonathan S. Katz Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org Author: Alexander Korotkov Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu, Tom Lane Reviewed-by: Alvaro Herrera
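Usage is straightforward:

    SELECT unnest('{[1,3), [5,7)}'::int4multirange);  -- two rows: [1,3) and [5,7)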
2021-07-13  Rename debug_invalidate_system_caches_always to debug_discard_caches. (Tom Lane)
The name introduced by commit 4656e3d66 was agreed to be unreasonably long. To match this change, rename initdb's recently-added --clobber-cache option to --discard-caches. Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
2021-07-10  Fix numeric_mul() overflow due to too many digits after decimal point. (Dean Rasheed)
This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
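A hedged reproduction: each factor is well within limits, but the exact product would carry more than 16383 fractional digits, so it is now rounded instead of raising an overflow error:

    SELECT 5e-10000::numeric * 5e-10000::numeric;  -- exact result 2.5e-19999; now rounds rather than erroring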
2021-07-09  Fix incorrect return value in pg_size_pretty(bigint) (David Rowley)
Due to how pg_size_pretty(bigint) was implemented, it was possible that, when given a negative number of bytes, the returned value would not match the value returned for the equivalent positive number of bytes. This was due to two separate issues.
1. The function used bit shifting to convert the number of bytes into larger units. The rounding performed by bit shifting is not the same as dividing. For example -3 >> 1 = -2, but -3 / 2 = -1. These two operations are only equivalent with positive numbers.
2. The half_rounded() macro rounded towards positive infinity. This meant that negative numbers rounded towards zero and positive numbers rounded away from zero.
Here we fix #1 by dividing the values instead of bit shifting. We fix #2 by adjusting the half_rounded macro to always round away from zero. Additionally, adjust the pg_size_pretty(numeric) function to be more explicit that it's using division rather than bit shifting. A casual observer might have believed bit shifting was used, due to a static function being named numeric_shift_right. However, that function was calculating the divisor from the number of bits and performed division. Here we make that more clear. This change is just cosmetic and does not affect the return value of the numeric version of the function. Here we also add a set of regression tests for both versions of pg_size_pretty(), testing values directly before and after the function switches to the next unit. This bug was introduced in 8a1fab36a. Prior to that, negative values were always displayed in bytes. Author: Dean Rasheed, David Rowley Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com Backpatch-through: 9.6, where the bug was introduced.
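The restored symmetry can be checked directly (exact unit boundaries not shown here):

    SELECT pg_size_pretty(5500000::bigint), pg_size_pretty(-5500000::bigint);
    -- the two results should now differ only in sign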
2021-06-30  Replace magic constants used in pg_stat_get_replication_slot(). (Amit Kapila)
A few variables have been using 10 as a magic constant while PG_STAT_GET_REPLICATION_SLOT_COLS can be used instead. Author: Masahiko Sawada Reviewed-By: Amit Kapila Backpatch-through: 14, where it was introduced Discussion: https://postgr.es/m/CAD21AoBvqODDfmD17DkEuPCvV2KbruukXQ2Vwrv5Xi-TsAsTJA@mail.gmail.com
2021-06-29  Fixes for multirange selectivity estimation (Alexander Korotkov)
* Fix enumeration of the multirange operators in the calc_multirangesel() switches.
* Add more regression tests for matching to empty ranges/multiranges.
Reported-by: Alexander Lakhin Discussion: https://postgr.es/m/c5269c65-f967-77c5-ff7c-15e621c47f6a%40gmail.com Author: Alexander Korotkov Backpatch-through: 14, where multiranges were introduced
2021-06-28  Message style improvements (Peter Eisentraut)
2021-06-27  Error message refactoring (Peter Eisentraut)
Take some untranslatable things out of the messages and replace them with format placeholders, to reduce the number of translatable strings and the risk of translation mistakes.
2021-06-15  Revert 29854ee8d1 due to buildfarm failures (Alexander Korotkov)
Reported-by: Tom Lane Discussion: https://postgr.es/m/CAPpHfdvcnw3x7jdV3r52p4%3D5S4WUxBCzcQKB3JukQHoicv1LSQ%40mail.gmail.com
2021-06-15  Support for unnest(multirange) and cast multirange as an array of ranges (Alexander Korotkov)
It has been spotted that multiranges lack the ability to be decomposed into individual ranges. Subscripting and proper expanded object representation require substantial work, and it's too late for v14. This commit provides the implementation of unnest(multirange) and a cast of multirange to an array of ranges, both of which are quite trivial. unnest(multirange) is defined as a polymorphic procedure. The catalog description of the procedure underlying the cast is duplicated for each multirange type because we don't have an anyrangearray polymorphic type to use here. Catversion is bumped. Reported-by: Jonathan S. Katz Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org Author: Alexander Korotkov Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu
2021-06-12  Ensure pg_filenode_relation(0, 0) returns NULL. (Tom Lane)
Previously, a zero value for the relfilenode resulted in a confusing error message about "unexpected duplicate". This function returns NULL for other invalid relfilenode values, so zero should be treated likewise. It's been like this all along, so back-patch to all supported branches. Justin Pryzby Discussion: https://postgr.es/m/20210612023324.GT16435@telsasoft.com
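The fixed case:

    SELECT pg_filenode_relation(0, 0);  -- now NULL, like other invalid relfilenode values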
2021-06-10  Reconsider the handling of procedure OUT parameters. (Tom Lane)
Commit 2453ea142 redefined pg_proc.proargtypes to include the types of OUT parameters, for procedures only. While that had some advantages for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty disastrous from a number of other perspectives. Notably, since the primary key of pg_proc is name + proargtypes, this made it possible to have multiple procedures with identical names + input arguments and differing output argument types. That would make it impossible to call any one of the procedures by writing just NULL (or "?", or any other data-type-free notation) for the output argument(s). The change also seems likely to cause grave confusion for client applications that examine pg_proc and expect the traditional definition of proargtypes. Hence, revert the definition of proargtypes to what it was, and undo a number of complications that had been added to support that. To support the SQL-spec behavior of DROP PROCEDURE, when there are no argmode markers in the command's parameter list, we perform the lookup both ways (that is, matching against both proargtypes and proallargtypes), succeeding if we get just one unique match. In principle this could result in ambiguous-function failures that would not happen when using only one of the two rules. However, overloading of procedure names is thought to be a pretty rare usage, so this shouldn't cause many problems in practice. Postgres-specific code such as pg_dump can defend against any possibility of such failures by being careful to specify argmodes for all procedure arguments. This also fixes a few other bugs in the area of CALL statements with named parameters, and improves the documentation a little. catversion bump forced because the representation of procedures with OUT arguments changes. Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
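A sketch of the CALL convention this preserves (procedure name hypothetical): with proargtypes reverted to input arguments only, NULL remains a usable placeholder for the OUT argument:

    CREATE PROCEDURE ptest(IN a int, OUT b int)
    LANGUAGE plpgsql
    AS $$ BEGIN b := a + 1; END $$;
    CALL ptest(1, NULL);  -- returns b = 2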
2021-06-08  Reorder superuser check in pg_log_backend_memory_contexts() (Michael Paquier)
The use of this function is limited to superusers, and the code includes a hardcoded check for that. However, the code would look up the PGPROC entry to signal for the memory dump before checking if the user is a superuser, which does not make sense if we know that an error will be returned. The old ordering also let non-authorized users find out whether a given PID belonged to a PostgreSQL process, which is no longer the case; on top of that, this avoids taking ProcArrayLock when doing so would most likely turn out to be unnecessary. Thanks to Julien Rouhaud and Tom Lane for the discussion. Discussion: https://postgr.es/m/YLxw1uVGIAP5uMPl@paquier.xyz
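Typical usage, which is what the reordered check now guards before touching the PGPROC array:

    SELECT pg_log_backend_memory_contexts(pg_backend_pid());
    -- returns true; the memory context dump goes to the server log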