path: root/src
Age  Commit message  Author
2024-01-24  Adjust populate_record_field() to handle errors softly  (Amit Langote)
This adds a Node *escontext parameter to it and a bunch of functions downstream to it, replacing any ereport()s in that path by either errsave() or ereturn() as appropriate. This also adds code to those functions where necessary to return early upon encountering a soft error. The changes here are mainly intended to suppress errors in the functions of jsonfuncs.c. Functions in any external modules, such as arrayfuncs.c, that those functions may in turn call are not changed here based on the assumption that the various checks in jsonfuncs.c functions should ensure that only values that are structurally valid get passed to the functions in those external modules. An exception is made for domain_check() to allow handling domain constraint violation errors softly. For testing, this adds a function jsonb_populate_record_valid(), which returns true if jsonb_populate_record() would finish without causing an error for the provided JSON object, false otherwise. Note that jsonb_populate_record() internally calls populate_record(), which in turn uses populate_record_field(). Extracted from a much larger patch to add SQL/JSON query functions. Author: Nikita Glukhov <n.gluhov@postgrespro.ru> Author: Teodor Sigaev <teodor@sigaev.ru> Author: Oleg Bartunov <obartunov@gmail.com> Author: Alexander Korotkov <aekorotkov@gmail.com> Author: Andrew Dunstan <andrew@dunslane.net> Author: Amit Langote <amitlangote09@gmail.com> Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera, Jian He, Peter Eisentraut Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org Discussion: https://postgr.es/m/CA+HiwqHROpf9e644D8BRqYvaAPmgBZVup-xKMDPk-nd4EpgzHw@mail.gmail.com Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
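A minimal SQL sketch of the new test function in action (the composite type name is made up for illustration; this assumes the signature mirrors jsonb_populate_record(anyelement, jsonb) and returns boolean):

    -- hypothetical composite type with a narrow char(2) attribute
    CREATE TYPE jsb_char2 AS (a char(2));

    -- the value fits, so jsonb_populate_record() would finish without error
    SELECT jsonb_populate_record_valid(NULL::jsb_char2, '{"a": "aa"}');
    -- expected: true

    -- "aaa" is too long for char(2); jsonb_populate_record() would raise an
    -- error, so the _valid variant reports false instead
    SELECT jsonb_populate_record_valid(NULL::jsb_char2, '{"a": "aaa"}');
    -- expected: false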
2024-01-24  Add soft error handling to some expression nodes  (Amit Langote)
This adjusts the code for CoerceViaIO and CoerceToDomain expression nodes to handle errors softly. For CoerceViaIO, this adds a new ExprEvalStep opcode EEOP_IOCOERCE_SAFE, which is implemented in the new accompanying function ExecEvalCoerceViaIOSafe(). The only difference from EEOP_IOCOERCE's inline implementation is that the input function receives an ErrorSaveContext via the function's FunctionCallInfo.context, which it can use to handle errors softly. For CoerceToDomain, this simply entails replacing the ereport() in ExecEvalConstraintNotNull() and ExecEvalConstraintCheck() by errsave(), passing it the ErrorSaveContext stored in the expression's ExprEvalStep. In both cases, the ErrorSaveContext to be used is passed by setting ExprState.escontext to point to it before calling ExecInitExprRec() on the expression tree whose errors are to be handled softly. Note that there's no functional change as of this commit, as no call site of ExecInitExprRec() has been changed. This is intended for implementing new SQL/JSON expression nodes in future commits. Extracted from a much larger patch to add SQL/JSON query functions. Author: Nikita Glukhov <n.gluhov@postgrespro.ru> Author: Teodor Sigaev <teodor@sigaev.ru> Author: Oleg Bartunov <obartunov@gmail.com> Author: Alexander Korotkov <aekorotkov@gmail.com> Author: Andrew Dunstan <andrew@dunslane.net> Author: Amit Langote <amitlangote09@gmail.com> Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera, Jian He, Peter Eisentraut Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org Discussion: https://postgr.es/m/CA+HiwqHROpf9e644D8BRqYvaAPmgBZVup-xKMDPk-nd4EpgzHw@mail.gmail.com Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
2024-01-24  Fix ALTER TABLE .. ADD COLUMN with complex inheritance trees  (Michael Paquier)
This command, when used to add a column on a parent table with a complex inheritance tree, tried to update the same pg_attribute tuple for a child table multiple times when incrementing attinhcount, causing failures with "tuple already updated by self" because of a missing CommandCounterIncrement() between two updates. This has existed for a rather long time, so backpatch all the way down. Reported-by: Alexander Lakhin Author: Tender Wang Reviewed-by: Richard Guo Discussion: https://postgr.es/m/18297-b04cd83a55b51e35@postgresql.org Backpatch-through: 12
2024-01-23  Support shared libraries on Android (using make)  (Peter Eisentraut)
While the rest of the make build system maps Android to Linux, Android uses unversioned shared library names (like "libpq.so"). This patch makes the make build produce those. (Meson already supported this.) Reported-by: Matthias Kuhn <matthias@opengis.ch> Discussion: https://www.postgresql.org/message-id/flat/CAC7zN94TdsHhY88XkroJzSMx7E%3DBQpV9LKKjNSEnTM04ihoWCA%40mail.gmail.com
2024-01-23  Fix makefiles for newly added files  (Peter Eisentraut)
The new files added in 9b1a6f50b9 need to be mentioned in a few more places in the makefiles. Discussion: https://www.postgresql.org/message-id/flat/E1rSAY2-002hIk-4y%40gemulon.postgresql.org
2024-01-23  Revert "libpqwalreceiver: Convert to libpq-be-fe-helpers.h"  (Heikki Linnakangas)
This reverts commit 728f86fec65537eade8d9e751961782ddb527934. The signal handling was a few bricks shy of a load in that commit, which made the walreceiver non-responsive to SIGTERM while it was waiting for the connection to be established. That prevented a standby from being promoted. Since it was non-essential refactoring, let's revert it to make v16 work the same as earlier releases. I reverted it in 'master' too, to keep the branches in sync. The refactoring was a good idea as such, but it needs a bit more work. Once we have developed a complete patch with this issue fixed, let's re-apply that to 'master'. Reported-by: Kyotaro Horiguchi Backpatch-through: 16 Discussion: https://www.postgresql.org/message-id/20231231.200741.1078989336605759878.horikyota.ntt@gmail.com
2024-01-23  Generate syscache info from catalog files  (Peter Eisentraut)
Add a new genbki macro MAKE_SYSCACHE that specifies the syscache ID macro, the underlying index, and the number of buckets. From that, we can generate the existing tables in syscache.h and syscache.c via genbki.pl. Reviewed-by: John Naylor <johncnaylorls@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/75ae5875-3abc-dafc-8aec-73247ed41cde@eisentraut.org
2024-01-23  Improve stability of recovery test 035_standby_logical_decoding  (Michael Paquier)
This commit tweaks a couple of things in 035_standby_logical_decoding to hopefully stabilize it:
- Autovacuum is now disabled, as it could hold a global xmin with a transaction.
- Conflicts are generated with command sequences that removed rows (on catalogs, shared or non-shared, or just plain relations) followed by a VACUUM. This was unstable because these did not check that the horizon moved between the SQL commands and the VACUUM. The logic is refactored as follows, to ensure that VACUUM removes dead rows before testing for slot invalidation on a standby (idea suggested by Andres Freund):
  -- Grab the current horizon.
  -- Launch SQL commands removing rows.
  -- Check that the snapshot horizon has been updated.
  -- Launch VACUUM on the relation whose rows have been removed by the first step.
Note that there are still some issues because of standby snapshot WAL records generated by the bgwriter, but this makes the test much more stable. Per reports from buildfarm members dikkop and skink, with analysis and tests from Alexander Lakhin. While on it, fix a couple of incorrect comments. Author: Bertrand Drouvot Reviewed-by: Alexander Lakhin, Michael Paquier Discussion: https://postgr.es/m/OSZPR01MB6310ED3CEDB531BCEDBC6AF2FD479@OSZPR01MB6310.jpnprd01.prod.outlook.com Discussion: https://postgr.es/m/bf67e076-b163-9ba3-4ade-b9fc51a3a8f6@gmail.com Backpatch-through: 16
2024-01-23  Add better handling of redundant IS [NOT] NULL quals  (David Rowley)
Until now PostgreSQL has not been very smart about optimizing away IS NOT NULL base quals on columns defined as NOT NULL. The evaluation of these needless quals adds overhead. Ordinarily, anyone who came complaining about that would likely just have been told not to include the qual in their query if it's not required. However, a recent bug report indicates this might not always be possible. Bug 17540 highlighted that when we optimize Min/Max aggregates, the IS NOT NULL qual that the planner adds to make the rewritten plan ignore NULLs can cause issues with poor index choice. That particular case demonstrated that other quals, especially ones for which no statistics are available to give the planner a chance at estimating an approximate selectivity, can result in poor index choice due to cheap startup paths being preferred with LIMIT 1. Here we take a generic approach to fixing this by having the planner check for NOT NULL columns and remove these quals (when they're not needed) for all queries, not just when optimizing Min/Max aggregates. Additionally, we also detect IS NULL quals on a NOT NULL column and transform them into a gating qual so that we don't have to perform the scan at all. This also works for join relations when the Var is not nullable by any outer join. This also helps with the self-join removal work, as it must replace strict join quals with IS NOT NULL quals to ensure equivalence with the original query. Author: David Rowley, Richard Guo, Andy Fan Reviewed-by: Richard Guo, David Rowley Discussion: https://postgr.es/m/CAApHDvqg6XZDhYRPz0zgOcevSMo0d3vxA9DvHrZtKfqO30WTnw@mail.gmail.com Discussion: https://postgr.es/m/17540-7aa1855ad5ec18b4%40postgresql.org
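A short SQL sketch of the two cases described above (table and column names are hypothetical; the exact plan shapes depend on the installation):

    -- column b is declared NOT NULL
    CREATE TABLE t (a int, b int NOT NULL);

    -- "b IS NOT NULL" is provably redundant, so the planner can now drop
    -- the qual instead of evaluating it for every row
    EXPLAIN (COSTS OFF) SELECT * FROM t WHERE b IS NOT NULL;

    -- "b IS NULL" can never be true, so it can be turned into a gating
    -- qual and the scan skipped entirely
    EXPLAIN (COSTS OFF) SELECT * FROM t WHERE b IS NULL;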
2024-01-22  Fix possible NULL pointer dereference in GetNamedDSMSegment().  (Nathan Bossart)
GetNamedDSMSegment() doesn't check whether dsm_attach() returns NULL, which creates the possibility of a NULL pointer dereference soon after. To fix, emit an ERROR if dsm_attach() returns NULL. This shouldn't happen, but it would be nice to avoid a segfault if it does. In passing, tidy up the surrounding code. Reported-by: Tom Lane Reviewed-by: Michael Paquier, Bharath Rupireddy Discussion: https://postgr.es/m/3348869.1705854106%40sss.pgh.pa.us
2024-01-23  Fix ERROR message in injection_point.c  (Michael Paquier)
This commit fixes an error message that failed to show the correct function and library names when a function cannot be loaded. While on it, adjust the call to load_external_function() so that this ERROR can be reached, by making load_external_function() return NULL rather than fail if a function cannot be found for a given injection point. Thinkos in d86d20f0ba79.
2024-01-22  Fix two memcpy() bugs in the new injection point code  (Heikki Linnakangas)
1. The memcpy()s in InjectionPointAttach() would copy garbage from beyond the end of the input string to the buffer in shared memory. You won't usually notice, but if there is not enough valid mapped memory beyond the end of the string, the read of unmapped memory will segfault. This was flagged by the Cirrus CI build with address sanitizer enabled.
2. The memcpy() in injection_point_cache_add() failed to copy the NULL terminator.
Discussion: https://www.postgresql.org/message-id/0615a424-b726-4157-afa7-4245629f9512%40iki.fi
2024-01-22  Abort pgbench if script end is reached with an open pipeline  (Alvaro Herrera)
When a pipeline is opened with \startpipeline and not closed, pgbench will either error on the next transaction with an "already in pipeline mode" error or successfully end if this was the last transaction -- despite not sending anything that was queued in the pipeline. Make it an error to reach the end of the script while there's an open pipeline. Backpatch to 14, where pgbench got support for pipelines. Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> Reported-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/Za4IObZkDjrO4TcS@paquier.xyz
2024-01-22  Test EXPLAIN (FORMAT JSON) ... XMLTABLE  (Alvaro Herrera)
Also, add an alias to the XMLTABLE expression in an existing test. This covers some code in explain.c that wasn't previously covered. I patched xml_2.out blindly :-( Discussion: https://postgr.es/m/202401181146.fuoeskfzriq7@alvherre.pgsql
2024-01-22  Re-disallow Memoize for parameterized nested loops with join filters  (David Rowley)
This was previously fixed in 9e215378d but got broken again as a result of 2489d76c4. It seems that commit causes ppi_clauses to contain duplicate clauses and it's no longer safe to check the list_length of that list to determine if there are join conditions other than what's mentioned in ppi_clauses. Here we adjust the check to count the distinct rinfo_serial mentioned in ppi_clauses. We expect that extra->restrictlist won't have duplicate rinfo_serials. Reported-by: Amadeo Gallardo Author: Richard Guo Discussion: https://postgr.es/m/CADFREbW-BLJd7-a5J%2B5wjVumeFG1ByXiSOFzMtkmY_SDWckTxw%40mail.gmail.com Backpatch-through: 16, where 2489d76c4 was introduced.
2024-01-22  Fix some typos  (Michael Paquier)
Author: Yongtao Huang Discussion: https://postgr.es/m/CAOe1Go1F99o5JsphtXdDC5bxm7AzetU8q3AxLh4AAVGKu1AzEQ@mail.gmail.com
2024-01-22  Add test module injection_points  (Michael Paquier)
This provides basic coverage for injection points within a single process, while providing some callbacks that can be used for other tests. There are plans to extend this module later with more advanced capabilities for tests. Author: Michael Paquier, with comment fixes from Ashutosh Bapat. Reviewed-by: Ashutosh Bapat, Nathan Bossart, Álvaro Herrera, Dilip Kumar, Amul Sul, Nazir Bilal Yavuz Discussion: https://postgr.es/m/ZTiV8tn_MIb_H2rE@paquier.xyz
2024-01-22  Add backend support for injection points  (Michael Paquier)
Injection points are a new facility that makes it possible for developers to run custom code in pre-defined code paths. Its goal is to provide ways to design and run advanced tests, for cases like:
- Race conditions, where processes need to do actions in a controlled ordered manner.
- Forcing a state, like an ERROR, FATAL or even PANIC for OOM, to force recovery, etc.
- Arbitrary sleeps.
This implements some basics, and there are plans to extend it more in the future depending on what's required. Hence, this commit adds a set of routines in the backend that allows developers to attach, detach and run injection points:
- A code path calling an injection point can be declared with the macro INJECTION_POINT(name).
- InjectionPointAttach() and InjectionPointDetach() respectively attach and detach a callback to/from an injection point.
An injection point name is registered in a shmem hash table with a library name and a function name, which will be used to load the callback attached to an injection point when its code path is run. Injection point names are just strings, so an injection point can be declared and run by out-of-core extensions and modules, with callbacks defined in external libraries. This facility is hidden behind a dedicated switch for ./configure and meson, disabled by default. Note that backends use a local cache to store callbacks already loaded, cleaning up their cache if a callback is found to have been removed, on a best-effort basis. This could be refined further, but what we have here was fine with the tests written while implementing these backend APIs. Author: Michael Paquier, with doc suggestions from Ashutosh Bapat. Reviewed-by: Ashutosh Bapat, Nathan Bossart, Álvaro Herrera, Dilip Kumar, Amul Sul, Nazir Bilal Yavuz Discussion: https://postgr.es/m/ZTiV8tn_MIb_H2rE@paquier.xyz
2024-01-21  Fix table name collision in tests in 0452b461bc  (Alexander Korotkov)
2024-01-21  Explore alternative orderings of group-by pathkeys during optimization.  (Alexander Korotkov)
When evaluating a query with a multi-column GROUP BY clause, we can minimize sort operations or avoid them if we synchronize the order of GROUP BY clauses with the ORDER BY sort clause or sort order, which comes from the underlying query tree. Grouping does not imply any ordering, so we can compare the keys in arbitrary order, and a Hash Agg leverages this. But for Group Agg, we simply compared keys in the order specified in the query. This commit explores alternative ordering of the keys, trying to find a cheaper one. The ordering of group keys may interact with other parts of the query, some of which may not be known while planning the grouping. For example, there may be an explicit ORDER BY clause or some other ordering-dependent operation higher up in the query, and using the same ordering may allow using either incremental sort or even eliminating the sort entirely. The patch always keeps the ordering specified in the query, assuming the user might have additional insights. This introduces a new GUC enable_group_by_reordering so that the optimization may be disabled if needed. Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru Author: Andrei Lepikhov, Teodor Sigaev Reviewed-by: Tomas Vondra, Claudio Freire, Gavin Flower, Dmitry Dolgov Reviewed-by: Robert Haas, Pavel Borisov, David Rowley, Zhihong Yu Reviewed-by: Tom Lane, Alexander Korotkov, Richard Guo, Alena Rybakina
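A hypothetical SQL sketch of the effect (table, index, and data are made up; enable_group_by_reordering is the GUC named in this commit):

    CREATE TABLE sales (a int, b int, amount numeric);
    CREATE INDEX ON sales (a, b);

    SET enable_group_by_reordering = on;  -- the GUC introduced here

    -- The GROUP BY keys are written as (b, a), but the planner may now
    -- reorder them to (a, b) so a Group Aggregate can consume the index
    -- order without an extra sort.
    EXPLAIN (COSTS OFF)
    SELECT b, a, sum(amount) FROM sales GROUP BY b, a;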
2024-01-21  Generalize the common code of adding sort before processing of grouping  (Alexander Korotkov)
Extract the repetitive code pattern into a new function make_ordered_path(). Discussion: https://postgr.es/m/CAPpHfdtzaVa7S4onKy3YvttF2rrH5hQNHx9HtcSTLbpjx%2BMJ%2Bw%40mail.gmail.com Author: Andrei Lepikhov
2024-01-20  Add hint about not qualifying UPDATE...SET target with relation name.  (Tom Lane)
Target columns in UPDATE ... SET must not be qualified with the target table; we disallow this because it'd create ambiguity about which name is the column name in case of field-qualified names. However, newbies have been seen to expect that they could qualify a target name just like other names. The error message when they do is confusing: "column "foo" of relation "foo" does not exist". To improve matters, issue a HINT if the invalid name is qualified and matches the relation's alias. James Coleman (editorialized a bit by me) Discussion: https://postgr.es/m/CAAaqYe8S2Qa060UV-YF5GoSd5PkEhLV94x-fEi3=TOtpaXCV+w@mail.gmail.com
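A small SQL sketch of the mistake in question (table name is hypothetical; the exact HINT wording may differ):

    CREATE TABLE foo (bar int);

    -- The newbie mistake: qualifying the target column with the table name.
    UPDATE foo SET foo.bar = 1;
    -- ERROR:  column "foo" of relation "foo" does not exist
    -- (now accompanied by a HINT pointing out that the target column
    --  should not be qualified with the relation name)

    -- The accepted spelling:
    UPDATE foo SET bar = 1;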
2024-01-20  Add planner support functions for range operators <@ and @>.  (Tom Lane)
These support functions will transform expressions with constant range values into direct comparisons on the range bound values, which are frequently better-optimizable. The transformation is skipped however if it would require double evaluation of a volatile or expensive element expression. Along the way, add the range opfamily OID to range typcache entries, since load_rangetype_info has to compute that anyway and it seems silly to duplicate the work later. Kim Johan Andersson and Jian He, reviewed by Laurenz Albe Discussion: https://postgr.es/m/94f64d1f-b8c0-b0c5-98bc-0793a34e0851@kimmet.dk
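A hypothetical sketch of the kind of query that benefits (table and index are made up; the rewrite happens internally, the query text stays the same):

    CREATE TABLE events (id int, ts timestamptz);
    CREATE INDEX ON events (ts);

    -- With a constant range, "ts <@ range" can be transformed into plain
    -- bound comparisons such as "ts >= '2024-01-01' AND ts < '2024-02-01'",
    -- which are easier to match to the btree index and to estimate.
    EXPLAIN (COSTS OFF)
    SELECT * FROM events
    WHERE ts <@ tstzrange('2024-01-01', '2024-02-01');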
2024-01-19  Introduce the dynamic shared memory registry.  (Nathan Bossart)
Presently, the most straightforward way for a shared library to use shared memory is to request it at server startup via a shmem_request_hook, which requires specifying the library in shared_preload_libraries. Alternatively, the library can create a dynamic shared memory (DSM) segment, but absent a shared location to store the segment's handle, other backends cannot use it. This commit introduces a registry for DSM segments so that these other backends can look up existing segments with a library-specified string. This allows libraries to easily use shared memory without needing to request it at server startup. The registry is accessed via the new GetNamedDSMSegment() function. This function handles allocating the segment and initializing it via a provided callback. If another backend already created and initialized the segment, it simply attaches the segment. GetNamedDSMSegment() locks the registry appropriately to ensure that only one backend initializes the segment and that all other backends just attach it. The registry itself is comprised of a dshash table that stores the DSM segment handles keyed by a library-specified string. Reviewed-by: Michael Paquier, Andrei Lepikhov, Nikita Malakhov, Robert Haas, Bharath Rupireddy, Zhang Mingli, Amul Sul Discussion: https://postgr.es/m/20231205034647.GA2705267%40nathanxps13
2024-01-19  Fix name collision in c64086b79dba  (Alexander Korotkov)
Reported-by: Erik Rijkers, Tom Lane Discussion: https://postgr.es/m/E1rQqeS-002A0s-Qm%40gemulon.postgresql.org
2024-01-19  Reorder actions in ProcArrayApplyRecoveryInfo()  (Alexander Korotkov)
Since 5a1dfde8334b, 2PC filenames use FullTransactionId. Thus, StandbyTransactionIdIsPrepared() needs to convert a TransactionId to a FullTransactionId using TransamVariables->nextXid. However, ProcArrayApplyRecoveryInfo() first releases locks using StandbyTransactionIdIsPrepared(), and only then advances TransamVariables->nextXid. This sequence of actions could cause errors. This commit makes ProcArrayApplyRecoveryInfo() advance TransamVariables->nextXid before releasing the locks. Reported-by: Thomas Munro, Michael Paquier Discussion: https://postgr.es/m/CA%2BhUKGLj_ve1_pNAnxwYU9rDcv7GOhsYXJt7jMKSA%3D5-6ss-Cw%40mail.gmail.com Discussion: https://postgr.es/m/Zadp9f4E1MYvMJqe%40paquier.xyz
2024-01-19  Add stratnum GiST support function  (Peter Eisentraut)
This is support function 12 for the GiST AM and translates "well-known" RT*StrategyNumber values into whatever strategy number is used by the opclass (since no particular numbers are actually required). We will use this to support temporal PRIMARY KEY/UNIQUE/FOREIGN KEY/FOR PORTION OF functionality. This commit adds two implementations, one for internal GiST opclasses (just an identity function) and another for btree_gist opclasses. It updates btree_gist from 1.7 to 1.8, adding the support function for all its opclasses. Author: Paul A. Jungwirth <pj@illuminatedcomputing.com> Reviewed-by: Peter Eisentraut <peter@eisentraut.org> Reviewed-by: jian he <jian.universality@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
2024-01-19  Rename COPY option from SAVE_ERROR_TO to ON_ERROR  (Alexander Korotkov)
The option values are now "stop" (default) and "ignore". Future options could be "file 'filename.log'" and "table 'tablename'". Discussion: https://postgr.es/m/20240117.164859.2242646601795501168.horikyota.ntt%40gmail.com Author: Jian He Reviewed-by: Atsushi Torikoshi
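A hypothetical usage sketch with the renamed option (table and file path are made up; ON_ERROR ignore skips rows whose values fail to convert to the target column types instead of aborting the whole COPY):

    CREATE TABLE numbers (n int);

    -- Rows with values that cannot be converted to int are skipped.
    COPY numbers FROM '/tmp/numbers.csv' WITH (FORMAT csv, ON_ERROR ignore);

    -- The default behaves like ON_ERROR stop: the first bad value errors out.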
2024-01-19  Fixed misspelled byteswap function for big endian machines  (John Naylor)
Per buildfarm members lora and mamba
2024-01-19  Add optimized C string hashing  (John Naylor)
Given an already-initialized hash state and a NUL-terminated string, accumulate the hash of the string into the hash state and return the length for the caller to (optionally) save for the finalizer. This avoids a strlen call. If the string pointer is aligned, we can use a word-at-a-time algorithm for NUL lookahead. The aligned case is only used on 64-bit platforms, since it's not worth the extra complexity for 32-bit. Handling the tail of the string after finishing the word-wise loop was inspired by NetBSD's strlen(), but no code was taken since that is written in assembly language. As demonstration, use this in the search path cache. This brings the general case performance closer to the special case optimization done in commit a86c61c9ee. There are other places that could benefit, but that is left for future work. Jeff Davis and John Naylor Reviewed by Heikki Linnakangas, Jian He, Junwang Zhao Discussion: https://postgr.es/m/3820f030fd008ff14134b3e9ce5cc6dd623ed479.camel%40j-davis.com Discussion: https://postgr.es/m/b40292c99e623defe5eadedab1d438cf51a4107c.camel%40j-davis.com
2024-01-19  Add inline incremental hash functions for in-memory use  (John Naylor)
It can be useful for a hash function to expose separate initialization, accumulation, and finalization steps. In particular, this is useful for building inline hash functions for simplehash. Instead of trying to whack around hash_bytes while maintaining its current behavior on all platforms, we base this work on fasthash (MIT licensed) which is simple, faster than hash_bytes for inputs over 12 bytes long, and also passes the hash function testing suite SMHasher. The fasthash functions have been reimplemented using our added-on incremental interface to validate that this method will still give the same answer, provided we have the input length ahead of time. This functionality lives in a new header hashfn_unstable.h. The name implies we have the freedom to change things across versions that would be unacceptable for our other hash functions that are used for e.g. hash indexes and hash partitioning. As such, these should only be used for in-memory data structures like hash tables. There is also no guarantee of being independent of endianness or pointer size. As demonstration, use fasthash for pgstat_hash_hash_key. Previously this called the 32-bit murmur finalizer on the three elements, then joined them with hash_combine(). The new function is simpler, faster and takes up less binary space. While the collision and bias behavior were almost certainly fine with the previous coding, now we have objective confidence of that. There are other places that could benefit from this, but that is left for future work. Reviewed by Jeff Davis, Heikki Linnakangas, Jian He, Junwang Zhao Credit to Andres Freund for the idea Discussion: https://postgr.es/m/20231122223432.lywt4yz2bn7tlp27%40awork3.anarazel.de
2024-01-19  psql: Add ignore_slash_options in bind's inactive branch  (Michael Paquier)
All commands accepting arguments, handling them with OT_NORMAL, OT_SQLID or OT_SQLIDHACK, should call ignore_slash_options() in the inactive branch to scan and discard extra arguments. All the backslash commands that handle arguments do so, except \bind. This commit adds the missing ignore_slash_options() call to \bind's inactive branch. This inconsistency is a logic bug; however, the behavior happens to be unchanged as any extra arguments are discarded later in HandleSlashCmds(), so no backpatch is done. While on it, this adds \bind to the list of backslash commands where inactive \if branches are checked in the tests for psql. Reported-by: Jelte Fennema-Nio Author: Anthonin Bonnefoy Discussion: https://postgr.es/m/CAGECzQR1+udGKz+FbHiCQ7CWDiF1fCGi2xYuvQUODdMAfJbaLA@mail.gmail.com
2024-01-19  Fix incorrect placeholder in walreceiver.c  (Michael Paquier)
Author: Yongtao Huang Discussion: https://postgr.es/m/CAOe1Go3H7CgrSceO+HBhnoptk-mJhii-YT8D19CikKintjwumQ@mail.gmail.com
2024-01-18  Improve some documentation about the bootstrap superuser.  (Nathan Bossart)
This commit adds some notes about the inability to remove superuser privileges from the bootstrap superuser. This has been blocked since commit e530be2c5c, but it wasn't intended to be a supported feature before that, either. In passing, change "bootstrap user" to "bootstrap superuser" in a couple of places. Author: Yurii Rashkovskii Reviewed-by: Vignesh C, David G. Johnston Discussion: https://postgr.es/m/CA%2BRLCQzSx_eTC2Fch0EzeNHD3zFUcPvBYOoB%2BpPScFLch1DEQw%40mail.gmail.com
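A quick SQL illustration of the documented restriction (this assumes the bootstrap superuser is the default "postgres" role):

    -- Rejected since e530be2c5c: the bootstrap superuser's SUPERUSER
    -- attribute cannot be removed, so this statement fails with an error.
    ALTER ROLE postgres NOSUPERUSER;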
2024-01-18  Fix buildfarm error from commit 5c31669058.  (Jeff Davis)
Skip test when not using unix domain sockets. Discussion: https://postgr.es/m/CALDaNm29-8OozsBWo9H6DN_Tb_3yA1QjRJput-KhaN8ncDJtJA@mail.gmail.com Backpatch-through: 16
2024-01-19  Fix broken Bitmapset optimization in DiscreteKnapsack()  (David Rowley)
Some code in DiscreteKnapsack() attempted to zero all words in a Bitmapset by performing bms_del_members() to delete all the members from itself before replacing those members with members from another set. When that code was written, this was a valid way to manipulate the set in such a way to save from palloc having to be called to allocate a new Bitmapset. However, 00b41463c modified Bitmapsets so that an empty set is *always* represented as NULL and this breaks the optimization as the Bitmapset code will always pfree the memory when the set becomes empty. Since DiscreteKnapsack() has been coded to avoid as many unneeded allocations as possible, it seems risky to not fix this. Here we add bms_replace_members() to effectively perform an in-place copy of another set, reusing the memory of the existing set, when possible. This got broken in v16, but no backpatch for now as there've been no complaints. Reviewed-by: Richard Guo Discussion: https://postgr.es/m/CAApHDvoTCBkBU2PJghNOFUiO0q=QP4WAWHi5sJP6_4=b2WodrA@mail.gmail.com
2024-01-18  Fix plpgsql to allow new-style SQL CREATE FUNCTION as a SQL command.  (Tom Lane)
plpgsql fails on new-style CREATE FUNCTION/PROCEDURE commands within a routine or DO block, because make_execsql_stmt believes that a semicolon token always terminates a SQL command. Now, that's actually been wrong since the day it was written, because CREATE RULE has long allowed multiple rule actions separated by semicolons. But there are few enough people using multi-action rules that there was never an attempt to fix it. New-style SQL functions, though, are popular. psql has this same problem of "does this semicolon really terminate the command?". It deals with CREATE RULE by counting parenthesis nesting depth: a semicolon within parens doesn't end a command. Commits e717a9a18 and 029c5ac03 created a similar heuristic to count matching BEGIN/END pairs (but only within CREATEs, so as not to be fooled by plain BEGIN). That's survived several releases now without trouble reports, so let's just absorb those heuristics into plpgsql. Per report from Samuel Dussault. Back-patch to v14 where new-style SQL function syntax came in. Discussion: https://postgr.es/m/YT2PR01MB88552C3E9AD40A6C038774A781722@YT2PR01MB8855.CANPRD01.PROD.OUTLOOK.COM
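A sketch of the case that used to trip up plpgsql (the function name and body are made up): a new-style SQL function created from inside a DO block, where the semicolons inside BEGIN ATOMIC ... END no longer end the statement prematurely.

    DO $$
    BEGIN
      CREATE FUNCTION add_one(i int) RETURNS int
        LANGUAGE SQL
        BEGIN ATOMIC
          SELECT i + 1;
        END;
    END
    $$;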
2024-01-18  Remove LVPagePruneState.  (Robert Haas)
Commit cb970240f13df2b63f0410f81f452179a2b78d6f moved some code from lazy_scan_heap() to lazy_scan_prune(), and now some things that used to need to be passed back and forth are completely local to lazy_scan_prune(). Hence, this struct is mostly obsolete. The only thing that still needs to be passed back to the caller is has_lpdead_items, and that's also passed back by lazy_scan_noprune(), so do it the same way in both cases. Melanie Plageman, reviewed and slightly revised by me. Discussion: http://postgr.es/m/CAAKRu_aM=OL85AOr-80wBsCr=vLVzhnaavqkVPRkFBtD0zsuLQ@mail.gmail.com
2024-01-18  Move VM update code from lazy_scan_heap() to lazy_scan_prune().  (Robert Haas)
Most of the output parameters of lazy_scan_prune() were being used to update the VM in lazy_scan_heap(). Moving that code into lazy_scan_prune() simplifies lazy_scan_heap() and requires less communication between the two. This change permits some further code simplification, but that is left for a separate commit. Melanie Plageman, reviewed by me. Discussion: http://postgr.es/m/CAAKRu_aM=OL85AOr-80wBsCr=vLVzhnaavqkVPRkFBtD0zsuLQ@mail.gmail.com
2024-01-18  Optimize vacuuming of relations with no indexes.  (Robert Haas)
If there are no indexes on a relation, items can be marked LP_UNUSED instead of LP_DEAD when pruning. This significantly reduces WAL volume, since we no longer need to emit one WAL record for pruning and a second to change the LP_DEAD line pointers thus created to LP_UNUSED. Melanie Plageman, reviewed by Andres Freund, Peter Geoghegan, and me Discussion: https://postgr.es/m/CAAKRu_bgvb_k0gKOXWzNKWHt560R0smrGe3E8zewKPs8fiMKkw%40mail.gmail.com
2024-01-18  Error message capitalisation  (Peter Eisentraut)
per style guidelines Author: Peter Smith <peter.b.smith@fujitsu.com> Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPtzstExQ4%3DvFH%2BWzZ4g4xEx2JA%3DqxussxOdxVEwJce6bw%40mail.gmail.com
2024-01-18  Fix an issue in PostgreSQL::Test::Cluster::psql()  (Peter Eisentraut)
Due to commit c5385929, which made all Perl warnings fatal, use of PostgreSQL::Test::Cluster::psql() and safe_psql() with a timeout started to fail with the following error: Use of uninitialized value $ret in bitwise and (&) at ..src/test/perl/PostgreSQL/Test/Cluster.pm line 2015. Fix that by placing the $ret conversion code in psql() inside an if (defined $ret) block. With this change, the behavior of psql() becomes the same as before, that is, the whole function returns undef on timeout, which is usefully different from returning 0. Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
2024-01-18  Improve handling of dropped partitioned indexes for REINDEX INDEX  (Michael Paquier)
A REINDEX INDEX done on a partitioned index builds a list of the indexes to work on before processing its partitions in individual transactions. When combined with a DROP of the partitioned index, there was a window where it was possible to see an unexpected "could not open relation with OID" failure, i.e. a relation lookup error. The code was robust enough to handle the case where the parent relation is missing, but not the case where an index has gone missing. This is similar to 1d65416661bb. Support for REINDEX on partitioned relations was introduced in a6642b3ae060, so backpatch down to 14. Author: Fei Changhong Discussion: https://postgr.es/m/tencent_6A52106095ACDE55333E3AD33F304C0C3909@qq.com Backpatch-through: 14
2024-01-18  Add try_index_open(), conditional variant of index_open()  (Michael Paquier)
try_index_open() is able to open an index if its relkind fits, except that it returns NULL instead of generating an error if the relation does not exist. This new routine will be used by an upcoming patch to make REINDEX on partitioned relations more robust when an index in a partition tree is dropped. Extracted from a larger patch by the same author. Author: Fei Changhong Discussion: https://postgr.es/m/tencent_6A52106095ACDE55333E3AD33F304C0C3909@qq.com Backpatch-through: 14
2024-01-17  Remove the flaky check in event_trigger_login regression test  (Alexander Korotkov)
The query checks that pg_database.dathasloginevt is unset on connect when there are no event triggers. However, unsetting this flag is implemented in a non-blocking way, so a concurrent autovacuum connection breaks this check. It doesn't seem we can do much with this, at least within a regression test. So, remove it. Reported-by: Alexander Lakhin Discussion: https://postgr.es/m/44807d19-81a6-3884-3e0f-22dd99aac3ed%40gmail.com
2024-01-17  Fix spelling in notice  (Alexander Korotkov)
Reported-by: Atsushi Torikoshi Discussion: https://postgr.es/m/762d7dd4d5aa9e5ecffec2ae6a255a28%40oss.nttdata.com
2024-01-17  Fix incorrect comment on how BackendStatusArray is indexed  (Heikki Linnakangas)
The comment was copy-pasted from the call to ProcSignalInit() in AuxiliaryProcessMain(), which uses a similar scheme of having reserved slots for aux processes after MaxBackends slots for backends. However, ProcSignalInit() indexing starts from 1, whereas BackendStatusArray starts from 0. The code is correct, but the comment was wrong. Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi Backpatch-through: v14
2024-01-17  Close socket in case of errors in setting non-blocking  (Daniel Gustafsson)
If configuring the newly created socket as non-blocking fails, we error out and return INVALID_SOCKET, but the socket that had been created wasn't closed. Fix by issuing closesocket() in the error path. Backpatch to all supported branches. Author: Ranier Vilela <ranier.vf@gmail.com> Discussion: https://postgr.es/m/CAEudQApmU5CrKefH85VbNYE2y8H=-qqEJbg6RAPU65+vCe+89A@mail.gmail.com Backpatch-through: v12
2024-01-17  Fix description of DecodeInsert() in decode.c  (Michael Paquier)
This incorrectly referred to deletes. Author: Yongtao Huang Reviewed-by: Richard Guo Discussion: https://postgr.es/m/CAOe1Go0Czgvo9eiDqeFpaABwJu=gBK6qjrYzZGZLn=tKDX8AUw@mail.gmail.com
2024-01-17  Remove some comments related to pqPipelineSync() and PQsendPipelineSync()  (Michael Paquier)
These comments explained how these functions behave internally, and the equivalent is described in the documentation section dedicated to the pipeline mode of libpq. Let's remove these comments, getting rid of the duplication with the docs. Reported-by: Álvaro Herrera Reviewed-by: Álvaro Herrera Discussion: https://postgr.es/m/202401150949.wq7ynlmqxphy@alvherre.pgsql