path: root/src
2024-01-18  Move VM update code from lazy_scan_heap() to lazy_scan_prune().  (Robert Haas)

Most of the output parameters of lazy_scan_prune() were being used to update the VM in lazy_scan_heap(). Moving that code into lazy_scan_prune() simplifies lazy_scan_heap() and requires less communication between the two. This change permits some further code simplification, but that is left for a separate commit.

Melanie Plageman, reviewed by me.

Discussion: http://postgr.es/m/CAAKRu_aM=OL85AOr-80wBsCr=vLVzhnaavqkVPRkFBtD0zsuLQ@mail.gmail.com
2024-01-18  Optimize vacuuming of relations with no indexes.  (Robert Haas)

If there are no indexes on a relation, items can be marked LP_UNUSED instead of LP_DEAD when pruning. This significantly reduces WAL volume, since we no longer need to emit one WAL record for pruning and a second to change the LP_DEAD line pointers thus created to LP_UNUSED.

Melanie Plageman, reviewed by Andres Freund, Peter Geoghegan, and me

Discussion: https://postgr.es/m/CAAKRu_bgvb_k0gKOXWzNKWHt560R0smrGe3E8zewKPs8fiMKkw%40mail.gmail.com
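A minimal sketch of the pruning decision described above, assuming a vacuum state with an index count; the field and variable names are illustrative, not the committed lazy_scan_prune() code:

    /* Hedged sketch only: with no indexes, a dead item can be reclaimed
     * immediately instead of being left LP_DEAD for a later pass. */
    if (vacrel->nindexes == 0)
        ItemIdSetUnused(itemid);    /* line pointer is free for reuse at once */
    else
        ItemIdSetDead(itemid);      /* must wait until index entries are removed */

Because nothing is left in the LP_DEAD state, no second WAL record is needed later to convert LP_DEAD line pointers to LP_UNUSED.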
2024-01-18  Error message capitalisation  (Peter Eisentraut)

per style guidelines

Author: Peter Smith <peter.b.smith@fujitsu.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPtzstExQ4%3DvFH%2BWzZ4g4xEx2JA%3DqxussxOdxVEwJce6bw%40mail.gmail.com
2024-01-18  Fix an issue in PostgreSQL::Test::Cluster::psql()  (Peter Eisentraut)

Due to commit c5385929, which made all Perl warnings fatal, use of PostgreSQL::Test::Cluster::psql() and safe_psql() with a timeout started to fail with the following error:

Use of uninitialized value $ret in bitwise and (&) at ..src/test/perl/PostgreSQL/Test/Cluster.pm line 2015.

Fix that by placing the $ret conversion code in psql() inside an if (defined $ret) block. With this change, the behavior of psql() becomes the same as before, that is, the whole function returns undef on timeout, which is usefully different from returning 0.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
2024-01-18  Improve handling of dropped partitioned indexes for REINDEX INDEX  (Michael Paquier)

A REINDEX INDEX done on a partitioned index builds a list of the indexes to work on before processing its partitions in individual transactions. When combined with a DROP of the partitioned index, there was a window where it was possible to see some unexpected "could not open relation with OID" errors, synonymous with a relation lookup error. The code was robust enough to handle the case where the parent relation is missing, but not the case where an index has gone missing.

This is similar to 1d65416661bb. Support for REINDEX on partitioned relations was introduced in a6642b3ae060, so backpatch down to 14.

Author: Fei Changhong
Discussion: https://postgr.es/m/tencent_6A52106095ACDE55333E3AD33F304C0C3909@qq.com
Backpatch-through: 14
2024-01-18  Add try_index_open(), conditional variant of index_open()  (Michael Paquier)

try_index_open() is able to open an index if its relkind fits, except that it returns NULL instead of generating an error if the relation does not exist. This new routine will be used by an upcoming patch to make REINDEX on partitioned relations more robust when an index in a partition tree is dropped.

Extracted from a larger patch by the same author.

Author: Fei Changhong
Discussion: https://postgr.es/m/tencent_6A52106095ACDE55333E3AD33F304C0C3909@qq.com
Backpatch-through: 14
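A hedged sketch of how a caller might use the new routine inside the backend; the surrounding loop, variable names and lock level are assumptions, not code from the patch:

    /* Sketch only: tolerate an index that was dropped concurrently instead
     * of erroring out, since try_index_open() returns NULL in that case. */
    Relation    irel = try_index_open(indexOid, AccessShareLock);

    if (irel == NULL)
        continue;               /* index vanished; nothing to process here */

    /* ... work on the opened index ... */
    index_close(irel, AccessShareLock);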
2024-01-17  Remove the flaky check in event_trigger_login regression test  (Alexander Korotkov)

The query checks that pg_database.dathasloginevt is unset on connect when there are no event triggers. However, unsetting this flag is implemented in a non-blocking way, so a concurrent autovacuum connection breaks this check. It doesn't seem we can do much with this, at least within a regression test. So, remove it.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/44807d19-81a6-3884-3e0f-22dd99aac3ed%40gmail.com
2024-01-17  Fix spelling in notice  (Alexander Korotkov)

Reported-by: Atsushi Torikoshi
Discussion: https://postgr.es/m/762d7dd4d5aa9e5ecffec2ae6a255a28%40oss.nttdata.com
2024-01-17  Fix incorrect comment on how BackendStatusArray is indexed  (Heikki Linnakangas)

The comment was copy-pasted from the call to ProcSignalInit() in AuxiliaryProcessMain(), which uses a similar scheme of having reserved slots for aux processes after MaxBackends slots for backends. However, ProcSignalInit() indexing starts from 1, whereas BackendStatusArray starts from 0. The code is correct, but the comment was wrong.

Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
Backpatch-through: v14
2024-01-17  Close socket in case of errors in setting non-blocking  (Daniel Gustafsson)

If configuring the newly created socket non-blocking fails, we error out and return INVALID_SOCKET, but the socket that had been created wasn't closed. Fix by issuing closesocket() in the error path.

Backpatch to all supported branches.

Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQApmU5CrKefH85VbNYE2y8H=-qqEJbg6RAPU65+vCe+89A@mail.gmail.com
Backpatch-through: v12
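A self-contained sketch of the clean-up-on-error pattern, written with the POSIX fcntl()/close() equivalents rather than the Windows closesocket() path the commit actually touches:

    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Create a TCP socket and make it non-blocking; on any failure, close
     * the descriptor before returning so it does not leak. Illustrative only. */
    static int
    create_nonblocking_socket(void)
    {
        int     sock = socket(AF_INET, SOCK_STREAM, 0);
        int     flags;

        if (sock < 0)
            return -1;

        flags = fcntl(sock, F_GETFL, 0);
        if (flags < 0 || fcntl(sock, F_SETFL, flags | O_NONBLOCK) < 0)
        {
            close(sock);        /* the fix: don't leak the socket on error */
            return -1;
        }
        return sock;
    }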
2024-01-17  Fix description of DecodeInsert() in decode.c  (Michael Paquier)

This incorrectly referred to deletes.

Author: Yongtao Huang
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAOe1Go0Czgvo9eiDqeFpaABwJu=gBK6qjrYzZGZLn=tKDX8AUw@mail.gmail.com
2024-01-17  Remove some comments related to pqPipelineSync() and PQsendPipelineSync()  (Michael Paquier)

These comments explained how these functions behave internally, and the equivalent is described in the documentation section dedicated to the pipeline mode of libpq. Let's remove these comments, getting rid of the duplication with the docs.

Reported-by: Álvaro Herrera
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/202401150949.wq7ynlmqxphy@alvherre.pgsql
2024-01-17  Add support for parsing of large XML data (>= 10MB)  (Michael Paquier)

This commit adds XML_PARSE_HUGE to the libxml2 functions used in core for the parsing of XML objects, raising the original limit of 10MB supported by libxml2. In most code paths of upstream, XML_MAX_TEXT_LENGTH (10^7) is the historical limit that gets upgraded to XML_MAX_HUGE_LENGTH (10^9) once XML_PARSE_HUGE is given to the parser calls. These are still limited by any palloc() calls for text, up to 1GB.

This makes it possible to handle XML objects larger than 10MB within the backend in general, along with a higher depth limit. This change affects the contrib module xml2, the xml data type and SQL/XML.

Author: Dmitry Koval
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18274-98d16bc03520665f@postgresql.org
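A standalone illustration of the libxml2 option involved, assuming a document buffer larger than the default 10MB text-node limit; this is plain libxml2 usage, not the backend code the commit changes:

    #include <libxml/parser.h>

    /* Parse an in-memory XML buffer; XML_PARSE_HUGE lifts libxml2's
     * XML_MAX_TEXT_LENGTH (10^7) limit to XML_MAX_HUGE_LENGTH (10^9). */
    static xmlDocPtr
    parse_big_xml(const char *buf, int len)
    {
        xmlDocPtr   doc = xmlReadMemory(buf, len, "noname.xml", NULL,
                                        XML_PARSE_HUGE | XML_PARSE_NOERROR);

        return doc;             /* NULL on failure; caller must xmlFreeDoc() */
    }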
2024-01-17  Fix format specifier for NOTICE in copyfrom.c  (Alexander Korotkov)

It's incorrect to use %lz for 64-bit numbers on a 32-bit machine.
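A small self-contained reminder of the portable ways to print 64-bit values in C regardless of pointer size; this is illustrative, not the exact specifier the commit ends up using:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t    skipped = 12345678901ULL;

        /* "%lu" expects unsigned long, which is often 32-bit; instead use the
         * PRIu64 macro, or cast to a type whose width is known. */
        printf("skipped %" PRIu64 " rows\n", skipped);
        printf("skipped %llu rows\n", (unsigned long long) skipped);
        return 0;
    }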
2024-01-16  Add new COPY option SAVE_ERROR_TO  (Alexander Korotkov)

Currently, when source data contains unexpected data regarding data type or range, the entire COPY fails. However, in some cases, such data can be ignored and just copying the normal data is preferable.

This commit adds a new option SAVE_ERROR_TO, which specifies where to save the error information. When this option is specified, COPY skips soft errors and continues copying.

Currently, SAVE_ERROR_TO only supports "none". This indicates that error information is not saved and COPY just skips the unexpected data and continues running. Later work is expected to add more choices, such as 'log' and 'table'.

Author: Damir Belyalov, Atsushi Torikoshi, Alex Shulgin, Jian He
Discussion: https://postgr.es/m/87k31ftoe0.fsf_-_%40commandprompt.com
Reviewed-by: Pavel Stehule, Andres Freund, Tom Lane, Daniel Gustafsson,
Reviewed-by: Alena Rybakina, Andy Fan, Andrei Lepikhov, Masahiko Sawada
Reviewed-by: Vignesh C, Atsushi Torikoshi
2024-01-17  Fix REALLOCATE_BITMAPSETS code  (David Rowley)

7d58f2342 added a compile-time option to have bitmapset.c reallocate the set before returning when a set is modified. That commit failed to do its job in various cases and returned the input set when it shouldn't have. Here we fix those missing cases.

This commit also adds some documentation about what REALLOCATE_BITMAPSETS is for. This is important as future functions that go inside bitmapset.c need to know if they need to do anything special when this compile-time option is defined.

Also, between 71a3e8c43 and 7d58f2342 some Asserts seem to have become duplicated. Tidy these up. Rather than having the Assert check each aspect of what makes a set invalid, here we introduce a helper function which returns false when a set is invalid and have the Asserts use this instead.

Also, make a pass on improving the comments in bitmapset.c. Various comments mentioned the input sets being "recycled". This could be interpreted to mean that the output set will always point to the same memory as the given input parameter. Here we try to make it clear that this must not be relied upon and that callers must ensure that all references to a given set are updated on each modification.

In passing, improve the comments for bms_union(), bms_intersect() and bms_difference() to detail what they do. I (David) have too often had to remind myself by reading the code each time to find out if I need, for example, to use bms_union() or bms_join(). I also removed some low-value comments that were trying to convey information about "these operations" without mentioning which operations they were talking about. It seems better to document these things in the function header comment instead.

Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAMbWs4-djy9qYux2gZrtmxA0StrYXJjvB-oqLxn-d7J88t=PQQ@mail.gmail.com
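A hedged backend-style sketch of the calling convention the new comments stress: treat every modifying bitmapset call as if it may return a different pointer, and never keep other references to the old set. Variable names here are illustrative:

    /* Sketch only: always assign the result back. With REALLOCATE_BITMAPSETS
     * defined, the returned set is a fresh allocation and the old pointer is
     * no longer valid. */
    Bitmapset  *relids = NULL;

    relids = bms_add_member(relids, 42);
    relids = bms_add_member(relids, 7);

    /* Wrong: keeping a second reference across a modification.
     * Bitmapset *alias = relids;
     * relids = bms_add_member(relids, 99);    alias may now be dangling */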
2024-01-16  Be more consistent about whether to update the FSM while vacuuming.  (Robert Haas)

Previously, when lazy_scan_noprune() was called and returned true, we would update the FSM immediately if the relation had no indexes or if the page contained no dead items. On the other hand, when lazy_scan_prune() was called, we would update the FSM if either of those things was true or if index vacuuming was disabled. Eliminate that behavioral difference by considering vacrel->do_index_vacuuming in both cases.

Also, make lazy_scan_heap() responsible for deciding whether to update the FSM, instead of doing it inside lazy_scan_noprune(). This is more consistent with the lazy_scan_prune() case. lazy_scan_noprune() still needs an output parameter for whether there are LP_DEAD items on the page, but the real decision-making now happens in the caller.

Patch by me, reviewed by Peter Geoghegan and Melanie Plageman.

Discussion: http://postgr.es/m/CA+TgmoaOzvN1TcHd9iej=PR3fY40En1USxzOnXSR2CxCLaRM0g@mail.gmail.com
2024-01-16  Support identity columns in partitioned tables  (Peter Eisentraut)

Previously, identity columns were disallowed on partitioned tables. (The reason was mainly that no one had gotten around to working through all the details to make it work.) This makes it work now.

Some details on the behavior:

* A newly created partition inherits identity property

  The partitions of a partitioned table are an integral part of the partitioned table. A partition inherits identity columns from the partitioned table. An identity column of a partition shares the identity space with the corresponding column of the partitioned table. In other words, the same identity column across all partitions of a partitioned table shares the same identity space. This is effected by sharing the same underlying sequence.

  When INSERTing directly into a partition, the sequence associated with the topmost partitioned table is used to calculate the value of the corresponding identity column.

  In regular inheritance, identity columns and their properties in a child table are independent of those in its parent tables. A child table does not inherit identity columns or their properties automatically from the parent. (This is unchanged.)

* Attached partition inherits identity column

  A table being attached as a partition inherits the identity property from the partitioned table. This should be fine since we expect that the partition table's column has the same type as the partitioned table's corresponding column. If the table being attached is a partitioned table, the identity properties are propagated down its partition hierarchy.

  An identity column in the partitioned table is also marked as NOT NULL. The corresponding column in the partition needs to be marked as NOT NULL for the attach to succeed.

* Drop identity property when detaching partition

  A partition's identity column shares the identity space (i.e. underlying sequence) with the corresponding column of the partitioned table. If a partition is detached it can no longer share the identity space as before. Hence the identity columns of the partition being detached lose their identity property.

  When the identity of a column of a regular table is dropped it retains the NOT NULL constraint that came with the identity property. Similarly the columns of the partition being detached retain the NOT NULL constraints that came with the identity property, even though the identity property itself is lost.

  The sequence associated with the identity property is linked to the partitioned table (and not the partition being detached). That sequence is not dropped as part of the detach operation.

* Partitions with their own identity columns are not allowed.

* The usual ALTER operations (add identity column, add identity property to existing column, alter properties of an identity column, drop identity property) are supported for partitioned tables. Changing a column only in a partitioned table or a partition is not allowed; the change needs to be applied to the whole partition hierarchy.

Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/CAExHW5uOykuTC+C6R1yDSp=o8Q83jr8xJdZxgPkxfZ1Ue5RRGg@mail.gmail.com
2024-01-16  Add missing PGDLLIMPORT markings  (Heikki Linnakangas)

Since commit 8ec569479f, we have a policy of marking all backend variables with PGDLLIMPORT.

Reported-by: Anton A. Melnikov
Discussion: https://www.postgresql.org/message-id/0b78546c-ffef-4cd9-9ba1-d1e6aab88cea@postgrespro.ru
2024-01-16  struct XmlTableRoutine: use C99 designated initializers  (Alvaro Herrera)
As in c27f8621eed et al. Not as critical as other cases we've handled, but I figure if we're going to add JsonbTableRoutine using TableFuncRoutine, this makes it easier to jump around the code.
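A self-contained reminder of what a C99 designated initializer looks like compared with a positional one; this uses a generic struct, not XmlTableRoutine's actual fields:

    typedef struct DemoRoutine
    {
        void    (*InitOpaque) (int natts);
        void    (*SetDoc) (const char *doc);
        void    (*DestroyOpaque) (void);
    } DemoRoutine;

    /* Positional: readers must remember the field order. */
    static const DemoRoutine positional = {NULL, NULL, NULL};

    /* Designated: each member is named, so reordering or adding fields in the
     * struct definition cannot silently misassign the callbacks. */
    static const DemoRoutine designated = {
        .InitOpaque = NULL,
        .SetDoc = NULL,
        .DestroyOpaque = NULL,
    };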
2024-01-16  Don't test already-referenced pointer for nullness  (Alvaro Herrera)

Commit b8ba7344e9eb added in PQgetResult a dereference to a pointer returned by pqPrepareAsyncResult(), before some other code that was already testing that pointer for nullness. But since commit 618c16707a6d (in Postgres 15), pqPrepareAsyncResult() doesn't ever return NULL (a statically-allocated result is returned if OOM). So in branches 15 and up, we can remove the redundant pointer check with no harm done.

However, in branch 14, pqPrepareAsyncResult() can indeed return NULL if it runs out of memory. Fix things there by adding a null pointer check before dereferencing the pointer. This should hint Coverity that the preexisting check is not redundant but necessary.

Backpatch to 14, like b8ba7344e9eb.

Per Coverity.
2024-01-16  Assert that partition inherits from only one parent in MergeAttributes()  (Peter Eisentraut)

A partition inherits only from one partitioned table and thus inherits a column definition only once. Assert the same in MergeAttributes() and simplify a condition accordingly. A similar definition exists around line 3068 in the same function.

Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAExHW5uOykuTC+C6R1yDSp=o8Q83jr8xJdZxgPkxfZ1Ue5RRGg@mail.gmail.com
2024-01-16  libpq: Add PQsendPipelineSync()  (Michael Paquier)

This new function is equivalent to PQpipelineSync(), except that it does not flush anything to the server unless the size threshold of the output buffer is reached; the user must subsequently call PQflush() instead. Its purpose is to reduce the system call overhead of pipeline mode, by giving applications more control over the timing of the flushes when manipulating commands in pipeline mode.

Author: Anton Kirilov
Reviewed-by: Jelte Fennema-Nio, Robert Haas, Álvaro Herrera, Denis Laxalde, Michael Paquier
Discussion: https://postgr.es/m/CACV6eE5arHFZEA717=iKEa_OewpVFfWJOmsOdGrqqsr8CJVfWQ@mail.gmail.com
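A hedged client-side sketch of how the new call might be combined with PQflush(); error handling is trimmed, and the connection string and query are placeholders:

    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("");         /* placeholder conninfo */

        if (PQstatus(conn) != CONNECTION_OK || !PQenterPipelineMode(conn))
            return 1;

        /* Queue many commands plus sync points without forcing a flush (and
         * hence a send() syscall) at every sync, unlike PQpipelineSync(). */
        for (int i = 0; i < 100; i++)
        {
            PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
            PQsendPipelineSync(conn);
        }

        while (PQflush(conn) == 1)              /* push remaining buffered data */
            ;                                    /* real code would wait on the socket */

        /* ... consume results with PQgetResult() ... */
        PQfinish(conn);
        return 0;
    }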
2024-01-16  Fix a typo and some doc indentation related to libpq pipeline functions  (Michael Paquier)
Noticed while reviewing the area for a different patch. This is cosmetic, so no backpatch is done.
2024-01-15  Fix typos.  (Robert Haas)

Alexander Lakhin

Discussion: http://postgr.es/m/212b0987-83e5-e2ae-c5e8-b8170fdaf3a0@gmail.com
2024-01-15  Fix 'negative bitmapset member' error  (Alexander Korotkov)

When removing a useless join, we'd remove PHVs that are not used at join partner rels or above the join. A PHV that references the join's relid in ph_eval_at is logically "above" the join and thus should not be removed. We have the following check for that:

    !bms_is_member(ojrelid, phinfo->ph_eval_at)

However, in the case of SJE removing a useless inner join, 'ojrelid' is set to -1, which would trigger the "negative bitmapset member not allowed" error in bms_is_member(). Fix it by skipping examining ojrelid for inner joins in this check.

Reported-by: Zuming Jiang
Bug: #18260
Discussion: https://postgr.es/m/18260-1b6a0c4ae311b837%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
2024-01-15  Avoid useless ReplicationOriginExitCleanup locking  (Alvaro Herrera)

When session_replication_state is NULL, we can tell that there's nothing to do without acquiring the lock. Do that.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACX+YaeRU5xJqR4C7kLsTO_F7DBRNF8WgeHvJZcKtNuK_A@mail.gmail.com
2024-01-15  Reduce dependency on the money data type in main regression test suite  (Michael Paquier)

Most of these tests have been introduced in 6dd8b0080787, to check for behaviors related to hashing and hash plans, and money is a data type with btree support but no hash functions. These tests are switched to use varbit instead, to provide the same coverage. Some other tests historically used money but don't really need it for what they wanted to test (see rules.sql).

Plans and coverage are unchanged after the modifications done here. Support for money may be removed at a later point, but this needs more discussion.

Discussion: https://postgr.es/m/18240-c5da758d7dc1ecf0@postgresql.org
2024-01-14  Prevent access to an unpinned buffer in BEFORE ROW UPDATE triggers.  (Tom Lane)

When ExecBRUpdateTriggers switches to a new target tuple as a result of the EvalPlanQual logic, it must form a new proposed update tuple. Since commit 86dc90056, that tuple (the result of ExecGetUpdateNewTuple) has been a virtual tuple that might contain pointers to by-ref fields of the new target tuple (in "oldslot"). However, immediately after that we materialize oldslot, causing it to drop its buffer pin, whereupon the by-ref pointers are unsafe to use. This is a live bug only when the new target tuple is in a different page than the original target tuple, since we do still hold a pin on the original one. (Before 86dc90056, there was no bug because the EPQ plantree would hold a pin on the new target tuple; but now that's not assured.)

To fix, forcibly materialize the new tuple before we materialize oldslot. This costs nothing since we would have done that shortly anyway.

The real-world impact of this is probably minimal. A visible failure could occur if the new target tuple's buffer were recycled for some other page in the short interval before we materialize newslot within the trigger-calling loop; but that's quite unlikely given that we'd just touched that page. There's a larger hazard that some other process could prune and repack that page within the window. We have lock on the new target tuple, but that wouldn't prevent it being moved on the page.

Alexander Lakhin and Tom Lane, per bug #17798 from Alexander Lakhin. Back-patch to v14 where 86dc90056 came in.

Discussion: https://postgr.es/m/17798-0907404928dcf0dd@postgresql.org
2024-01-14  pg_dump: Remove obsolete trigger support  (Peter Eisentraut)

Remove support for dumping triggers from pre-9.2 servers. This should have been removed as part of 30e7c175b81.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/56c8f5bf-de47-48c1-a592-588fb526e9e6%40eisentraut.org
2024-01-14  Remove useless Assert  (Peter Eisentraut)
It's already included in the subsequent intVal() call. Fixup for 4f622503d6.
2024-01-13  Escape output of pg_amcheck test  (Peter Eisentraut)

The pg_amcheck test reports a skip message if the layout of the index does not match expectations. That message includes the bytes that were expected and the ones that were found. But the found ones are arbitrary bytes, which can have funny effects on the terminal when they are printed. To avoid that, escape non-word characters before printing.

Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/3f96f079-64e5-468a-8a19-cb481f0d31e5%40eisentraut.org
2024-01-13  Re-pgindent catcache.c after previous commit.  (Tom Lane)

Discussion: https://postgr.es/m/1393953.1698353013@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAGjhLkOoBEC9mLsnB42d3CO1vcMx71MLSEuigeABbQ8oRdA6gw@mail.gmail.com
2024-01-13  Cope with catcache entries becoming stale during detoasting.  (Tom Lane)

We've long had a policy that any toasted fields in a catalog tuple should be pulled in-line before entering the tuple in a catalog cache. However, that requires access to the catalog's toast table, and we'll typically do AcceptInvalidationMessages while opening the toast table. So it's possible that the catalog tuple is outdated by the time we finish detoasting it. Since no cache entry exists yet, we can't mark the entry stale during AcceptInvalidationMessages, and instead we'll press forward and build an apparently-valid cache entry. The upshot is that we have a race condition whereby an out-of-date entry could be made in a backend's catalog cache, and persist there indefinitely causing indeterminate misbehavior.

To fix, use the existing systable_recheck_tuple code to recheck whether the catalog tuple is still up-to-date after we finish detoasting it. If not, loop around and restart the process of searching the catalog and constructing cache entries from the top. The case is rare enough that this shouldn't create any meaningful performance penalty, even in the SearchCatCacheList case where we need to tear down and reconstruct the whole list.

Indeed, the case is so rare that AFAICT it doesn't occur during our regression tests, and there doesn't seem to be any easy way to build a test that would exercise it reliably. To allow testing of the retry code paths, add logic (in USE_ASSERT_CHECKING builds only) that randomly pretends that the recheck failed about one time out of a thousand. This is enough to ensure that we'll pass through the retry paths during most regression test runs.

By adding an extra level of looping, this commit creates a need to reindent most of SearchCatCacheMiss and SearchCatCacheList. I'll do that separately, to allow putting those changes in .git-blame-ignore-revs.

Patch by me; thanks to Alexander Lakhin for having built a test case to prove the bug is real, and to Xiaoran Wang for review. Back-patch to all supported branches.

Discussion: https://postgr.es/m/1393953.1698353013@sss.pgh.pa.us
Discussion: https://postgr.es/m/CAGjhLkOoBEC9mLsnB42d3CO1vcMx71MLSEuigeABbQ8oRdA6gw@mail.gmail.com
2024-01-13  Make attstattarget nullable  (Peter Eisentraut)

This changes the pg_attribute field attstattarget into a nullable field in the variable-length part of the row. If no value is set by the user for attstattarget, it is now null instead of previously -1. This saves space in pg_attribute and tuple descriptors for most practical scenarios. (ATTRIBUTE_FIXED_PART_SIZE is reduced from 108 to 104.) Also, null is the semantically more correct value.

The ANALYZE code internally continues to represent the default statistics target by -1, so that that code can avoid having to deal with null values. But that is now contained to the ANALYZE code. Only the DDL code deals with attstattarget possibly null. For system columns, the field is now always null. The ANALYZE code skips system columns anyway.

To set a column's statistics target to the default value, the new command form ALTER TABLE ... SET STATISTICS DEFAULT can be used. (SET STATISTICS -1 still works.)

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
2024-01-12  Fix memory leak in connection string validation.  (Jeff Davis)

Introduced in commit c3afe8cf5a.

Discussion: https://postgr.es/m/066a65233d3cb4ea27a9e0778d2f1d0dc764b222.camel@j-davis.com
Reviewed-by: Nathan Bossart, Tom Lane
Backpatch-through: 16
2024-01-12  Add empty placeholder LINGUAS file for pg_walsummary  (Alvaro Herrera)
Like bbf1f1340800.
2024-01-12  Re-validate connection string in libpqrcv_connect().  (Jeff Davis)

A superuser may create a subscription with password_required=true, but which uses a connection string without a password. Previously, if the owner of such a subscription was changed to a non-superuser, the non-superuser was able to utilize a password from another source (like a password file or the PGPASSWORD environment variable), which should not have been allowed. This commit adds a step to re-validate the connection string before connecting.

Reported-by: Jeff Davis
Author: Vignesh C
Reviewed-by: Peter Smith, Robert Haas, Amit Kapila
Discussion: https://www.postgresql.org/message-id/flat/e5892973ae2a80a1a3e0266806640dae3c428100.camel%40j-davis.com
Backpatch-through: 16
2024-01-12  Refactor ATExecAddColumn() to use BuildDescForRelation()  (Peter Eisentraut)

BuildDescForRelation() has all the knowledge for converting a ColumnDef into pg_attribute/tuple descriptor. ATExecAddColumn() can make use of that, instead of duplicating all that logic. We just pass a one-element list of ColumnDef and we get back exactly the data structure we need. Note that we don't even need to touch BuildDescForRelation() to make this work.

Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
2024-01-12  Fix pg_walsummary's .gitignore  (Michael Paquier)

It missed an entry for tmp_check/ generated by the tests. While at it, add a slash at the beginning of "pg_walsummary" to restrict its check to the current directory, like anywhere else.

Oversights in ee1bfd168390.
2024-01-12  Refactor code checking for file existence  (Michael Paquier)

jit.c and dfmgr.c had a copy of the same code to check if a file exists or not, with a twist: jit.c did not check for EACCES when failing the stat() call for the path whose existence is tested. This refactored routine will be used by an upcoming patch.

Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/ZTiV8tn_MIb_H2rE@paquier.xyz
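A self-contained sketch of the kind of helper being consolidated, assuming the usual stat()-based check; the actual backend routine reports errors differently and may differ in detail:

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/stat.h>

    /* Return true if 'path' exists. ENOENT simply means "no"; other errors,
     * including EACCES, are passed back so the caller can report them. */
    static bool
    file_exists(const char *path, int *errno_out)
    {
        struct stat st;

        if (stat(path, &st) == 0)
        {
            *errno_out = 0;
            return true;
        }
        *errno_out = (errno == ENOENT) ? 0 : errno;
        return false;
    }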
2024-01-12  Rework how logical replication launchers are stopped during pg_upgrade  (Michael Paquier)

This is a rework of 7021d3b17664, where we relied on forcing max_logical_replication_workers to 0 in the postgres command. This commit now prevents logical replication launchers from starting by using -b and a backend-side check based on IsBinaryUpgrade, effective when upgrading from 17 and newer versions. This commit improves the comments explaining why this restriction is necessary.

This discussion was on hold until we were sure how to add support for subscribers in pg_upgrade, something now done thanks to 9a17be1e244a.

Reviewed-by: Álvaro Herrera, Amit Kapila, Tom Lane
Discussion: https://postgr.es/m/ZU2TeVkUg5qEi7Oy@paquier.xyz
2024-01-11  Fix some inconsistent whitespace in Perl file  (Peter Eisentraut)
2024-01-11  Fix incorrect format placeholder  (Peter Eisentraut)
2024-01-11  Cleanup for unicode-update build target and test.  (Jeff Davis)

In preparation for adding more Unicode tables.

Discussion: https://postgr.es/m/63cd8625-68fa-4760-844a-6b7f643336f2@ardentperf.com
Reviewed-by: Jeremy Schneider
2024-01-11  Allow subquery pullup to wrap a PlaceHolderVar in another one.  (Tom Lane)

The code for wrapping subquery output expressions in PlaceHolderVars believed that if the expression already was a PlaceHolderVar, it was never necessary to wrap that in another one. That's wrong if the expression is underneath an outer join and involves a lateral reference to outside that scope: failing to add an additional PHV risks evaluating the expression at the wrong place and hence not forcing it to null when the outer join should do so.

This is an oversight in commit 9e7e29c75, which added logic to forcibly wrap lateral-reference Vars in PlaceHolderVars, but didn't see that the adjacent case for PlaceHolderVars needed the same treatment.

The test case we have for this doesn't fail before 4be058fe9, but now that I see the problem I wonder if it is possible to demonstrate related errors before that. That's moot though, since all such branches are out of support.

Per bug #18284 from Holger Reise. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18284-47505a20c23647f8@postgresql.org
2024-01-11  Try to fix pg_walsummary buildfarm failures.  (Robert Haas)
Apparently the new tuple isn't guaranteed to end up at the end of the relation, so make the test not depend on that happening.
2024-01-11  Remove hastup from LVPagePruneState.  (Robert Haas)

Instead, just have lazy_scan_prune() and lazy_scan_noprune() update LVRelState->nonempty_pages directly. This makes the two functions more similar and also means lazy_scan_noprune() needs one fewer output parameter.

Melanie Plageman, reviewed by Andres Freund, Michael Paquier, and me

Discussion: http://postgr.es/m/CAAKRu_btji_wQdg=ok-5E4v_bGVxKYnnFFe7RA6Frc1EcOwtSg@mail.gmail.com
2024-01-11  Reindent after commit d9ef650fca7bc574586f4171cd929cfd5240326e.  (Robert Haas)
2024-01-11  Repair various defects in dc212340058b4e7ecfc5a7a81ec50e7a207bf288.  (Robert Haas)

pg_combinebackup had various problems:

* strncpy was used in various places where strlcpy should be used instead, to avoid any possibility of the result not being \0-terminated.

* scan_for_existing_tablespaces() failed to close the directory, and an error when opening the directory was reported with the wrong pathname.

* write_reconstructed_file() contained some redundant and therefore dead code.

* flush_manifest() didn't check the result of pg_checksum_update() as we do in other places, and misused a local pathname variable that shouldn't exist at all.

In pg_basebackup, the wrong variable name was used in one place, due to a copy and paste that was not properly adjusted.

In blkreftable.c, the loop incorrectly doubled chunkno instead of max_chunks. Fix that. Also remove a nearby assertion per repeated off-list complaints from Tom Lane.

Per Coverity and subsequent code inspection by me and by Tom Lane.

Discussion: http://postgr.es/m/CA+Tgmobvqqj-DW9F7uUzT-cQqs6wcVb-Xhs=w=hzJnXSE-kRGw@mail.gmail.com
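A small self-contained illustration of the strncpy pitfall mentioned in the first item; strlcpy() is assumed to be available (BSDs, newer glibc, or PostgreSQL's own port library):

    #include <string.h>

    static void
    copy_examples(const char *src)
    {
        char    fixed[8];

        /* strncpy does not NUL-terminate when src fills the buffer, so a
         * manual termination step is required. */
        strncpy(fixed, src, sizeof(fixed));
        fixed[sizeof(fixed) - 1] = '\0';

        /* strlcpy always terminates and returns the length it tried to copy,
         * so truncation can be detected in one step. */
        if (strlcpy(fixed, src, sizeof(fixed)) >= sizeof(fixed))
        {
            /* truncated; handle or report */
        }
    }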