path: root/src
Date  Commit message  Author

2025-01-31  Fix comment of StrategySyncStart()  (Michael Paquier)

The top comment of StrategySyncStart() mentions BufferSync(), but this function is called by BgBufferSync(), not BufferSync(). Oversight in 9cd00c457e6a.

Author: Ashutosh Bapat
Discussion: https://postgr.es/m/CAExHW5tgkjag8i-s=RFrCn5KAWDrC4zEPPkfUKczfccPOxBRQQ@mail.gmail.com
Backpatch-through: 13

2025-01-30  Avoid integer overflow while testing wal_skip_threshold condition.  (Tom Lane)

smgrDoPendingSyncs had two distinct risks of integer overflow while deciding which way to ensure durability of a newly-created relation.

First, it accumulated the total size of all forks in a variable of type BlockNumber (uint32). While we restrict an individual fork's size to fit in that, I don't believe there's such a restriction on all of them added together. Second, it proceeded to multiply the sum by BLCKSZ, which most certainly could overflow a uint32. (The exact expression is total_blocks * BLCKSZ / 1024. The compiler might choose to optimize that to total_blocks * 8, which is not at quite as much risk of overflow as a literal reading would be, but it's still wrong.)

If an overflow did occur it could lead to a poor choice to shove a very large relation into WAL instead of fsync'ing it. This wouldn't be fatal, but it could be inefficient.

Change total_blocks to uint64 which should be plenty, and rearrange the comparison calculation to be overflow-safe.

I noticed this while looking for ramifications of the proposed change in MAX_KILOBYTES. It's not entirely clear to me why wal_skip_threshold is limited to MAX_KILOBYTES in the first place, but in any case this code is unsafe regardless of the range of wal_skip_threshold.

Oversight in c6b92041d which introduced wal_skip_threshold, so back-patch to v13.

Discussion: https://postgr.es/m/1a01f0-66ec2d80-3b-68487680@27595217
Backpatch-through: 13

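A minimal sketch of the hazard and of the widened-accumulator fix in plain C; the function names and standalone setting are illustrative, not the actual smgrDoPendingSyncs() code:

    #include <stdint.h>

    #define BLCKSZ 8192     /* PostgreSQL's default block size */

    /* Unsafe sketch: the uint32 sum and the multiply by BLCKSZ can both wrap. */
    static int
    exceeds_threshold_unsafe(uint32_t total_blocks, uint32_t wal_skip_threshold_kb)
    {
        return total_blocks * BLCKSZ / 1024 >= wal_skip_threshold_kb;
    }

    /*
     * Safe sketch: same expression, but with a uint64 accumulator the
     * intermediate product stays far below the overflow point.
     */
    static int
    exceeds_threshold_safe(uint64_t total_blocks, uint32_t wal_skip_threshold_kb)
    {
        return total_blocks * BLCKSZ / 1024 >= wal_skip_threshold_kb;
    }
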
2025-01-29  Handle default NULL insertion a little better.  (Tom Lane)

If a column is omitted in an INSERT, and there's no column default, the code in preptlist.c generates a NULL Const to be inserted. Furthermore, if the column is of a domain type, we wrap the Const in CoerceToDomain, so as to throw a run-time error if the domain has a NOT NULL constraint.

That's fine as far as it goes, but there are two problems:

1. We're being sloppy about the type/typmod that the Const is labeled with. It really should have the domain's base type/typmod, since it's the input to CoerceToDomain not the output. This can result in coerce_to_domain inserting a useless length-coercion function (useless because it's being applied to a null). The coercion would typically get const-folded away later, but it'd be better not to create it in the first place.

2. We're not applying expression preprocessing (specifically, eval_const_expressions) to the resulting expression tree. The planner's primary expression-preprocessing pass already happened, so that means the length coercion step and CoerceToDomain node miss preprocessing altogether. This is at the least inefficient, since it means the length coercion and CoerceToDomain will actually be executed for each inserted row, though they could be const-folded away in most cases. Worse, it seems possible that missing preprocessing for the length coercion could result in an invalid plan (for example, due to failing to perform default-function-argument insertion). I'm not aware of any live bug of that sort with core datatypes, and it might be unreachable for extension types as well because of restrictions of CREATE CAST, but I'm not entirely convinced that it's unreachable. Hence, it seems worth back-patching the fix (although I only went back to v14, as the patch doesn't apply cleanly at all in v13).

There are several places in the rewriter that are building null domain constants the same way as preptlist.c. While those are before the planner and hence don't have any reachable bug, they're still applying a length coercion that will be const-folded away later, uselessly wasting cycles. Hence, make a utility routine that all of these places can call to do it right.

Making this code more careful about the typmod assigned to the generated NULL constant has visible but cosmetic effects on some of the plans shown in contrib/postgres_fdw's regression tests.

Discussion: https://postgr.es/m/1865579.1738113656@sss.pgh.pa.us
Backpatch-through: 14

2025-01-29  Avoid breaking SJIS encoding while de-backslashing Windows paths.  (Tom Lane)

When running on Windows, canonicalize_path() converts '\' to '/' to prevent confusing the Windows command processor. It was doing that in a non-encoding-aware fashion; but in SJIS there are valid two-byte characters whose second byte matches '\'. So encoding corruption ensues if such a character is used in the path.

We can fairly easily fix this if we know which encoding is in use, but a lot of our utilities don't have much of a clue about that. After some discussion we decided we'd settle for fixing this only in psql, and assuming that its value of client_encoding matches what the user is typing.

It seems hopeless to get the server to deal with the problematic characters in database path names, so we'll just declare that case to be unsupported. That means nothing need be done in the server, nor in utility programs whose only contact with file path names is for database paths. But psql frequently deals with client-side file paths, so it'd be good if it didn't mess those up.

Bug: #18735
Reported-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Koichi Suzuki <koichi.suzuki@enterprisedb.com>
Discussion: https://postgr.es/m/18735-4acdb3998bb9f2b1@postgresql.org
Backpatch-through: 13

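A hedged sketch of what an encoding-aware scan looks like; pg_encoding_mblen() is the real helper from PostgreSQL's mb/pg_wchar.h, while the function itself and its use here are illustrative:

    /* Declaration from PostgreSQL's mb/pg_wchar.h, repeated for context. */
    extern int pg_encoding_mblen(int encoding, const char *mbstr);

    /*
     * Hypothetical sketch: walk the path one *character* at a time, so the
     * second byte of a two-byte SJIS character can never be mistaken for '\'.
     */
    static void
    debackslash_path(char *path, int encoding)
    {
        char *p = path;

        while (*p != '\0')
        {
            int len = pg_encoding_mblen(encoding, p);

            if (len == 1 && *p == '\\')
                *p = '/';
            p += len;
        }
    }
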
2025-01-25  At update of non-LP_NORMAL TID, fail instead of corrupting page header.  (Noah Misch)

The right mix of DDL and VACUUM could corrupt a catalog page header such that PageIsVerified() durably fails, requiring a restore from backup. This affects only catalogs that both have a syscache and have DDL code that uses syscache tuples to construct updates. One of the test permutations shows a variant not yet fixed.

This makes !TransactionIdIsValid(TM_FailureData.xmax) possible with TM_Deleted. I think core and PGXN are indifferent to that.

Per bug #17821 from Alexander Lakhin. Back-patch to v13 (all supported versions). The test case is v17+, since it uses INJECTION_POINT.

Discussion: https://postgr.es/m/17821-dd8c334263399284@postgresql.org

2025-01-25  Test ECPG decadd(), decdiv(), decmul(), and decsub() for risnull() input.  (Noah Misch)

Since commit 757fb0e5a9a61ac8d3a67e334faeea6dc0084b3f, these Informix-compat functions return 0 without changing the output parameter. Initialize the output parameter before the test call, making that obvious. Before this, the expected test output has been depending on freed stack memory. "gcc -ftrivial-auto-var-init=pattern" revealed that.

Back-patch to v13 (all supported versions).

Discussion: https://postgr.es/m/20250106192748.cf.nmisch@google.com

2025-01-25  Use the correct sizeof() in BufFileLoadBuffer  (Tomas Vondra)

The sizeof() call should reference buffer.data, because that's the buffer we're reading data into, not the whole PGAlignedBuffer union. This was introduced by 44cac93464, which replaced the simple buffer with a PGAlignedBuffer field.

It's benign, because the buffer is the largest field of the union, so the sizes are the same. But it's easy to trip over this in a patch, so fix and backpatch. Commit 44cac93464 went into 12, but that's EOL.

Backpatch-through: 13
Discussion: https://postgr.es/m/928bdab1-6567-449f-98c4-339cd2203b87@vondra.me

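A minimal illustration of the pitfall, using a simplified stand-in for the aligned-buffer union (the real definition lives in src/include/c.h):

    #include <stdio.h>

    /* Simplified stand-in for PostgreSQL's aligned buffer union. */
    typedef union AlignedBuffer
    {
        char   data[8192];      /* the payload actually read into */
        double force_align_d;   /* forces suitable alignment */
    } AlignedBuffer;

    int
    main(void)
    {
        AlignedBuffer buffer;

        /* Equal here only because data is the union's largest member. */
        printf("sizeof(buffer)      = %zu\n", sizeof(buffer));
        printf("sizeof(buffer.data) = %zu\n", sizeof(buffer.data));
        return 0;
    }
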
2025-01-23  Don't ask for bug reports about pthread_is_threaded_np() != 0.  (Tom Lane)

We thought that this condition was unreachable in ExitPostmaster, but actually it's possible if you have both a misconfigured locale setting and some other mistake that causes PostmasterMain to bail out before reaching its own check of pthread_is_threaded_np(). Given the lack of other reports, let's not ask for bug reports if this occurs; instead just give the same hint as in PostmasterMain.

Bug: #18783
Reported-by: anani191181515@gmail.com
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/18783-d1873b95a59b9103@postgresql.org
Discussion: https://postgr.es/m/206317.1737656533@sss.pgh.pa.us
Backpatch-through: 13

2025-01-22  Repair incorrect handling of AfterTriggerSharedData.ats_modifiedcols.  (Tom Lane)

This patch fixes two distinct errors that both ultimately trace to commit 71d60e2aa, which added the ats_modifiedcols field.

The more severe error is that ats_modifiedcols wasn't accounted for in afterTriggerAddEvent's scanning loop that looks for a pre-existing duplicate AfterTriggerSharedData. Thus, a new event could be incorrectly matched to an AfterTriggerSharedData that has a different value of ats_modifiedcols, resulting in the wrong tg_updatedcols bitmap getting passed to the trigger whenever it finally gets fired. We'd not noticed because (a) few triggers consult tg_updatedcols, and (b) we had no tests exercising a case where such a trigger was called as an AFTER trigger. In the test case added by this commit, contrib/lo's trigger fails to remove a large object when expected because (without this fix) it thinks the LO OID column hasn't changed.

The other problem was introduced by commit ce5aaea8c, which copied the modified-columns bitmap into trigger-related storage. It made a copy for every trigger event, whereas what we really want is to make a new copy only when we make a new AfterTriggerSharedData entry. (We could imagine adding extra logic to reduce the number of bitmap copies still more, but it doesn't look worthwhile at the moment.) In a simple test of an UPDATE of 10000000 rows with a single AFTER trigger, this thinko roughly tripled the amount of memory consumed by the pending-triggers data structures, from 160446744 to 480443440 bytes.

Fixing the first problem requires introducing a bms_equal() call into afterTriggerAddEvent's scanning loop, which is slightly annoying from a speed perspective. However, getting rid of the excessive bms_copy() calls from the second problem balances that out; overall speed of trigger operations is the same or slightly better, in my tests.

Discussion: https://postgr.es/m/3496294.1737501591@sss.pgh.pa.us
Backpatch-through: 13

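A hedged sketch of the shape of the dedup check; bms_equal() is the real bitmapset.c helper, but the struct and loop here are simplified stand-ins for trigger.c's internals, not the actual patch:

    #include <stdbool.h>

    /* Opaque stand-ins; the real types live in nodes/bitmapset.h. */
    typedef struct Bitmapset Bitmapset;
    extern bool bms_equal(const Bitmapset *a, const Bitmapset *b);

    typedef struct SharedEntry
    {
        unsigned int tgoid;         /* trigger OID */
        Bitmapset   *modifiedcols;  /* the field the loop used to ignore */
    } SharedEntry;

    /* Return a reusable entry's index, or -1 if a new entry must be made. */
    static int
    find_matching_shared(const SharedEntry *entries, int n, const SharedEntry *key)
    {
        for (int i = 0; i < n; i++)
        {
            if (entries[i].tgoid == key->tgoid &&
                bms_equal(entries[i].modifiedcols, key->modifiedcols))
                return i;
        }
        return -1;
    }
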
2025-01-20  Update time zone data files to tzdata release 2025a.  (Tom Lane)

DST law changes in Paraguay. Historical corrections for the Philippines.

Backpatch-through: 13

2025-01-20  Avoid using timezone Asia/Manila in regression tests.  (Tom Lane)

The freshly-released 2025a version of tzdata has a refined estimate for the longitude of Manila, changing their value for LMT in pre-standardized-timezone days. This changes the output of one of our test cases. Since we need to be able to run with system tzdata files that may or may not contain this update, we'd better stop making that specific test.

I switched it to use Asia/Singapore, which has a roughly similar UTC offset. That LMT value hasn't changed in tzdb since 2003, so we can hope that it's well established.

I also noticed that this set of make_timestamptz tests only exercises zones east of Greenwich, which seems rather sad, and was not the original intent AFAICS. (We've already changed these tests once to stabilize their results across tzdata updates, cf 66b737cd9; it looks like I failed to consider the UTC-offset-sign aspect then.) To improve that, add a test with Pacific/Honolulu. That LMT offset is also quite old in tzdb, so we'll cross our fingers that it doesn't get improved.

Reported-by: Christoph Berg <cb@df7cb.de>
Discussion: https://postgr.es/m/Z46inkznCxesvDEb@msg.df7cb.de
Backpatch-through: 13

2025-01-20  Fix header check for continuation records where standbys could be stuck  (Michael Paquier)

XLogPageRead() checks immediately for an invalid WAL record header on a standby, to be able to handle the case of continuation records that need to be read across two different sources. As written, the check was too generic, applying to any target LSN. Based on an analysis by Kyotaro Horiguchi, what really matters is to make sure that the page header is checked when attempting to read an LSN at the boundary of a segment, to handle the case of a continuation record that spans across multiple pages when dealing with multiple segments, as WAL receivers request WAL from the beginning of a segment when they are spawned. This fix has been proposed by Kyotaro Horiguchi.

This could cause standbys to loop infinitely when dealing with a continuation record during a timeline jump, in the case where the contents of the record in the follow-up page are invalid.

Some regression tests are added to check such scenarios, able to reproduce the original problem. In the test, the contents of a continuation record are overwritten with junk zeros on its follow-up page, and replayed on standbys. This is inspired by 039_end_of_wal.pl, and is enough to show how standbys should react on promotion by not being stuck. Without the fix, the test would fail with a timeout. The test to reproduce the problem has been written by Alexander Kukushkin.

The original check has been introduced in 066871980183, for a similar problem.

Author: Kyotaro Horiguchi, Alexander Kukushkin
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFh8B=mozC+e1wGJq0H=0O65goZju+6ab5AU7DEWCSUA2OtwDg@mail.gmail.com
Backpatch-through: 13

2025-01-18  Fix readlink() for non-PostgreSQL junction points on Windows.  (Thomas Munro)

Since commit c5cb8f3b taught stat() to follow symlinks, and since initdb uses pg_mkdir_p(), and that examines parent directories, our humble readlink() implementation can now be exposed to junction points not of PostgreSQL origin. Those might be corrupted by our naive path mangling, which doesn't really understand NT paths in general.

Simply decline to transform paths that don't look like a drive absolute path. That means that readlink() returns the NT path directly when checking a parent directory of PGDATA that happens to point to a drive using "rooted" format. That works for the purposes of our stat() emulation.

Reported-by: Roman Zharkov <r.zharkov@postgrespro.ru>
Reviewed-by: Roman Zharkov <r.zharkov@postgrespro.ru>
Discussion: https://postgr.es/m/4590c37927d7b8ee84f9855d83229018%40postgrespro.ru
Discussion: https://postgr.es/m/CA%2BhUKG%2BajSQ_8eu2AogTncOnZ5me2D-Cn66iN_-wZnRjLN%2Bicg%40mail.gmail.com

Backpatched commit f71007fb as above by Thomas Munro into releases 13 thru 15.
Discussion: https://postgr.es/m/CA+hUKGLbnv+pe3q1fYOVkLD3pMra7GuihfMxUN-1831YH9RYQg@mail.gmail.com

2025-01-18  Fix stat() for recursive junction points on Windows.  (Thomas Munro)

Commit c5cb8f3b supposed that we'd only ever have to follow one junction point in stat(), because we don't construct longer chains of them ourselves. When examining a parent directory supplied by the user, we should really be able to cope with longer chains, just in case someone has their system set up that way. Choose an arbitrary cap of 8, to match the minimum acceptable value of SYMLOOP_MAX in POSIX.

Previously I'd avoided reporting ELOOP thinking Windows didn't have it, but it turns out that it does, so we can use the proper error number.

Reviewed-by: Roman Zharkov <r.zharkov@postgrespro.ru>
Discussion: https://postgr.es/m/CA%2BhUKGJ7JDGWYFt9%3D-TyJiRRy5q9TtPfqeKkneWDr1XPU1%2Biqw%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKG%2BajSQ_8eu2AogTncOnZ5me2D-Cn66iN_-wZnRjLN%2Bicg%40mail.gmail.com

Backpatched commit 4517358e as above by Thomas Munro into releases 13 thru 15.
Discussion: https://postgr.es/m/CA+hUKGLbnv+pe3q1fYOVkLD3pMra7GuihfMxUN-1831YH9RYQg@mail.gmail.com

2025-01-17  Revert recent changes related to handling of 2PC files at recovery  (Michael Paquier)

This commit reverts 8f67f994e8ea (down to v13) and c3de0f9eed38 (down to v17), as these are proving to not be completely correct regarding two aspects:

- In v17 and newer branches, c3de0f9eed38's check for epoch handling is incorrect, and does not correctly handle frozen epochs. A logic closer to widen_snapshot_xid() should be used. The 2PC code should try to integrate deeper with FullTransactionIds, 5a1dfde8334b being not enough.
- In v13 and newer branches, 8f67f994e8ea is a workaround for the real issue, which is that we should not attempt CLOG lookups without reaching consistency. This exists since 728bd991c3c4, and this is reachable with ProcessTwoPhaseBuffer() called by restoreTwoPhaseData() at the beginning of recovery.

Per discussion with Noah Misch.

Discussion: https://postgr.es/m/20250116010051.f3.nmisch@google.com
Backpatch-through: 13

2025-01-16  Fix setrefs.c's failure to do expression processing on prune steps.  (Tom Lane)

We should run the expression subtrees of PartitionedRelPruneInfo structs through fix_scan_expr. Failure to do so means that AlternativeSubPlans within those expressions won't be cleaned up properly, resulting in "unrecognized node type" errors since v14.

It seems fairly likely that at least some of the other steps done by fix_scan_expr are important here as well, resulting in as-yet-undetected bugs. Therefore, I've chosen to back-patch this to all supported branches including v13, even though the known symptom doesn't manifest in v13.

Per bug #18778 from Alexander Lakhin.

Discussion: https://postgr.es/m/18778-24cd399df6c806af@postgresql.org

2025-01-16  Move routines to manipulate WAL into PostgreSQL::Test::Cluster  (Michael Paquier)

These facilities were originally in the recovery TAP test 039_end_of_wal.pl. A follow-up bug fix with a TAP test doing similar WAL manipulations requires them, and all these had better not be duplicated due to their complexity. The routine names are tweaked to use "wal" more consistently, similarly to the existing "advance_wal".

In v14 and v13, the new routines are moved to PostgresNode.pm.

039_end_of_wal.pl is updated to use the refactored routines, without changing its coverage.

Reviewed-by: Alexander Kukushkin
Discussion: https://postgr.es/m/CAFh8B=mozC+e1wGJq0H=0O65goZju+6ab5AU7DEWCSUA2OtwDg@mail.gmail.com
Backpatch-through: 13

2025-01-14  Avoid symbol collisions between pqsignal.c and legacy-pqsignal.c.  (Tom Lane)

In the name of ABI stability (that is, to avoid a library major version bump for libpq), libpq still exports a version of pqsignal() that we no longer want to use ourselves. However, since that has the same link name as the function exported by src/port/pqsignal.c, there is a link ordering dependency determining which version will actually get used by code that uses libpq as well as libpgport.a.

It now emerges that the wrong version has been used by pgbench and psql since commit 06843df4a rearranged their link commands. This can result in odd failures in pgbench with the -T switch, since its SIGALRM handler will now not be marked SA_RESTART. psql may have some edge-case problems in \watch, too.

Since we don't want to depend on link ordering effects anymore, let's fix this in the same spirit as b6c7cfac8: use macros to change the actual link names of the competing functions. We cannot change legacy-pqsignal.c's exported name of course, so the victim has to be src/port/pqsignal.c. In master, rename its exported name to be pqsignal_fe in frontend or pqsignal_be in backend. (We could perhaps have gotten away with using the same symbol in both cases, but since the FE and BE versions now work a little differently, it seems advisable to use different names.)

In back branches, rename to pqsignal_fe in frontend but keep it as pqsignal in backend. The frontend change could affect third-party code that is calling pqsignal from libpgport.a or libpgport_shlib.a, but only if the code is compiled against port.h from a different minor release than libpgport. Since we don't support using libpgport as a shared library, it seems unlikely that there will be such a problem. I left the backend symbol unchanged to avoid an ABI break for extensions. This means that the link ordering hazard still exists for any extension that links against libpq. However, none of our own extensions use both pqsignal() and libpq, and we're not making things any worse for third-party extensions that do.

Report from Andy Fan, diagnosis by Fujii Masao, patch by me. Back-patch to all supported branches, as 06843df4a was.

Discussion: https://postgr.es/m/87msfz5qv2.fsf@163.com

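The back-branch arrangement amounts to a link-name redirection in a port header. A hedged sketch in src/include/port.h terms (the macro spelling follows the commit message; the surrounding declarations are illustrative):

    typedef void (*pqsigfunc) (int);

    #ifdef FRONTEND
    #define pqsignal pqsignal_fe    /* dodge libpq's legacy pqsignal() export */
    #endif

    extern pqsigfunc pqsignal(int signo, pqsigfunc func);

With this in place, frontend objects compiled against the header reference pqsignal_fe at link time, so libpq's exported pqsignal() can no longer shadow the libpgport version regardless of link order.
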
2025-01-15  ecpg: Restore detection of unsupported COPY FROM STDIN.  (Fujii Masao)

The ecpg command includes code to warn about unsupported COPY FROM STDIN statements in input files. However, since commit 3d009e45bd, this functionality has been broken due to a bug introduced in that commit, causing ecpg to fail to detect the statement. This commit resolves the issue, restoring ecpg's ability to detect COPY FROM STDIN and issue a warning as intended.

Back-patch to all supported versions.

Author: Ryo Kanbayashi
Reviewed-by: Hayato Kuroda, Tom Lane
Discussion: https://postgr.es/m/CANOn0Ez_t5uDCUEV8c1YORMisJiU5wu681eEVZzgKwOeiKhkqQ@mail.gmail.com

2025-01-14  Fix catcache invalidation of a list entry that's being built  (Heikki Linnakangas)

If a new catalog tuple is inserted that belongs to a catcache list entry, and cache invalidation happens while the list entry is being built, the list entry might miss the newly inserted tuple.

To fix, change the way we detect concurrent invalidations while a catcache entry is being built. Keep a stack of entries that are being built, and apply cache invalidation to those entries in addition to the real catcache entries. This is similar to the in-progress list in relcache.c.

Back-patch to all supported versions.

Reviewed-by: Noah Misch
Discussion: https://www.postgresql.org/message-id/2234dc98-06fe-42ed-b5db-ac17384dc880@iki.fi

2025-01-14  Fix potential integer overflow in bringetbitmap()  (Michael Paquier)

This function expects an "int64" as result and stores the number of pages to add to the index scan bitmap as an "int", multiplying its final result by 10. For a relation large enough, this can theoretically overflow if counting more than (INT32_MAX / 10) pages, knowing that the number of pages is upper-bounded by MaxBlockNumber.

To avoid the overflow, this commit redefines "totalpages", used to calculate the result, to be an "int64" rather than an "int".

Reported-by: Evgeniy Gorbanyov
Author: James Hunter
Discussion: https://www.postgresql.org/message-id/07704817-6fa0-460c-b1cf-cd18f7647041@basealt.ru
Backpatch-through: 13

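In plain C terms, the overflow and its fix look roughly like this (a minimal sketch with illustrative names, not the actual brin.c code):

    #include <stdint.h>

    /* Before: totalpages * 10 is a 32-bit multiply that can overflow past
     * INT32_MAX / 10 pages, and only the wrapped result is widened. */
    static int64_t
    estimate_before(int totalpages)
    {
        return totalpages * 10;
    }

    /* After: widening totalpages itself makes the multiply happen in 64 bits. */
    static int64_t
    estimate_after(int64_t totalpages)
    {
        return totalpages * 10;
    }
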
2025-01-12  Fix HBA option count  (Daniel Gustafsson)

Commit 27a1f8d108 missed updating the max HBA option count to account for the new option added. Fix by bumping the counter and adjust the relevant comment to match.

Backpatch down to all supported branches like the erroneous commit.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/286764.1736697356@sss.pgh.pa.us
Backpatch-through: v13

2025-01-12  Fix XMLTABLE() deparsing to quote namespace names if necessary.  (Dean Rasheed)

When deparsing an XMLTABLE() expression, XML namespace names were not quoted. However, since they are parsed as ColLabel tokens, some names require double quotes to ensure that they are properly interpreted. Fix by using quote_identifier() in the deparsing code.

Back-patch to all supported versions.

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCXTpAS%3DncfLNTZ7YS6O5puHeLg_SUYAit%2Bcs7wsrd9Msg%40mail.gmail.com

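quote_identifier() is the standard ruleutils.c helper for this; it returns its argument unchanged when no quoting is needed. A hedged sketch of its use during deparsing (the wrapper function is illustrative, and PostgreSQL backend headers are assumed):

    static void
    append_xml_namespace(StringInfo buf, const char *ns_name)
    {
        /* Emits ns_name as-is, or wrapped in double quotes if required. */
        appendStringInfoString(buf, quote_identifier(ns_name));
    }
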
2025-01-11  Repair memory leaks in plpython.  (Tom Lane)

PLy_spi_execute_plan (PLyPlan.execute) and PLy_cursor_plan (plpy.cursor) use PLy_output_convert to convert Python values into Datums that can be passed to the query-to-execute. But they failed to pay much attention to its warning that it can leave "cruft generated along the way" behind. Repeated use of these methods can result in a substantial memory leak for the duration of the calling plpython function.

To fix, make a temporary memory context to invoke PLy_output_convert in. This also lets us get rid of the rather fragile code that was here for retail pfree's of the converted Datums. Indeed, we don't need the PLyPlanObject.values field anymore at all, though I left it in place in the back branches in the name of ABI stability.

Mat Arye and Tom Lane, per report from Mat Arye. Back-patch to all supported branches.

Discussion: https://postgr.es/m/CADsUR0DvVgnZYWwnmKRK65MZg7YLUSTDLV61qdnrwtrAJgU6xw@mail.gmail.com

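The temporary-context approach is a standard backend idiom; a hedged sketch using PostgreSQL's memory-context API (the context name and wrapper function are illustrative, and utils/memutils.h is assumed):

    static void
    execute_with_temp_context(void)
    {
        MemoryContext tmpcxt;
        MemoryContext oldcxt;

        tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                                       "PLy temporary context",
                                       ALLOCSET_DEFAULT_SIZES);
        oldcxt = MemoryContextSwitchTo(tmpcxt);

        /* ... PLy_output_convert() calls and query execution go here ... */

        MemoryContextSwitchTo(oldcxt);
        MemoryContextDelete(tmpcxt);    /* frees all conversion cruft at once */
    }

Deleting the context releases everything allocated in it in one shot, which is why the retail pfree bookkeeping becomes unnecessary.
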
2025-01-10  Fix missing ldapscheme option in pg_hba_file_rules()  (Daniel Gustafsson)

The ldapscheme option was missed when inspecting the HbaLine for assembling rows for the pg_hba_file_rules function.

Backpatch to all supported versions.

Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reported-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Bug: 18769
Discussion: https://postgr.es/m/18769-dd8610cbc0405172@postgresql.org
Backpatch-through: v13

2025-01-09  Fix off_t overflow in pg_basebackup on Windows.  (Thomas Munro)

walmethods.c used off_t to navigate around a pg_wal.tar file that could exceed 2GB, which doesn't work on Windows and would fail with misleading errors. Use pgoff_t instead.

Back-patch to all supported branches.

Author: Davinder Singh <davinder.singh@enterprisedb.com>
Reported-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Discussion: https://postgr.es/m/CAKZiRmyM4YnokK6Oenw5JKwAQ3rhP0YTz2T-tiw5dAQjGRXE3Q%40mail.gmail.com

2025-01-09  Provide 64-bit ftruncate() and lseek() on Windows.  (Thomas Munro)

Change our ftruncate() macro to use the 64-bit variant of chsize(), and add a new macro to redirect lseek() to _lseeki64().

Back-patch to all supported releases, in preparation for a bug fix.

Tested-by: Davinder Singh <davinder.singh@enterprisedb.com>
Discussion: https://postgr.es/m/CAKZiRmyM4YnokK6Oenw5JKwAQ3rhP0YTz2T-tiw5dAQjGRXE3Q%40mail.gmail.com

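On Windows, off_t is only 32 bits, so the portability header has to route these calls to the MSVC 64-bit equivalents. A hedged sketch of what such macros look like; the exact spellings in win32_port.h may differ, and note that _chsize_s() returns an errno value rather than the POSIX -1/errno convention:

    /* Hypothetical win32_port.h-style redirections, per the commit message. */
    #include <io.h>     /* _chsize_s(), _lseeki64() on Windows */

    #define ftruncate(fd, len)        _chsize_s((fd), (len))
    #define lseek(fd, offset, origin) _lseeki64((fd), (offset), (origin))
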
2025-01-08  Fix C error reported by Oracle compiler.  (Thomas Munro)

Commit 66aaabe7 (branches 13 - 17 only) was not acceptable to the Oracle Developer Studio compiler on build farm animal wrasse. It accidentally used a C++ style return statement to wrap a void function. None of the usual compilers complained, but the Oracle compiler is right: that is not allowed in C. Fix.

Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Z33vgfVgvOnbFLN9%40paquier.xyz

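The construct in question is valid C++ but a constraint violation in C: a return statement with an expression is not allowed in a function returning void, even when the expression itself has void type. A minimal illustration:

    static void do_work(void) {}

    /*
     * Not valid C (though valid C++), and the form the commit removed:
     *
     *     return do_work();
     *
     * GCC and Clang typically let it pass, but strict compilers such as
     * Oracle Developer Studio reject it.
     */
    static void
    fixed(void)
    {
        do_work();      /* call first ... */
        return;         /* ... then a plain return, or none at all */
    }
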
2025-01-08  Fix memory leak in pgoutput with relation attribute map  (Michael Paquier)

pgoutput caches the attribute map of a relation, that is free()'d only when validating a RelationSyncEntry. However, this code path is not taken when calling any of the SQL functions able to do some logical decoding, like pg_logical_slot_{get,peek}_changes(), leaking some memory into CacheMemoryContext on repeated calls.

This is a follow-up of c9b3d4909bbf, this time for v13 and v14. The relation attribute map is stored in a dedicated memory context, tracked with a static variable whose state is reset with a MemoryContext reset callback attached to PGOutputData->context. This implementation is similar to the approach taken by cfd6cbcf9be0.

Reported-by: Masahiko Sawada
Author: Vignesh C
Reviewed-by: Hou Zhijie
Discussion: https://postgr.es/m/CAD21AoDkAhQVSukOfH3_reuF-j4EU0-HxMqU3dU+bSTxsqT14Q@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm1hewNAsZ_e6FF52a=9drmkRJxtEPrzCB6-9mkJyeBBqA@mail.gmail.com
Backpatch-through: 13

2025-01-08  Restore smgrtruncate() prototype in back-branches.  (Thomas Munro)

It's possible that external code is calling smgrtruncate(). Any external callers might like to consider the recent changes to RelationTruncate(), but commit 38c579b0 should not have changed the function prototype in the back-branches, per ABI stability policy.

Restore smgrtruncate()'s traditional argument list in the back-branches, but make it a wrapper for a new function smgrtruncate2(). The three callers in core can use smgrtruncate2() directly. In master (18-to-be), smgrtruncate2() is effectively renamed to smgrtruncate(), so this wart is cleaned up.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKG%2BThae6x6%2BjmQiuALQBT2Ae1ChjMh1%3DkMvJ8y_SBJZrvA%40mail.gmail.com

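A hedged sketch of the ABI-compatibility wrapper pattern this describes; the argument lists are abbreviated from smgr.h and may not match the branch exactly:

    /* Old, ABI-stable entry point: recompute current sizes, then delegate. */
    void
    smgrtruncate(SMgrRelation reln, ForkNumber *forknum, int nforks,
                 BlockNumber *nblocks)
    {
        BlockNumber old_nblocks[MAX_FORKNUM + 1];

        for (int i = 0; i < nforks; i++)
            old_nblocks[i] = smgrnblocks(reln, forknum[i]);

        smgrtruncate2(reln, forknum, nforks, old_nblocks, nblocks);
    }

Old binaries keep calling smgrtruncate() with the old signature, while code that needs the new pre-computed-sizes behavior calls smgrtruncate2() directly.
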
2024-12-30  Fix handling of orphaned 2PC files in the future at recovery  (Michael Paquier)

Before 728bd991c3c4, which improved the support for 2PC files during recovery, the initial logic scanning files in pg_twophase checked first for files in the future of the transaction ID horizon, and only then checked whether a transaction ID is aborted or committed, which could involve a pg_xact lookup. After this commit, these checks have been done in reverse order.

Files detected as in the future do not have a state that can be checked in pg_xact, hence this caused recovery to fail abruptly should an orphaned 2PC file in the future of the transaction ID horizon exist in pg_twophase at the beginning of recovery.

A test is added to check for this scenario, using an empty 2PC with a transaction ID large enough to be in the future when running the test. This test is added in 16 and older versions for now. 17 and newer versions are impacted by a second bug caused by the addition of the epoch in the 2PC file names. An equivalent test will be added in these branches in a follow-up commit, once the second set of issues reported are fixed.

Author: Vitaly Davydov, Michael Paquier
Discussion: https://postgr.es/m/11e597-676ab680-8d-374f23c0@145466129
Backpatch-through: 13

2024-12-28  Exclude parallel workers from connection privilege/limit checks.  (Tom Lane)

Cause parallel workers to not check datallowconn, rolcanlogin, and ACL_CONNECT privileges. The leader already checked these things (except for rolcanlogin which might have been checked for a different role). Re-checking can accomplish little except to induce unexpected failures in applications that might not even be aware that their query has been parallelized. We already had the principle that parallel workers rely on their leader to pass a valid set of authorization information, so this change just extends that a bit further.

Also, modify the ReservedConnections, datconnlimit and rolconnlimit logic so that these limits are only enforced against regular backends, and only regular backends are counted while checking if the limits were already reached. Previously, background processes that had an assigned database or role were subject to these limits (with rather random exclusions for autovac workers and walsenders), and the set of existing processes that counted against each limit was quite haphazard as well. The point of these limits, AFAICS, is to ensure the availability of PGPROC slots for regular backends. Since all other types of processes have their own separate pools of PGPROC slots, it makes no sense either to enforce these limits against them or to count them while enforcing the limit.

While edge-case failures of these sorts have been possible for a long time, the problem got a good deal worse with commit 5a2fed911 (CVE-2024-10978), which caused parallel workers to make some of these checks using the leader's current role where before we had used its AuthenticatedUserId, thus allowing parallel queries to fail after SET ROLE. The previous behavior was fairly accidental and I have no desire to return to it.

This patch includes reverting 73c9f91a1, which was an emergency hack to suppress these same checks in some cases. It wasn't complete, as shown by a recent bug report from Laurenz Albe. We can also revert fd4d93d26 and 492217301, which hacked around the same problems in one regression test.

In passing, remove the special case for autovac workers in CheckMyDatabase; it seems cleaner to have AutoVacWorkerMain pass the INIT_PG_OVERRIDE_ALLOW_CONNS flag, now that that does what's needed.

Like 5a2fed911, back-patch to supported branches (which sadly no longer includes v12).

Discussion: https://postgr.es/m/1808397.1735156190@sss.pgh.pa.us

2024-12-28  In REASSIGN OWNED of a database, lock the tuple as mandated.  (Noah Misch)

Commit aac2c9b4fde889d13f859c233c2523345e72d32b mandated such locking and attempted to fulfill that mandate, but it missed REASSIGN OWNED. Hence, it remained possible to lose VACUUM's inplace update of datfrozenxid if a REASSIGN OWNED processed that database at the same time. This didn't affect the other inplace-updated catalog, pg_class. For pg_class, REASSIGN OWNED calls ATExecChangeOwner() instead of the generic AlterObjectOwner_internal(), and ATExecChangeOwner() fulfills the locking mandate.

Like in GRANT, implement this by following the locking protocol for any catalog subject to the generic AlterObjectOwner_internal(). It would suffice to do this for IsInplaceUpdateOid() catalogs only.

Back-patch to v13 (all supported versions).

Kirill Reshke. Reported by Alexander Kukushkin.

Discussion: https://postgr.es/m/CAFh8B=mpKjAy4Cuun-HP-f_vRzh2HSvYFG3rhVfYbfEBUhBAGg@mail.gmail.com

2024-12-23  Fix memory leak in pgoutput with publication list cache  (Michael Paquier)

The pgoutput module caches publication names in a list and frees it upon invalidation. However, the code forgot to free the actual publication names within the list elements, as publication names are pstrdup()'d in GetPublication(). This would cause memory to leak in CacheMemoryContext, bloating it over time as this context is not cleaned.

This is a problem for WAL senders running for a long time, as an accumulation of invalidation requests would bloat its cache memory usage. A second case, where this leak is easier to see, involves a backend calling SQL functions like pg_logical_slot_{get,peek}_changes() which create a new decoding context with each execution. More publications create more bloat.

To address this, this commit adds a new memory context within the logical decoding context and resets it each time the publication names cache is invalidated, based on a suggestion from Amit Kapila. This ensures that the lifespan of the publication names aligns with that of the logical decoding context.

Contrary to the HEAD-only commit f0c569d71515 that has changed PGOutputData to track this new child memory context, the context is tracked with a static variable whose state is reset with a MemoryContext reset callback attached to PGOutputData->context, so that ABI compatibility is preserved in stable branches. This approach is based on a suggestion from Amit Kapila.

Analyzed-by: Michael Paquier, Jeff Davis
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Michael Paquier, Euler Taveira, Hou Zhijie
Discussion: https://postgr.es/m/Z0khf9EVMVLOc_YY@paquier.xyz
Backpatch-through: 13

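The static-variable-plus-reset-callback trick mentioned above uses the stock MemoryContextRegisterResetCallback() machinery; a hedged sketch with illustrative names (PostgreSQL backend headers assumed):

    static MemoryContext pubnames_cxt = NULL;   /* name illustrative */

    static void
    pubnames_cxt_reset_cb(void *arg)
    {
        /* The child context dies with its parent; just forget the pointer. */
        pubnames_cxt = NULL;
    }

    static void
    attach_pubnames_context(MemoryContext decoding_cxt)
    {
        MemoryContextCallback *cb;

        pubnames_cxt = AllocSetContextCreate(decoding_cxt,
                                             "publication names",
                                             ALLOCSET_SMALL_SIZES);

        /* Allocate the callback in the context it outlives nothing in. */
        cb = MemoryContextAlloc(decoding_cxt, sizeof(MemoryContextCallback));
        cb->func = pubnames_cxt_reset_cb;
        cb->arg = NULL;
        MemoryContextRegisterResetCallback(decoding_cxt, cb);
    }

Because no struct field is added, the module's ABI stays unchanged in the stable branches while the cache's lifetime still tracks the decoding context.
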
2024-12-21  Update TransactionXmin when MyProc->xmin is updated  (Heikki Linnakangas)

GetSnapshotData() set TransactionXmin = MyProc->xmin, but when SnapshotResetXmin() advanced MyProc->xmin, it did not advance TransactionXmin correspondingly. That meant that TransactionXmin could be older than MyProc->xmin, and XIDs between TransactionXmin and the real MyProc->xmin could be vacuumed away. One known consequence is in pg_subtrans lookups: we might try to look up the status of an XID that was already truncated away.

Back-patch to all supported versions.

Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/d27a046d-a1e4-47d1-a95c-fbabe41debb4@iki.fi

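The invariant the fix restores is simply that the two copies of the xmin move together; a hedged, heavily simplified sketch (the helper is hypothetical; the real code lives in snapmgr.c and procarray.c):

    /* Hypothetical helper: never let the backend-local copy go stale. */
    static void
    set_my_xmin(TransactionId new_xmin)
    {
        MyProc->xmin = new_xmin;        /* shared-memory copy, seen by VACUUM */
        TransactionXmin = new_xmin;     /* backend-local copy, used for lookups */
    }
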
2024-12-20  Fix corruption when relation truncation fails.  (Thomas Munro)

RelationTruncate() does three things, while holding an AccessExclusiveLock and preventing checkpoints:

1. Logs the truncation.
2. Drops buffers, even if they're dirty.
3. Truncates some number of files.

Step 2 could previously be canceled if it had to wait for I/O, and step 3 could and still can fail in file APIs. All orderings of these operations have data corruption hazards if interrupted, so we can't give up until the whole operation is done. When dirty pages were discarded but the corresponding blocks were left on disk due to ERROR, old page versions could come back from disk, reviving deleted data (see pgsql-bugs #18146 and several like it). When primary and standby were allowed to disagree on relation size, standbys could panic (see pgsql-bugs #18426) or revive data unknown to visibility management on the primary (theorized).

Changes:

* WAL is now unconditionally flushed first
* smgrtruncate() is now called in a critical section, preventing interrupts and causing PANIC on file API failure
* smgrtruncate() has a new parameter for existing fork sizes, because it can't call smgrnblocks() itself inside a critical section

The changes apply to RelationTruncate(), smgr_redo() and pg_truncate_visibility_map(). That last is also brought up to date with other evolutions of the truncation protocol.

The VACUUM FileTruncate() failure mode had been discussed in older reports than the ones referenced below, with independent analysis from many people, but earlier theories on how to fix it were too complicated to back-patch. The more recently invented cancellation bug was diagnosed by Alexander Lakhin. Other corruption scenarios were spotted by me while iterating on this patch and earlier commit 75818b3a.

Back-patch to all supported releases.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reported-by: rootcause000@gmail.com
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/18146-04e908c662113ad5%40postgresql.org
Discussion: https://postgr.es/m/18426-2d18da6586f152d6%40postgresql.org

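A hedged, heavily abbreviated sketch of the hardened ordering; log_smgr_truncate() is a hypothetical stand-in for the WAL-insertion details, and the real storage.c code does much more:

    XLogRecPtr  lsn;

    /* 1. Log the truncation (XLogInsert() details elided)... */
    lsn = log_smgr_truncate(reln, nforks, forknum, new_nblocks);    /* hypothetical */

    /* ...and make the WAL durable before touching anything on disk. */
    XLogFlush(lsn);

    /* 2+3. From here on, failure must be a PANIC, not a retryable ERROR. */
    START_CRIT_SECTION();
    smgrtruncate2(reln, forknum, nforks, old_nblocks, new_nblocks);
    END_CRIT_SECTION();

Note old_nblocks is computed before entering the critical section, which is exactly why smgrtruncate() grew the extra fork-sizes parameter.
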
2024-12-20  Replace durable_rename_excl() by durable_rename(), take two  (Michael Paquier)

durable_rename_excl() attempts to avoid overwriting any existing files by using link() and unlink(), and it falls back to rename() on some platforms (aka WIN32), which offers no such overwrite protection. Most callers use durable_rename_excl() just in case there is an existing file, but in practice there shouldn't be one (see below for more details).

Furthermore, failures during durable_rename_excl() can result in multiple hard links to the same file. As per Nathan's tests, it is possible to end up with two links to the same file in pg_wal after a crash just before unlink() during WAL recycling. Specifically, the test produced links to the same file for the current WAL file and the next one because the half-recycled WAL file was re-recycled upon restarting, leading to WAL corruption.

This change replaces all the calls of durable_rename_excl() to durable_rename(). This removes the protection against accidentally overwriting an existing file, but some platforms are already living without it and ordinarily there shouldn't be one. The function itself is left around in case any extensions are using it. It will be removed on HEAD via a follow-up commit.

Here is a summary of the existing callers of durable_rename_excl() (see second discussion link at the bottom), replaced by this commit. First, basic_archive used it to avoid overwriting an archive concurrently created by another server, but as mentioned above, it will still overwrite files on some platforms. Second, xlog.c uses it to recycle past WAL segments, where an overwrite should not happen (origin of the change at f0e37a8) because there are protections about the WAL segment to select when recycling an entry. The third and last area is related to the write of timeline history files. writeTimeLineHistory() will write a new timeline history file at the end of recovery on promotion, so there should be no such files for the same timeline. What remains is writeTimeLineHistoryFile(), that can be used in parallel by a WAL receiver and the startup process, and some digging of the buildfarm shows that EEXIST from a WAL receiver can happen with an error of "could not link file \"pg_wal/xlogtemp.NN\" to \"pg_wal/MM.history\"", which would cause an automatic restart of the WAL receiver as it is promoted to FATAL, hence this should improve the stability of the WAL receiver as rename() would overwrite an existing TLI history file already fetched by the startup process at recovery.

This is the second time this change is attempted, ccfbd92 being the first one, but this time no assertions are added for the case of a TLI history file written concurrently by the WAL receiver or the startup process because we can expect one to exist (some of the TAP tests are able to trigger with a proper timing).

This commit has been originally applied on v16~ as of dac1ff30906b, and we have received more reports of this issue, where clusters can become corrupted at replay in older stable branches with multiple links pointing to the same physical WAL segment file. This backpatch addresses the problem for the v13~v15 range.

Author: Nathan Bossart
Reviewed-by: Robert Haas, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20220407182954.GA1231544@nathanxps13
Discussion: https://postgr.es/m/Ym6GZbqQdlalSKSG@paquier.xyz
Discussion: https://postgr.es/m/CAJhEC04tBkYPF4q2uS_rCytauvNEVqdBAzasBEokfceFhF=KDQ@mail.gmail.com

2024-12-19  Fix Assert failure in WITH RECURSIVE UNION queries  (David Rowley)

If the non-recursive part of a recursive CTE ended up using TTSOpsBufferHeapTuple as the table slot type, then a duplicate value could cause an Assert failure in CheckOpSlotCompatibility() when checking the hash table for the duplicate value. The expected slot type for the deform step was TTSOpsMinimalTuple so the Assert failed when the TTSOpsBufferHeapTuple slot was used.

This is a long-standing bug which we likely didn't notice because it seems much more likely that the non-recursive term would have required projection and used a TTSOpsVirtual slot, which CheckOpSlotCompatibility is ok with.

There doesn't seem to be any harm done here other than the Assert failure. Both TTSOpsMinimalTuple and TTSOpsBufferHeapTuple slot types require tuple deformation, so the EEOP_*_FETCHSOME ExprState step would have properly existed in the ExprState.

The solution is to pass NULL for the ExecBuildGroupingEqual's 'lops' parameter. This means the ExprState's EEOP_*_FETCHSOME step won't expect a fixed slot type. This makes CheckOpSlotCompatibility() happy as no checking is performed when the ExprEvalStep is not expecting a fixed slot type.

Reported-by: Richard Guo
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAMbWs4-8U9q2LAtf8+ghV11zeUReA3AmrYkxzBEv0vKnDxwkKA@mail.gmail.com
Backpatch-through: 13, all supported versions

2024-12-17  Accommodate very large dshash tables.  (Nathan Bossart)

If a dshash table grows very large (e.g., the dshash table for cumulative statistics when there are millions of tables), resizing it may fail with an error like:

    ERROR: invalid DSA memory alloc request size 1073741824

To fix, permit dshash resizing to allocate more than 1 GB by providing the DSA_ALLOC_HUGE flag.

Reported-by: Andreas Scherbaum
Author: Matthias van de Meent
Reviewed-by: Cédric Villemain, Michael Paquier, Andres Freund
Discussion: https://postgr.es/m/80a12d59-0d5e-4c54-866c-e69cd6536471%40pgug.de
Backpatch-through: 13

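DSA_ALLOC_HUGE is the standard flag for this in PostgreSQL's dsa.h API; a hedged sketch of the allocation call (the variable names are illustrative, and whether the real resize also zeroes is an assumption here):

    dsa_pointer new_buckets;

    /* Plain dsa_allocate() caps requests at ~1GB; DSA_ALLOC_HUGE lifts that. */
    new_buckets = dsa_allocate_extended(area,
                                        new_size_bytes,
                                        DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);
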
2024-12-16  Make 009_twophase.pl test pass with recovery_min_apply_delay set  (Heikki Linnakangas)

The test failed if you ran the regression tests with TEMP_CONFIG with recovery_min_apply_delay = '500ms'. Fix the race condition by waiting for the transaction to be applied in the replica, like in a few other tests.

The failing test was introduced in commit cbfbda7841. Backpatch to all supported versions like that commit (except v12, which is no longer supported).

Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/09e2a70a-a6c2-4b5c-aeae-040a7449c9f2@gmail.com

2024-12-15  pgbench: fix misprocessing of some nested \if constructs.  (Tom Lane)

An \if command appearing within a false (not-to-be-executed) \if branch was incorrectly treated the same as \elif. This could allow statements within the inner \if to be executed when they should not be. Also the missing inner \if stack entry would result in an assertion failure (in assert-enabled builds) when the final \endif is reached.

Report and patch by Michail Nikolaev. Back-patch to all supported branches.

Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com

2024-12-13  Fix possible crash in pg_dump with identity sequences.  (Tom Lane)

If an owned sequence is considered interesting, force its owning table to be marked interesting too. This ensures, in particular, that we'll fetch the owning table's column names so we have the data needed for ALTER TABLE ... ADD GENERATED. Previously there were edge cases where pg_dump could get SIGSEGV due to not having filled in the column names. (The known case is where the owning table has been made part of an extension while its identity sequence is not a member; but there may be others.)

Also, if it's an identity sequence, force its dumped-components mask to exactly match the owning table: dump definition only if we're dumping the table's definition, dump data only if we're dumping the table's data, etc. This generalizes the code introduced in commit b965f2617 that set the sequence's dump mask to NONE if the owning table's mask is NONE. That's insufficient to prevent failures, because for example the table's mask might only request dumping ACLs, which would lead us to still emit ALTER TABLE ADD GENERATED even though we didn't create the table. It seems better to treat an identity sequence as though it were an inseparable aspect of the table, matching the treatment used in the backend's dependency logic. Perhaps this policy needs additional refinement, but let's wait to see some field use-cases before changing it further.

While here, add a comment in pg_dump.h warning against writing tests like "if (dobj->dump == DUMP_COMPONENT_NONE)", which was a bug in this case. There is one other example in getPublicationNamespaces, which if it's not a bug is at least remarkably unclear and under-documented. Changing that requires a separate discussion, however.

Per report from Artur Zakirov. Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKNkYnwXFBf136=u9UqUxFUVagevLQJ=zGd5BsLhCsatDvQsKQ@mail.gmail.com

2024-12-12  Backpatch critical performance fixes to pgarch.c  (Álvaro Herrera)

This backpatches commits beb4e9ba1652 and 1fb17b190341 (originally appearing in REL_15_STABLE) to REL_14_STABLE.

Performance of the WAL archiver can become pretty critical at times, and reports exist of users getting in serious trouble (hours of downtime, loss of replicas) because of lack of this optimization.

We'd like to backpatch these to REL_13_STABLE too, but because of the very invasive changes made by commit d75288fb27b8 in the 14 timeframe, we deem it too risky :-(

Original commit messages appear below.

Discussion: https://postgr.es/m/202411131605.m66syq5i5ucl@alvherre.pgsql

commit beb4e9ba1652a04f66ff20261444d06f678c0b2d
Author: Robert Haas <rhaas@postgresql.org>
AuthorDate: Thu Nov 11 15:02:53 2021 -0500

    Improve performance of pgarch_readyXlog() with many status files.

    Presently, the archive_status directory was scanned for each file to archive. When there are many status files, say because archive_command has been failing for a long time, these directory scans can get very slow. With this change, the archiver remembers several files to archive during each directory scan, speeding things up.

    To ensure timeline history files are archived as quickly as possible, XLogArchiveNotify() forces the archiver to do a new directory scan as soon as the .ready file for one is created.

    Nathan Bossart, per a long discussion involving many people. It is not clear to me exactly who out of all those people reviewed this particular patch.

    Discussion: http://postgr.es/m/CA+TgmobhAbs2yabTuTRkJTq_kkC80-+jw=pfpypdOJ7+gAbQbw@mail.gmail.com
    Discussion: http://postgr.es/m/620F3CE1-0255-4D66-9D87-0EADE866985A@amazon.com

commit 1fb17b1903414676bd371068739549cd2966fe87
Author: Tom Lane <tgl@sss.pgh.pa.us>
AuthorDate: Wed Dec 29 17:02:50 2021 -0500

    Fix issues in pgarch's new directory-scanning logic.

    The arch_filenames[] array elements were one byte too small, so that a maximum-length filename would get corrupted if another entry were made after it. (Noted by Thomas Munro, fix by Nathan Bossart.)

    Move these arrays into a palloc'd struct, so that we aren't wasting a few kilobytes of static data in each non-archiver process.

    Add a binaryheap_reset() call to make it plain that we start the directory scan with an empty heap. I don't think there's any live bug of that sort, but it seems fragile, and this is very cheap insurance.

    Cleanup for commit beb4e9ba1, so no back-patch needed.

    Discussion: https://postgr.es/m/CA+hUKGLHAjHuKuwtzsW7uMJF4BVPcQRL-UMZG_HM-g0y7yLkUg@mail.gmail.com

2024-12-10  Fix elog(FATAL) before PostmasterMain() or just after fork().  (Noah Misch)

Since commit 97550c0711972a9856b5db751539bbaf2f88884c, these failed with "PANIC: proc_exit() called in child process" due to uninitialized or stale MyProcPid. That was reachable if close() failed in ClosePostmasterPorts() or setlocale(category, "C") failed, both unlikely.

Back-patch to v13 (all supported versions).

Discussion: https://postgr.es/m/20241208034614.45.nmisch@google.com

2024-12-09  Simplify executor's determination of whether to use parallelism.  (Tom Lane)

Our parallel-mode code only works when we are executing a query in full, so ExecutePlan must disable parallel mode when it is asked to do partial execution. The previous logic for this involved passing down a flag (variously named execute_once or run_once) from callers of ExecutorRun or PortalRun. This is overcomplicated, and unsurprisingly some of the callers didn't get it right, since it requires keeping state that not all of them have handy; not to mention that the requirements for it were undocumented. That led to assertion failures in some corner cases. The only state we really need for this is the existing QueryDesc.already_executed flag, so let's just put all the responsibility in ExecutePlan. (It could have been done in ExecutorRun too, leading to a slightly shorter patch -- but if there's ever more than one caller of ExecutePlan, it seems better to have this logic in the subroutine than the callers.)

This makes those ExecutorRun/PortalRun parameters unnecessary. In master it seems okay to just remove them, returning the API for those functions to what it was before parallelism. Such an API break is clearly not okay in stable branches, but for them we can just leave the parameters in place after documenting that they do nothing.

Per report from Yugo Nagata, who also reviewed and tested this patch. Back-patch to all supported branches.

Discussion: https://postgr.es/m/20241206062549.710dc01cf91224809dd6c0e1@sraoss.co.jp

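A hedged sketch of the centralized rule this describes, inside ExecutePlan(); the field names are real executor state, but the exact condition here is illustrative rather than the committed code:

    /* Parallel mode is only safe for a first, run-to-completion execution. */
    use_parallel_mode = queryDesc->plannedstmt->parallelModeNeeded;

    if (numberTuples != 0 ||            /* partial fetch requested */
        queryDesc->already_executed)    /* or this is a repeat call */
        use_parallel_mode = false;

    queryDesc->already_executed = true;
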
2024-12-07  Ensure that pg_amop/amproc entries depend on their lefttype/righttype.  (Tom Lane)

Usually an entry in pg_amop or pg_amproc does not need a dependency on its amoplefttype/amoprighttype/amproclefttype/amprocrighttype types, because there is an indirect dependency via the argument types of its referenced operator or procedure, or via the opclass it belongs to. However, for some support procedures in some index AMs, the argument types of the support procedure might not mention the column data type at all. Also, the amop/amproc entry might be treated as "loose" in the opfamily, in which case it lacks a dependency on any particular opclass; or it might be a cross-type entry having a reference to a datatype that is not its opclass' opcintype.

The upshot of all this is that there are cases where a datatype can be dropped while leaving behind amop/amproc entries that mention it, because there is no path in pg_depend showing that those entries depend on that type. Such entries are harmless in normal activity, because they won't get used, but they cause problems for maintenance actions such as dropping the operator family. They also cause pg_dump to produce bogus output. The previous commit put a band-aid on the DROP OPERATOR FAMILY failure, but a real fix is needed.

To fix, add pg_depend entries showing that a pg_amop/pg_amproc entry depends on its lefttype/righttype. To avoid bloating pg_depend too much, skip this if the referenced operator or function has that type as an input type. (I did not bother with considering the possible indirect dependency via the opclass' opcintype; at least in the reported case, that wouldn't help anyway.)

Probably, the reason this has escaped notice for so long is that add-on datatypes and relevant opclasses/opfamilies are usually packaged as extensions nowadays, so that there's no way to drop a type without dropping the referencing opclasses/opfamilies too. Still, in the absence of pg_depend entries there's nothing that constrains DROP EXTENSION to drop the opfamily entries before the datatype, so it seems possible for a DROP failure to occur anyway.

The specific case that was reported doesn't fail in v13, because v13 prefers to attach the support procedure to the opclass not the opfamily. But it's surely possible to construct other edge cases that do fail in v13, so patch that too.

Per report from Yoran Heling. Back-patch to all supported branches.

Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021

2024-12-07  Make getObjectDescription robust against dangling amproc type links.  (Tom Lane)

Yoran Heling reported a case where a data type could be dropped while references to its OID remain behind in pg_amproc. This causes getObjectDescription to fail, which blocks dropping the operator family (since our DROP code likes to construct descriptions of everything it's dropping).

The proper fix for this requires adding more pg_depend entries. But to allow DROP to go through with already-corrupt catalogs, tweak getObjectDescription to print "???" for the type instead of failing when it processes such an entry.

I changed the logic for pg_amop similarly, for consistency, although it is not known that the problem can manifest in pg_amop.

Per report from Yoran Heling. Back-patch to all supported branches (although the problem may be unreachable in v13).

Discussion: https://postgr.es/m/Z1MVCOh1hprjK5Sf@gmai021

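One concrete way to get that fallback, and a plausible shape for the fix, is format_type_extended() with its FORMAT_TYPE_ALLOW_INVALID flag, which yields the literal "???" for an unknown type OID instead of erroring. A hedged sketch (backend headers assumed; the tuple-field access is illustrative):

    char *left = format_type_extended(amprocform->amproclefttype, -1,
                                      FORMAT_TYPE_ALLOW_INVALID);

    /* A type name, or "???" if the OID dangles. */
    appendStringInfoString(&buffer, left);
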
2024-12-07  Fix is_digit labeling of to_timestamp's FFn format codes.  (Tom Lane)

These format codes produce or consume strings of digits, so they should be labeled with is_digit = true, but they were not. This has effect in only one place, where is_next_separator() is checked to see if the preceding format code should slurp up all the available digits. Thus, with a format such as '...SSFF3' with remaining input '12345', the 'SS' code would consume all five digits (and then complain about seconds being out of range) when it should eat only two digits.

Per report from Nick Davies. This bug goes back to d589f9446 where the FFn codes were introduced, so back-patch to v13.

Discussion: https://postgr.es/m/AM8PR08MB6356AC979252CFEA78B56678B6312@AM8PR08MB6356.eurprd08.prod.outlook.com

2024-12-05  Avoid low-probability crash on out-of-memory.  (Tom Lane)

check_restrict_nonsystem_relation_kind() correctly uses guc_malloc() in v16 and later. But in older branches it must use malloc() directly, and it forgot to check for failure return. Faulty backpatching of 66e94448a.

Karina Litskevich

Discussion: https://postgr.es/m/CACiT8iZ=atkguKVbpN4HmJFMb4+T9yEowF5JuPZG8W+kkZ9L6w@mail.gmail.com

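In a GUC check hook, the malloc() failure path typically reports the problem through the GUC error macros and returns false; a hedged sketch of the missing check (the surrounding hook and the detail wording are illustrative):

    char *value = malloc(len);

    if (value == NULL)
    {
        GUC_check_errcode(ERRCODE_OUT_OF_MEMORY);
        GUC_check_errdetail("Out of memory.");
        return false;
    }
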
2024-12-03  RelationTruncate() must set DELAY_CHKPT_START.  (Thomas Munro)

Previously, it set only DELAY_CHKPT_COMPLETE. That was important, because it meant that if the XLOG_SMGR_TRUNCATE record preceded a XLOG_CHECKPOINT_ONLINE record in the WAL, then the truncation would also happen on disk before the XLOG_CHECKPOINT_ONLINE record was written. However, it didn't guarantee that the sync request for the truncation was processed before the XLOG_CHECKPOINT_ONLINE record was written.

By setting DELAY_CHKPT_START, we guarantee that if an XLOG_SMGR_TRUNCATE record is written to WAL before the redo pointer of a concurrent checkpoint, the sync request queued by that operation must be processed by that checkpoint, rather than being left for the following one.

This is a refinement of commit 412ad7a5563. Back-patch to all supported releases, like that commit.

Author: Robert Haas <robertmhaas@gmail.com>
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2B-2rjGZC2kwqr2NMLBcEBp4uf59QT1advbWYF_uc%2B0Aw%40mail.gmail.com