path: root/src/backend/backup
2024-06-21  parse_manifest: Use const char *  (Peter Eisentraut)
This adapts the manifest parsing code to take advantage of the const-ified jsonapi. Reviewed-by: Andrew Dunstan <andrew@dunslane.net> Discussion: https://www.postgresql.org/message-id/flat/f732b014-f614-4600-a437-dba5a2c3738b%40eisentraut.org
2024-06-12  Harmonize function parameter names for Postgres 17.  (Peter Geoghegan)
Make sure that function declarations use names that exactly match the corresponding names from function definitions in a few places. These inconsistencies were all introduced during Postgres 17 development. pg_bsd_indent still has a couple of similar inconsistencies, which I (pgeoghegan) have left untouched for now. This commit was written with help from clang-tidy, by mechanically applying the same rules as similar clean-up commits (the earliest such commit was commit 035ce1fe).
2024-04-14  Fix unnecessary padding in incremental backups  (Tomas Vondra)
Commit 10e3226ba13d added padding to incremental backups to ensure the block data is properly aligned. The code in sendFile() however failed to consider that the header may already be a multiple of BLCKSZ and thus aligned, and added a full BLCKSZ of unnecessary padding.

Not only does this make the incremental file a bit larger, but the other places calculating the amount of padding correctly determined that none was needed and did not include it in the formula. This resulted in pg_basebackup getting confused while parsing the data stream, trying to access files with invalid filenames (e.g. containing binary data) and failing.
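For readers following the arithmetic, a standalone sketch (not the committed sendFile() code; BLCKSZ is hard-coded for illustration) of why the naive padding formula adds a needless full block when the header is already aligned:

    #include <stdio.h>

    #define BLCKSZ 8192

    /* Naive padding: adds a full BLCKSZ when len is already aligned. */
    static size_t pad_naive(size_t len)
    {
        return BLCKSZ - (len % BLCKSZ);
    }

    /* Correct padding: yields 0 when len is already a multiple of BLCKSZ. */
    static size_t pad_correct(size_t len)
    {
        return (BLCKSZ - (len % BLCKSZ)) % BLCKSZ;
    }

    int main(void)
    {
        size_t header_len = 2 * BLCKSZ;     /* header happens to be aligned already */

        /* prints "naive: 8192, correct: 0" */
        printf("naive: %zu, correct: %zu\n",
               pad_naive(header_len), pad_correct(header_len));
        return 0;
    }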
2024-04-12  Assorted minor cleanups in the test_json_parser module  (Andrew Dunstan)
Per gripes from Michael Paquier.
Discussion: https://postgr.es/m/ZhTQ6_w1vwOhqTQI@paquier.xyz

Along the way, also clean up a handful of typos in 3311ea86ed and ea7b4e9a2a, found by Alexander Lakhin, and a couple of stylistic snafus noted by Daniel Westermann and Daniel Gustafsson.
2024-04-12  Fix some memory leaks associated with parsing json and manifests  (Andrew Dunstan)
Coverity complained about not freeing some memory associated with incrementally parsing backup manifests. To fix that, provide and use a new shutdown function for the JsonManifestParseIncrementalState object, in line with a suggestion from Tom Lane. While analysing the problem, I noticed a buglet in freeing memory for incremental json lexers. To fix that remove a bogus condition on freeing the memory allocated for them.
2024-04-11  Fix grammar.  (Thomas Munro)
Reported-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/ZhdKqj5DwoOzirFv%40paquier.xyz
2024-04-11  Fix potential stack overflow in incremental backup.  (Thomas Munro)
The user can set RELSEG_SIZE to a high number at compile time, so we can't use it to control the size of an array on the stack: it could be many gigabytes in size. On closer inspection, we don't really need that intermediate array anyway. Let's just write directly into the output array, and then perform the absolute->relative adjustment in place. This fixes new code from commit dc212340058. Reviewed-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/CA%2BhUKG%2B2hZ0sBztPW4mkLfng0qfkNtAHFUfxOMLizJ0BPmi5%2Bg%40mail.gmail.com
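A standalone sketch of the in-place absolute-to-relative adjustment described above (hypothetical names, not the committed code):

    #include <stdio.h>
    #include <stdint.h>

    typedef uint32_t BlockNumber;

    /*
     * Convert absolute block numbers to segment-relative ones in place,
     * avoiding a separate (potentially huge) intermediate array.
     */
    static void
    make_relative(BlockNumber *blocks, int nblocks, BlockNumber segment_start)
    {
        for (int i = 0; i < nblocks; i++)
            blocks[i] -= segment_start;
    }

    int main(void)
    {
        BlockNumber blocks[] = {131072, 131075, 131080};    /* absolute numbers */

        make_relative(blocks, 3, 131072);                   /* now 0, 3, 8 */
        for (int i = 0; i < 3; i++)
            printf("%u\n", (unsigned) blocks[i]);
        return 0;
    }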
2024-04-07  Remove useless duplicate call of defGetBoolean().  (Tom Lane)
Seems to be a copy-and-paste error dating to dc2123400. Noted while reviewing a related documentation patch.
2024-04-05  Align blocks in incremental backups to BLCKSZ  (Tomas Vondra)
Align blocks stored in incremental files to BLCKSZ, so that the incremental backups work well with CoW filesystems. The header of the incremental file is padded with \0 to a multiple of BLCKSZ, so that the block data (also BLCKSZ) is aligned to BLCKSZ. The padding is added only to files containing block data, so files with just the header remain small. This adds a bit of extra space, but as the number of blocks increases the overhead becomes negligible very quickly. And as the padding is \0 bytes, it compresses extremely well.

The alignment is important for CoW filesystems that usually require the blocks to be aligned to the filesystem page size for features like block sharing, deduplication etc. to work well. With the variable-sized header the blocks in the increments were not aligned at all, negating the benefits of the CoW filesystems.

This matters even for non-CoW filesystems, for example when placed on a RAID array. If a block is not aligned, it may easily span multiple devices, causing read and write amplification.

It might be better to align the blocks to the filesystem page, not BLCKSZ, but we have no good way to determine that. Even if we determine the page size at the time of taking the backup, the backup may move. For now BLCKSZ seems sufficient - the filesystem page is usually 4K, so the default BLCKSZ (8K) is aligned to that.

Author: Tomas Vondra
Reviewed-by: Robert Haas, Jakub Wartak
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
2024-04-04  Use incremental parsing of backup manifests.  (Andrew Dunstan)
This changes the three callers to json_parse_manifest() to use json_parse_manifest_incremental_chunk() if appropriate. In the case of the backend caller, since we don't know the size of the manifest in advance we always call the incremental parser. Author: Andrew Dunstan Reviewed-By: Jacob Champion Discussion: https://postgr.es/m/7b0a51d6-0d9d-7366-3a1a-f74397a02f55@dunslane.net
2024-03-20  Revert "Temporary patch to help debug pg_walsummary test failures."  (Nathan Bossart)
Thanks to commits ea18eb7d62, b6ee30ec08, and 19a829a327, the 002_blocks.pl test now consistently passes, so we can remove this temporary debugging code. This reverts commit 5ddf9973477729cf161b4ad0a1efd52f4fea9c88. Discussion: https://postgr.es/m/20240314210010.GA3056455%40nathanxps13
2024-03-16  Add destroyStringInfo function for cleaning up StringInfos  (Daniel Gustafsson)
destroyStringInfo() is a counterpart to makeStringInfo(), freeing a palloc'd StringInfo and its data. This is a convenience function to align the StringInfo API with the PQExpBuffer API. Originally added in the OAuth patchset, it was extracted and committed separately in order to aid upcoming JSON work. Author: Daniel Gustafsson <daniel@yesql.se> Author: Jacob Champion <jacob.champion@enterprisedb.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://postgr.es/m/CAOYmi+mWdTd6ujtyF7MsvXvk7ToLRVG_tYAcaGbQLvf=N4KrQw@mail.gmail.com
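A minimal backend-style usage sketch of the pairing described above (compiles only inside the backend; the surrounding function is made up for illustration):

    #include "postgres.h"
    #include "lib/stringinfo.h"

    /* Build a throwaway message, use it, then free both struct and buffer. */
    static void
    demo_stringinfo(const char *name)
    {
        StringInfo  buf = makeStringInfo();     /* palloc'd StringInfo plus data buffer */

        appendStringInfo(buf, "processing \"%s\"", name);
        elog(DEBUG1, "%s", buf->data);

        destroyStringInfo(buf);                 /* counterpart to makeStringInfo() */
    }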
2024-03-13  Add the system identifier to backup manifests.  (Robert Haas)
Before this patch, if you took a full backup on server A and then tried to use the backup manifest to take an incremental backup on server B, it wouldn't know that the manifest was from a different server and so the incremental backup operation could potentially complete without error. When you later tried to run pg_combinebackup, you'd find out that your incremental backup was and always had been invalid. That's poor timing, because nobody likes finding out about backup problems only at restore time. With this patch, you'll get an error when trying to take the (invalid) incremental backup, which seems a lot nicer. Amul Sul, revised by me. Review by Michael Paquier. Discussion: http://postgr.es/m/CA+TgmoYLZzbSAMM3cAjV4Y+iCRZn-bR9H2+Mdz7NdaJFU1Zb5w@mail.gmail.com
2024-03-13  Make the order of the header file includes consistent  (Peter Eisentraut)
Similar to commit 7e735035f20. Author: Richard Guo <guofenglinux@gmail.com> Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-WhpCFMbXCjtJ%2BFzmjfPrp7Hw1pk4p%2BZpU95Kh3ofZ1A%40mail.gmail.com
2024-03-04  Fix pgindent damage.  (Robert Haas)
Apparently, I neglected to pgindent the prior commit. Per buildfarm.
2024-03-04  Fix incremental backup interaction with XLOG_DBASE_CREATE_FILE_COPY.  (Robert Haas)
After XLOG_DBASE_CREATE_FILE_COPY, a correct incremental backup needs to copy in full everything with the database and tablespace OID mentioned in that record; but that record doesn't specifically mention the blocks, or even the relfilenumbers, of the affected relations. As a result, we were failing to copy data that we should have copied. To fix, enter the DB OID and tablespace OID into the block reference table with relfilenumber 0 and limit block 0; and treat that as a limit block of 0 for every relfilenumber whose DB OID and tablespace OID match. Also, add a test case. Patch by me, reviewed by Noah Misch. Discussion: http://postgr.es/m/CA+Tgmob0xa=ByvGLMdAgkUZyVQE=r4nyYZ_VEa40FCfEDFnTKA@mail.gmail.com
2024-03-04  Remove unused #include's from backend .c files  (Peter Eisentraut)
as determined by include-what-you-use (IWYU)

While IWYU also suggests to *add* a bunch of #include's (which is its main purpose), this patch does not do that. In some cases, a more specific #include replaces another less specific one.

Some manual adjustments of the automatic result:

- IWYU currently doesn't know about includes that provide global variable declarations (like -Wmissing-variable-declarations), so those includes are being kept manually.

- All includes for port(ability) headers are being kept for now, to play it safe.

- No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the patch from exploding in size.

Note that this patch touches just *.c files, so nothing declared in header files changes in hidden ways.

As a small example, in src/backend/access/transam/rmgr.c, some IWYU pragma annotations are added to handle a special case there.

Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
2024-03-03  Replace BackendIds with 0-based ProcNumbers  (Heikki Linnakangas)
Now that BackendId was just another index into the proc array, it was redundant with the 0-based proc numbers used in other places. Replace all usage of backend IDs with proc numbers.

The only place where the term "backend id" remains is in a few pgstat functions that expose backend IDs at the SQL level. Those IDs are now in fact 0-based ProcNumbers too, but the documentation still calls them "backend ids". That term still seems appropriate to describe what the numbers are, so I let it be.

One user-visible effect is that pg_temp_0 is now a valid temp schema name, for the backend with ProcNumber 0.

Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
2024-02-26  Fix incorrect format placeholder  (Peter Eisentraut)
Not only did the format placeholder not match the variable, the variable also didn't match the function it was getting its value from.
2024-02-16  Use new overflow-safe integer comparison functions.  (Nathan Bossart)
Commit 6b80394781 introduced integer comparison functions designed to be as efficient as possible while avoiding overflow. This commit makes use of these functions in many of the in-tree qsort() comparators to help ensure transitivity. Many of these comparator functions should also see a small performance boost. Author: Mats Kindahl Reviewed-by: Andres Freund, Fabrízio de Royes Mello Discussion: https://postgr.es/m/CA%2B14426g2Wa9QuUpmakwPxXFWG_1FaY0AsApkvcTBy-YfS6uaw%40mail.gmail.com
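The general idea, shown as a standalone sketch with a plain expression rather than the in-tree helpers: compare instead of subtracting, so the comparator cannot overflow and stays transitive.

    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    /*
     * A subtraction-based comparator (return x - y) can overflow and wrap,
     * breaking transitivity and confusing qsort().  Comparing directly is safe.
     */
    static int
    cmp_int32_safe(const void *a, const void *b)
    {
        int x = *(const int *) a;
        int y = *(const int *) b;

        return (x > y) - (x < y);       /* -1, 0 or 1; never overflows */
    }

    int main(void)
    {
        int vals[] = {INT_MIN, 42, -7, INT_MAX};

        qsort(vals, 4, sizeof(int), cmp_int32_safe);
        for (int i = 0; i < 4; i++)
            printf("%d\n", vals[i]);
        return 0;
    }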
2024-02-13  Skip .DS_Store files in server side utils  (Daniel Gustafsson)
The macOS Finder application creates .DS_Store files in directories when opened, which creates problems for server-side utilities that expect all files to be PostgreSQL-specific. Skip these files when encountered in pg_checksums, pg_rewind and pg_basebackup.

This was extracted from a larger patchset for skipping hidden files and system files, where the consensus was to just skip these. Since this is equally likely to happen in every version, backpatch to all supported versions.

Reported-by: Mark Guertin <markguertin@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Tobias Bussmann <t.bussmann@gmx.net>
Discussion: https://postgr.es/m/E258CE50-AB0E-455D-8AAD-BB4FE8F882FB@gmail.com
Backpatch-through: v12
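A standalone sketch of the kind of skip logic described (a made-up directory walker, not the code used by pg_checksums, pg_rewind or pg_basebackup):

    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    /* Walk a directory, ignoring macOS Finder metadata files. */
    static void
    list_data_files(const char *path)
    {
        DIR        *dir = opendir(path);
        struct dirent *de;

        if (dir == NULL)
            return;

        while ((de = readdir(dir)) != NULL)
        {
            if (strcmp(de->d_name, ".") == 0 ||
                strcmp(de->d_name, "..") == 0 ||
                strcmp(de->d_name, ".DS_Store") == 0)   /* skip Finder metadata */
                continue;

            printf("%s\n", de->d_name);
        }
        closedir(dir);
    }

    int main(void)
    {
        list_data_files(".");
        return 0;
    }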
2024-01-31  Revise pg_walsummary's 002_blocks test to avoid spurious failures.  (Robert Haas)
Analysis of buildfarm results showed that the code that was intended to wait for the inserts performed by this test to complete did not actually do so. Try to make that logic more robust. Improve error checking elsewhere in the script, too, so that we don't miss things like poll_query_until failing. Along the way, fix a bit of pgindent damage introduced by commit 5ddf9973477729cf161b4ad0a1efd52f4fea9c88, which aimed to help us debug the failures that this commit is trying to fix. It's making the buildfarm sad. Discussion: http://postgr.es/m/CA+TgmobWFb8NqyfC31YnKAbZiXf9tLuwmyuvx=iYMXMniPQ4nw@mail.gmail.com
2024-01-26  Temporary patch to help debug pg_walsummary test failures.  (Robert Haas)
The tests in 002_blocks.pl are failing in the buildfarm from time to time, but we don't know how to reproduce the failure elsewhere. The most obvious explanation seems to be the unexpected disappearance of a WAL summary file, so bump up the logging level in RemoveWalSummaryIfOlderThan to try to help us spot such problems, and print the cutoff time in addition to the removed filename. Also adjust 002_blocks.pl to dump out a directory listing of the relevant directory at various points. This patch should be reverted once we sort out what's happening here. Patch by me, reviewed by Nathan Bossart, who also reported the issue. Discussion: http://postgr.es/m/20240124170846.GA2643050@nathanxps13
2024-01-11  Add new function pg_get_wal_summarizer_state().  (Robert Haas)
This makes it possible to access information about the progress of WAL summarization from SQL. The previously-added functions pg_available_wal_summaries() and pg_wal_summary_contents() only examine on-disk state, but this function exposes information from the server's shared memory. Discussion: http://postgr.es/m/CA+Tgmobvqqj-DW9F7uUzT-cQqs6wcVb-Xhs=w=hzJnXSE-kRGw@mail.gmail.com
2024-01-03  Update copyright for 2024  (Bruce Momjian)
Reported-by: Michael Paquier Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz Backpatch-through: 12
2024-01-03  Fix defects in PrepareForIncrementalBackup.  (Robert Haas)
Swap the arguments to TimestampDifferenceMilliseconds so that we get a positive answer instead of zero. Then use the result of that computation instead of ignoring it. Per reports from Alexander Lakhin. Discussion: http://postgr.es/m/8b686764-7ac1-74c3-70f9-b64685a2535f@gmail.com
2023-12-27  Fix incorrect data type choices in some read and write calls.  (Tom Lane)
Recently-introduced code in reconstruct.c was using "unsigned" to store the result of read(), pg_pread(), or write(). This is completely bogus: it breaks subsequent tests for the result being negative, as we're being reminded of by a chorus of buildfarm warnings. Switch to "int" as was doubtless intended. (There are several other uses of "unsigned" in this file that also look poorly chosen to me, but for now I'm just trying to clean up the buildfarm.)

A larger problem is that "int" is not necessarily wide enough to hold the result: per POSIX, all these functions return ssize_t. In places where the requested read or write length clearly fits in int, that's academic. It may be academic anyway as long as we constrain individual data files to 1GB, since even a readv or writev-like operation would then not be responsible for transferring more than 1GB. Nonetheless it seems like trouble waiting to happen, so I made a pass over readv and writev calls and fixed the result variables where that seemed appropriate.

We might want to think about changing some of the fd.c functions to return ssize_t too, for future-proofing; but I didn't tackle that here.

Discussion: https://postgr.es/m/1672202.1703441340@sss.pgh.pa.us
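A standalone sketch of why the data type matters here (not the reconstruct.c code):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        char    buf[8192];
        int     fd = open("does-not-exist", O_RDONLY);

        /*
         * Storing the result in "unsigned" would turn a -1 failure into a huge
         * positive value, so the error check below could never fire.  Per POSIX
         * the return type is ssize_t, which keeps the sign.
         */
        ssize_t rc = read(fd, buf, sizeof(buf));

        if (rc < 0)
            perror("read");
        return 0;
    }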
2023-12-27  Fix typo and case in messages  (John Naylor)
Follow up to dc2123400

Kyotaro Horiguchi

Discussion: https://postgr.es/m/20231222.154939.1509525390095583358.horikyota.ntt@gmail.com
Discussion: https://postgr.es/m/20231225.145124.1745560266993421173.horikyota.ntt@gmail.com
2023-12-21  Fix numerous typos in incremental backup commits.  (Robert Haas)
Apparently, spell check would have been a really good idea. Alexander Lakhin, with a few additions as per an off-list report from Andres Freund. Discussion: http://postgr.es/m/f08f7c60-1ad3-0b57-d580-54b11f07cddf@gmail.com
2023-12-20  Add support for incremental backup.  (Robert Haas)
To take an incremental backup, you use the new replication command UPLOAD_MANIFEST to upload the manifest for the prior backup. This prior backup could either be a full backup or another incremental backup. You then use BASE_BACKUP with the INCREMENTAL option to take the backup. pg_basebackup now has an --incremental=PATH_TO_MANIFEST option to trigger this behavior.

An incremental backup is like a regular full backup except that some relation files are replaced with files with names like INCREMENTAL.${ORIGINAL_NAME}, and the backup_label file contains additional lines identifying it as an incremental backup. The new pg_combinebackup tool can be used to reconstruct a data directory from a full backup and a series of incremental backups.

Patch by me. Reviewed by Matthias van de Meent, Dilip Kumar, Jakub Wartak, Peter Eisentraut, and Álvaro Herrera. Thanks especially to Jakub for incredibly helpful and extensive testing.

Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
2023-12-20  Add a new WAL summarizer process.  (Robert Haas)
When active, this process writes WAL summary files to $PGDATA/pg_wal/summaries. Each summary file contains information for a certain range of LSNs on a certain TLI. For each relation, it stores a "limit block" which is 0 if a relation is created or destroyed within a certain range of WAL records, or otherwise the shortest length to which the relation was truncated during that range of WAL records, or otherwise InvalidBlockNumber. In addition, it stores a list of blocks which have been modified during that range of WAL records, but excluding blocks which were removed by truncation after they were modified and never subsequently modified again.

In other words, it tells us which blocks need to be copied in case of an incremental backup covering that range of WAL records. But this doesn't yet add the capability to actually perform an incremental backup; the next patch will do that.

A new parameter summarize_wal enables or disables this new background process. The background process also automatically deletes summary files that are older than wal_summarize_keep_time, if that parameter has a non-zero value and the summarizer is configured to run.

Patch by me, with some design help from Dilip Kumar and Andres Freund. Reviewed by Matthias van de Meent, Dilip Kumar, Jakub Wartak, Peter Eisentraut, and Álvaro Herrera.

Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
2023-11-14  Change how a base backup decides which files have checksums.  (Robert Haas)
Previously, it thought that any plain file located under global, base, or a tablespace directory had checksums unless it was in a short list of excluded files. Now, it thinks that files in those directories have checksums if parse_filename_for_nontemp_relation says that they are relation files. (Temporary relation files don't matter because they're excluded from the backup anyway.)

This changes the behavior if you have stray files not managed by PostgreSQL in the relevant directories. Previously, you'd get some kind of checksum-related complaint if such files existed, assuming that the cluster had checksums enabled and that the base backup wasn't run with NOVERIFY_CHECKSUMS. Now, you won't get those complaints any more. That seems like an improvement to me, because those files were presumably not created by PostgreSQL and so there is no reason to think that they would be checksummed like a PostgreSQL relation file. (If we want to complain about such files, we should complain about them existing at all, not just about their checksums.)

The point of this change is to make the code more consistent. sendDir() was already calling parse_filename_for_nontemp_relation() as part of an effort to determine which files to include in the backup. So, it already had the information about whether a certain file was a relation file. sendFile() then used a separate method, embodied in is_checksummed_file(), to make what is essentially the same determination. It's better not to make the same decision using two different methods, especially in closely-related code.

Patch by me. Reviewed by Dilip Kumar and Álvaro Herrera. Thanks also to Jakub Wartak and Peter Eisentraut for comments, suggestions, and testing on the larger patch set of which this is a part.

Discussion: http://postgr.es/m/CAFiTN-snhaKkWhi2Gz5i3cZeKefun6sYL==wBoqqnTXxX4_mFA@mail.gmail.com
Discussion: http://postgr.es/m/202311141312.u4qx5gtpvfq3@alvherre.pgsql
2023-11-13  Extend sendFileWithContent() to handle custom content length in basebackup.c  (Michael Paquier)
sendFileWithContent() previously got the content length by using strlen(), assuming that the content given is always a string. Some patches are under discussion to pass binary contents to a base backup stream, where an arbitrary length needs to be given by the caller instead. The patch extends sendFileWithContent() to be able to handle this case, where len < 0 can be used to indicate an arbitrary length rather than rely on strlen() for the content length. A comment in sendFileWithContent() mentioned the backup_label file. However, this routine is used by more file types, like the tablespace map, so adjust it in passing. Author: David Steele Discussion: https://postgr.es/m/2daf8adc-8db7-4204-a7f2-a7e94e2bfa4b@pgmasters.net
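A hypothetical helper sketching the convention described above (the actual sendFileWithContent() signature may differ):

    #include <stdio.h>
    #include <string.h>

    /*
     * A negative len means "treat content as a NUL-terminated string";
     * any other value is an explicit (possibly binary) length.
     */
    static void
    send_content(const char *filename, const char *content, int len)
    {
        size_t  nbytes = (len < 0) ? strlen(content) : (size_t) len;

        printf("%s: sending %zu bytes\n", filename, nbytes);
    }

    int main(void)
    {
        send_content("backup_label", "START WAL LOCATION: ...\n", -1);
        send_content("binary_blob", "\0\1\2\3", 4);     /* embedded NULs are fine */
        return 0;
    }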
2023-10-23  Change struct tablespaceinfo's oid member from 'char *' to 'Oid'  (Robert Haas)
This shouldn't change behavior except in the unusual case where there are files in the tablespace directory that have entirely numeric names but are nevertheless not possible names for a tablespace directory, either because their names have leading zeroes that shouldn't be there, or the value is actually zero, or because the value is too large to represent as an OID. In those cases, the directory would previously have made it into the list of tablespaceinfo objects and no longer will.

Thus, base backups will now ignore such directories, instead of treating them as legitimate tablespace directories. Similarly, if entries for such tablespaces occur in a tablespace_map file, they will now be rejected as erroneous, instead of being honored.

This is infrastructure for future work that wants to be able to know the tablespace of each relation that is part of a backup *as an OID*. By strengthening the up-front validation, we don't have to worry about weird cases later, and can more easily avoid repeated string->integer conversions.

Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoZNVeBzoqDL8xvr-nkaepq815jtDR4nJzPew7=3iEuM1g@mail.gmail.com
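A standalone sketch of this style of strict validation (names are made up; this is not the in-tree parsing code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <ctype.h>
    #include <stdint.h>

    /*
     * Parse a directory name as an OID, rejecting leading zeroes, zero itself,
     * values that do not fit in 32 bits, and anything with trailing junk.
     * Returns 1 on success and stores the value in *oid.
     */
    static int
    parse_strict_oid(const char *name, uint32_t *oid)
    {
        char           *end;
        unsigned long   val;

        if (!isdigit((unsigned char) name[0]) || name[0] == '0')
            return 0;               /* empty, non-numeric, or leading zero */

        errno = 0;
        val = strtoul(name, &end, 10);
        if (errno != 0 || *end != '\0' || val == 0 || val > UINT32_MAX)
            return 0;

        *oid = (uint32_t) val;
        return 1;
    }

    int main(void)
    {
        const char *names[] = {"16384", "016384", "0", "99999999999", "123abc"};
        uint32_t    oid;

        for (int i = 0; i < 5; i++)
            printf("%-12s -> %s\n", names[i],
                   parse_strict_oid(names[i], &oid) ? "tablespace OID" : "ignored");
        return 0;
    }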
2023-10-23  Refactor parse_filename_for_nontemp_relation to parse more.  (Robert Haas)
Instead of returning the number of characters in the RelFileNumber, return the RelFileNumber itself. Continue to return the fork number, as before, and additionally return the segment number.

parse_filename_for_nontemp_relation now rejects a RelFileNumber or segment number that begins with a leading zero. Before, we accepted such cases as relation filenames, but if we continued to do so after this change, the function might return the same values for two different files (e.g. 1234.5 and 001234.5 or 1234.005) which could be annoying for callers. Since we don't actually ever generate filenames with leading zeroes in the names, any such files that we find must have been created by something other than PostgreSQL, and it is therefore reasonable to treat them as non-relation files.

Along the way, change unlogged_relation_entry to store a RelFileNumber rather than an OID. This update should have been made in 851f4cc75cdd8c831f1baa9a7abf8c8248b65890, but it was overlooked. It's trivial to make the update as part of this commit, perhaps more trivial than it would have been without it, so do that.

Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoZNVeBzoqDL8xvr-nkaepq815jtDR4nJzPew7=3iEuM1g@mail.gmail.com
2023-10-03  In basebackup.c, refactor to create read_file_data_into_buffer.  (Robert Haas)
This further reduces the length and complexity of sendFile(), hopefully making it easier to understand and modify. In addition to moving some logic into a new function, I took this opportunity to make a few slight adjustments to sendFile() itself, including renaming the 'len' variable to 'bytes_done', since we use it to represent the number of bytes we've already handled so far, not the total length of the file.

Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
2023-10-03  In basebackup.c, refactor to create verify_page_checksum.  (Robert Haas)
If checksum verification fails for a particular page, we reread the page and try one more time. The code that does this is somewhat complex and difficult to follow. Move some of the logic into a new function and rearrange the code a bit to try to make it clearer. This way, we don't need the block_retry Boolean, a couple of other variables move from sendFile() into the new function, and some code is now less deeply indented.

Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
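A standalone sketch of the reread-once pattern described above, with stand-in checksum and I/O routines (not the basebackup.c code):

    #include <stdio.h>
    #include <stdbool.h>
    #include <string.h>

    #define BLCKSZ 8192

    /* Stand-in for a real page checksum routine. */
    static bool
    page_checksum_ok(const char *page)
    {
        return page[0] != 'X';      /* purely illustrative */
    }

    /* Stand-in for rereading the block from disk. */
    static void
    reread_block(char *page)
    {
        memset(page, 0, BLCKSZ);    /* pretend the torn read has resolved */
    }

    /*
     * Verify a page, rereading it once if the first check fails; a failure
     * caused by a concurrent write is expected to pass on the second read.
     */
    static bool
    verify_page(char *page)
    {
        if (page_checksum_ok(page))
            return true;

        reread_block(page);
        return page_checksum_ok(page);
    }

    int main(void)
    {
        char    page[BLCKSZ] = {'X'};

        printf("verified: %s\n", verify_page(page) ? "yes" : "no");
        return 0;
    }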
2023-09-05  Move PG_TEMP_FILE* macros to file_utils.h.  (Nathan Bossart)
Presently, frontend code that needs to use these macros must either include storage/fd.h, which declares several frontend-unsafe functions, or duplicate the macros. This commit moves these macros to common/file_utils.h, which is safe for both frontend and backend code. Consequently, we can also remove the duplicated macros in pg_checksums and stop including storage/fd.h in pg_rewind. Reviewed-by: Michael Paquier Discussion: https://postgr.es/m/ZOP5qoUualu5xl2Z%40paquier.xyz
2023-08-22  Introduce macros for protocol characters.  (Nathan Bossart)
This commit introduces descriptively-named macros for the identifiers used in wire protocol messages. These new macros are placed in a new header file so that they can be easily used by third-party code. Author: Dave Cramer Reviewed-by: Alvaro Herrera, Tatsuo Ishii, Peter Smith, Robert Haas, Tom Lane, Peter Eisentraut, Michael Paquier Discussion: https://postgr.es/m/CADK3HHKbBmK-PKf1bPNFoMC%2BoBt%2BpD9PH8h5nvmBQskEHm-Ehw%40mail.gmail.com
2023-07-10  Message wording improvements  (Peter Eisentraut)
2023-05-19  Pre-beta mechanical code beautification.  (Tom Lane)
Run pgindent, pgperltidy, and reformat-dat-files. This set of diffs is a bit larger than typical. We've updated to pg_bsd_indent 2.1.2, which properly indents variable declarations that have multi-line initialization expressions (the continuation lines are now indented one tab stop). We've also updated to perltidy version 20230309 and changed some of its settings, which reduces its desire to add whitespace to lines to make assignments etc. line up. Going forward, that should make for fewer random-seeming changes to existing code. Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
2023-04-19  Fix various typos and incorrect/outdated name references  (David Rowley)
Author: Alexander Lakhin Discussion: https://postgr.es/m/699beab4-a6ca-92c9-f152-f559caf6dc25@gmail.com
2023-04-18  Fix various typos  (David Rowley)
This fixes many spelling mistakes in comments, as well as a few references to invalid parameter names, function names and option names in comments and in some string constants.

Also, fix an #undef that was undefining the incorrect definition.

Author: Alexander Lakhin
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/d5f68d19-c0fc-91a9-118d-7c6a5a3f5fad@gmail.com
2023-04-06  Support long distance matching for zstd compression  (Tomas Vondra)
zstd compression supports a special mode for finding matches in the distant past, which may result in a better compression ratio, at the expense of using more memory (the window size is 128MB). To enable this optional mode, use the "long" keyword when specifying the compression method (--compress=zstd:long).

Author: Justin Pryzby
Reviewed-by: Tomas Vondra, Jacob Champion
Discussion: https://postgr.es/m/20230224191840.GD1653@telsasoft.com
Discussion: https://postgr.es/m/20220327205020.GM28503@telsasoft.com
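A minimal libzstd sketch of what the long mode turns on underneath (assumes zstd 1.4 or newer; this is not PostgreSQL's backup-compression code, and the 2^27 window is set explicitly only to mirror the 128MB figure above):

    #include <stdio.h>
    #include <zstd.h>

    /* Create a zstd compression context with long-distance matching enabled. */
    static ZSTD_CCtx *
    make_long_cctx(void)
    {
        ZSTD_CCtx  *cctx = ZSTD_createCCtx();

        if (cctx == NULL)
            return NULL;

        /* Enable the long-distance matcher... */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
        /* ...and ask for a 128MB (2^27) window. */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 27);

        return cctx;
    }

    int main(void)
    {
        ZSTD_CCtx  *cctx = make_long_cctx();

        printf("zstd cctx %s\n", cctx ? "ready" : "unavailable");
        ZSTD_freeCCtx(cctx);
        return 0;
    }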
2023-03-29  Simplify useless 0L constants  (Peter Eisentraut)
In ancient times, these belonged to arguments or fields that were actually of type long, but now they are not anymore, so this "L" decoration is just confusing. (Some other 0L and other "L" constants remain, where they are actually associated with a long type.)
2023-03-17  Improve several permission-related error messages.  (Peter Eisentraut)
Mainly move some detail from errmsg to errdetail, remove explicit mention of superuser where appropriate, since that is implied in most permission checks, and make messages more uniform. Author: Nathan Bossart <nathandbossart@gmail.com> Discussion: https://www.postgresql.org/message-id/20230316234701.GA903298@nathanxps13
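A backend-style sketch of the errmsg/errdetail split described above (a made-up permission check, not one of the messages this commit touched):

    #include "postgres.h"

    /*
     * Keep the primary message short; put the specifics in errdetail, and
     * avoid naming superuser explicitly.
     */
    static void
    check_can_do_thing(bool allowed, const char *what)
    {
        if (!allowed)
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("permission denied to %s", what),
                     errdetail("Only roles with the required privilege may %s.",
                               what)));
    }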
2023-03-06  Reword overly-optimistic comment about backup checksum verification.  (Robert Haas)
The comment implies that a single retry is sufficient to avoid spurious checksum failures, but in fact no number of retries is sufficient for that purpose. Update the comment accordingly. Patch by me, reviewed by Michael Paquier. Discussion: http://postgr.es/m/CA+TgmoZ_fFAoU6mrHt9QBs+dcYhN6yXenGTTMRebZNhtwPwHyg@mail.gmail.com
2023-03-06  Remove an old comment that doesn't seem especially useful.  (Robert Haas)
The functions that follow are concerned with various things, of which the tar format is only one, so this comment doesn't really seem helpful. The file isn't really divided into sections in the way that this comment seems to contemplate -- or at least, not any more. Patch by me, reviewed by Michael Paquier. Discussion: http://postgr.es/m/CA+TgmoZ_fFAoU6mrHt9QBs+dcYhN6yXenGTTMRebZNhtwPwHyg@mail.gmail.com
2023-03-06  In basebackup.c, perform end-of-file test after checksum validation.  (Robert Haas)
We read blocks of data from files that we're backing up in chunks, some multiple of BLCKSZ for each read. If checksum verification fails, we then try rereading just the one block for which validation failed. If that block happened to be the first block of the chunk, and if the file was concurrently truncated to remove that block, then we'd reach a call to bbsink_archive_contents() with a buffer length of 0. That causes an assertion failure.

As far as I can see, there are no particularly bad consequences if this happens in a non-assert build, and it's pretty unlikely to happen in the first place because it requires a series of somewhat unlikely things to happen in very quick succession. However, assertion failures are bad, so rearrange the code to avoid that possibility.

Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoZ_fFAoU6mrHt9QBs+dcYhN6yXenGTTMRebZNhtwPwHyg@mail.gmail.com
2023-01-26  Improve TimestampDifferenceMilliseconds to cope with overflow sanely.  (Tom Lane)
We'd like to use TimestampDifferenceMilliseconds with the stop_time possibly being TIMESTAMP_INFINITY, but up to now it's disclaimed responsibility for overflow cases. Define it to clamp its output to the range [0, INT_MAX], handling overflow correctly. (INT_MAX rather than LONG_MAX seems appropriate, because the function is already described as being intended for calculating wait times for WaitLatch et al, and that infrastructure only handles waits up to INT_MAX. Also, this choice gets rid of cross-platform behavioral differences.)

Having done that, we can replace some ad-hoc code in walreceiver.c with a simple call to TimestampDifferenceMilliseconds.

While at it, fix some buglets in existing callers of TimestampDifferenceMilliseconds: basebackup_copy.c had not read the memo about TimestampDifferenceMilliseconds never returning a negative value, and postmaster.c had not read the memo about Min() and Max() being macros with multiple-evaluation hazards. Neither of these quite seem worth back-patching.

Patch by me; thanks to Nathan Bossart for review.
Discussion: https://postgr.es/m/3126727.1674759248@sss.pgh.pa.us
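A standalone sketch of the clamping behavior described (not the actual TimestampDifferenceMilliseconds implementation; timestamps here are plain int64 microsecond counts):

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    /*
     * Difference between two timestamps (microseconds) expressed in
     * milliseconds, clamped to [0, INT_MAX] so callers can hand the result
     * straight to a WaitLatch-style timeout.
     */
    static int
    diff_milliseconds_clamped(int64_t start_us, int64_t stop_us)
    {
        if (stop_us <= start_us)
            return 0;
        if ((stop_us - start_us) / 1000 > INT_MAX)
            return INT_MAX;
        return (int) ((stop_us - start_us) / 1000);
    }

    int main(void)
    {
        printf("%d\n", diff_milliseconds_clamped(0, 5500));         /* 5 */
        printf("%d\n", diff_milliseconds_clamped(5500, 0));         /* clamped to 0 */
        printf("%d\n", diff_milliseconds_clamped(0, INT64_MAX));    /* clamped to INT_MAX */
        return 0;
    }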