|
This feature will allow us to replicate the changes on subscriber nodes
after the upgrade.
Previously, only the subscription metadata information was preserved.
Without the list of relations and their state, it's not possible to
re-enable the subscriptions without missing some records as the list of
relations can only be refreshed after enabling the subscription (and
therefore starting the apply worker). Even if we added a way to refresh
the subscription while enabling a publication, we still wouldn't know
which relations are new on the publication side, and therefore should be
fully synced, and which shouldn't.
To preserve the subscription relations, this patch teaches pg_dump to
restore the content of pg_subscription_rel from the old cluster by using
binary_upgrade_add_sub_rel_state SQL function. This is supported only
in binary upgrade mode.
The subscription's replication origin is needed to ensure that we don't
replicate anything twice.
To preserve the replication origins, this patch teaches pg_dump to update
the replication origin along with creating a subscription by using
binary_upgrade_replorigin_advance SQL function to restore the
underlying replication origin remote LSN. This is supported only in
binary upgrade mode.
pg_upgrade will check that all the subscription relations are in 'i'
(init) or in 'r' (ready) state and will error out if that's not the case,
logging the reason for the failure. This helps to avoid the risk of any
dangling slot or origin after the upgrade.
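As an illustration (not part of the patch), a query along these lines on the old
subscriber shows relations that would make pg_upgrade error out:
-- List subscription relations not in 'i' (init) or 'r' (ready) state.
SELECT s.subname, sr.srrelid::regclass AS relation, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE sr.srsubstate NOT IN ('i', 'r');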
Author: Vignesh C, Julien Rouhaud, Shlok Kyal
Reviewed-by: Peter Smith, Masahiko Sawada, Michael Paquier, Amit Kapila, Hayato Kuroda
Discussion: https://postgr.es/m/20230217075433.u5mjly4d5cr4hcfe@jrouhaud
|
|
When enabled (default off), this logs a backtrace anytime elog() or an
equivalent ereport() for internal errors is called.
This is not well covered by the existing backtrace_functions, because
there are many equally-worded low-level errors in many functions. And
if you find out where the error is, then you need to manually rewrite
the elog() to ereport() to attach the errbacktrace(), which is
annoying. Having a backtrace automatically on every elog() call could
be very helpful during development for various kinds of common errors
from palloc, syscache, node support, etc.
Discussion: https://www.postgresql.org/message-id/flat/ba76c6bc-f03f-4285-bf16-47759cfcab9e@eisentraut.org
|
|
There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related scripts when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors, by replacing
use warnings;
by
use warnings FATAL => 'all';
in all Perl files.
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
|
|
The documentation has been missing one value in the list of catalog OIDs
that can be given to the validator function of a FDW, namely
AttributeRelationId, used when changing the attribute options of a foreign
table.
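For example (hypothetical foreign table and option names, assuming postgres_fdw),
the validator is called with AttributeRelationId for a command such as:
-- Per-column option change on a hypothetical foreign table "films".
ALTER FOREIGN TABLE films ALTER COLUMN title OPTIONS (ADD column_name 'film_title');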
Author: Ian Lawrence Barwick
Discussion: https://postgr.es/m/CAB8KJ=i16t2yJU_Pq2Z+hnNGWFhagp_bJmzxHZu3ZkOjZm-+rQ@mail.gmail.com
Backpatch-through: 12
|
|
The previous wording here relied solely on an example to explain
aclitem output format. Add an actual syntax synopsis and
explanation of the elements to make it clearer.
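As an illustration (hypothetical roles and table; output abridged), the format is
grantee=privilege-flags/grantor, with an empty grantee meaning PUBLIC:
-- Hypothetical: table t owned by role bob, with privileges granted to alice.
GRANT SELECT, UPDATE ON t TO alice;
SELECT relacl FROM pg_class WHERE relname = 't';
-- relacl could look like: {bob=arwdDxt/bob,alice=rw/bob}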
David Johnston and Tom Lane, per gripe from Eugen Konkov.
Discussion: https://postgr.es/m/170326116972.1876499.18357820037829248593@wrigleys.postgresql.org
|
|
Reported-by: juha.mustonen@iki.fi
Discussion: https://postgr.es/m/20170217160154.6101.52806@wrigleys.postgresql.org
Co-authored-by: Erik Wienhold
Backpatch-through: master
|
|
We forgot to update the docs while adding new options in pgoutput.
Author: Emre Hasegeli
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 12
Discussion: https://postgr.es/m/CAE2gYzwdwtUbs-tPSV-QBwgTubiyGD2ZGsSnAVsDfAGGLDrGOA%40mail.gmail.com
|
|
This commit introduces enhancements to the pg_stat_checkpointer view by adding
three new columns: restartpoints_timed, restartpoints_req, and
restartpoints_done. These additions aim to improve the visibility and
monitoring of restartpoint processes on replicas.
Previously, it was challenging to differentiate between successful and failed
restartpoint requests. This limitation arises because restartpoints on replicas
are dependent on checkpoint records from the primary, and cannot occur more
frequently than these checkpoints.
The new columns allow for clear distinction and tracking of restartpoint
requests, their triggers, and successful completions. This enhancement aids
database administrators and developers in better understanding and diagnosing
issues related to restartpoint behavior, particularly in scenarios where
restartpoint requests may fail.
System catalog is changed. Catversion is bumped.
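For example, the new counters can be read on a standby with:
-- Compare requested vs. completed restartpoints on a replica.
SELECT restartpoints_timed, restartpoints_req, restartpoints_done
FROM pg_stat_checkpointer;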
Discussion: https://postgr.es/m/99b2ccd1-a77a-962a-0837-191cdf56c2b9%40inbox.ru
Author: Anton A. Melnikov
Reviewed-by: Kyotaro Horiguchi, Alexander Korotkov
|
|
Up to now, our distribution tarballs have included a plain-text form
of the installation.sgml chapter. The rationale for that was that a
recipient might not have either ready internet access or HTML-viewing
tools; a theory that seems downright quaint today. Maintaining the
ability to generate this file is not without cost, because it puts
special requirements on installation.sgml that are often overlooked.
Moreover, we are moving in the direction of making our distribution
tarballs be pure git snapshots for traceability/reproducibility
reasons; including generated files doesn't fit into that plan.
Hence, let's just drop INSTALL and remove the infrastructure for
generating it. The top-level README will now recommend visiting
our website to see the installation instructions. As a useful
side-effect, we can get rid of README.git which has provoked
confusion.
Discussion: https://postgr.es/m/20231220114927.faccqqprmuyrzdip@alap3.anarazel.de
Discussion: https://postgr.es/m/e07408d9-e5f2-d9fd-5672-f53354e9305e@eisentraut.org
|
|
Apparently, spell check would have been a really good idea.
Alexander Lakhin, with a few additions as per an off-list report
from Andres Freund.
Discussion: http://postgr.es/m/f08f7c60-1ad3-0b57-d580-54b11f07cddf@gmail.com
|
|
Commit dc2123400 accidentally misspelled "combination".
|
|
To take an incremental backup, you use the new replication command
UPLOAD_MANIFEST to upload the manifest for the prior backup. This
prior backup could either be a full backup or another incremental
backup. You then use BASE_BACKUP with the INCREMENTAL option to take
the backup. pg_basebackup now has an --incremental=PATH_TO_MANIFEST
option to trigger this behavior.
An incremental backup is like a regular full backup except that
some relation files are replaced with files with names like
INCREMENTAL.${ORIGINAL_NAME}, and the backup_label file contains
additional lines identifying it as an incremental backup. The new
pg_combinebackup tool can be used to reconstruct a data directory
from a full backup and a series of incremental backups.
Patch by me. Reviewed by Matthias van de Meent, Dilip Kumar, Jakub
Wartak, Peter Eisentraut, and Álvaro Herrera. Thanks especially to
Jakub for incredibly helpful and extensive testing.
Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
|
|
When active, this process writes WAL summary files to
$PGDATA/pg_wal/summaries. Each summary file contains information for a
certain range of LSNs on a certain TLI. For each relation, it stores a
"limit block" which is 0 if a relation is created or destroyed within
a certain range of WAL records, or otherwise the shortest length to
which the relation was truncated during that range of WAL records, or
otherwise InvalidBlockNumber. In addition, it stores a list of blocks
which have been modified during that range of WAL records, but
excluding blocks which were removed by truncation after they were
modified and never subsequently modified again.
In other words, it tells us which blocks need to be copied in case of an
incremental backup covering that range of WAL records. But this
doesn't yet add the capability to actually perform an incremental
backup; the next patch will do that.
A new parameter summarize_wal enables or disables this new background
process. The background process also automatically deletes summary
files that are older than wal_summarize_keep_time, if that parameter
has a non-zero value and the summarizer is configured to run.
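A minimal way to turn the summarizer on (assuming wal_level is at least replica)
might look like:
-- Enable the WAL summarizer; summary files then appear in pg_wal/summaries.
ALTER SYSTEM SET summarize_wal = on;
SELECT pg_reload_conf();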
Patch by me, with some design help from Dilip Kumar and Andres Freund.
Reviewed by Matthias van de Meent, Dilip Kumar, Jakub Wartak, Peter
Eisentraut, and Álvaro Herrera.
Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
|
|
1301c80b2167 has introduced in installation.sgml a link reference that
`make dist` was not able to understand.
Per buildfarm member guaibasaurus.
|
|
This commit removes all the scripts located in src/tools/msvc/ to build
PostgreSQL with Visual Studio on Windows; meson is now the recommended
way to achieve that. The scripts held some information that is still
relevant with meson; that information has been kept and moved to better locations.
Comments that referred directly to the scripts are removed.
All the documentation still relevant that was in install-windows.sgml
has been moved to installation.sgml under a new subsection for Visual Studio.
All the content specific to the scripts is removed. Some adjustments
for the documentation are planned in a follow-up set of changes.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Andres Freund
Discussion: https://postgr.es/m/ZQzp_VMJcerM1Cs_@paquier.xyz
|
|
The example for dropping an option incorrectly quoted the option key,
thus making it a value and turning the command into an unqualified ADD
operation. Instead of dropping the option, the command ended up adding
a new key/value pair:
d=# alter foreign data wrapper f options (drop 'b');
ALTER FOREIGN DATA WRAPPER
d=# select fdwoptions from pg_foreign_data_wrapper where fdwname='f';
fdwoptions
------------
{drop=b}
(1 row)
This has been incorrect for a long time so backpatch to all
supported branches.
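With the corrected, unquoted form, the option is actually removed:
-- The option key is an identifier here, not a quoted value.
ALTER FOREIGN DATA WRAPPER f OPTIONS (DROP b);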
Author: Tim <tim.needham2@gmail.com>
Discussion: https://postgr.es/m/170292280173.1876505.5204623074024041738@wrigleys.postgresql.org
|
|
smgrreadv() and smgrwritev() and their md.c implementations call
FileReadV() and FileWriteV(). A range of disk blocks beginning at
'blocknum' and extending for 'nblocks' can be scattered to or gathered
from multiple buffers with a single system call. The traditional
smgrread() and smgrwrite() functions are implemented in terms of the new
functions.
Later commits will introduce calls with nblocks > 1, but the following
behavioral changes can be seen already:
* After a short transfer we'll now retry until we eventually read 0
bytes (= EOF) or get ENOSPC, EDQUOT, EFBIG etc, where previously we
would infer the reason. Retrying is consistent with xlog.c's
treatment of large WAL writes, and arguably also xlog.c and fd.c's
treatment of EINTR. Arbitrary short returns for larger transfers have
been observed on several OSes, and might in theory also happen for
transient reasons with our own pg_p*v() fallback code.
* After unexpected EOF or -1, the error thrown now talks about
a range even for the single block case, e.g. "blocks 42..42".
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA+hUKGJkOiOCa+mag4BF+zHo7qo=o9CFheB8=g6uT5TUm2gkvA@mail.gmail.com
|
|
We didn't explain this clearly until somewhere deep in the
"Extending SQL" chapter, but really it ought to be mentioned
in the introductory material too.
Discussion: https://postgr.es/m/4097442.1694967650@sss.pgh.pa.us
|
|
Commit dc9f8a79830 accidentally misspelled minimum as minimun.
|
|
This GUC was intended as a debugging aid in the 9.0 era when hot
standby and streaming replication were being developed, offering
more information at LOG level rather than DEBUGn. There are more tools
available these days that are able to offer roughly equivalent
information, like pg_waldump, introduced in 9.3. It is not obvious how
this facility is useful these days, so let's remove it.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/ZXEXEAUVFrvpquSd@paquier.xyz
|
|
Allow using multiple worker processes to build BRIN index, which until
now was supported only for BTREE indexes. For large tables this often
results in significant speedup when the build is CPU-bound.
The work is split in a simple way - each worker builds BRIN summaries on
a subset of the table, determined by the regular parallel scan used to
read the data, and feeds them into a shared tuplesort which sorts them
by blkno (start of the range). The leader then reads this sorted stream
of ranges, merges duplicates (which may happen if the parallel scan does
not align with BRIN pages_per_range), and adds the resulting ranges into
the index.
The number of duplicate results produced by workers (requiring merging
in the leader process) should be fairly small, thanks to how parallel
scans assign chunks to workers. The likelihood of duplicate results may
increase for higher pages_per_range values, but then there are fewer
page ranges in total. In any case, we expect the merging to be much
cheaper than summarization, so this should be a win.
Most of the parallelism infrastructure is a simplified copy of the code
used by BTREE indexes, omitting the parts irrelevant for BRIN indexes
(e.g. uniqueness checks).
This also introduces a new index AM flag amcanbuildparallel, determining
whether to attempt to start parallel workers for the index build.
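For example (hypothetical table and column names), a parallel BRIN build can be
encouraged like this:
-- Allow parallel workers for the maintenance command, then build the index.
SET max_parallel_maintenance_workers = 4;
CREATE INDEX events_created_brin ON events USING brin (created_at)
  WITH (pages_per_range = 32);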
Original patch by me, with reviews and substantial reworks by Matthias
van de Meent, certainly enough to make him a co-author.
Author: Tomas Vondra, Matthias van de Meent
Reviewed-by: Matthias van de Meent
Discussion: https://postgr.es/m/c2ee7d69-ce17-43f2-d1a0-9811edbda6e6%40enterprisedb.com
|
|
The previous wording was confusing. Also move partitioning mention to a
more logical location.
Reported-by: neil@fairwindsoft.com
Discussion: https://postgr.es/m/20170703200710.27956.64565@wrigleys.postgresql.org
Backpatch-through: master
|
|
Reported-by: Magnus Hagander
Discussion: https://postgr.es/m/CABUevEwGBY-W7EkTbjMY1rC+mmRL3fMrnX6YaUkcr+7o9PSa3w@mail.gmail.com
Backpatch-through: master
|
|
Previously only a table name was documented for this SELECT clause.
Reported-by: robert <lists@humanleg.org.uk>
Discussion: https://postgr.es/m/152483686904.19805.3369061025704720797@wrigleys.postgresql.org
Backpatch-through: master
|
|
Reported-by: Christophe Courtois
Discussion: https://postgr.es/m/aa7cfd73-0d8d-596a-b684-39faa479afa5@dalibo.com
Author: Christophe Courtois
Backpatch-through: master
|
|
This commit adds support for REINDEX in event triggers, making this
command react to the events ddl_command_start and ddl_command_end. The
indexes rebuilt are collected with the ReindexStmt emitted by the
caller, for the concurrent and non-concurrent paths.
Thanks to that, it is possible to know a full list of the indexes that a
single REINDEX command has worked on.
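A minimal sketch of an event trigger reacting to REINDEX (hypothetical function
and trigger names) could look like this:
-- List the indexes a REINDEX worked on, from a ddl_command_end trigger.
CREATE FUNCTION report_reindex() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE r record;
BEGIN
  FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
    RAISE NOTICE 'REINDEX touched: %', r.object_identity;
  END LOOP;
END;
$$;
CREATE EVENT TRIGGER track_reindex ON ddl_command_end
  WHEN TAG IN ('REINDEX') EXECUTE FUNCTION report_reindex();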
Author: Garrett Thornburg, Jian He
Reviewed-by: Jim Jones, Michael Paquier
Discussion: https://postgr.es/m/CAEEqfk5bm32G7sbhzHbES9WejD8O8DCEOaLkxoBP7HNWxjPpvg@mail.gmail.com
|
|
The wording changed here comes from 991bfe11d28a, when the only way to
trigger a promotion was with a trigger file. There are more options to
achieve this operation these days, like the SQL function pg_promote() or
the command `pg_ctl promote`, so it is confusing to assume that only a
trigger file is able to do the work.
Note also that promote_trigger_file has been removed as of cd4329d9393f
in 16~.
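For instance, a promotion can now be requested without any trigger file at all:
-- Promote a standby from SQL, waiting for the promotion to complete.
SELECT pg_promote();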
Author: Shinya Kato
Discussion: https://postgr.es/m/201b08ea29aa61f96162080e75be503c@oss.nttdata.com
Backpatch-through: 12
|
|
Commit f40c6969d0 added the information schema usage tables but added
documentation that they did not fully work yet. Commit e717a9a18b
then added SQL-standard function bodies, which made the information
schema views fully functional, but it neglected to update the
documentation. This is now done here.
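A sketch (hypothetical table and function) of what now becomes visible through
the usage views:
-- A SQL-standard function body; its table dependency shows up in
-- information_schema.routine_table_usage.
CREATE TABLE accounts (id int, balance numeric);
CREATE FUNCTION total_balance() RETURNS numeric
  LANGUAGE SQL
  RETURN (SELECT sum(balance) FROM accounts);
SELECT routine_name, table_name FROM information_schema.routine_table_usage;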
Reported-by: Erki Eessaar <erki.eessaar@taltech.ee>
Reviewed-by: Erki Eessaar <erki.eessaar@taltech.ee>
Discussion: https://www.postgresql.org/message-id/flat/AM9PR01MB8268EC7B696F9FE346CA5B93FEB8A%40AM9PR01MB8268.eurprd01.prod.exchangelabs.com
|
|
The test module includes helper functions to quickly burn through lots
of XIDs. They are used in the tests, and are also handy for manually
testing XID wraparound.
Since these tests are very expensive, the entire suite is disabled by
default. Running it requires setting PG_TEST_EXTRA.
Reviewed-by: Daniel Gustafsson, John Naylor, Michael Paquier
Reviewed-by: vignesh C
Author: Heikki Linnakangas, Masahiko Sawada, Andres Freund
Discussion: https://www.postgresql.org/message-id/CAD21AoDVhkXp8HjpFO-gp3TgL6tCKcZQNxn04m01VAtcSi-5sA%40mail.gmail.com
|
|
Quotes should not be used except if a GUC name is a natural English
word.
Author: Álvaro Herrera
Discussion: https://postgr.es/m/CAHut+Pv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w@mail.gmail.com
|
|
When there is a need to filter multiple tables with include and/or exclude
options, it's quite possible to run into the limitations of the command line.
This adds a --filter=FILENAME feature to pg_dump, pg_dumpall and pg_restore
which is used to supply a file containing object exclude/include commands
which work just like their commandline counterparts. The format of the file
is one command per row like:
<command> <object> <objectpattern>
<command> can be "include" or "exclude", <object> can be table_data, index,
table_data_and_children, database, extension, foreign_data, function, table,
schema, table_and_children or trigger.
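For instance, a filter file with hypothetical object names could contain the
following lines and be passed via --filter=FILENAME:
include table sales
exclude table_data audit_log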
This patch has gone through many revisions and design changes over a long
period of time; the list of reviewers reflects reviewers of some version of
the patch, not necessarily the final version.
Patch by Pavel Stehule with some additional hacking by me.
Author: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Discussion: https://postgr.es/m/CAFj8pRB10wvW0CC9Xq=1XDs=zCQxer3cbLcNZa+qiX4cUH-G_A@mail.gmail.com
|
|
This avoids the wraparound in async.c and removes the corresponding code
complexity. The maximum number of allocated SLRU pages for the NOTIFY / LISTEN
queue is now determined by the max_notify_queue_pages GUC. The default
value is 1048576, which allows consuming up to 8 GB of disk space, exactly
the limit we had previously.
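As a quick illustration, the current limit can be checked from SQL; 1048576
pages of 8 kB each is exactly 8 GB:
-- Default is 1048576; 1048576 * 8 kB pages = 8 GB, matching the old hard limit.
SHOW max_notify_queue_pages;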
Author: Maxim Orlov, Aleksander Alekseev, Alexander Korotkov, Teodor Sigaev
Author: Nikita Glukhov, Pavel Borisov, Yura Sokolov
Reviewed-by: Jacob Champion, Heikki Linnakangas, Alexander Korotkov
Reviewed-by: Japin Li, Pavel Borisov, Tom Lane, Peter Eisentraut, Andres Freund
Reviewed-by: Andrey Borodin, Dilip Kumar, Aleksander Alekseev
Discussion: https://postgr.es/m/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com
Discussion: https://postgr.es/m/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com
|
|
The existing text fails to use the CONCURRENTLY keyword where it was necessary, so add
it. This text was added to pg11 in commit 5efd604ec0a3; backpatch to pg12.
Author: Nikolay Samokhvalov <nik@postgres.ai>
Discussion: https://postgr.es/m/CAM527d9iz6+=_c7EqSKaGzjqWvSeCeRVVvHZ1v3gDgjTtvgsbw@mail.gmail.com
|
|
This patch adds 'stats_since' and 'minmax_stats_since' columns to the
pg_stat_statements view and pg_stat_statements() function. The new min/max
reset mode for the pg_stat_statements_reset() function is controlled by the
parameter minmax_only.
The 'stats_since' column is populated with the current timestamp when a new
statement is added to the pg_stat_statements hashtable. It provides clean
information about statistics collection time intervals for each statement.
Besides, it can be used by sampling solutions to detect situations where a
statement was evicted and stored again between samples.
Such a sampling solution could derive any pg_stat_statements statistic values
for an interval between two samples with the exception of all min/max
statistics. To address this issue this patch adds the ability to reset
min/max statistics independently of the statement reset using the new
minmax_only parameter of the pg_stat_statements_reset(userid oid, dbid oid,
queryid bigint, minmax_only boolean) function. The timestamp of such reset
is stored in the minmax_stats_since field for each statement.
The pg_stat_statements_reset() function now returns the timestamp of the
reset as its result.
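For example (illustrative identifiers), the new columns and reset mode can be
used like this:
-- Inspect when statistics collection started for each statement.
SELECT queryid, stats_since, minmax_stats_since
FROM pg_stat_statements
ORDER BY stats_since DESC
LIMIT 5;
-- Reset only the min/max statistics of one statement (illustrative ids);
-- the cumulative counters and stats_since are left untouched.
SELECT pg_stat_statements_reset(userid => 10, dbid => 16384,
                                queryid => 123456789, minmax_only => true);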
Discussion: https://postgr.es/m/flat/72e80e7b160a6eb189df9ef6f068cce3765d37f8.camel%40moonset.ru
Author: Andrei Zubkov
Reviewed-by: Julien Rouhaud, Hayato Kuroda, Yuki Seino, Chengxi Sun
Reviewed-by: Anton Melnikov, Darren Rush, Michael Paquier, Sergei Kornilov
Reviewed-by: Alena Rybakina, Andrei Lepikhov
|
|
Values corresponding to STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM and
STATISTIC_KIND_BOUNDS_HISTOGRAM were not exposed to pg_stats when these
slot kinds were introduced in 918eee0c49.
This commit adds the missing fields to pg_stats.
Catversion is bumped.
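Assuming the new pg_stats columns are named range_length_histogram,
range_empty_frac and range_bounds_histogram, they can be inspected like this
(hypothetical range column "during"):
SELECT tablename, attname,
       range_empty_frac, range_length_histogram, range_bounds_histogram
FROM pg_stats
WHERE attname = 'during';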
Discussion: https://postgr.es/m/flat/b67d8b57-9357-7e82-a2e7-f6ce6eaeec67@postgrespro.ru
Author: Egor Rogov, Soumyadeep Chakraborty
Reviewed-by: Tomas Vondra, Justin Pryzby, Jian He
|
|
These constructs have precedence, but we forgot to list them.
In HEAD, mention AT LOCAL as well as AT TIME ZONE.
Per gripe from Shay Rojansky.
Discussion: https://postgr.es/m/CADT4RqBPdbsZW7HS1jJP319TMRHs1hzUiP=iRJYR6UqgHCrgNQ@mail.gmail.com
|
|
The brininsert code used to initialize (and destroy) BrinDesc and
BrinRevmap for each tuple, which is not free. This patch initializes
these structures only once, and reuses them for all inserts in the same
command. The data is passed through indexInfo->ii_AmCache.
This also introduces an optional AM callback "aminsertcleanup" that
allows performing custom cleanup in case simply pfree-ing ii_AmCache is
not sufficient (which is the case when the cache contains TupleDesc,
Buffers, and so on).
Author: Soumyadeep Chakraborty
Reviewed-by: Alvaro Herrera, Matthias van de Meent, Tomas Vondra
Discussion: https://postgr.es/m/CAE-ML%2B9r2%3DaO1wwji1sBN9gvPz2xRAtFUGfnffpd0ZqyuzjamA%40mail.gmail.com
|
|
Reported-by: Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1xvzQxTAiYNM2PWJ6snMTPh3u3Ammbwss7mvAShS2Ohww@mail.gmail.com
Author: Jeff Janes
Backpatch-through: master
|
|
Reported-by: Josh Kupershmidt
Discussion: https://postgr.es/m/CAK3UJRF=KY_nx_TRQq+t6jOrtS2rry79ktkzPiMDhFx_K=dZAg@mail.gmail.com
Author: Josh Kupershmidt
Backpatch-through: master
|
|
Oversight in 5c4c7efad: gotta adjust the cell height for removal of
an entry. Per buildfarm.
|
|
Oversight in 07cb29737.
|
|
Previously these functions returned the previous segment number if the
LSN was on a segment boundary. We now always return the current segment
number for an LSN.
Docs updated to reflect this change. Regression tests added, author
Andres Freund.
Also mentioned in thread https://postgr.es/m/flat/20220204225057.GA1535307%40nathanxps13#d964275c9540d8395e138efc0a75f7e8
BACKWARD INCOMPATIBILITY
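For example, with the default 16 MB segment size, an LSN sitting exactly on a
segment boundary now maps to the segment that begins at that LSN (offset 0):
-- '0/2000000' is a segment boundary; both functions now report the segment
-- starting at this LSN rather than the previous one.
SELECT pg_walfile_name('0/2000000'), pg_walfile_name_offset('0/2000000');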
Reported-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20190726.172120.101752680.horikyota.ntt@gmail.com
Co-authored-by: Kyotaro Horiguchi
Backpatch-through: master
|
|
Reported-by: Kyotaro HORIGUCHI
Discussion: https://postgr.es/m/CAF4Au4wmUsZRVhR+ySpvabRfB_1D1fnrPY9TRAKO2DEbi4Cpgg@mail.gmail.com
Co-authored-by: Oleg Bartunov
Backpatch-through: master
|
|
Reported-by: Kyotaro HORIGUCHI
Discussion: https://postgr.es/m/20180622.172132.230342845.horiguchi.kyotaro@lab.ntt.co.jp
Co-authored-by: Kyotaro HORIGUCHI
Backpatch-through: 16
|
|
Reported-by: David Rowley
Discussion: https://postgr.es/m/CAKJS1f_OQpz7rpe-KJmskVxbU06buiXbfonxG3JLB+nGCJ5E=g@mail.gmail.com
Backpatch-through: 16
|
|
Reported-by: ap@robillo.net
Discussion: https://postgr.es/m/20170208152743.1411.6073@wrigleys.postgresql.org
Backpatch-through: master
|
|
This is for IDE drive cache control, same as SCSI (already documented
properly).
Reported-by: John Ekins
Discussion: https://postgr.es/m/20170808224017.8424.69170@wrigleys.postgresql.org
Author: John Ekins
Backpatch-through: 12
|
|
Mention this relationship.
Reported-by: Martín Marqués
Discussion: https://postgr.es/m/CABeG9LtsAVP4waKngUYo-HAiiowcb8xEjQvDDfhX_nFi5SJ4jw@mail.gmail.com
Author: Martín Marqués
Backpatch-through: master
|
|
This commit logs messages (at LOG level when log_replication_commands is
set, otherwise at DEBUG1 level) when walsenders acquire and release
replication slots. These messages help track the lifetime of a
replication slot: one can tell how long a streaming standby, logical
subscriber, or replication slot consumer has been down. These messages will be
useful on production servers to debug and analyze inactive replication
slots.
Note that these messages are emitted only for walsenders but not for
backends. This is because walsenders are the ones that typically hold
replication slots for longer durations, unlike backends, which hold them
only while executing replication-related functions.
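To see these messages at LOG level, it is enough to enable the existing GUC,
for example:
-- Emit slot acquire/release messages from walsenders at LOG level.
ALTER SYSTEM SET log_replication_commands = on;
SELECT pg_reload_conf();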
Author: Bharath Rupireddy
Reviewed-by: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: http://postgr.es/m/CALj2ACX17G7F-jeLt+7KhJ6YxVeRwR8Zk0rDh4VnT546o0UpTQ@mail.gmail.com
|
|
Currently important build targets are somewhat hard to discover. This commit
documents important meson build targets in the sgml documentation. But it's
awkward to have to look up build targets in the docs when hacking, so this also
adds a 'help' target, printing out the same information. To avoid having to
duplicate information in two places, generate both docbook and interactive
docs from a single source.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20231108232121.ww542mt6lfo6f26f@awork3.anarazel.de
|