From 88e982302684246e8af785e78a467ac37c76dee9 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas
Date: Mon, 23 Feb 2015 18:53:02 +0200
Subject: Replace checkpoint_segments with min_wal_size and max_wal_size.

Instead of having a single knob (checkpoint_segments) that both triggers
checkpoints and determines how many segments to recycle, those are now
separate concerns.  There is still an internal variable called
CheckpointSegments, which triggers checkpoints, but it no longer determines
how many segments to recycle at a checkpoint.  That is now auto-tuned by
keeping a moving average of the distance between checkpoints (in bytes) and
trying to keep that much WAL in reserve.  The advantage of this is that you
can set max_wal_size very high, but the system won't actually consume that
much space if there isn't any need for it.  min_wal_size sets a floor for
that; you can effectively disable the auto-tuning behavior by setting
min_wal_size equal to max_wal_size.

The max_wal_size setting is now the actual target size of WAL at which a new
checkpoint is triggered, instead of the distance between checkpoints.
Previously, you could calculate the actual WAL usage with the formula
"(2 + checkpoint_completion_target) * checkpoint_segments + 1".  With this
patch, you set the desired WAL usage with max_wal_size, and the system
calculates the appropriate CheckpointSegments with the reverse of that
formula.  That's a lot more intuitive for administrators to set.

Reviewed by Amit Kapila and Venkata Balaji N.
---
 doc/src/sgml/config.sgml  | 40 ++++++++++++++++++++-------
 doc/src/sgml/perform.sgml | 16 +++++------
 doc/src/sgml/wal.sgml     | 69 +++++++++++++++++++++++++++++------------------
 3 files changed, 81 insertions(+), 44 deletions(-)

(limited to 'doc/src')

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a3917aac785..5ada5c8a1c2 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1325,7 +1325,7 @@ include_dir 'conf.d'
     40% of RAM to shared_buffers will work better than a smaller
     amount.  Larger settings for shared_buffers usually require a
     corresponding increase in
-    checkpoint_segments, in order to spread out the
+    max_wal_size, in order to spread out the
     process of writing large quantities of new or changed data over a
     longer period of time.
@@ -2394,18 +2394,20 @@ include_dir 'conf.d'
    Checkpoints

-    checkpoint_segments (integer)
+    max_wal_size (integer)
-      checkpoint_segments configuration parameter
+      max_wal_size configuration parameter

-      Maximum number of log file segments between automatic WAL
-      checkpoints (each segment is normally 16 megabytes).  The default
-      is three segments.  Increasing this parameter can increase the
-      amount of time needed for crash recovery.
+      Maximum size to let the WAL grow to between automatic WAL
+      checkpoints.  This is a soft limit; WAL size can exceed
+      max_wal_size under special circumstances, like
+      under heavy load, a failing archive_command, or a high
+      wal_keep_segments setting.  The default is 128 MB.
+      Increasing this parameter can increase the amount of time needed for
+      crash recovery.
       This parameter can only be set in the postgresql.conf
       file or on the server command line.
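
The commit message above describes the conversion from max_wal_size to the
internal CheckpointSegments value only as the reverse of the old sizing
formula.  The following is a minimal C sketch of that inversion, assuming
16 MB WAL segments; the function and variable names are illustrative and are
not taken from the PostgreSQL sources.

    #include <stdio.h>

    /*
     * Sketch of the reverse formula from the commit message: given a WAL
     * budget (max_wal_size expressed in 16 MB segments), derive the segment
     * count at which a checkpoint should be triggered, by inverting
     *   wal_usage = (2 + checkpoint_completion_target) * checkpoint_segments + 1
     * Names are illustrative, not the actual server identifiers.
     */
    static int
    calc_checkpoint_segments(int max_wal_size_segments,
                             double checkpoint_completion_target)
    {
        int segments = (int) (max_wal_size_segments /
                              (2.0 + checkpoint_completion_target));

        return (segments < 1) ? 1 : segments;
    }

    int
    main(void)
    {
        /*
         * The default max_wal_size of 128 MB is 8 segments; with
         * checkpoint_completion_target = 0.5 this yields 3, matching the
         * old checkpoint_segments default mentioned in the removed text.
         */
        printf("CheckpointSegments = %d\n", calc_checkpoint_segments(8, 0.5));
        return 0;
    }
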
@@ -2458,7 +2460,7 @@ include_dir 'conf.d'
       Write a message to the server log if checkpoints caused by
       the filling of checkpoint segment files happen closer together
       than this many seconds (which suggests that
-      checkpoint_segments ought to be raised).  The default is
+      max_wal_size ought to be raised).  The default is
       30 seconds (30s).  Zero disables the warning.
       No warnings will be generated if checkpoint_timeout
       is less than checkpoint_warning.
@@ -2468,6 +2470,24 @@ include_dir 'conf.d'
+    min_wal_size (integer)
+      min_wal_size configuration parameter
+
+      As long as WAL disk usage stays below this setting, old WAL files are
+      always recycled for future use at a checkpoint, rather than removed.
+      This can be used to ensure that enough WAL space is reserved to
+      handle spikes in WAL usage, for example when running large batch
+      jobs.  The default is 80 MB.
+      This parameter can only be set in the postgresql.conf
+      file or on the server command line.

diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index 5a087fbe6a0..c73580ed460 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1328,19 +1328,19 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
-   Increase <varname>checkpoint_segments</varname>
+   Increase <varname>max_wal_size</varname>

-    Temporarily increasing the checkpoint_segments configuration variable can also
+    Temporarily increasing the max_wal_size
+    configuration variable can also
     make large data loads faster.  This is because loading a large
     amount of data into PostgreSQL will
     cause checkpoints to occur more often than the normal checkpoint
     frequency (specified by the checkpoint_timeout
     configuration variable).  Whenever a checkpoint occurs, all dirty pages
     must be flushed to disk.  By increasing
-    checkpoint_segments temporarily during bulk
+    max_wal_size temporarily during bulk
     data loads, the number of checkpoints that are required can be
     reduced.
@@ -1445,7 +1445,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
       Set appropriate (i.e., larger than normal) values for
       maintenance_work_mem and
-      checkpoint_segments.
+      max_wal_size.
@@ -1512,7 +1512,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
    So when loading a data-only dump, it is up to you to drop and recreate
    indexes and foreign keys if you wish to use those techniques.
-   It's still useful to increase checkpoint_segments
+   It's still useful to increase max_wal_size
    while loading the data, but don't bother increasing
    maintenance_work_mem; rather, you'd do that while
    manually recreating indexes and foreign keys afterwards.
@@ -1577,7 +1577,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
-     Increase checkpoint_segments and checkpoint_timeout;
+     Increase max_wal_size and checkpoint_timeout;
      this reduces the frequency of checkpoints, but increases the storage
      requirements of /pg_xlog.

diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 1254c03f80e..b57749fdbc3 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -472,9 +472,10 @@
    The server's checkpointer process automatically performs
    a checkpoint every so often.  A checkpoint is begun every
-   checkpoint_segments log segments, or every
-   checkpoint_timeout seconds, whichever comes first.
-   The default settings are 3 segments and 300 seconds (5 minutes), respectively.
+   checkpoint_timeout seconds, or if
+   max_wal_size is about to be exceeded,
+   whichever comes first.
+   The default settings are 5 minutes and 128 MB, respectively.
    If no WAL has been written since the previous checkpoint, new checkpoints
    will be skipped even if checkpoint_timeout has passed.
    (If WAL archiving is being used and you want to put a lower limit on how
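
The perform.sgml hunks above recommend raising max_wal_size temporarily for
bulk loads, and the wal.sgml hunk just shown gives the trigger rule: a
checkpoint starts when checkpoint_timeout elapses or when max_wal_size is
about to be exceeded, whichever comes first.  The C sketch below reuses the
reverse formula from earlier to estimate how many size-triggered checkpoints
a load causes; the 10 GB WAL volume and all names are illustrative
assumptions, not measured figures.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Rough estimate of the number of size-triggered checkpoints during a
     * bulk load: WAL written divided by the per-cycle trigger distance,
     * taken as max_wal_size / (2 + checkpoint_completion_target).  Purely
     * illustrative arithmetic; it ignores time-based checkpoints.
     */
    static uint64_t
    estimated_checkpoints(uint64_t wal_bytes, uint64_t max_wal_size_bytes,
                          double checkpoint_completion_target)
    {
        uint64_t trigger_bytes = (uint64_t)
            (max_wal_size_bytes / (2.0 + checkpoint_completion_target));

        return wal_bytes / trigger_bytes;
    }

    int
    main(void)
    {
        const uint64_t mib = 1024ULL * 1024;
        const uint64_t gib = 1024ULL * mib;

        /* Assume a load that writes about 10 GB of WAL. */
        printf("default 128 MB: ~%llu checkpoints\n",
               (unsigned long long) estimated_checkpoints(10 * gib, 128 * mib, 0.5));
        printf("raised to 4 GB: ~%llu checkpoints\n",
               (unsigned long long) estimated_checkpoints(10 * gib, 4 * gib, 0.5));
        return 0;
    }

Under these assumptions the load triggers roughly 200 checkpoints at the
default setting but only about 6 after raising max_wal_size to 4 GB, which is
the effect the perform.sgml text is describing.
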
@@ -486,8 +487,8 @@
-   Reducing checkpoint_segments and/or
-   checkpoint_timeout causes checkpoints to occur
+   Reducing checkpoint_timeout and/or
+   max_wal_size causes checkpoints to occur
    more often.  This allows faster after-crash recovery, since less work
    will need to be redone.  However, one must balance this against the
    increased cost of flushing dirty data pages more often.  If
@@ -510,11 +511,11 @@
    parameter.  If checkpoints happen closer together than
    checkpoint_warning seconds, a message will be output to the
    server log recommending increasing
-   checkpoint_segments.  Occasional appearance of such
+   max_wal_size.  Occasional appearance of such
    a message is not cause for alarm, but if it appears often then the
    checkpoint control parameters should be increased.  Bulk operations such
    as large COPY transfers might cause a number of such warnings
-   to appear if you have not set checkpoint_segments high
+   to appear if you have not set max_wal_size high
    enough.
@@ -525,10 +526,10 @@
    checkpoint_completion_target, which is given as a fraction of the
    checkpoint interval.  The I/O rate is adjusted so that the checkpoint
    finishes when the
-   given fraction of checkpoint_segments WAL segments
-   have been consumed since checkpoint start, or the given fraction of
-   checkpoint_timeout seconds have elapsed,
-   whichever is sooner.  With the default value of 0.5,
+   given fraction of
+   checkpoint_timeout seconds have elapsed, or before
+   max_wal_size is exceeded, whichever is sooner.
+   With the default value of 0.5,
    PostgreSQL can be expected to complete each checkpoint
    in about half the time before the next checkpoint starts.  On a system
    that's very close to maximum I/O throughput during normal operation,
@@ -545,18 +546,35 @@
-   There will always be at least one WAL segment file, and will normally
-   not be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1
-   or checkpoint_segments + wal_keep_segments + 1
-   files.  Each segment file is normally 16 MB (though this size can be
-   altered when building the server).  You can use this to estimate space
-   requirements for WAL.
-   Ordinarily, when old log segment files are no longer needed, they
-   are recycled (that is, renamed to become future segments in the numbered
-   sequence).  If, due to a short-term peak of log output rate, there
-   are more than 3 * checkpoint_segments + 1
-   segment files, the unneeded segment files will be deleted instead
-   of recycled until the system gets back under this limit.
+   The number of WAL segment files in the pg_xlog directory depends on
+   min_wal_size, max_wal_size and
+   the amount of WAL generated in previous checkpoint cycles.  When old log
+   segment files are no longer needed, they are removed or recycled (that is,
+   renamed to become future segments in the numbered sequence).  If, due to a
+   short-term peak of log output rate, max_wal_size is
+   exceeded, the unneeded segment files will be removed until the system
+   gets back under this limit.  Below that limit, the system recycles enough
+   WAL files to cover the estimated need until the next checkpoint, and
+   removes the rest.  The estimate is based on a moving average of the number
+   of WAL files used in previous checkpoint cycles.  The moving average
+   is increased immediately if the actual usage exceeds the estimate, so it
+   accommodates peak usage rather than average usage to some extent.
+   min_wal_size puts a minimum on the number of WAL files
+   recycled for future use; that much WAL is always recycled,
+   even if the system is idle and the WAL usage estimate suggests that little
+   WAL is needed.
+
+   Independently of max_wal_size,
+   wal_keep_segments + 1 most recent WAL files are
+   kept at all times.  Also, if WAL archiving is used, old segments cannot be
+   removed or recycled until they are archived.  If WAL archiving cannot keep up
+   with the pace that WAL is generated, or if archive_command
+   fails repeatedly, old WAL files will accumulate in pg_xlog
+   until the situation is resolved.  A slow or failed standby server that
+   uses a replication slot will have the same effect (see the documentation
+   on replication slots).
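
The recycling behavior described in the hunk above is driven by a moving
average of WAL usage per checkpoint cycle, bumped immediately on peaks, with
the amount kept bounded below by min_wal_size and above by max_wal_size.  The
C sketch below shows one way such an estimator could look; the 0.9/0.1
smoothing weights, the struct layout and all names are assumptions made for
illustration, not the server's actual implementation.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative auto-tuning of the recycle target: track a moving
     * average of WAL bytes used per checkpoint cycle, jump up immediately
     * on peaks, decay slowly otherwise, and clamp the amount of WAL kept
     * between min_wal_size and max_wal_size.
     */
    typedef struct
    {
        double   avg_cycle_bytes;   /* moving average of WAL per cycle */
        uint64_t min_wal_size;      /* floor on recycled WAL, in bytes  */
        uint64_t max_wal_size;      /* soft ceiling on total WAL, bytes */
    } wal_estimate;

    static void
    update_estimate(wal_estimate *est, uint64_t bytes_this_cycle)
    {
        if ((double) bytes_this_cycle > est->avg_cycle_bytes)
            est->avg_cycle_bytes = (double) bytes_this_cycle;   /* track peaks at once */
        else
            est->avg_cycle_bytes = 0.9 * est->avg_cycle_bytes +
                                   0.1 * (double) bytes_this_cycle;
    }

    /* How much WAL to keep (recycle) at the end of a checkpoint. */
    static uint64_t
    recycle_target(const wal_estimate *est)
    {
        uint64_t target = (uint64_t) est->avg_cycle_bytes;

        if (target < est->min_wal_size)
            target = est->min_wal_size;
        if (target > est->max_wal_size)
            target = est->max_wal_size;
        return target;
    }

    int
    main(void)
    {
        /* Defaults from the patch: min_wal_size 80 MB, max_wal_size 128 MB. */
        wal_estimate est = { 0.0, 80u * 1024 * 1024, 128u * 1024 * 1024 };

        update_estimate(&est, 40u * 1024 * 1024);    /* a quiet cycle */
        update_estimate(&est, 200u * 1024 * 1024);   /* a peak cycle  */
        printf("keep about %llu MB of WAL\n",
               (unsigned long long) (recycle_target(&est) / (1024 * 1024)));
        return 0;
    }
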
@@ -571,9 +589,8 @@
    master because restartpoints can only be performed at checkpoint records.
    A restartpoint is triggered when a checkpoint record is reached if at
    least checkpoint_timeout seconds have passed since the last
-   restartpoint.  In standby mode, a restartpoint is also triggered if at
-   least checkpoint_segments log segments have been replayed
-   since the last restartpoint.
+   restartpoint, or if WAL size is about to exceed
+   max_wal_size.
--
cgit v1.2.3
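
As a closing illustration of the retention rules in the wal.sgml changes
above (the wal_keep_segments + 1 most recent segments are always kept, and
with archiving enabled a segment must be archived before it can be removed or
recycled), here is a small C sketch; the segment-number arithmetic and names
are illustrative assumptions, not the server's actual bookkeeping.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Can this WAL segment be removed or recycled?  Never touch the
     * wal_keep_segments + 1 most recent segments, and when archiving is
     * enabled, never touch a segment that has not been archived yet.
     */
    static bool
    segment_removable(uint64_t segno,           /* segment to check           */
                      uint64_t newest_segno,    /* most recently used segment */
                      int wal_keep_segments,
                      bool archiving_enabled,
                      bool already_archived)
    {
        /* The newest wal_keep_segments + 1 segments are always kept. */
        if (segno + (uint64_t) wal_keep_segments + 1 > newest_segno)
            return false;

        /* With archiving on, a segment must be archived before it can go. */
        if (archiving_enabled && !already_archived)
            return false;

        return true;
    }

    int
    main(void)
    {
        /*
         * Segment 90 with newest segment 100 and wal_keep_segments = 10 is
         * still within the 11 most recent segments, so it must be kept.
         */
        printf("%s\n", segment_removable(90, 100, 10, false, false)
                           ? "removable" : "kept");
        return 0;
    }
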