| author | Nathan Bossart <nathan@postgresql.org> | 2024-07-08 16:18:00 -0500 |
|---|---|---|
| committer | Nathan Bossart <nathan@postgresql.org> | 2024-07-08 16:18:00 -0500 |
| commit | 64f34eb2e2ce4bca7351d8c88a6999aeed000c4a | |
| tree | 957cf62a6c0ce41f49d84729da3ef8c9fbd3b09a /src/bin/pg_upgrade/pg_upgrade.c | |
| parent | 4b4b931bcdf23f5facd49809278a3048c4fdba1f | |
Use CREATE DATABASE ... STRATEGY = FILE_COPY in pg_upgrade.
While this strategy is ordinarily quite costly because it requires
performing two checkpoints, testing shows that it tends to be a
faster choice than WAL_LOG during pg_upgrade, presumably because
fsync is turned off. Furthermore, we can skip the checkpoints
altogether because the problems they are intended to prevent don't
apply to pg_upgrade. Instead, we just need to CHECKPOINT once in
the new cluster after making any changes to template0 and before
restoring the rest of the databases. This ensures that said
template0 changes are written out to disk prior to creating the
databases via FILE_COPY.
Co-authored-by: Matthias van de Meent
Reviewed-by: Ranier Vilela, Dilip Kumar, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/Zl9ta3FtgdjizkJ5%40nathan
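As a rough illustration of the ordering the commit message describes, a standalone libpq client could checkpoint the target cluster and then create a database with the FILE_COPY strategy, as sketched below. This is only a sketch under assumptions, not pg_upgrade's actual code path: pg_upgrade drives the CREATE DATABASE statements through pg_restore and its own connectToServer()/executeQueryOrDie() helpers, and on a server that is not in binary upgrade mode the explicit CHECKPOINT is unnecessary because FILE_COPY performs its own checkpoints. The connection string and database name are hypothetical.

```c
/*
 * Minimal libpq sketch of "checkpoint first, then CREATE DATABASE with
 * FILE_COPY".  Illustrative only: the conninfo string and database name
 * are made up, and outside binary upgrade mode the explicit CHECKPOINT
 * is not required because FILE_COPY checkpoints on its own.
 *
 * Build (roughly): cc sketch.c -I$(pg_config --includedir) \
 *                  -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include <stdlib.h>

#include <libpq-fe.h>

static void
run_or_die(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "\"%s\" failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);
}

int
main(void)
{
    /* Hypothetical connection string; pg_upgrade connects to its new cluster. */
    PGconn     *conn = PQconnectdb("dbname=template1 port=5432");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Make sure any template changes already sit on disk ... */
    run_or_die(conn, "CHECKPOINT");

    /* ... before FILE_COPY duplicates the template's files directly. */
    run_or_die(conn, "CREATE DATABASE example_db TEMPLATE template0 STRATEGY = FILE_COPY");

    PQfinish(conn);
    return 0;
}
```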
Diffstat (limited to 'src/bin/pg_upgrade/pg_upgrade.c')
| -rw-r--r-- | src/bin/pg_upgrade/pg_upgrade.c | 12 |
1 file changed, 12 insertions, 0 deletions
```diff
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index af370768b60..03eb738fd7e 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -534,10 +534,22 @@ static void
 create_new_objects(void)
 {
     int         dbnum;
+    PGconn     *conn_new_template1;
 
     prep_status_progress("Restoring database schemas in the new cluster");
 
     /*
+     * Ensure that any changes to template0 are fully written out to disk
+     * prior to restoring the databases. This is necessary because we use the
+     * FILE_COPY strategy to create the databases (which testing has shown to
+     * be faster), and when the server is in binary upgrade mode, it skips the
+     * checkpoints this strategy ordinarily performs.
+     */
+    conn_new_template1 = connectToServer(&new_cluster, "template1");
+    PQclear(executeQueryOrDie(conn_new_template1, "CHECKPOINT"));
+    PQfinish(conn_new_template1);
+
+    /*
      * We cannot process the template1 database concurrently with others,
      * because when it's transiently dropped, connection attempts would fail.
      * So handle it in a separate non-parallelized pass.
```
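A note on the hunk above: CHECKPOINT acts on the whole cluster rather than only on the database the session is connected to, so issuing it over a template1 connection is enough to flush the earlier template0 changes to disk; and because it runs once, before any of the per-database restores, it stands in for all of the per-CREATE DATABASE checkpoints that binary upgrade mode skips.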
