author     Noah Misch <noah@leadboat.com>    2020-03-21 09:38:26 -0700
committer  Noah Misch <noah@leadboat.com>    2020-03-21 09:38:36 -0700
commit     9db4b9da2801ed94c8f209c807e654c139dc1d7e
tree       69fe368d6fdc02a316399cdeeaf64c11ea75477e  /src/backend/storage/smgr/md.c
parent     e0dd086414f782d9200ad525a1643a9f57a2b497
Skip WAL for new relfilenodes, under wal_level=minimal.
Until now, only selected bulk operations (e.g. COPY) did this. If a
given relfilenode received both a WAL-skipping COPY and a WAL-logged
operation (e.g. INSERT), recovery could lose tuples from the COPY. See
src/backend/access/transam/README section "Skipping WAL for New
RelFileNode" for the new coding rules. Maintainers of table access
methods should examine that section.
To maintain data durability, just before commit, we choose between an
fsync of the relfilenode and copying its contents to WAL. A new GUC,
wal_skip_threshold, guides that choice. If this change slows a workload
that creates small, permanent relfilenodes under wal_level=minimal, try
adjusting wal_skip_threshold. Users setting a timeout on COMMIT may
need to adjust that timeout, and log_min_duration_statement analysis
will reflect time consumption moving to COMMIT from commands like COPY.
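The pre-commit choice between fsync and WAL-logging can be sketched as follows. This is an illustrative model only, not the server's actual code; the function and parameter names here are hypothetical (the real decision is made inside smgrDoPendingSyncs(), driven by the wal_skip_threshold GUC):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch: for each relfilenode that skipped WAL during the
 * transaction, commit must make its contents durable either by fsyncing
 * the file or by copying the whole file into WAL.  Small files are cheap
 * to log wholesale; large ones are cheaper to fsync.
 */
typedef enum
{
	DURABILITY_COPY_TO_WAL,		/* log the file's pages at commit */
	DURABILITY_FSYNC			/* fsync the file at commit */
} DurabilityAction;

static DurabilityAction
choose_durability_action(uint64_t rel_size_kb, uint64_t wal_skip_threshold_kb)
{
	/* At or above the threshold, fsync; below it, copy into WAL. */
	return (rel_size_kb >= wal_skip_threshold_kb)
		? DURABILITY_FSYNC
		: DURABILITY_COPY_TO_WAL;
}
```

Under this model, raising wal_skip_threshold pushes more relfilenodes toward the copy-to-WAL path, and lowering it pushes more toward fsync; which is faster depends on storage latency and WAL volume.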
Internally, this requires a reliable determination of whether
RollbackAndReleaseCurrentSubTransaction() would unlink a relation's
current relfilenode. Introduce rd_firstRelfilenodeSubid. Amend the
specification of rd_createSubid such that the field is zero when a new
rel has an old rd_node. Make relcache.c retain entries for certain
dropped relations until end of transaction.
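A toy model of the two relcache fields discussed above may help. Everything here is a simplified illustration, not PostgreSQL's actual relcache code; `RelMeta` and `rollback_would_unlink` are invented names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t SubTransactionId;
#define InvalidSubTransactionId ((SubTransactionId) 0)

/*
 * Toy model: rd_createSubid is the subtransaction that created the rel
 * (zero when a new rel carries an old rd_node, per the amended spec);
 * rd_firstRelfilenodeSubid is the subtransaction in which the current
 * relfilenode was first assigned, e.g. by CREATE TABLE or a rewrite.
 */
typedef struct RelMeta
{
	SubTransactionId rd_createSubid;
	SubTransactionId rd_firstRelfilenodeSubid;
} RelMeta;

/*
 * Hypothetical helper: would rolling back subtransaction "subid" unlink
 * this relation's current relfilenode?  Yes if the file was first
 * assigned inside that subtransaction; no if it predates it.
 */
static bool
rollback_would_unlink(const RelMeta *rel, SubTransactionId subid)
{
	return rel->rd_createSubid == subid ||
		   rel->rd_firstRelfilenodeSubid == subid;
}
```

The WAL-skipping decision needs exactly this distinction: a relfilenode that the current (sub)transaction will unlink on abort is safe to populate without WAL, while one that survives rollback is not.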
Back-patch to 9.5 (all supported versions). This introduces a new WAL
record type, XLOG_GIST_ASSIGN_LSN, without bumping XLOG_PAGE_MAGIC. As
always, update standby systems before master systems. This changes
sizeof(RelationData) and sizeof(IndexStmt), breaking binary
compatibility for affected extensions. (The most recent commit to
affect the same class of extensions was
089e4d405d0f3b94c74a2c6a54357a84a681754b.)
Kyotaro Horiguchi, reviewed (in earlier, similar versions) by Robert
Haas. Heikki Linnakangas and Michael Paquier implemented earlier
designs that materially clarified the problem. Reviewed, in earlier
designs, by Andrew Dunstan, Andres Freund, Alvaro Herrera, Tom Lane,
Fujii Masao, and Simon Riggs. Reported by Martijn van Oosterhout.
Discussion: https://postgr.es/m/20150702220524.GA9392@svana.org
Diffstat (limited to 'src/backend/storage/smgr/md.c')
-rw-r--r--    src/backend/storage/smgr/md.c    52
1 file changed, 43 insertions, 9 deletions
diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
index 58a6e0f4ddc..1358f81e3fc 100644
--- a/src/backend/storage/smgr/md.c
+++ b/src/backend/storage/smgr/md.c
@@ -352,11 +352,10 @@ mdcreate(SMgrRelation reln, ForkNumber forkNum, bool isRedo)
  * During replay, we would delete the file and then recreate it, which is fine
  * if the contents of the file were repopulated by subsequent WAL entries.
  * But if we didn't WAL-log insertions, but instead relied on fsyncing the
- * file after populating it (as for instance CLUSTER and CREATE INDEX do),
- * the contents of the file would be lost forever.  By leaving the empty file
- * until after the next checkpoint, we prevent reassignment of the relfilenode
- * number until it's safe, because relfilenode assignment skips over any
- * existing file.
+ * file after populating it (as we do at wal_level=minimal), the contents of
+ * the file would be lost forever.  By leaving the empty file until after the
+ * next checkpoint, we prevent reassignment of the relfilenode number until
+ * it's safe, because relfilenode assignment skips over any existing file.
  *
  * We do not need to go through this dance for temp relations, though, because
  * we never make WAL entries for temp rels, and so a temp rel poses no threat
@@ -961,12 +960,19 @@ mdtruncate(SMgrRelation reln, ForkNumber forknum, BlockNumber nblocks)
  * mdimmedsync() -- Immediately sync a relation to stable storage.
  *
  *		Note that only writes already issued are synced; this routine knows
- *		nothing of dirty buffers that may exist inside the buffer manager.
+ *		nothing of dirty buffers that may exist inside the buffer manager.  We
+ *		sync active and inactive segments; smgrDoPendingSyncs() relies on this.
+ *		Consider a relation skipping WAL.  Suppose a checkpoint syncs blocks of
+ *		some segment, then mdtruncate() renders that segment inactive.  If we
+ *		crash before the next checkpoint syncs the newly-inactive segment, that
+ *		segment may survive recovery, reintroducing unwanted data into the table.
  */
 void
 mdimmedsync(SMgrRelation reln, ForkNumber forknum)
 {
 	MdfdVec    *v;
+	BlockNumber segno = 0;
+	bool		active = true;
 
 	/*
 	 * NOTE: mdnblocks makes sure we have opened all active segments, so that
@@ -976,14 +982,42 @@ mdimmedsync(SMgrRelation reln, ForkNumber forknum)
 
 	v = mdopen(reln, forknum, EXTENSION_FAIL);
 
+	/*
+	 * Temporarily open inactive segments, then close them after sync.  There
+	 * may be some inactive segments left opened after fsync() error, but that
+	 * is harmless.  We don't bother to clean them up and take a risk of
+	 * further trouble.  The next mdclose() will soon close them.
+	 */
 	while (v != NULL)
 	{
-		if (FileSync(v->mdfd_vfd) < 0)
+		File		vfd = v->mdfd_vfd;
+
+		if (active)
+			v = v->mdfd_chain;
+		else
+		{
+			Assert(v->mdfd_chain == NULL);
+			pfree(v);
+			v = NULL;
+		}
+
+		if (FileSync(vfd) < 0)
 			ereport(data_sync_elevel(ERROR),
 					(errcode_for_file_access(),
 					 errmsg("could not fsync file \"%s\": %m",
-							FilePathName(v->mdfd_vfd))));
-		v = v->mdfd_chain;
+							FilePathName(vfd))));
+
+		/* Close inactive segments immediately */
+		if (!active)
+			FileClose(vfd);
+
+		segno++;
+
+		if (v == NULL)
+		{
+			v = _mdfd_openseg(reln, forknum, segno, 0);
+			active = false;
+		}
 	}
 }
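The traversal order in the mdimmedsync() hunk above — walk the chain of already-open active segments, then keep probing for inactive segments past the end until none remains — can be modeled in a self-contained sketch. File I/O is stubbed out and all names except the loop shape are invented; this only demonstrates which segments get synced and in what order:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for MdfdVec: a chain of open segments. */
typedef struct Seg
{
	int			segno;
	struct Seg *next;
} Seg;

static int	synced[8];			/* records segment numbers in sync order */
static int	nsynced;

static void
stub_sync(int segno)
{
	synced[nsynced++] = segno;
}

/*
 * Stand-in for _mdfd_openseg(): pretend segment "segno" exists on disk
 * iff segno < total_on_disk.  Returns a heap Seg, or NULL if absent.
 */
static Seg *
open_inactive(int segno, int total_on_disk)
{
	Seg		   *s;

	if (segno >= total_on_disk)
		return NULL;
	s = malloc(sizeof(Seg));
	s->segno = segno;
	s->next = NULL;
	return s;
}

/*
 * Mirror of the diff's loop: sync each active segment while advancing
 * along the chain; once the chain is exhausted, repeatedly open, sync,
 * and close any inactive segments that still exist past the end.
 */
static void
sync_all(Seg *v, int total_on_disk)
{
	int			segno = 0;
	bool		active = true;

	while (v != NULL)
	{
		Seg		   *cur = v;

		if (active)
			v = v->next;		/* advance within the active chain */
		else
			v = NULL;			/* inactive segments are not chained */

		stub_sync(cur->segno);

		if (!active)
			free(cur);			/* close inactive segments immediately */

		segno++;
		if (v == NULL)
		{
			/* probe past the end for a leftover inactive segment */
			v = open_inactive(segno, total_on_disk);
			active = false;
		}
	}
}
```

With two active segments open and four present on disk, the loop syncs segments 0 and 1 from the chain, then discovers and syncs inactive segments 2 and 3 before stopping. That is exactly why the real code can make smgrDoPendingSyncs() rely on mdimmedsync() covering truncated-away segments.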