path: root/src/include/access/tableam.h
author     Andres Freund <andres@anarazel.de>   2019-04-04 15:47:19 -0700
committer  Andres Freund <andres@anarazel.de>   2019-04-04 16:28:18 -0700
commit  86b85044e823a304d2a265abc030254d39efe7df (patch)
tree    7a4e236f6a73a38db638c561a238f1d29d0f436b /src/include/access/tableam.h
parent  7bac3acab4d5c3f2c35aa3a7bea08411d83fd5bc (diff)
tableam: Add table_multi_insert() and revamp/speed-up COPY FROM buffering.
This adds table_multi_insert(), and converts COPY FROM, the only user of heap_multi_insert, to it.

A simple conversion of COPY FROM to use slots would have yielded a slowdown when inserting into a partitioned table for some workloads. Different partitions might need different slots (both slot types and their descriptors), and dropping / creating slots when there are constant partition changes is measurable.

Thus instead revamp the COPY FROM buffering for partitioned tables to allow buffering inserts into multiple tables, flushing only when limits are reached across all partition buffers. By only dropping slots when there have been inserts into too many different partitions, the aforementioned overhead is gone. By allowing larger batches, even when there are frequent partition changes, we actually speed such cases up significantly.

By using slots, COPY of very narrow rows into unlogged / temporary tables might slow down very slightly (due to the indirect function calls).

Author: David Rowley, Andres Freund, Haribabu Kommi
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
            https://postgr.es/m/20190327054923.t3epfuewxfqdt22e@alap3.anarazel.de
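For illustration only, here is a minimal caller-side sketch (not the actual copy.c code from this commit) of how a bulk loader might use the new API: a fixed array of slots is created once and reused across batches, avoiding the slot create/drop overhead mentioned above, and each full batch is handed to table_multi_insert() in a single call. The BATCH_SLOTS limit and the fill_next_slot() helper are hypothetical stand-ins.

#include "postgres.h"

#include "access/heapam.h"		/* GetBulkInsertState / FreeBulkInsertState */
#include "access/tableam.h"
#include "access/xact.h"		/* GetCurrentCommandId */
#include "executor/tuptable.h"
#include "utils/rel.h"

#define BATCH_SLOTS 1000		/* hypothetical flush threshold */

/* hypothetical helper: fills the slot with the next input row, false at EOF */
extern bool fill_next_slot(TupleTableSlot *slot);

static void
bulk_load_sketch(Relation rel)
{
	TupleTableSlot *slots[BATCH_SLOTS] = {NULL};
	BulkInsertState bistate = GetBulkInsertState();
	CommandId	cid = GetCurrentCommandId(true);
	int			nbuffered = 0;

	for (;;)
	{
		/* create each slot once, then reuse it for every later batch */
		if (slots[nbuffered] == NULL)
			slots[nbuffered] = table_slot_create(rel, NULL);

		if (!fill_next_slot(slots[nbuffered]))
			break;

		if (++nbuffered == BATCH_SLOTS)
		{
			/* one call inserts the whole batch through the table AM */
			table_multi_insert(rel, slots, nbuffered, cid, 0, bistate);
			nbuffered = 0;
		}
	}

	/* flush the final partial batch, if any */
	if (nbuffered > 0)
		table_multi_insert(rel, slots, nbuffered, cid, 0, bistate);

	/* slots are not dropped here for brevity; real code would release them */
	FreeBulkInsertState(bistate);
}

The single BulkInsertState and command id are shared across batches, roughly the pattern single-table COPY FROM follows; the partitioned-table buffering described above layers one such reusable slot array per recently used partition on top of this basic loop.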
Diffstat (limited to 'src/include/access/tableam.h')
-rw-r--r--    src/include/access/tableam.h    26
1 file changed, 26 insertions, 0 deletions
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index 42e2ba68bf9..90c329a88d3 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -350,6 +350,10 @@ typedef struct TableAmRoutine
											   uint32 specToken,
											   bool succeeded);

+	/* see table_multi_insert() for reference about parameters */
+	void		(*multi_insert) (Relation rel, TupleTableSlot **slots, int nslots,
+								 CommandId cid, int options, struct BulkInsertStateData *bistate);
+
	/* see table_delete() for reference about parameters */
	TM_Result	(*tuple_delete) (Relation rel,
								 ItemPointer tid,
@@ -1078,6 +1082,28 @@ table_complete_speculative(Relation rel, TupleTableSlot *slot,
}

/*
+ * Insert multiple tuples into a table.
+ *
+ * This is like table_insert(), but inserts multiple tuples in one
+ * operation. That's often faster than calling table_insert() in a loop,
+ * because e.g. the AM can reduce WAL logging and page locking overhead.
+ *
+ * Except for taking `nslots` tuples as input, as an array of TupleTableSlots
+ * in `slots`, the parameters for table_multi_insert() are the same as for
+ * table_insert().
+ *
+ * Note: this leaks memory into the current memory context. You can create a
+ * temporary context before calling this, if that's a problem.
+ */
+static inline void
+table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,
+				   CommandId cid, int options, struct BulkInsertStateData *bistate)
+{
+	rel->rd_tableam->multi_insert(rel, slots, nslots,
+								  cid, options, bistate);
+}
+
+/*
* Delete a tuple.
*
* NB: do not call this directly unless prepared to deal with