author		Andres Freund <andres@anarazel.de>	2025-03-17 18:51:33 -0400
committer	Andres Freund <andres@anarazel.de>	2025-03-17 18:51:33 -0400
commit		da7226993fd4b73d8b40abb7167d124eada97f2e (patch)
tree		6dfb9949c552c6a6aa6c5511e77a2477ccb9641b /src/backend/storage/aio/aio_init.c
parent		02844012b304ba80d1c48d51f6fe10bb622490cc (diff)
aio: Add core asynchronous I/O infrastructure
The main motivations to use AIO in PostgreSQL are:
a) Reduce the time spent waiting for IO by issuing IO sufficiently early.
In a few places we have approximated this using posix_fadvise()-based
prefetching, but that is fairly limited (no completion feedback, double the
syscalls, only works with buffered IO, only works on some OSs).
b) Allow the use of direct I/O (DIO).
DIO can offload most of the work for IO to hardware and thus increase
throughput / decrease CPU utilization, as well as reduce latency. While we
gained the ability to configure DIO in d4e71df6, it is not yet usable for
real-world workloads, as every IO is executed synchronously.
For portability, the new AIO infrastructure allows AIO to be implemented
using different methods. The choice of AIO method is controlled by the new
io_method GUC. As of this commit, the only implemented method is "sync",
i.e. AIO is not actually executed asynchronously. The "sync" method exists
to allow most of the new code to be bypassed initially.
Subsequent commits will introduce additional IO methods, including a
cross-platform method implemented using worker processes and a Linux-specific
method using io_uring.
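
For illustration, a minimal postgresql.conf sketch. Only "sync" exists as of
this commit; the other value names are assumptions based on the methods the
subsequent commits are described as adding:

    # postgresql.conf
    io_method = sync        # only implemented method as of this commit
    #io_method = worker     # expected from a subsequent commit (assumption)
    #io_method = io_uring   # expected, Linux-specific (assumption)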
To allow different parts of postgres to use AIO, the core AIO infrastructure
does not need to know what kind of file it is operating on. The necessary
behavioral differences between kinds of files are abstracted as "AIO
Targets". One example target would be smgr. For boring portability reasons,
all targets currently need to be added to an array in aio_target.c. This
commit does not implement any AIO targets, just the infrastructure for
them. The smgr target will be added in a later commit.
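
As a rough sketch of the shape this takes (field names here are illustrative
assumptions, not a verbatim copy of the committed headers), a target bundles
the per-file-kind behavior into a struct, and aio_target.c keeps the array of
known targets:

    /* Illustrative sketch only; see aio_internal.h / aio_target.c. */
    typedef struct PgAioTargetInfo
    {
        const char *name;

        /* reopen the IO's file, e.g. to complete an IO in another process */
        void        (*reopen) (PgAioHandle *ioh);

        /* describe the IO's target for error/debug output */
        char       *(*describe_identity) (const PgAioTargetData *sd);
    } PgAioTargetInfo;

    /* aio_target.c: adding a target means adding an entry to this array */
    static const PgAioTargetInfo *pgaio_target_info[] = {
        [PGAIO_TID_INVALID] = &(PgAioTargetInfo) {.name = "invalid"},
        /* [PGAIO_TID_SMGR] = &aio_smgr_target_info (a later commit) */
    };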
Completion of (and other events on) IOs for one type of file (i.e. one AIO
target) needs to be reacted to differently, depending on the IO operation and
the callsite. This is made possible by callbacks that can be registered on
IOs. E.g. an smgr read into a local buffer does not need to update the
corresponding BufferDesc (as there is none), but a read into shared buffers
does. This commit does not contain any callbacks; they will be added in
subsequent commits.
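
A hedged sketch of how such a callback set might look. Since the concrete
struct and registration API only arrive with the callback commits, the names
and signatures below are assumptions:

    /* Illustrative sketch, not the committed API. */
    typedef struct PgAioHandleCallbacks
    {
        /* called when the IO is staged, e.g. to mark a buffer in-progress */
        void        (*stage) (PgAioHandle *ioh, uint8 cb_data);

        /*
         * Called on completion. A read into shared buffers would update the
         * corresponding BufferDesc here; a read into a local buffer would
         * not, as there is no BufferDesc to update.
         */
        PgAioResult (*complete) (PgAioHandle *ioh, PgAioResult prior_result,
                                 uint8 cb_data);
    } PgAioHandleCallbacks;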
For now the AIO infrastructure only understands READV and WRITEV operations,
but it is expected that more will be added, e.g. fsync/fdatasync, flush_range,
and network operations like send/recv.
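
A minimal sketch of issuing one of the two currently understood operations,
following the pattern of the functions in the diff below (pgaio_io_acquire,
pgaio_io_get_iovec, pgaio_io_start_readv); treat the exact signatures as
assumptions, and note that target/callback setup from later commits is
elided:

    /*
     * Illustrative sketch: acquire an IO handle, fill its iovec, and start
     * a READV. With io_method=sync this executes synchronously on submit.
     */
    static void
    example_start_readv(int fd, uint64 offset, char *buffer)
    {
        PgAioHandle *ioh = pgaio_io_acquire(CurrentResourceOwner, NULL);
        struct iovec *iov;

        pgaio_io_get_iovec(ioh, &iov);
        iov[0].iov_base = buffer;
        iov[0].iov_len = BLCKSZ;

        pgaio_io_start_readv(ioh, fd, 1, offset);
    }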
As of this commit, nothing uses the AIO infrastructure. Later commits will add
an smgr target, md.c and bufmgr.c callbacks, and then finally use AIO for
read_stream.c IO, which, in one fell swoop, will convert all read stream users
to AIO.
The goal is to use AIO in many more places. There are patches to use AIO for
checkpointer and bgwriter that are reasonably close to being ready. There are
also prototypes to use it for WAL, relation extension, backend writes, and
more. Those prototypes were important to ensure the design of the AIO
subsystem is not too limiting (e.g. WAL writes need to happen in critical
sections, which influenced a lot of the design).
A future commit will add an AIO README explaining the AIO architecture and how
to use the AIO subsystem. The README is added later, as it references details
only added in later commits.
Many, many more people than the folks named below have contributed feedback,
work on semi-independent patches, etc. E.g. various folks have contributed
patches to use the read stream infrastructure (added by Thomas in
b5a9b18cd0b) in more places. Similarly, a *lot* of folks have contributed to
the CI infrastructure, which I had started to work on to make adding AIO
feasible.
Some of the work by contributors went into the "v1" prototype of AIO, which
heavily influenced the current design of the AIO subsystem. None of that code
survives directly, but without the prototype the current version of the AIO
infrastructure would not exist.
Similarly, the reviewers below have not necessarily looked at the current
design or the whole infrastructure, but have provided very valuable input.
Any remaining problems are my fault, not theirs.
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Co-authored-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
Discussion: https://postgr.es/m/20210223100344.llw5an2aklengrmn@alap3.anarazel.de
Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m
Diffstat (limited to 'src/backend/storage/aio/aio_init.c')
-rw-r--r--	src/backend/storage/aio/aio_init.c	198
1 file changed, 198 insertions, 0 deletions
diff --git a/src/backend/storage/aio/aio_init.c b/src/backend/storage/aio/aio_init.c
index aeacc144149..6fe55510fae 100644
--- a/src/backend/storage/aio/aio_init.c
+++ b/src/backend/storage/aio/aio_init.c
@@ -14,24 +14,222 @@
 #include "postgres.h"
 
+#include "miscadmin.h"
+#include "storage/aio.h"
+#include "storage/aio_internal.h"
 #include "storage/aio_subsys.h"
+#include "storage/ipc.h"
+#include "storage/proc.h"
+#include "storage/shmem.h"
+#include "utils/guc.h"
+
+
+static Size
+AioCtlShmemSize(void)
+{
+	Size		sz;
+
+	/* pgaio_ctl itself */
+	sz = offsetof(PgAioCtl, io_handles);
+
+	return sz;
+}
+
+static uint32
+AioProcs(void)
+{
+	return MaxBackends + NUM_AUXILIARY_PROCS;
+}
+
+static Size
+AioBackendShmemSize(void)
+{
+	return mul_size(AioProcs(), sizeof(PgAioBackend));
+}
+
+static Size
+AioHandleShmemSize(void)
+{
+	Size		sz;
+
+	/* verify AioChooseMaxConcurrency() did its thing */
+	Assert(io_max_concurrency > 0);
+
+	/* io handles */
+	sz = mul_size(AioProcs(),
+				  mul_size(io_max_concurrency, sizeof(PgAioHandle)));
+
+	return sz;
+}
+
+static Size
+AioHandleIOVShmemSize(void)
+{
+	/*
+	 * Each IO handle can have a PG_IOV_MAX long iovec.
+	 *
+	 * XXX: Right now the amount of space available for each IO is PG_IOV_MAX.
+	 * While it's tempting to use the io_combine_limit GUC, that's
+	 * PGC_USERSET, so we can't allocate shared memory based on that.
+	 */
+	return mul_size(sizeof(struct iovec),
+					mul_size(mul_size(PG_IOV_MAX, AioProcs()),
+							 io_max_concurrency));
+}
+
+static Size
+AioHandleDataShmemSize(void)
+{
+	/* each buffer referenced by an iovec can have associated data */
+	return mul_size(sizeof(uint64),
+					mul_size(mul_size(PG_IOV_MAX, AioProcs()),
+							 io_max_concurrency));
+}
+
+/*
+ * Choose a suitable value for io_max_concurrency.
+ *
+ * It's unlikely that we could have more IOs in flight than buffers that we
+ * would be allowed to pin.
+ *
+ * On the upper end, apply a cap too - just because shared_buffers is large,
+ * it doesn't make sense to have millions of buffers undergo IO concurrently.
+ */
+static int
+AioChooseMaxConcurrency(void)
+{
+	uint32		max_backends;
+	int			max_proportional_pins;
+
+	/* Similar logic to LimitAdditionalPins() */
+	max_backends = MaxBackends + NUM_AUXILIARY_PROCS;
+	max_proportional_pins = NBuffers / max_backends;
+
+	max_proportional_pins = Max(max_proportional_pins, 1);
+
+	/* apply upper limit */
+	return Min(max_proportional_pins, 64);
+}
+
 Size
 AioShmemSize(void)
 {
 	Size		sz = 0;
 
+	/*
+	 * We prefer to report this value's source as PGC_S_DYNAMIC_DEFAULT.
+	 * However, if the DBA explicitly set io_max_concurrency = -1 in the
+	 * config file, then PGC_S_DYNAMIC_DEFAULT will fail to override that and
+	 * we must force the matter with PGC_S_OVERRIDE.
+	 */
+	if (io_max_concurrency == -1)
+	{
+		char		buf[32];
+
+		snprintf(buf, sizeof(buf), "%d", AioChooseMaxConcurrency());
+		SetConfigOption("io_max_concurrency", buf, PGC_POSTMASTER,
+						PGC_S_DYNAMIC_DEFAULT);
+		if (io_max_concurrency == -1)	/* failed to apply it? */
+			SetConfigOption("io_max_concurrency", buf, PGC_POSTMASTER,
+							PGC_S_OVERRIDE);
+	}
+
+	sz = add_size(sz, AioCtlShmemSize());
+	sz = add_size(sz, AioBackendShmemSize());
+	sz = add_size(sz, AioHandleShmemSize());
+	sz = add_size(sz, AioHandleIOVShmemSize());
+	sz = add_size(sz, AioHandleDataShmemSize());
+
+	/* Reserve space for method-specific resources. */
+	if (pgaio_method_ops->shmem_size)
+		sz = add_size(sz, pgaio_method_ops->shmem_size());
+
 	return sz;
 }
 
 void
 AioShmemInit(void)
 {
+	bool		found;
+	uint32		io_handle_off = 0;
+	uint32		iovec_off = 0;
+	uint32		per_backend_iovecs = io_max_concurrency * PG_IOV_MAX;
+
+	pgaio_ctl = (PgAioCtl *)
+		ShmemInitStruct("AioCtl", AioCtlShmemSize(), &found);
+
+	if (found)
+		goto out;
+
+	memset(pgaio_ctl, 0, AioCtlShmemSize());
+
+	pgaio_ctl->io_handle_count = AioProcs() * io_max_concurrency;
+	pgaio_ctl->iovec_count = AioProcs() * per_backend_iovecs;
+
+	pgaio_ctl->backend_state = (PgAioBackend *)
+		ShmemInitStruct("AioBackend", AioBackendShmemSize(), &found);
+
+	pgaio_ctl->io_handles = (PgAioHandle *)
+		ShmemInitStruct("AioHandle", AioHandleShmemSize(), &found);
+
+	pgaio_ctl->iovecs = (struct iovec *)
+		ShmemInitStruct("AioHandleIOV", AioHandleIOVShmemSize(), &found);
+	pgaio_ctl->handle_data = (uint64 *)
+		ShmemInitStruct("AioHandleData", AioHandleDataShmemSize(), &found);
+
+	for (int procno = 0; procno < AioProcs(); procno++)
+	{
+		PgAioBackend *bs = &pgaio_ctl->backend_state[procno];
+
+		bs->io_handle_off = io_handle_off;
+		io_handle_off += io_max_concurrency;
+
+		dclist_init(&bs->idle_ios);
+		memset(bs->staged_ios, 0, sizeof(PgAioHandle *) * PGAIO_SUBMIT_BATCH_SIZE);
+		dclist_init(&bs->in_flight_ios);
+
+		/* initialize per-backend IOs */
+		for (int i = 0; i < io_max_concurrency; i++)
+		{
+			PgAioHandle *ioh = &pgaio_ctl->io_handles[bs->io_handle_off + i];
+
+			ioh->generation = 1;
+			ioh->owner_procno = procno;
+			ioh->iovec_off = iovec_off;
+			ioh->handle_data_len = 0;
+			ioh->report_return = NULL;
+			ioh->resowner = NULL;
+			ioh->num_callbacks = 0;
+			ioh->distilled_result.status = ARS_UNKNOWN;
+			ioh->flags = 0;
+
+			ConditionVariableInit(&ioh->cv);
+
+			dclist_push_tail(&bs->idle_ios, &ioh->node);
+			iovec_off += PG_IOV_MAX;
+		}
+	}
+
+out:
+	/* Initialize IO method-specific resources. */
+	if (pgaio_method_ops->shmem_init)
+		pgaio_method_ops->shmem_init(!found);
 }
 
 void
 pgaio_init_backend(void)
 {
+	/* shouldn't be initialized twice */
+	Assert(!pgaio_my_backend);
+
+	if (MyProc == NULL || MyProcNumber >= AioProcs())
+		elog(ERROR, "aio requires a normal PGPROC");
+
+	pgaio_my_backend = &pgaio_ctl->backend_state[MyProcNumber];
+
+	if (pgaio_method_ops->init_backend)
+		pgaio_method_ops->init_backend();
+
+	before_shmem_exit(pgaio_shutdown, 0);
 }
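
For orientation, the entry points above follow the usual postgres
shared-memory subsystem pattern. A minimal sketch of the expected call order,
assuming the standard hookup (the real call sites are in ipci.c and backend
startup, which this diffstat-limited view does not show):

    /* Sketch of the expected startup order; call sites are an assumption. */
    static void
    example_startup_order(void)
    {
        Size    size = 0;

        /* postmaster: while sizing shared memory */
        size = add_size(size, AioShmemSize());

        /* postmaster: while creating/attaching shared memory structs */
        AioShmemInit();

        /* each backend/aux process: attach its per-backend AIO state */
        pgaio_init_backend();
    }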