|   |   |   |
|---|---|---|
| author | Andrew Morton <akpm@digeo.com> | 2002-09-22 08:16:54 -0700 |
| committer | Linus Torvalds <torvalds@home.transmeta.com> | 2002-09-22 08:16:54 -0700 |
| commit | c9b22619390dfa338193a704109a29d93bbcfd00 (patch) | |
| tree | aaad672d5834e8ca249fde1968d15f0a7c4feadc /include/linux/buffer_head.h | |
| parent | f33323844241239df18e8aaafc4b20baf07f6dc6 (diff) | |
[PATCH] use the congestion APIs in pdflush
The key concept here is that pdflush does not block on request queues
any more. Instead, it circulates across the queues, keeping any
non-congested queues full of write data. When all queues are full,
pdflush takes a nap, to be woken when *any* queue exits write
congestion.
This code can keep sixty spindles saturated - we've never been able to
do that before.
- Add the `nonblocking' flag to struct writeback_control, and teach
the writeback paths to honour it.
- Add the `encountered_congestion' flag to struct writeback_control,
  and teach the writeback paths to set it.
So as soon as a mapping's backing_dev_info indicates that it is getting
congested, bail out of writeback. And don't even start writeback
against filesystems whose queues are congested.
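As a rough userspace sketch of the two new flags (the names follow this
changelog; the real struct writeback_control lives in the kernel and has
more fields, and `writeback_pages', `queue' and their members here are
simplified stand-ins, not kernel code):

```c
#include <stddef.h>

/* Simplified model of struct writeback_control: only the write budget
 * and the two flags this patch adds. */
struct writeback_control {
	long nr_to_write;            /* pages left to write */
	int nonblocking;             /* don't block on a congested queue */
	int encountered_congestion;  /* set when writeback hit congestion */
};

/* Stand-in for a backing device's request queue. */
struct queue {
	int congested;               /* queue can't take more writes now */
	long written;                /* pages written (for the demo) */
};

/* Write up to wbc->nr_to_write pages to q.  With `nonblocking' set,
 * bail out as soon as the queue reports congestion and record that
 * fact in `encountered_congestion' so the caller can move on. */
static void writeback_pages(struct queue *q, struct writeback_control *wbc)
{
	while (wbc->nr_to_write > 0) {
		if (q->congested) {
			if (wbc->nonblocking) {
				wbc->encountered_congestion = 1;
				return;  /* caller circulates elsewhere */
			}
			return;  /* blocking mode would wait here */
		}
		q->written++;
		wbc->nr_to_write--;
	}
}
```

A caller that sees `encountered_congestion' set after the call knows it
skipped work and should revisit this queue later rather than sleep on it.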
- Convert pdflush's background_writeback() function to use
nonblocking writeback.
This way, a single pdflush thread will circulate around all the
dirty queues, keeping them filled.
- Convert the pdflush `kupdate' function to do the same thing.
This solves the problem of pdflush thread pool exhaustion.
It solves the problem of pdflush startup latency.
It solves the (minor) problem wherein `kupdate' writeback only writes
back a single disk at a time (it was getting blocked on each queue in
turn).
It probably means that we only ever need a single pdflush thread.
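The circulation described above can be sketched in a few lines of
userspace C. This is an illustration of the scheme, not kernel code:
`circulate', `bdev_queue' and their members are hypothetical names, and
the real pdflush loop also tracks dirty thresholds and timeouts.

```c
/* One circulation pass in the spirit of this patch: top up every
 * non-congested queue with a batch of writes, never blocking on a
 * congested one.  Returns the number of queues that accepted writes;
 * a return of 0 is the "all queues congested" case, where pdflush
 * would nap until *any* queue exits write congestion. */

struct bdev_queue {
	int congested;   /* queue is in write congestion */
	long written;    /* pages written so far (for the demo) */
};

static int circulate(struct bdev_queue *queues, int n, long batch)
{
	int serviced = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (queues[i].congested)
			continue;        /* skip it; don't block */
		queues[i].written += batch;
		serviced++;
	}
	return serviced;   /* 0 => take a nap, wake on any un-congest */
}
```

Because no pass ever blocks on a single queue, one thread is enough to
keep every non-congested spindle busy, which is why the thread-pool
exhaustion and startup-latency problems disappear.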
Diffstat (limited to 'include/linux/buffer_head.h')
0 files changed, 0 insertions, 0 deletions
