| author | Andrew Morton <akpm@zip.com.au> | 2002-08-14 21:20:57 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@home.transmeta.com> | 2002-08-14 21:20:57 -0700 |
| commit | 9eb76ee2a6f64fe412bef315eccbb1dd63a203ae | |
| tree | 71651d8b58f95a04a53598107877bc169c42911b /include/linux/etherdevice.h | |
| parent | 823e0df87c01883c05b3ee0f1c1d109a56d22cd3 | |
[PATCH] batched addition of pages to the LRU
The patch goes through the various places that were calling
lru_cache_add() against bulk pages and batches them up.
Also, this whole patch series improves the behaviour of the system
under heavy writeback load. There is a reduction in page allocation
failures and some reduction in loss of interactivity due to page
allocators getting stuck on writeback from the VM (this is still bad,
though).
I think it's due to the change here in mpage_writepages(). That
function was originally unconditionally refiling written-back pages to
the head of the inactive list. The theory was that they should be
moved out of the way of page allocators, who would otherwise end up
waiting on them.
It appears that this simply had the effect of pushing dirty, unwritten
data closer to the tail of the inactive list, making things worse.
So instead, if the caller is (typically) balance_dirty_pages(), the
pages are left where they are on the LRU.
If the caller is PF_MEMALLOC then the pages *have* to be refiled. This
is because VM writeback is clustered along mapping->dirty_pages, and
it's almost certain that the pages which are being written are near the
tail of the LRU. If they were left there, page allocators would block
on them too soon. It would effectively become a synchronous write.
