author    Andrew Morton <akpm@zip.com.au>    2002-08-14 21:20:57 -0700
committer Linus Torvalds <torvalds@home.transmeta.com>    2002-08-14 21:20:57 -0700
commit    9eb76ee2a6f64fe412bef315eccbb1dd63a203ae
tree      71651d8b58f95a04a53598107877bc169c42911b
parent    823e0df87c01883c05b3ee0f1c1d109a56d22cd3
[PATCH] batched addition of pages to the LRU
The patch goes through the various places which were calling lru_cache_add() against bulk pages and batches them up.

Also, this whole patch series improves the behaviour of the system under heavy writeback load. There is a reduction in page allocation failures and some reduction in the loss of interactivity caused by page allocators getting stuck on writeback from the VM. (This is still bad, though.)

I think it's due to the change here in mpage_writepages(). That function was originally unconditionally refiling written-back pages to the head of the inactive list, the theory being that they should be moved out of the way of page allocators, who would otherwise end up waiting on them. It appears that this simply had the effect of pushing dirty, unwritten data closer to the tail of the inactive list, making things worse.

So instead, if the caller is (typically) balance_dirty_pages(), leave the pages where they are on the LRU. If the caller is PF_MEMALLOC then the pages *have* to be refiled. This is because VM writeback is clustered along mapping->dirty_pages, and it's almost certain that the pages being written are near the tail of the LRU. If they were left there, page allocators would block on them too soon, and the write would effectively become synchronous.
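To make the refiling decision concrete, here is a minimal sketch of the logic described above. It relies only on the PF_MEMALLOC test on current; the refile_written_page() helper is hypothetical and stands in for whatever rotation code mpage_writepages() in fs/mpage.c actually uses.

#include <linux/sched.h>	/* current, PF_MEMALLOC */
#include <linux/mm.h>

/*
 * Sketch only: decide whether a page that has just been queued for
 * writeback should be moved to the head of the inactive list.
 */
static void maybe_refile(struct page *page)
{
	if (current->flags & PF_MEMALLOC) {
		/*
		 * VM-initiated writeback: these pages sit near the tail
		 * of the LRU, so move them out of the way before page
		 * allocators block on them and the write effectively
		 * becomes synchronous.
		 */
		refile_written_page(page);	/* hypothetical helper */
	}
	/*
	 * Otherwise (typically balance_dirty_pages()): leave the page
	 * where it is on the LRU.
	 */
}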
Diffstat (limited to 'include')
-rw-r--r--    include/linux/pagemap.h    2
1 files changed, 2 insertions, 0 deletions
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b559ccd68520..69e214920908 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -58,6 +58,8 @@ extern struct page * read_cache_page(struct address_space *mapping,
 extern int add_to_page_cache(struct page *page,
 		struct address_space *mapping, unsigned long index);
+extern int add_to_page_cache_lru(struct page *page,
+		struct address_space *mapping, unsigned long index);
 extern void remove_from_page_cache(struct page *page);
 extern void __remove_from_page_cache(struct page *page);
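For context, a minimal sketch of what the newly declared add_to_page_cache_lru() helper might look like, assuming it simply combines add_to_page_cache() with an LRU insertion via lru_cache_add() (which this series batches through per-CPU pagevecs). The real definition lives in mm/filemap.c and may differ in detail.

#include <linux/pagemap.h>
#include <linux/swap.h>		/* lru_cache_add() */

/*
 * Sketch of the helper declared above: insert the page into the page
 * cache and, on success, put it on the LRU.  lru_cache_add() is the
 * batched entry point introduced by this patch series.
 */
int add_to_page_cache_lru(struct page *page,
		struct address_space *mapping, unsigned long index)
{
	int ret = add_to_page_cache(page, mapping, index);

	if (ret == 0)
		lru_cache_add(page);
	return ret;
}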