| author | Andrew Morton <akpm@zip.com.au> | 2002-05-19 02:22:24 -0700 |
|---|---|---|
| committer | Arnaldo Carvalho de Melo <acme@conectiva.com.br> | 2002-05-19 02:22:24 -0700 |
| commit | acb5f6f9bb66a409205a3a9fa6dffa98e8520d00 (patch) | |
| tree | b1cbf0d9227781e0d9f2267ca2261304fbc811b4 /include | |
| parent | 17a74e8800eb0f00a74b9c1d269483e4f9f22bc8 (diff) | |
[PATCH] writeback tuning
Tune up the VM-based writeback a bit.
- Always use the multipage clustered-writeback function from within
shrink_cache(), even if the page's mapping has a NULL ->vm_writeback(). So
clustered writeback is turned on for all address_spaces, not just ext2.
Subtle effect of this change: it is now the case that *all* writeback
proceeds along the mapping->dirty_pages list. The orderedness of the page
LRUs no longer has an impact on disk scheduling. So we only have one list
to keep well-sorted rather than two, and churning pages around on the LRU
will no longer damage write bandwidth - it's all up to the filesystem.
- Decrease the clustered writeback from 1024 pages(!) to 32 pages.
(1024 was a leftover from when this code was always dispatching writeback
to a pdflush thread).
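As a rough illustration of the clustering change (the function name and batching arithmetic below are invented for this sketch, not kernel code), dispatching writeback in fixed 32-page chunks rather than page-at-a-time means the number of write passes over a mapping scales like this:

```c
#include <assert.h>

/* Hypothetical sketch: cluster dirty pages into fixed-size batches.
 * The 32-page batch mirrors the new cluster size described above;
 * the name writeback_batches() is invented for illustration. */
#define CLUSTER_PAGES 32

/* Number of writeback passes needed to clean nr_dirty pages when
 * each pass submits up to CLUSTER_PAGES contiguous pages. */
static int writeback_batches(int nr_dirty)
{
    return (nr_dirty + CLUSTER_PAGES - 1) / CLUSTER_PAGES;
}
```

With the old 1024-page cluster a single pass could monopolise the mapping for megabytes of I/O; 32 pages keeps each pass short while still giving the block layer contiguous requests.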
- Fix wakeup_bdflush() so that it actually does write something (duh).
do_wp_page() needs to call balance_dirty_pages_ratelimited(), so we
throttle mmap page-dirtiers in the same way as write(2) page-dirtiers.
This may make wakeup_bdflush() obsolete, but it doesn't hurt.
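The ratelimiting idea can be sketched in miniature (names, the counter scheme, and the 1000-page threshold here are illustrative stand-ins, not the kernel's exact implementation): count page-dirtyings per process and only drop into the expensive balancing path every N pages.

```c
#include <assert.h>

/* Hypothetical model of balance_dirty_pages_ratelimited(): the caller
 * dirties pages freely, and every RATELIMIT_PAGES dirtyings we fall
 * through to the (simulated) balance_dirty_pages() throttle point. */
#define RATELIMIT_PAGES 1000

/* Returns how many times the throttle point would be reached after
 * dirtying pages_dirtied pages in a row. */
static int throttle_count(int pages_dirtied)
{
    int dirtied = 0, calls = 0;
    for (int i = 0; i < pages_dirtied; i++) {
        if (++dirtied >= RATELIMIT_PAGES) {
            dirtied = 0;
            calls++;    /* stand-in for calling balance_dirty_pages() */
        }
    }
    return calls;
}
```

The point of the counter is that write(2) and mmap dirtiers hit the same throttle, but the common fast path is a single increment rather than a global dirty-memory check on every page.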
- Convert generic_vm_writeback() to directly call ->writeback_mapping(),
rather than going through writeback_single_inode(). This prevents memory
allocators from blocking on the inode's I_LOCK. But it does mean that two
processes can be writing pages from the same mapping at the same time. If
filesystems care about this (for layout reasons) then they should serialise
in their ->writeback_mapping a_op.
This means that memory-allocators will writeback only pages, not pages
and inodes. There are no locks in that writeback path (except for request
queue exhaustion). Reduces memory allocation latency.
- Implement new background_writeback function, which when kicked off
will perform writeback until dirty memory falls below the background
threshold.
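The shape of that function is a simple drain loop. A minimal model (the threshold, chunk size, and function name below are invented for illustration; the real function works against global dirty-memory accounting):

```c
#include <assert.h>

/* Hypothetical model of background_writeback(): once kicked, keep
 * writing fixed-size chunks until dirty memory falls below the
 * background threshold.  All numbers are illustrative only. */
#define BACKGROUND_THRESH 100   /* pages: stop writing below this */
#define WRITE_CHUNK       32    /* pages submitted per pass */

/* Returns the total number of pages "written" to bring dirty_pages
 * down to the background threshold. */
static int background_writeback_sketch(int dirty_pages)
{
    int written = 0;
    while (dirty_pages > BACKGROUND_THRESH) {
        int chunk = dirty_pages - BACKGROUND_THRESH;
        if (chunk > WRITE_CHUNK)
            chunk = WRITE_CHUNK;
        dirty_pages -= chunk;
        written += chunk;
    }
    return written;
}
```

Because the loop exits as soon as the threshold is reached, background writeback does bounded work per kick instead of flushing everything.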
- Put written-back pages onto the remote end of the page LRU. It
does this in the slow-and-stupid way at present. pagemap_lru_lock
stress-relief is planned...
- Remove the funny writeback_unused_inodes() stuff from prune_icache().
Writeback from wakeup_bdflush() and the `kupdate` function now just
naturally cleanses the oldest inodes so we don't need to do anything
there.
- Dirty memory balancing is still using magic numbers: "after you
dirtied your 1,000th page, go write 1,500". Obviously, this needs
more work.
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/writeback.h | 10 |
1 file changed, 1 insertion(+), 9 deletions(-)
```diff
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index a089dd009fc1..e345205b6d86 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -46,17 +46,9 @@ static inline void wait_on_inode(struct inode *inode)
 /*
  * mm/page-writeback.c
  */
-/*
- * How much data to write out at a time in various places. This isn't
- * really very important - it's just here to prevent any thread from
- * locking an inode for too long and blocking other threads which wish
- * to write the same file for allocation throttling purposes.
- */
-#define WRITEOUT_PAGES ((4096 * 1024) / PAGE_CACHE_SIZE)
-
 void balance_dirty_pages(struct address_space *mapping);
 void balance_dirty_pages_ratelimited(struct address_space *mapping);
-int pdflush_flush(unsigned long nr_pages);
 int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0);
+int writeback_mapping(struct address_space *mapping, int *nr_to_write);
 
 #endif /* WRITEBACK_H */
```
