| author | Andrew Morton <akpm@digeo.com> | 2002-09-25 07:20:18 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@home.transmeta.com> | 2002-09-25 07:20:18 -0700 |
| commit | b65bbded3935b896d55cb6b3e420a085d3089368 | |
| tree | 47bb77cea1bc9cec586d59dc79b9f6274334cc18 /include | |
| parent | dfdacf598759e7027914d50a77e8cd3a98bf7481 | |
[PATCH] slab reclaim balancing
A patch from Ed Tomlinson which improves the way the kernel reclaims
slab objects.
The theory is: a cached object's usefulness is measured in terms of the
number of disk seeks it saves. Furthermore, we assume that one dentry
or inode saves as many seeks as one pagecache page.
So we reap slab objects at the same rate as we reclaim pages. For each
1% of reclaimed pagecache we reclaim 1% of slab. (Actually, we _scan_
1% of slab for each 1% of scanned pages).
Furthermore, we assume that one swapout costs twice as many seeks as one
pagecache page, and twice as many seeks as one slab object. So we
double the pressure on slab when anonymous pages are being considered
for eviction.
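As a rough sketch of that arithmetic (the function and parameter names
below are illustrative, not this patch's actual code):

	/*
	 * Illustrative sketch of the balancing described above.
	 * All names here are hypothetical.
	 */
	static int slab_shrink_ratio(unsigned long scanned_pages,
				     unsigned long total_pages,
				     int scanning_anon)
	{
		/* Scan slab in the same proportion as pages: for each
		 * 1% of pages scanned, scan 1% of slab. */
		int ratio = (scanned_pages * 100) / (total_pages + 1);

		/* One swapout is assumed to cost twice the seeks of a
		 * pagecache page or a slab object, so double the slab
		 * pressure when anonymous pages are being considered. */
		if (scanning_anon)
			ratio *= 2;

		return ratio;
	}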
The code works nicely and smoothly. Possibly it does not shrink slab
hard enough, but that is now very easy to tune up and down. It is just:
	ratio *= 3;
in shrink_caches().
Slab caches no longer hold onto completely empty pages. Instead, pages
are freed as soon as they have zero objects. This is possibly a
performance hit for slabs which have constructors, but that is doubtful.
Most allocations after a batch of frees are satisfied from
internally-fragmented pages, and by the time slab gets back to using
the wholly-empty pages they will be cache-cold. slab would be better
off requesting a new, cache-warm page and reconstructing the objects
therein (once we have the per-cpu hot-page allocator in place; that
work is happening).
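A minimal model of the new free path (hypothetical types and names;
the real code lives in mm/slab.c):

	/*
	 * Hypothetical, simplified model: the page backing a slab is
	 * handed back to the allocator the moment its last object is
	 * freed, rather than being parked on a fully-free list.
	 */
	struct slab_page {
		unsigned int inuse;	/* live objects on this page */
		void *mem;		/* the backing page */
	};

	static void slab_object_freed(struct slab_page *sp,
				      void (*free_page_fn)(void *))
	{
		if (--sp->inuse == 0)
			free_page_fn(sp->mem);	/* zero objects: free now */
	}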
As a consequence of the above, kmem_cache_shrink() is now unused. No
great loss there - the serialising effect of kmem_cache_shrink and its
semaphore in front of page reclaim was measurably bad.
Still todo:
- batch up the shrinking so we don't call into prune_dcache and
friends at high frequency asking for a tiny number of objects (a
sketch of this appears after the list).
- Maybe expose the shrink ratio via a tunable.
- clean up slab.c
- highmem page reclaim in prune_icache: highmem pages can pin
inodes.
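For the batching item, one possible shape (purely a sketch of the todo
above, not code in this patch; prune_dcache() is the existing pruner,
everything else is hypothetical):

	extern void prune_dcache(int);

	#define SHRINK_BATCH	128	/* hypothetical threshold */

	static int deferred_objects;	/* work carried between calls */

	static void shrink_dcache_batched(int requested_objects)
	{
		deferred_objects += requested_objects;
		if (deferred_objects < SHRINK_BATCH)
			return;		/* too little work yet; defer */
		prune_dcache(deferred_objects);
		deferred_objects = 0;
	}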
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/dcache.h | 2 |
| -rw-r--r-- | include/linux/mm.h | 1 |
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index f99a03f17e60..a64a657545fe 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -186,7 +186,7 @@ extern int shrink_dcache_memory(int, unsigned int);
 extern void prune_dcache(int);
 
 /* icache memory management (defined in linux/fs/inode.c) */
-extern int shrink_icache_memory(int, int);
+extern int shrink_icache_memory(int, unsigned int);
 extern void prune_icache(int);
 
 /* quota cache memory management (defined in linux/fs/dquot.c) */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c63e4947387f..482db998aca7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -524,6 +524,7 @@ extern struct vm_area_struct *find_extend_vma(struct mm_struct *mm, unsigned lon
 extern struct page * vmalloc_to_page(void *addr);
 
 extern unsigned long get_page_cache_size(void);
+extern unsigned int nr_used_zone_pages(void);
 
 #endif /* __KERNEL__ */
