| author | Andrew Morton <akpm@digeo.com> | 2002-11-21 19:32:34 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@penguin.transmeta.com> | 2002-11-21 19:32:34 -0800 |
| commit | 36fb7f8459cc42eca202f0ad7b2d051359406d57 (patch) | |
| tree | 27aecc5515f089762fe40f6301b442a584332c32 /include/linux | |
| parent | fee2b68dc9746548133b059ef83dd633890022ef (diff) | |
[PATCH] handle zones which are full of unreclaimable pages
This patch is a general solution to the situation where a zone is full
of pinned pages.
This can come about if:
a) Someone has allocated all of ZONE_DMA for IO buffers
b) Some application is mlocking some memory and a zone ends up full
of mlocked pages (can happen on a 1G ia32 system)
c) All of ZONE_HIGHMEM is pinned in hugetlb pages (can happen on 1G
machines)
We'll currently burn 10% of CPU in kswapd when this happens, although
it is quite hard to trigger.
The algorithm is:
- If page reclaim has scanned 2 * the total number of pages in the
zone and there have been no pages freed in that zone then mark the
zone as "all unreclaimable".
- When a zone is "all unreclaimable" page reclaim almost ignores it.
We will perform a "light" scan at DEF_PRIORITY (typically 1/4096'th of
the zone, or 64 pages) and then forget about the zone.
- When a batch of pages is freed into the zone, clear its "all
unreclaimable" state and start full scanning again. The assumption
is that some state change has occurred which will make reclaim
successful again.
So if a "light scan" actually frees some pages, the zone will revert to
normal state immediately.
So we're effectively putting the zone into "low power" mode, and lightly
polling it to see if something has changed.
The code works OK, but is quite hard to test - I mainly tested it by
pinning all highmem in hugetlb pages.
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/mmzone.h | 3 |
1 files changed, 3 insertions, 0 deletions
```diff
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3e004bc2ff63..f286bf9aeefd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -84,6 +84,8 @@ struct zone {
 	atomic_t refill_counter;
 	unsigned long nr_active;
 	unsigned long nr_inactive;
+	int all_unreclaimable;		/* All pages pinned */
+	unsigned long pages_scanned;	/* since last reclaim */
 
 	ZONE_PADDING(_pad2_)
@@ -203,6 +205,7 @@ memclass(struct zone *pgzone, struct zone *classzone)
 void get_zone_counts(unsigned long *active, unsigned long *inactive);
 void build_all_zonelists(void);
+void wakeup_kswapd(struct zone *zone);
 
 /**
  * for_each_pgdat - helper macro to iterate over all nodes
```
