| author | Andrew Morton <akpm@osdl.org> | 2003-12-29 05:44:34 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@home.osdl.org> | 2003-12-29 05:44:34 -0800 |
| commit | 9c8c94922e75afa04ee2d623e27b8ed0dd0ae8f3 (patch) | |
| tree | 060879a6218a3ea2336a09a3e444b7c2e9d3e413 | |
| parent | 6f2220203d88732a4b04599e8513f6e10fcc9660 (diff) | |
[PATCH] vmscan: reset refill_counter after refilling the inactive list
zone->refill_counter is only there to provide decent levels of work batching: don't call refill_inactive_zone() just for a couple of pages.

But the logic in there allows the counter to build up to huge values, and it can overflow (go negative), which disables refilling altogether until it wraps positive again.

Just reset it to zero whenever we decide to do some refilling.
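To see why a wrapped counter stalls refilling, here is a minimal userspace sketch; the threshold value and the exact shape of the gate are assumed for illustration rather than copied from mm/vmscan.c:

```c
/*
 * Sketch only (the threshold and the gate's exact form are assumed, not
 * copied from mm/vmscan.c): a signed 32-bit counter that has wrapped
 * negative can no longer pass a "> threshold" batching check, so no
 * refilling happens until further increments wrap it positive again.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32	/* illustrative batching threshold */

static int should_refill(int refill_counter)
{
	/* only bother refilling once enough work has accumulated */
	return refill_counter > SWAP_CLUSTER_MAX;
}

int main(void)
{
	int healthy = 4 * SWAP_CLUSTER_MAX;	/* normal accumulated value */
	int wrapped = -2000000000;		/* value after overflowing  */

	printf("counter=%11d -> refill? %d\n", healthy, should_refill(healthy));
	printf("counter=%11d -> refill? %d\n", wrapped, should_refill(wrapped));
	return 0;
}
```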
| -rw-r--r-- | mm/vmscan.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 32fd381325b9..b8594827bbac 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -779,7 +779,7 @@ shrink_zone(struct zone *zone, int max_scan, unsigned int gfp_mask,
                count = atomic_read(&zone->refill_counter);
                if (count > SWAP_CLUSTER_MAX * 4)
                        count = SWAP_CLUSTER_MAX * 4;
-               atomic_sub(count, &zone->refill_counter);
+               atomic_set(&zone->refill_counter, 0);
                refill_inactive_zone(zone, count, ps, priority);
        }
        return shrink_cache(nr_pages, zone, gfp_mask,
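Below is a small userspace sketch (assumed values, not the kernel code) contrasting the two update schemes: both do the same amount of refill work per pass, but the old capped atomic_sub() leaves an ever-growing residue in the counter, while the patched atomic_set() keeps it bounded at zero.

```c
/*
 * Userspace sketch (assumed values; not the kernel code): contrasts the old
 * capped-subtract update with the patched reset-to-zero update for a
 * work-batching counter like zone->refill_counter.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32			/* illustrative batch unit */
#define BATCH_CAP (SWAP_CLUSTER_MAX * 4)	/* cap applied per refill pass */

int main(void)
{
	int refill_old = 0;	/* old scheme: subtract only the capped count */
	int refill_new = 0;	/* new scheme: reset to zero after refilling  */
	long worked_old = 0, worked_new = 0;
	int i;

	for (i = 0; i < 100; i++) {
		int count;

		/* pretend each pass queues far more work than one batch */
		refill_old += 1000;
		refill_new += 1000;

		/* old code: cap the batch and subtract only that much, so the
		 * leftover keeps growing and, in a 32-bit atomic_t, would
		 * eventually wrap negative */
		count = refill_old > BATCH_CAP ? BATCH_CAP : refill_old;
		refill_old -= count;
		worked_old += count;

		/* patched code: cap the batch, then reset the counter so it
		 * can never build up or overflow */
		count = refill_new > BATCH_CAP ? BATCH_CAP : refill_new;
		refill_new = 0;
		worked_new += count;
	}

	printf("old: did %ld pages of refill work, counter left at %d\n",
	       worked_old, refill_old);
	printf("new: did %ld pages of refill work, counter left at %d\n",
	       worked_new, refill_new);
	return 0;
}
```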
