| field | value | date |
|---|---|---|
| author | Andrew Morton <akpm@osdl.org> | 2004-03-11 16:25:36 -0800 |
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2004-03-11 16:25:36 -0800 |
| commit | fb5b4abea5de73406d63f60aeb455edccff8eb6e | |
| tree | 84e6f12ab965baa8a227d269bece57202bd96205 | |
| parent | 07a257798ff40905f6cdd71f300cb2fd1f6625f6 | |
[PATCH] vm: balance inactive zone refill rates
The current refill logic in refill_inactive_zone() takes an arbitrarily large
number of pages and chops it down to SWAP_CLUSTER_MAX*4, regardless of the
size of the zone.
This has the effect of reducing the amount of refilling of large zones
proportionately much more than of small zones.
We made this change in May 2003 and I'm damned if I remember why.  Let's put
it back so we don't truncate the refill count and see what happens.
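To see why the old clamp penalizes large zones disproportionately, here is a minimal user-space sketch (not kernel code; it assumes SWAP_CLUSTER_MAX is 32 and uses made-up page counts purely for illustration):

```c
/* Hypothetical illustration of the pre-patch clamp; not kernel code.
 * Assumes SWAP_CLUSTER_MAX == 32, so the old cap was 32 * 4 = 128 pages.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32

static unsigned long old_refill(unsigned long count)
{
	/* Pre-patch behaviour: truncate the refill batch to 4 clusters. */
	if (count > SWAP_CLUSTER_MAX * 4)
		count = SWAP_CLUSTER_MAX * 4;
	return count;
}

int main(void)
{
	/* A small zone asking for 150 pages loses ~15% of its refill;
	 * a large zone asking for 4000 pages loses ~97%. */
	printf("small zone: %lu of 150\n", old_refill(150));
	printf("large zone: %lu of 4000\n", old_refill(4000));
	return 0;
}
```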
| mode | file | changes |
|---|---|---|
| -rw-r--r-- | mm/vmscan.c | 9 |

1 file changed, 1 insertion, 8 deletions
```diff
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 65824df165e3..7768aca74d1d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -756,17 +756,10 @@ shrink_zone(struct zone *zone, int max_scan, unsigned int gfp_mask,
 	 */
 	ratio = (unsigned long)SWAP_CLUSTER_MAX * zone->nr_active /
 				((zone->nr_inactive | 1) * 2);
+	atomic_add(ratio+1, &zone->nr_scan_active);
 	count = atomic_read(&zone->nr_scan_active);
 	if (count >= SWAP_CLUSTER_MAX) {
-		/*
-		 * Don't try to bring down too many pages in one attempt.
-		 * If this fails, the caller will increase `priority' and
-		 * we'll try again, with an increased chance of reclaiming
-		 * mapped memory.
-		 */
-		if (count > SWAP_CLUSTER_MAX * 4)
-			count = SWAP_CLUSTER_MAX * 4;
 		atomic_set(&zone->nr_scan_active, 0);
 		refill_inactive_zone(zone, count, ps);
 	}
```
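For context, a hedged user-space model of the accumulate-and-batch flow after this change; the names mirror the kernel's zone fields, but the code below is a standalone illustration, not the kernel implementation:

```c
/* Sketch of the post-patch logic: accumulate a scan quota per call and,
 * once it reaches one cluster, hand the whole (untruncated) count to the
 * refill path.  Assumes SWAP_CLUSTER_MAX == 32; refill_inactive_zone() is
 * stubbed out with a printf. */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32

static unsigned long nr_scan_active;	/* models zone->nr_scan_active */

static void shrink_zone_step(unsigned long ratio)
{
	nr_scan_active += ratio + 1;
	if (nr_scan_active >= SWAP_CLUSTER_MAX) {
		unsigned long count = nr_scan_active;

		nr_scan_active = 0;
		/* refill_inactive_zone(zone, count, ps) would run here;
		 * count is no longer clamped to SWAP_CLUSTER_MAX * 4. */
		printf("refill %lu pages\n", count);
	}
}

int main(void)
{
	shrink_zone_step(10);	/* accumulates 11, below the threshold */
	shrink_zone_step(500);	/* triggers a single refill of 512 pages */
	return 0;
}
```

The point of the change is visible in the second call: a large zone that has accumulated a big quota now refills it in one pass instead of being cut back to 4 clusters.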
