| | | |
|---|---|---|
| author | Andrew Morton <akpm@zip.com.au> | 2002-08-27 21:03:50 -0700 |
| committer | Linus Torvalds <torvalds@penguin.transmeta.com> | 2002-08-27 21:03:50 -0700 |
| commit | a8382cf1153689a1caac0e707e951e7869bb92e1 | |
| tree | 71e2722fd8fd5e08fb7862171f8fdb1443ce31c6 /include/linux/mmzone.h | |
| parent | e6f0e61d9ed94134f57bcf6c72b81848b9d3c2fe | |
[PATCH] per-zone LRU locking
Now that the LRUs are per-zone, make their lock per-zone as well.
In this patch the per-zone lock shares a cacheline with the zone's
buddy-list lock, which is very bad. Some groundwork is needed to fix
this properly.
This change is expected to be a significant win on NUMA, where most
page allocation comes from the local node's zones.
For NUMA, the `struct zone' itself should really be placed in that
node's memory, which is something the platform owners should look at.
However, the internode cache will help here.
Per-node kswapd would make heaps of sense too.
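
To make the locking change concrete, here is a minimal sketch (not taken from the patch: the function name is hypothetical, and it assumes a page_zone()-style helper that maps a page to its owning zone) of what an LRU operation looks like once callers take the zone's own lock rather than a single global LRU lock:

```c
/*
 * Sketch only: per-zone LRU locking as seen from a caller.  Real users
 * of zone->lru_lock in this era would live in places like mm/swap.c
 * and mm/vmscan.c; this helper exists purely for illustration.
 */
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

static void sketch_add_page_to_active_list(struct page *page)
{
	struct zone *zone = page_zone(page);	/* zone that owns this page */

	spin_lock(&zone->lru_lock);		/* was: one global LRU lock */
	list_add(&page->lru, &zone->active_list);
	spin_unlock(&zone->lru_lock);
}
```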
Diffstat (limited to 'include/linux/mmzone.h')
| | | |
|---|---|---|
| -rw-r--r-- | include/linux/mmzone.h | 1 |

1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 928000348e6b..f62e36b902a2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -44,6 +44,7 @@ struct zone {
 	unsigned long		pages_min, pages_low, pages_high;
 	int			need_balance;
 
+	spinlock_t		lru_lock;
 	struct list_head	active_list;
 	struct list_head	inactive_list;
 	atomic_t		refill_counter;
```
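
The cacheline problem called out in the message comes from the new lru_lock sitting next to the zone's buddy-list lock (the `lock' field near the top of struct zone, which is not visible in this hunk and is assumed here from the struct layout of that era). As a rough sketch of the kind of groundwork hinted at, and only one possible fix rather than the actual follow-up patch, the two locks can be pushed onto separate cachelines:

```c
/*
 * Illustration only: keep the buddy (free list) lock and the LRU lock
 * on different cachelines so that allocator traffic and LRU traffic do
 * not bounce the same line between CPUs.  Field names besides those in
 * the hunk above are assumptions.
 */
#include <linux/cache.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct zone_sketch {
	spinlock_t		lock;		/* buddy-list lock */
	unsigned long		free_pages;
	unsigned long		pages_min, pages_low, pages_high;
	int			need_balance;

	/* start the LRU state on a fresh cacheline */
	spinlock_t		lru_lock ____cacheline_aligned_in_smp;
	struct list_head	active_list;
	struct list_head	inactive_list;
	atomic_t		refill_counter;
};
```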
