author	Andrew Morton <akpm@zip.com.au>	2002-08-14 21:21:05 -0700
committer	Linus Torvalds <torvalds@home.transmeta.com>	2002-08-14 21:21:05 -0700
commit	aaba9265318483297267400fbfce1c399b3ac018 (patch)
tree	bd6521370529df2797969711816ad0a1272977b4 /include
parent	008f707cb94696398bac6e5b5050b3bfd0ddf054 (diff)
[PATCH] make pagemap_lru_lock irq-safe
It is expensive for a CPU to take an interrupt while holding the page LRU lock, because other CPUs will pile up on the lock while the interrupt runs. Disabling interrupts while holding the lock reduces contention by an additional 30% on 4-way. This is when the only source of interrupts is disk completion. The improvement will be higher with more CPUs, and it will be higher if there is networking happening.

The maximum hold time of this lock is 17 microseconds on a 500 MHz PIII, which is well inside the kernel's maximum interrupt latency (which was 100 usecs when I last looked, a year ago).

This optimisation is not needed on uniprocessor, but the patch disables IRQs while holding pagemap_lru_lock anyway, so it becomes an irq-safe spinlock, and pages can be moved from the LRU in interrupt context.

pagemap_lru_lock has been renamed to _pagemap_lru_lock to pick up any missed uses, and to reliably break any out-of-tree patches which may be using the old semantics.
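To make the described locking pattern concrete, here is a minimal sketch in kernel C. The helper functions and their bodies are illustrative only (they are not from this patch); the primitives are the standard spin_lock_irq/spin_lock_irqsave family.

	#include <linux/spinlock.h>
	#include <linux/mm.h>

	extern spinlock_t _pagemap_lru_lock;

	/*
	 * Process context: disabling IRQs while the lock is held means
	 * an interrupt cannot fire on this CPU mid-critical-section, so
	 * other CPUs never spin on the lock for the duration of a handler.
	 */
	static void lru_move_page(struct page *page)
	{
		spin_lock_irq(&_pagemap_lru_lock);
		/* ... manipulate the LRU lists ... */
		spin_unlock_irq(&_pagemap_lru_lock);
	}

	/*
	 * Interrupt context: because the lock is now irq-safe, a handler
	 * may also take it, saving and restoring the interrupt state.
	 */
	static void lru_remove_page_irq(struct page *page)
	{
		unsigned long flags;

		spin_lock_irqsave(&_pagemap_lru_lock, flags);
		/* ... remove the page from the LRU ... */
		spin_unlock_irqrestore(&_pagemap_lru_lock, flags);
	}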
Diffstat (limited to 'include')
-rw-r--r--	include/linux/swap.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8dbd9d7e401d..e09e96170182 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -211,7 +211,7 @@ extern struct swap_list_t swap_list;
 asmlinkage long sys_swapoff(const char *);
 asmlinkage long sys_swapon(const char *, int);
-extern spinlock_t pagemap_lru_lock;
+extern spinlock_t _pagemap_lru_lock;
 extern void FASTCALL(mark_page_accessed(struct page *));
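For context, a call site elsewhere in the tree would change along these lines under the new semantics (a sketch, not part of this hunk):

-	spin_lock(&pagemap_lru_lock);
+	spin_lock_irq(&_pagemap_lru_lock);
 	/* ... LRU manipulation ... */
-	spin_unlock(&pagemap_lru_lock);
+	spin_unlock_irq(&_pagemap_lru_lock);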