author:    Andrew Morton <akpm@osdl.org>             2003-07-10 10:02:50 -0700
committer: Linus Torvalds <torvalds@home.osdl.org>   2003-07-10 10:02:50 -0700
commit:    91b79ba7bb2f3afbd14f7e711ffbe9cd4d4b05a8
tree:      778bab0b5d874d77851afcedc11ad7de17497a92   /include/linux
parent:    679c40a86efa423cafa636365556347c2f4b1f5c
[PATCH] separate locking for vfsmounts
From: Maneesh Soni <maneesh@in.ibm.com>
During path walking we do follow_mount() or follow_down(), which use
dcache_lock for serialisation. vfsmount-related operations also use
dcache_lock for all updates. I think we can use a separate lock for
vfsmount-related work and improve path walking.

The following two patches do exactly that. The first replaces
dcache_lock with a new vfsmount_lock in namespace.c; the lock is
local to namespace.c and is not required outside it. The second patch
uses RCU to get a lock-free lookup_mnt(). The patches are quite simple
and straightforward.
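
For illustration, here is a minimal sketch of what the lock split means
for lookup_mnt(). This is simplified, not the verbatim patched source:
the hash function and hashtable setup are elided.

    struct vfsmount *lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
    {
            struct list_head *head = mount_hashtable + hash(mnt, dentry);
            struct list_head *tmp;
            struct vfsmount *found = NULL;

            /* was spin_lock(&dcache_lock); readers of the mount hash
             * now serialise on the new, far less contended lock */
            spin_lock(&vfsmount_lock);
            list_for_each(tmp, head) {
                    struct vfsmount *p = list_entry(tmp, struct vfsmount, mnt_hash);
                    if (p->mnt_parent == mnt && p->mnt_mountpoint == dentry) {
                            found = mntget(p);  /* pin before dropping the lock */
                            break;
                    }
            }
            spin_unlock(&vfsmount_lock);
            return found;
    }

dcache_lock is then left to protect only dentry state, so dcache-heavy
work no longer contends with mount-table readers.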
The lockmeter results show reduced contention and fewer lock acquisitions
for dcache_lock while running dcachebench* on a 4-way SMP box:
SPINLOCKS            HOLD              WAIT
  UTIL  CON      MEAN(  MAX )     MEAN(  MAX )(% CPU)      TOTAL  NOWAIT  SPIN  RJECT  NAME

baselkm-2569:
 20.7% 20.9%    0.5us( 146us)    2.9us( 144us)(0.81%)   31590840   79.1% 20.9%     0%  dcache_lock

mntlkm-2569:
 14.3% 13.6%    0.4us( 170us)    2.9us( 187us)(0.42%)   23071746   86.4% 13.6%     0%  dcache_lock
We get more than 8% improvement on a 4-way SMP box and 44% improvement
on a 16-way NUMAQ while running dcachebench*:
                          Average (usecs/iteration)    Std. Deviation
                          (lower is better)

4-way SMP
  2.5.69                  15739.3                      470.90
  2.5.69-mnt              14459.6                      298.51

16-way NUMAQ
  2.5.69                  120426.5                     363.78
  2.5.69-mnt              63225.8                      427.60
*dcachebench is a microbenchmark written by Bill Hartner and is available at
http://www-124.ibm.com/developerworks/opensource/linuxperf/dcachebench/dcachebench.html
vfsmount_lock.patch
-------------------
- Patch for replacing dcache_lock with the new vfsmount_lock for all
  mount-related operations. This removes the need to take dcache_lock
  while doing follow_mount or follow_down operations in path walking
  (see the sketch below).
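
As a sketch of what this means on the path-walking side, follow_mount()
can now hop across mountpoints without touching dcache_lock at all; the
serialisation lives entirely inside lookup_mnt(). Simplified; field and
helper names follow the 2.5 sources:

    static void follow_mount(struct vfsmount **mnt, struct dentry **dentry)
    {
            /* Step from a mountpoint dentry to the root of whatever is
             * mounted on it; loop to handle stacked mounts. */
            while (d_mountpoint(*dentry)) {
                    struct vfsmount *mounted = lookup_mnt(*mnt, *dentry);
                    if (!mounted)
                            break;
                    mntput(*mnt);           /* drop the old vfsmount ref */
                    *mnt = mounted;         /* lookup_mnt() returned it pinned */
                    dput(*dentry);
                    *dentry = dget(mounted->mnt_root);
            }
    }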
I re-ran dcachebench with 2.5.70 as the base on a 16-way NUMAQ box:
                          Average (usecs/iteration)    Std. Deviation
                          (lower is better)

16-way NUMAQ
  2.5.70                  120710.9                     230.67
  + vfsmount_lock.patch   65209.6                      242.97
  + lookup_mnt-rcu.patch  64042.3                      416.61
So the lock splitting alone (vfsmount_lock.patch) gives nearly all of
the benefit.
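
For completeness, a rough sketch of the read side after
lookup_mnt-rcu.patch. This assumes the writers (mount/umount) update the
hash chains with RCU-safe list primitives while holding vfsmount_lock,
and it uses the generic list_for_each_entry_rcu() helper for brevity
rather than the patch's exact traversal:

    struct vfsmount *lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
    {
            struct list_head *head = mount_hashtable + hash(mnt, dentry);
            struct vfsmount *p, *found = NULL;

            rcu_read_lock();        /* no spinlock at all on the read side */
            list_for_each_entry_rcu(p, head, mnt_hash) {
                    if (p->mnt_parent == mnt && p->mnt_mountpoint == dentry) {
                            found = mntget(p);
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }

Only the read path becomes lock-free; attaching and detaching mounts
still takes vfsmount_lock on the writer side.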
Diffstat (limited to 'include/linux')

 include/linux/mount.h | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/mount.h b/include/linux/mount.h
index d6996e7c7310..61554dda78a8 100644
--- a/include/linux/mount.h
+++ b/include/linux/mount.h
@@ -54,6 +54,7 @@
 extern void free_vfsmnt(struct vfsmount *mnt);
 extern struct vfsmount *alloc_vfsmnt(const char *name);
 extern struct vfsmount *do_kern_mount(const char *fstype, int flags,
 	const char *name, void *data);
+extern spinlock_t vfsmount_lock;
 #endif
 #endif /* _LINUX_MOUNT_H */
