author	Andrew Morton <akpm@osdl.org>	2003-07-10 10:02:50 -0700
committer	Linus Torvalds <torvalds@home.osdl.org>	2003-07-10 10:02:50 -0700
commit	91b79ba7bb2f3afbd14f7e711ffbe9cd4d4b05a8 (patch)
tree	778bab0b5d874d77851afcedc11ad7de17497a92	/fs/proc/base.c
parent	679c40a86efa423cafa636365556347c2f4b1f5c (diff)
[PATCH] separate locking for vfsmounts
From: Maneesh Soni <maneesh@in.ibm.com>

While path walking we do follow_mount or follow_down, which use dcache_lock
for serialisation. vfsmount related operations also use dcache_lock for all
updates. I think we can use a separate lock for vfsmount related work and
improve path walking. The following two patches do that. The first one
replaces dcache_lock with a new vfsmount_lock in namespace.c; the lock is
local to namespace.c and is not required outside. The second patch uses RCU
to get a lock-free lookup_mnt(). The patches are quite simple and
straightforward.

The lockmeter results show reduced contention and fewer lock acquisitions
for dcache_lock while running dcachebench* on a 4-way SMP box:

SPINLOCKS        HOLD              WAIT
  UTIL   CON    MEAN(  MAX )   MEAN(  MAX )(% CPU)      TOTAL  NOWAIT  SPIN  RJECT  NAME
baselkm-2569:
 20.7%  20.9%   0.5us( 146us)  2.9us( 144us)(0.81%)  31590840   79.1% 20.9%    0%  dcache_lock
mntlkm-2569:
 14.3%  13.6%   0.4us( 170us)  2.9us( 187us)(0.42%)  23071746   86.4% 13.6%    0%  dcache_lock

We get more than 8% improvement on 4-way SMP and 44% improvement on 16-way
NUMAQ while running dcachebench*.

                          Average (usecs/iteration)   Std. Deviation
                          (lower is better)
 4-way SMP    2.5.69              15739.3                 470.90
              2.5.69-mnt          14459.6                 298.51
16-way NUMAQ  2.5.69             120426.5                 363.78
              2.5.69-mnt          63225.8                 427.60

*dcachebench is a microbenchmark written by Bill Hartner, available at
http://www-124.ibm.com/developerworks/opensource/linuxperf/dcachebench/dcachebench.html

vfsmount_lock.patch
-------------------
- Patch for replacing dcache_lock with the new vfsmount_lock for all
  mount-related operations. This removes the need to take dcache_lock while
  doing follow_mount or follow_down operations in path walking.

I re-ran dcachebench with 2.5.70 as the base on the 16-way NUMAQ box:

                                Average (usecs/iteration)   Std. Deviation
                                (lower is better)
16-way NUMAQ  2.5.70                  120710.9                 230.67
  + vfsmount_lock.patch                65209.6                 242.97
  + lookup_mnt-rcu.patch               64042.3                 416.61

So just the lock splitting (vfsmount_lock.patch) gives almost the same
benefit.
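For illustration, here is a minimal sketch of the lock split on the
namespace.c side, assuming a plain list walk instead of the real mount hash;
lookup_mnt_sketch() and the 'mounts' list head are hypothetical names used
only for this sketch, not the kernel's actual lookup_mnt():

#include <linux/spinlock.h>
#include <linux/mount.h>
#include <linux/dcache.h>
#include <linux/list.h>

/* Local to fs/namespace.c: serialises all vfsmount updates, so path walking
 * no longer needs dcache_lock for follow_mount()/follow_down(). */
static spinlock_t vfsmount_lock = SPIN_LOCK_UNLOCKED;

/* Sketch only: find the mount stacked on (mnt, dentry).  The real
 * lookup_mnt() walks a hash chain; a flat list keeps the example short. */
static struct vfsmount *lookup_mnt_sketch(struct list_head *mounts,
					  struct vfsmount *mnt,
					  struct dentry *dentry)
{
	struct vfsmount *p, *found = NULL;

	spin_lock(&vfsmount_lock);		/* was: spin_lock(&dcache_lock) */
	list_for_each_entry(p, mounts, mnt_list) {
		if (p->mnt_parent == mnt && p->mnt_mountpoint == dentry) {
			found = mntget(p);	/* pin before dropping the lock */
			break;
		}
	}
	spin_unlock(&vfsmount_lock);
	return found;
}

The second patch then replaces the spin_lock/spin_unlock pair in the lookup
path with RCU so that readers do not take the lock at all, while updates
still serialise on vfsmount_lock.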
Diffstat (limited to 'fs/proc/base.c')
-rw-r--r--	fs/proc/base.c	9
1 file changed, 5 insertions, 4 deletions
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 485ff692e87f..0a061cd0bb6f 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -307,20 +307,22 @@ static int proc_check_root(struct inode *inode)
 	base = dget(current->fs->root);
 	read_unlock(&current->fs->lock);
 
-	spin_lock(&dcache_lock);
+	spin_lock(&vfsmount_lock);
 	de = root;
 	mnt = vfsmnt;
 
 	while (vfsmnt != our_vfsmnt) {
-		if (vfsmnt == vfsmnt->mnt_parent)
+		if (vfsmnt == vfsmnt->mnt_parent) {
+			spin_unlock(&vfsmount_lock);
 			goto out;
+		}
 		de = vfsmnt->mnt_mountpoint;
 		vfsmnt = vfsmnt->mnt_parent;
 	}
+	spin_unlock(&vfsmount_lock);
 
 	if (!is_subdir(de, base))
 		goto out;
-	spin_unlock(&dcache_lock);
 
 exit:
 	dput(base);
@@ -329,7 +331,6 @@ exit:
 	mntput(mnt);
 	return res;
 out:
-	spin_unlock(&dcache_lock);
 	res = -EACCES;
 	goto exit;
 }
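For readability, here is how the walk reads with the first hunk applied
(assembled from the '+' and context lines above; declarations and the
fs->lock section are omitted):

	spin_lock(&vfsmount_lock);
	de = root;
	mnt = vfsmnt;

	while (vfsmnt != our_vfsmnt) {
		if (vfsmnt == vfsmnt->mnt_parent) {
			/* hit the root of the mount tree without meeting
			 * our_vfsmnt: drop the lock on this error path too */
			spin_unlock(&vfsmount_lock);
			goto out;
		}
		de = vfsmnt->mnt_mountpoint;
		vfsmnt = vfsmnt->mnt_parent;
	}
	spin_unlock(&vfsmount_lock);	/* only the mount walk needs the lock */

	if (!is_subdir(de, base))
		goto out;

With the unlock moved out of the out: label, every path that reaches out:
(or falls through to exit:) now does so with vfsmount_lock already released,
which is why the second hunk simply drops the old spin_unlock(&dcache_lock).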