| author | Andrew Morton <akpm@digeo.com> | 2003-04-12 13:00:40 -0700 |
|---|---|---|
| committer | James Bottomley <jejb@raven.il.steeleye.com> | 2003-04-12 13:00:40 -0700 |
| commit | c14c1a4417026e137246bbdf464d38b8671232fa (patch) | |
| tree | 3751743de748d2fb375645bc0eb4525c03a31560 /include/linux | |
| parent | c9db333ac1f16a11dfc8b5a84637f89d014f6316 (diff) | |
[PATCH] use spinlocking in the ext2 block allocator
From Alex Tomas and myself
ext2 currently uses lock_super() to protect the filesystem's in-core block
allocation bitmaps.
On big SMP machines the contention on that semaphore is causing high context
switch rates, large amounts of idle time and reduced throughput.
The context switch rate can also worsen block allocation: if several tasks
are trying to allocate blocks inside the same blockgroup for different files,
madly rotating between those tasks will cause the files' blocks to be
intermingled.
On SDET and dbench-style workloads (lots of tasks doing lots of allocation)
this patch (and a similar one for the inode allocator) improves throughput on
an 8-way by ~15%. On a 16-way NUMAQ the speedup is 150%.
What we do is to remove the lock altogether and just rely on the atomic
semantics of test_and_set_bit(): if the allocator sees a block was free it
runs test_and_set_bit(). If that fails, then we raced and the allocator will
go and look for another block.
Of course, we don't really use test_and_set_bit() itself, because that is not
endian-independent: ext2's on-disk bitmaps are little-endian regardless of the
host CPU. New atomic, endian-independent functions are introduced:
ext2_set_bit_atomic() and ext2_clear_bit_atomic(). We do not
need ext2_test_bit_atomic(), since even if ext2_test_bit() returns the wrong
result, that error will be detected and naturally handled in the subsequent
ext2_set_bit_atomic().
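As an illustration only (the function name and signature below are assumptions,
not the patch's actual fs/ext2/balloc.c code), the lock-free claim boils down
to a retry loop of this shape:

```c
#include <linux/spinlock.h>
#include <asm/bitops.h>

/*
 * Sketch: claim a free block without holding any filesystem-wide lock.
 * ext2_set_bit_atomic() returns the previous bit value, so a non-zero
 * return means another task raced us to this block and we keep looking.
 */
static int grab_block(spinlock_t *lock, unsigned long *bitmap,
		      unsigned int nbits, unsigned int goal)
{
	unsigned int bit;

	for (bit = goal; bit < nbits; bit++) {
		if (ext2_test_bit(bit, bitmap))
			continue;		/* looks busy, skip it */
		if (!ext2_set_bit_atomic(lock, bit, bitmap))
			return bit;		/* we own this block now */
		/* stale read: someone else grabbed it first, keep scanning */
	}
	return -1;
}
```

The worst case under contention is a little extra scanning, which is far
cheaper than serialising every allocator on lock_super().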
For little-endian machines the new atomic ops map directly onto
test_and_set_bit() and friends.
For big-endian machines we provide the architecture's implementation with the
address of a spinlock which can be taken around the nonatomic ext2_set_bit().
The spinlocks are hashed, and the hash is scaled according to the machine
size. Architectures are free to implement optimised versions of
ext2_set_bit_atomic() and ext2_clear_bit_atomic().
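For illustration, a hedged sketch of how an architecture might wire these up
(the exact macro bodies are assumptions modelled on the description above, not
quoted from the patch):

```c
#include <linux/spinlock.h>
#include <asm/byteorder.h>
#include <asm/bitops.h>

#ifdef __LITTLE_ENDIAN
/* Little-endian CPUs: the on-disk bitmap layout already matches the native
 * one, so the lock argument can simply be ignored and the existing atomic
 * bitops reused directly. */
#define ext2_set_bit_atomic(lock, nr, addr)	test_and_set_bit((nr), (addr))
#define ext2_clear_bit_atomic(lock, nr, addr)	test_and_clear_bit((nr), (addr))
#else
/* Big-endian CPUs: wrap the existing nonatomic, endian-correcting
 * ext2_set_bit()/ext2_clear_bit() in the hashed spinlock passed in. */
#define ext2_set_bit_atomic(lock, nr, addr)		\
	({						\
		int __ret;				\
		spin_lock(lock);			\
		__ret = ext2_set_bit((nr), (addr));	\
		spin_unlock(lock);			\
		__ret;					\
	})
#define ext2_clear_bit_atomic(lock, nr, addr)		\
	({						\
		int __ret;				\
		spin_lock(lock);			\
		__ret = ext2_clear_bit((nr), (addr));	\
		spin_unlock(lock);			\
		__ret;					\
	})
#endif
```

On little-endian machines the lock argument is simply unused, so the common
case pays no locking cost at all.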
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/ext2_fs_sb.h | 5 |
1 files changed, 5 insertions, 0 deletions
```diff
diff --git a/include/linux/ext2_fs_sb.h b/include/linux/ext2_fs_sb.h
index 3c07d4ecf898..f6139acdac5c 100644
--- a/include/linux/ext2_fs_sb.h
+++ b/include/linux/ext2_fs_sb.h
@@ -16,6 +16,9 @@
 #ifndef _LINUX_EXT2_FS_SB
 #define _LINUX_EXT2_FS_SB
 
+#include <linux/blockgroup_lock.h>
+#include <linux/percpu_counter.h>
+
 /*
  * second extended-fs super-block data in memory
  */
@@ -45,6 +48,8 @@ struct ext2_sb_info {
 	u32	s_next_generation;
 	unsigned long s_dir_count;
 	u8 *s_debts;
+	struct percpu_counter s_freeblocks_counter;
+	struct blockgroup_lock s_blockgroup_lock;
 };
 
 #endif	/* _LINUX_EXT2_FS_SB */
```
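For reference, the new s_blockgroup_lock field holds the hashed locks described
above. A sketch of what the new <linux/blockgroup_lock.h> might provide (the
lock counts and the sb_bgl_lock() helper shown here are assumptions based on
the changelog, not quoted from the header):

```c
#include <linux/spinlock.h>
#include <linux/cache.h>

/* Number of hashed locks, scaled with machine size (values assumed). */
#if NR_CPUS >= 32
#define NR_BG_LOCKS	128
#elif NR_CPUS >= 16
#define NR_BG_LOCKS	64
#else
#define NR_BG_LOCKS	8
#endif

/* One spinlock per cacheline to avoid false sharing between CPUs. */
struct bgl_lock {
	spinlock_t lock;
} ____cacheline_aligned_in_smp;

struct blockgroup_lock {
	struct bgl_lock locks[NR_BG_LOCKS];
};

static inline void bgl_lock_init(struct blockgroup_lock *bgl)
{
	int i;

	for (i = 0; i < NR_BG_LOCKS; i++)
		spin_lock_init(&bgl->locks[i].lock);
}

/* Hash a block group number onto one of the spinlocks (helper assumed). */
#define sb_bgl_lock(sbi, block_group) \
	(&(sbi)->s_blockgroup_lock.locks[(block_group) & (NR_BG_LOCKS - 1)].lock)
```

Because different block groups hash to different locks, allocations in
different groups can still proceed in parallel even on big-endian machines.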
