| field | value | date |
|---|---|---|
| author | Ingo Molnar <mingo@elte.hu> | 2005-03-28 03:51:53 -0800 |
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2005-03-28 03:51:53 -0800 |
| commit | 7f03bb0f68caef3a6b4f79e22c80c89b8fff6c41 (patch) | |
| tree | c13ec376c1ce46eec8bb77c5b418678e45638340 /kernel | |
| parent | fe3ae97549bc8417f9fc25b198fef36f6448b183 (diff) | |
[PATCH] break_lock fix
lock->break_lock is set when a lock is contended, but cleared only in
cond_resched_lock. Users of need_lockbreak (journal_commit_transaction,
copy_pte_range, unmap_vmas) don't necessarily use cond_resched_lock on it.
So, if the lock has been contended at some time in the past, break_lock
remains set thereafter, and the fastpath keeps dropping the lock unnecessarily.
That can hang the system if you make a change like I did, forever restarting a
loop before making any progress.  And even users of cond_resched_lock may
well suffer an initial unnecessary lockbreak.
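For context, the need_lockbreak() users named above follow roughly this shape: a
long scan under a spinlock that voluntarily drops the lock when someone else is
contending it, then starts again. The sketch below is illustrative only - the
struct item, process_one_item() and process_list_locked() names are made up, not
the actual callers - but it shows how a stale break_lock turns the restart path
into a livelock:

```c
#include <linux/spinlock.h>
#include <linux/sched.h>

struct item { struct item *next; };		/* made-up payload */

static struct item *process_one_item(struct item *i)
{
	/* ... some per-item work done under the lock ... */
	return i->next;
}

static void process_list_locked(spinlock_t *lock, struct item *head)
{
again:
	spin_lock(lock);
	while (head) {
		if (need_lockbreak(lock) || need_resched()) {
			/*
			 * Give waiters (or the scheduler) a turn and retry.
			 * With a stale break_lock, need_lockbreak() is true
			 * every time the lock is re-taken, so the loop drops
			 * the lock before processing anything and never makes
			 * progress - the hang described above.  With this
			 * patch, re-acquiring the lock clears break_lock.
			 */
			spin_unlock(lock);
			cond_resched();
			goto again;
		}
		head = process_one_item(head);
	}
	spin_unlock(lock);
}
```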
There seems to be no point at which break_lock can be cleared when
unlocking, any point being either too early or too late; but that's okay,
it's only of interest while the lock is held. So clear it whenever the
lock is acquired - and any waiting contenders will quickly set it again.
Additional locking overhead?  Well, this only applies when CONFIG_PREEMPT is on.
Since cond_resched_lock's spin_lock now clears break_lock, it no longer needs to
clear the flag itself; and it now uses need_lockbreak there too, preferring the
optimizer to #ifdefs.
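For reference, need_lockbreak() of that era was defined along these lines in
include/linux/sched.h (a paraphrase from memory, not a verbatim quote - the
exact guard and spelling may differ): when CONFIG_SMP and CONFIG_PREEMPT are not
both set it is a constant 0, so the compiler discards the whole lockbreak block
in cond_resched_lock, which is what "preferring optimizer to #ifdefs" refers to.

```c
/* Paraphrase of the era's definition; exact spelling is an assumption. */
#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT)
# define need_lockbreak(lock)	((lock)->break_lock)
#else
# define need_lockbreak(lock)	0	/* constant-folds the block away */
#endif
```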
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'kernel')
| mode | file | changes |
|---|---|---|
| -rw-r--r-- | kernel/sched.c | 5 |
| -rw-r--r-- | kernel/spinlock.c | 2 |

2 files changed, 3 insertions, 4 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index c32f9389978f..dff94ba6df38 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3741,14 +3741,11 @@ EXPORT_SYMBOL(cond_resched);
  */
 int cond_resched_lock(spinlock_t * lock)
 {
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT)
-	if (lock->break_lock) {
-		lock->break_lock = 0;
+	if (need_lockbreak(lock)) {
 		spin_unlock(lock);
 		cpu_relax();
 		spin_lock(lock);
 	}
-#endif
 	if (need_resched()) {
 		_raw_spin_unlock(lock);
 		preempt_enable_no_resched();
diff --git a/kernel/spinlock.c b/kernel/spinlock.c
index b8e76ca8a001..e15ed17863f1 100644
--- a/kernel/spinlock.c
+++ b/kernel/spinlock.c
@@ -187,6 +187,7 @@ void __lockfunc _##op##_lock(locktype##_t *lock)		\
 			cpu_relax();					\
 		preempt_disable();					\
 	}								\
+	(lock)->break_lock = 0;						\
 }									\
 									\
 EXPORT_SYMBOL(_##op##_lock);						\
@@ -209,6 +210,7 @@ unsigned long __lockfunc _##op##_lock_irqsave(locktype##_t *lock)	\
 			cpu_relax();					\
 		preempt_disable();					\
 	}								\
+	(lock)->break_lock = 0;						\
 	return flags;							\
 }									\
 									\
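The kernel/spinlock.c hunks touch the BUILD_LOCK_OPS macro, which is hard to read
through its line-continuation backslashes. Roughly expanded for the plain spinlock
case, the generated slow path looks something like the sketch below after this
patch. This is a reconstruction from the context lines above, not the verbatim
generated code: the trylock/spin loop structure is simplified, and only the final
`lock->break_lock = 0;` line is taken directly from the patch.

```c
/* Rough, simplified shape of the generated _spin_lock() after this patch. */
void __lockfunc _spin_lock(spinlock_t *lock)
{
	preempt_disable();
	for (;;) {
		if (likely(_raw_spin_trylock(lock)))
			break;
		/* Contended: ask the holder to break the lock, then spin. */
		preempt_enable();
		lock->break_lock = 1;
		while (spin_is_locked(lock) && lock->break_lock)
			cpu_relax();
		preempt_disable();
	}
	/*
	 * New with this patch: whoever acquires the lock clears the flag,
	 * so break_lock is only set while someone is actually waiting.
	 */
	lock->break_lock = 0;
}
```

Waiters set break_lock and the acquirer clears it, which is exactly the invariant
the commit message argues for: the flag is only meaningful while the lock is held
and contended.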
