path: root/include/linux/spinlock.h
Age  Commit message  Author
2009-08-31  locking: Simplify spinlock inlining  (Heiko Carstens)
For !DEBUG_SPINLOCK && !PREEMPT && SMP the spin_unlock() functions were always inlined by using special defines which would call the __raw* functions. The out-of-line variants for these functions would be generated anyway. Use the new per unlock/locking variant mechanism to force inlining of the unlock functions like before. This is not a functional change, we just get rid of one additional way to force inlining. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Horst Hartmann <horsth@linux.vnet.ibm.com> Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Miller <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: <linux-arch@vger.kernel.org> LKML-Reference: <20090831124418.848735034@de.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-31  locking: Move spinlock function bodies to header file  (Heiko Carstens)
Move spinlock function bodies to header file by creating a static inline version of each variant. Use the inline version on the out-of-line code. This shouldn't make any difference besides that the spinlock code can now be used to generate inlined spinlock code. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Horst Hartmann <horsth@linux.vnet.ibm.com> Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Miller <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: <linux-arch@vger.kernel.org> LKML-Reference: <20090831124417.859022429@de.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-07-09  memory barrier: adding smp_mb__after_lock  (Jiri Olsa)
Adding smp_mb__after_lock define to be used as a smp_mb call after a lock. Making it nop for x86, since {read|write|spin}_lock() on x86 are full memory barriers. Signed-off-by: Jiri Olsa <jolsa@redhat.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
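A minimal sketch of the intended pattern (not from the commit; the lock and wait queue names are hypothetical):

	#include <linux/spinlock.h>
	#include <linux/wait.h>

	static DEFINE_SPINLOCK(sk_lock);		/* hypothetical lock */
	static DECLARE_WAIT_QUEUE_HEAD(wq);		/* hypothetical wait queue */

	static void signal_waiters(void)
	{
		spin_lock(&sk_lock);
		/*
		 * spin_lock() is only an acquire barrier.  Where a full barrier
		 * is needed right after taking the lock, smp_mb__after_lock()
		 * supplies it -- and is a nop on x86, where the lock already
		 * acts as one.
		 */
		smp_mb__after_lock();
		if (waitqueue_active(&wq))
			wake_up_interruptible(&wq);
		spin_unlock(&sk_lock);
	}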
2009-04-02  Allow rwlocks to re-enable interrupts  (Robin Holt)
Pass the original flags to rwlock arch-code, so that it can re-enable interrupts if implemented for that architecture. Initially, make __raw_read_lock_flags and __raw_write_lock_flags stubs which just do the same thing as non-flags variants. Signed-off-by: Petr Tesarik <ptesarik@suse.cz> Signed-off-by: Robin Holt <holt@sgi.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: <linux-arch@vger.kernel.org> Acked-by: Ingo Molnar <mingo@elte.hu> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-02-09  x86: spinlocks: define dummy __raw_spin_is_contended  (Kyle McMartin)
Architectures other than mips and x86 are not using ticket spinlocks. Therefore, the contention on the lock is meaningless, since there is nobody known to be waiting on it (arguably /fairly/ unfair locks). Dummy it out to return 0 on other architectures. Signed-off-by: Kyle McMartin <kyle@redhat.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-08-11  lockdep: spin_lock_nest_lock()  (Peter Zijlstra)
Expose the new lock protection lock. This can be used to annotate places where we take multiple locks of the same class and avoid deadlocks by always taking another (top-level) lock first. NOTE: we're still bound to the MAX_LOCK_DEPTH (48) limit. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
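A sketch of the annotation this exposes, assuming a hypothetical set-up where many per-object locks of one class are always taken under an outer lock (names are illustrative, not from the commit):

	#include <linux/mutex.h>
	#include <linux/spinlock.h>

	static DEFINE_MUTEX(graph_mutex);	/* hypothetical top-level lock */

	struct node {
		spinlock_t lock;		/* all node locks share one class */
	};

	static void lock_two_nodes(struct node *a, struct node *b)
	{
		/* caller must already hold graph_mutex */
		spin_lock_nest_lock(&a->lock, &graph_mutex);
		spin_lock_nest_lock(&b->lock, &graph_mutex);
		/* ... */
		spin_unlock(&b->lock);
		spin_unlock(&a->lock);
	}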
2008-07-25  locking: add typecheck on irqsave and friends for correct flags  (Steven Rostedt)
There have been several areas in the kernel where an int has been used for flags in local_irq_save() and friends instead of a long. This can cause some hard-to-debug problems on some architectures. This patch adds a typecheck inside the irqsave and restore functions to flag these cases. [akpm@linux-foundation.org: coding-style fixes] [akpm@linux-foundation.org: build fix] Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
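A short sketch of the pattern the typecheck enforces (hypothetical lock name):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(my_lock);	/* hypothetical lock */

	static void good(void)
	{
		unsigned long flags;		/* must be unsigned long */

		spin_lock_irqsave(&my_lock, flags);
		/* ... critical section with IRQs disabled ... */
		spin_unlock_irqrestore(&my_lock, flags);
	}

	/*
	 * Declaring "int flags;" instead now triggers a compile-time warning
	 * rather than silently truncating the saved flags on architectures
	 * where that matters.
	 */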
2008-04-17  locking: remove unused double_spin_lock()  (Oleg Nesterov)
double_spin_lock() has no callers, and it can't be used without additional lockdep annotations, remove it. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-11  Spell out behavior of atomic_dec_and_lock() in kerneldoc  (J. Bruce Fields)
A little more detail here wouldn't hurt. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2008-02-08  Remove fastcall from linux/include  (Harvey Harrison)
[akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-01-30  spinlock: lockbreak cleanup  (Nick Piggin)
The break_lock data structure and code for spinlocks is quite nasty. Not only does it double the size of a spinlock but it changes locking to a potentially less optimal trylock. Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a __raw_spin_is_contended that uses the lock data itself to determine whether there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is not set. Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to decouple it from the spinlock implementation, and make it typesafe (rwlocks do not have any need_lockbreak sites -- why do they even get bloated up with that break_lock then?). Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
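A sketch of how a long lock-holding loop is expected to use this (hypothetical function; the real callers live in mm, fs, etc.):

	#include <linux/sched.h>
	#include <linux/spinlock.h>

	static void long_scan(spinlock_t *lock, int nr_steps)
	{
		int i;

		spin_lock(lock);
		for (i = 0; i < nr_steps; i++) {
			/* ... one step of work under the lock ... */

			/*
			 * cond_resched_lock() uses spin_needbreak(), which in
			 * turn uses spin_is_contended(), to decide whether to
			 * drop the lock briefly so a waiter or a pending
			 * reschedule can get in.
			 */
			cond_resched_lock(lock);
		}
		spin_unlock(lock);
	}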
2007-07-16  introduce write_trylock_irqsave()  (Satyam Sharma)
Introduce a write_trylock_irqsave() implementation. Similar in style to the implementation of spin_trylock_irqsave() in mainline. Signed-off-by: Satyam Sharma <ssatyam@cse.iitk.ac.in> Cc: Sripathi Kodi <sripathik@in.ibm.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
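A minimal usage sketch (hypothetical rwlock and caller):

	#include <linux/errno.h>
	#include <linux/spinlock.h>

	static DEFINE_RWLOCK(my_rwlock);	/* hypothetical rwlock */

	static int try_update(void)
	{
		unsigned long flags;

		if (!write_trylock_irqsave(&my_rwlock, flags))
			return -EBUSY;		/* contended: don't spin */

		/* ... write-side critical section, IRQs disabled ... */

		write_unlock_irqrestore(&my_rwlock, flags);
		return 0;
	}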
2007-03-05  [PATCH] timer/hrtimer: take per cpu locks in sane order  (Heiko Carstens)
Doing something like this on a two cpu system

	# echo 0 > /sys/devices/system/cpu/cpu0/online
	# echo 1 > /sys/devices/system/cpu/cpu0/online
	# echo 0 > /sys/devices/system/cpu/cpu1/online

will give me this:

	=======================================================
	[ INFO: possible circular locking dependency detected ]
	2.6.21-rc2-g562aa1d4-dirty #7
	-------------------------------------------------------
	bash/1282 is trying to acquire lock:
	 (&cpu_base->lock_key){.+..}, at: [<000000000005f17e>] hrtimer_cpu_notify+0xc6/0x240
	but task is already holding lock:
	 (&cpu_base->lock_key#2){.+..}, at: [<000000000005f174>] hrtimer_cpu_notify+0xbc/0x240
	which lock already depends on the new lock.

This happens because we have the following code in kernel/hrtimer.c:

	migrate_hrtimers(int cpu)
	[...]
	old_base = &per_cpu(hrtimer_bases, cpu);
	new_base = &get_cpu_var(hrtimer_bases);
	[...]
	spin_lock(&new_base->lock);
	spin_lock(&old_base->lock);

Which means the spinlocks are taken in an order which depends on which cpu gets shut down from which other cpu. Therefore lockdep complains that there might be an ABBA deadlock. Since migrate_hrtimers() gets only called on cpu hotplug it's safe to assume that it isn't executed concurrently on a

The same problem exists in kernel/timer.c: migrate_timers().

As pointed out by Christian Borntraeger one possible solution to avoid the locking order complaints would be to make sure that the locks are always taken in the same order. E.g. by taking the lock of the cpu with the lower number first. To achieve this we introduce two new spinlock functions double_spin_lock and double_spin_unlock which lock or unlock two locks in a given order.

Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: John Stultz <johnstul@us.ibm.com> Cc: Christian Borntraeger <cborntra@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
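A sketch of how the new helpers are meant to be used, assuming the third argument selects which of the two locks is taken first (illustrative function, not the actual hrtimer/timer code):

	#include <linux/spinlock.h>

	static void migrate_example(spinlock_t *old_lock, spinlock_t *new_lock)
	{
		/*
		 * Every path agrees on one ordering (here: lower address
		 * first), so lockdep sees no ABBA inversion.
		 */
		double_spin_lock(old_lock, new_lock, old_lock < new_lock);

		/* ... move entries from the old base to the new one ... */

		double_spin_unlock(old_lock, new_lock, old_lock < new_lock);
	}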
2007-02-11  [PATCH] Fix sparse annotation of spin unlock macros in one case  (Pavel Roskin)
SMP systems without preemption and spinlock debugging enabled use unlock macros that don't tell sparse that the lock is being released. Add sparse annotations in this case. Signed-off-by: Pavel Roskin <proski@gnu.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2006-12-07  [PATCH] add bottom_half.h  (Andrew Morton)
With CONFIG_SMP=n: drivers/input/ff-memless.c:384: warning: implicit declaration of function 'local_bh_disable' drivers/input/ff-memless.c:393: warning: implicit declaration of function 'local_bh_enable' Really linux/spinlock.h should include linux/interrupt.h. But interrupt.h includes sched.h which will need spinlock.h. So the patch breaks the _bh declarations out into a separate header and includes it in both interrupt.h and spinlock.h. Cc: "Randy.Dunlap" <rdunlap@xenotime.net> Cc: Andi Kleen <ak@suse.de> Cc: <stable@kernel.org> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-11-26  Revert "[PATCH] Enforce "unsigned long flags;" when spinlocking"  (Linus Torvalds)
This reverts commit ee3ce191e8eaa4cc15c51a28b34143b36404c4f5, since it broke on at least ARM, MIPS and PA-RISC due to complicated header file dependencies. Conflicts in include/linux/spinlock.h (due to the "nested" variety fixes) fixed up by hand. Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Kyle McMartin <kyle@parisc-linux.org> Cc: Russell King <rmk+lkml@arm.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-11-25  [PATCH] lockdep: spin_lock_irqsave_nested()  (Arjan van de Ven)
Introduce spin_lock_irqsave_nested(); implementation from: http://lkml.org/lkml/2006/6/1/122 Patch from: http://lkml.org/lkml/2006/9/13/258 [akpm@osdl.org: two compile fixes] Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Jiri Kosina <jikos@jikos.cz> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
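A sketch of the annotation, assuming hypothetical parent/child objects whose locks share one lockdep class:

	#include <linux/lockdep.h>
	#include <linux/spinlock.h>

	struct obj {
		spinlock_t lock;	/* all obj locks share one lockdep class */
	};

	static void lock_parent_child(struct obj *parent, struct obj *child)
	{
		unsigned long pflags, cflags;

		spin_lock_irqsave(&parent->lock, pflags);
		/*
		 * The child lock is in the same class as the parent lock;
		 * mark it as a deliberate, ordered nesting so lockdep does
		 * not report a false self-deadlock.
		 */
		spin_lock_irqsave_nested(&child->lock, cflags, SINGLE_DEPTH_NESTING);

		/* ... */

		spin_unlock_irqrestore(&child->lock, cflags);
		spin_unlock_irqrestore(&parent->lock, pflags);
	}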
2006-11-25  [PATCH] Enforce "unsigned long flags;" when spinlocking  (Alexey Dobriyan)
Make it break or warn if you pass to spin_lock_irqsave() and friends something different from "unsigned long flags;". A surprisingly large amount of these was caught by recent commit c53421b18f205c5f97c604ae55c6a921f034b0f6 and others. Idea is largely from FRV typechecking. Suggestions from Andrew Morton. All stupid typos in first version fixed. Passes allmodconfig on i386, x86_64, alpha, arm as well as my usual config. Note #1: checking with sparse is still needed, because a driver can save and pass around flags or something. So far patch is very intrusive. Note #2: technically, we should break only if sizeof(flags) < sizeof(unsigned long); however, the more pain for getting suspicious code into the kernel, the better. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-09-29  [PATCH] Pass a lock expression to __cond_lock, like __acquire and __release  (Josh Triplett)
Currently, __acquire and __release take a lock expression, but __cond_lock takes only a condition, not the lock acquired if the expression evaluates to true. Change __cond_lock to accept a lock expression, and change all the callers to pass in a lock expression. Signed-off-by: Josh Triplett <josh@freedesktop.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
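A short sketch of what the annotation looks like for a conditional-lock wrapper (my_trylock/_my_trylock are hypothetical names):

	#include <linux/compiler.h>
	#include <linux/spinlock.h>

	/* hypothetical out-of-line helper, returns nonzero when it got the lock */
	extern int _my_trylock(spinlock_t *lock);

	/*
	 * __cond_lock(x, c) tells sparse that lock expression "x" is acquired
	 * exactly when condition "c" evaluates true.
	 */
	#define my_trylock(lock)	__cond_lock(lock, _my_trylock(lock))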
2006-09-29  [PATCH] Replace _spin_trylock with spin_trylock in the IRQ variants to use __cond_lock  (Josh Triplett)
spin_trylock_irq and spin_trylock_irqsave use _spin_trylock, which does not use the __cond_lock wrapper annotation and thus does not affect the lock context; change them to use spin_trylock instead, which does use __cond_lock. Signed-off-by: Josh Triplett <josh@freedesktop.org> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-03  [PATCH] lockdep: prove spinlock rwlock locking correctness  (Ingo Molnar)
Use the lock validator framework to prove spinlock and rwlock locking correctness. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-04-26  Don't include linux/config.h from anywhere else in include/  (David Woodhouse)
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2005-12-26  kbuild: set correct KBUILD_MODNAME when using well known kernel symbols as module names  (Ustyugov Roman)
This patch fixes a problem when we use well known kernel symbols as module names. For example, if the module source name is current.c, idle_stack.c, etc., we get a bad KBUILD_MODNAME value: KBUILD_MODNAME will be "get_current()" instead of "current", or "(init_thread_union.stack)" instead of "idle_task". The trick is to define a stringify macro on the commandline - named KBUILD_STR for namespace reasons - and then to stringify the module name. There are a few uses of KBUILD_MODNAME throughout the tree but the usage is for debug and will not be harmed by this change so left untouched for now. While at it KBUILD_BASENAME was changed too. Any spinlock usage in the unix module would have created wrong section names without it. Usage in spinlock.h fixed so it no longer stringifies KBUILD_BASENAME. Original patch from Ustyogov Roman - all bugs introduced by me. Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2005-10-30  [PATCH] x86: inline spin_unlock if !CONFIG_DEBUG_SPINLOCK and !CONFIG_PREEMPT  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Miklos Szeredi <miklos@szeredi.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-10  [PATCH] spinlock consolidation  (Ingo Molnar)
This patch (written by me and also containing many suggestions of Arjan van de Ven) does a major cleanup of the spinlock code. It does the following things:

 - consolidates and enhances the spinlock/rwlock debugging code
 - simplifies the asm/spinlock.h files
 - encapsulates the raw spinlock type and moves generic spinlock features (such as ->break_lock) into the generic code.
 - cleans up the spinlock code hierarchy to get rid of the spaghetti.

Most notably there's now only a single variant of the debugging code, located in lib/spinlock_debug.c. (previously we had one SMP debugging variant per architecture, plus a separate generic one for UP builds)

Also, I've enhanced the rwlock debugging facility, it will now track write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too. All locks have lockup detection now, which will work for both soft and hard spin/rwlock lockups.

The arch-level include files now only contain the minimally necessary subset of the spinlock code - all the rest that can be generalized now lives in the generic headers:

	include/asm-i386/spinlock_types.h    | 16
	include/asm-x86_64/spinlock_types.h  | 16

I have also split up the various spinlock variants into separate files, making it easier to see which does what. The new layout is:

	SMP                         | UP
	----------------------------|-----------------------------------
	asm/spinlock_types_smp.h    | linux/spinlock_types_up.h
	linux/spinlock_types.h      | linux/spinlock_types.h
	asm/spinlock_smp.h          | linux/spinlock_up.h
	linux/spinlock_api_smp.h    | linux/spinlock_api_up.h
	linux/spinlock.h            | linux/spinlock.h

	/*
	 * here's the role of the various spinlock/rwlock related include files:
	 *
	 * on SMP builds:
	 *
	 *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
	 *                        initializers
	 *
	 *  linux/spinlock_types.h:
	 *                        defines the generic type and initializers
	 *
	 *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
	 *                        implementations, mostly inline assembly code
	 *
	 *   (also included on UP-debug builds:)
	 *
	 *  linux/spinlock_api_smp.h:
	 *                        contains the prototypes for the _spin_*() APIs.
	 *
	 *  linux/spinlock.h:     builds the final spin_*() APIs.
	 *
	 * on UP builds:
	 *
	 *  linux/spinlock_type_up.h:
	 *                        contains the generic, simplified UP spinlock type.
	 *                        (which is an empty structure on non-debug builds)
	 *
	 *  linux/spinlock_types.h:
	 *                        defines the generic type and initializers
	 *
	 *  linux/spinlock_up.h:
	 *                        contains the __raw_spin_*()/etc. version of UP
	 *                        builds. (which are NOPs on non-debug, non-preempt
	 *                        builds)
	 *
	 *   (included on UP-non-debug builds:)
	 *
	 *  linux/spinlock_api_up.h:
	 *                        builds the _spin_*() APIs.
	 *
	 *  linux/spinlock.h:     builds the final spin_*() APIs.
	 */

All SMP and UP architectures are converted by this patch. arm, i386, ia64, ppc, ppc64, s390/s390x, x64 were build-tested via crosscompilers. m32r, mips, sh, sparc have not been tested yet, but should be mostly fine.

From: Grant Grundler <grundler@parisc-linux.org>

Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU). Builds 32-bit SMP kernel (not booted or tested). I did not try to build non-SMP kernels. That should be trivial to fix up later if necessary.

I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids some ugly nesting of linux/*.h and asm/*.h files. Those particular locks are well tested and contained entirely inside arch specific code. I do NOT expect any new issues to arise with them. If someone does ever need to use debug/metrics with them, then they will need to unravel this hairball between spinlocks, atomic ops, and bit ops that exist only because parisc has exactly one atomic instruction: LDCW (load and clear word).

From: "Luck, Tony" <tony.luck@intel.com>

ia64 fix

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjanv@infradead.org> Signed-off-by: Grant Grundler <grundler@parisc-linux.org> Cc: Matthew Wilcox <willy@debian.org> Signed-off-by: Hirokazu Takata <takata@linux-m32r.org> Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se> Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-21  [PATCH] spin_unlock_bh() and preempt_check_resched()  (Samuel Thibault)
In _spin_unlock_bh(lock):

	do { \
		_raw_spin_unlock(lock); \
		preempt_enable(); \
		local_bh_enable(); \
		__release(lock); \
	} while (0)

there is no reason for using preempt_enable() instead of a simple preempt_enable_no_resched(). Since we know bottom halves are disabled, preempt_schedule() will always return at once (preempt_count!=0), and hence preempt_check_resched() is useless here... This fixes it by using "preempt_enable_no_resched()" instead of the "preempt_enable()", and thus avoids the useless preempt_check_resched() just before re-enabling bottom halves. Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-02-01  [PATCH] assert_spin_locked()  (Herbert Pötzl)
Consolidate the various private implementations of this into a kernel-wide implementation. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
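A minimal sketch of the intended use (hypothetical helper):

	#include <linux/spinlock.h>

	/* must only be called with the lock already held */
	static void update_counter_locked(spinlock_t *lock, unsigned long *counter)
	{
		assert_spin_locked(lock);	/* BUG if the lock is not held */
		(*counter)++;
	}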
2005-01-19  [PATCH] nonintrusive spin-polling loop in kernel/spinlock.c  (Ingo Molnar)
This re-implements the nonintrusive spin-polling loop for the SMP+PREEMPT spinlock/rwlock variants, using the new *_can_lock() primitives. (The patch also adds *_can_lock() to the UP branch of spinlock.h, for completeness.) build- and boot-tested on x86 SMP+PREEMPT and SMP+!PREEMPT. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-01-19  [PATCH] x86 rwlock *_can_lock() primitives  (Ingo Molnar)
This introduces the following 3 new locking primitives:

	spin_can_lock(lock)
	read_can_lock(lock)
	write_can_lock(lock)

which are a nonintrusive test to check whether the real (intrusive) trylock op would succeed or not. Semantics and naming are completely symmetric to the trylock counterpart. Architectures that want to support PREEMPT will need to add these definitions. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
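A sketch of the nonintrusive polling pattern these enable (illustrative only; the real user is the SMP+PREEMPT spin-polling loop in kernel/spinlock.c):

	#include <linux/spinlock.h>
	#include <asm/processor.h>	/* cpu_relax() */

	static void polite_lock(spinlock_t *lock)
	{
		while (!spin_trylock(lock)) {
			/* poll read-only until the trylock looks likely to succeed */
			while (!spin_can_lock(lock))
				cpu_relax();
		}
	}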
2005-01-19  [PATCH] minor spinlock cleanups  (Ingo Molnar)
cleanup: remove stale semicolon from linux/spinlock.h and stale space from asm-i386/spinlock.h. Ingo Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-01-07  [PATCH] Lock initializer cleanup (common headers)  (Thomas Gleixner)
First part of the patch series: define initializer macros.

Often used structures in the kernel are almost all declared and initialized by macros in the form:

	DEFINE_TYPE(name)

Spinlocks and rwlocks are declared and initialized by:

	type name = INITIALIZER;

After converting the runtime initialization of spinlocks/rwlocks to macro form, it is a logical next step to change the declaration and initialization of global and static locks to the macro form too. This conversion identifies those variables as "special", common code controlled entities similar to list_heads, mutexes... Besides consistency and code clarity this also helps automatic lock validators and debugging code.

The patch converts

	-rwlock_t snd_card_rwlock = RW_LOCK_UNLOCKED;
	+DEFINE_RWLOCK(snd_card_rwlock);

and

	-static spinlock_t slave_active_lock = SPIN_LOCK_UNLOCKED;
	+static DEFINE_SPINLOCK(slave_active_lock);

There is no runtime overhead or actual code change resulting out of this patch, other than a small reduction in the kernel source code size. The conversion was done with a script; it was verified manually and it was reviewed, compiled and tested as far as possible on x86, ARM, PPC. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-01-07  [PATCH] improve preemption on SMP  (Ingo Molnar)
SMP locking latencies are one of the last architectural problems that cause millisec-category scheduling delays. CONFIG_PREEMPT tries to solve some of the SMP issues but there are still lots of problems remaining: spinlocks nested at multiple levels, spinning with irqs turned off, and non-nested spinning with preemption turned off permanently.

The nesting problem goes like this: if a piece of kernel code (e.g. the MM or ext3's journalling code) does the following:

	spin_lock(&spinlock_1);
	...
	spin_lock(&spinlock_2);
	...

then even with CONFIG_PREEMPT enabled, current kernels may spin on spinlock_2 indefinitely. A number of critical sections break their long paths by using cond_resched_lock(), but this does not break the path on SMP, because need_resched() *of the other CPU* is not set so cond_resched_lock() doesn't notice that a reschedule is due.

To solve this problem I've introduced a new spinlock field, lock->break_lock, which signals towards the holding CPU that a spinlock-break is requested by another CPU. This field is only set if a CPU is spinning in a spinlock function [at any locking depth], so the default overhead is zero. I've extended cond_resched_lock() to check for this flag - in this case we can also save a reschedule. I've added the lock_need_resched(lock) and need_lockbreak(lock) methods to check for the need to break out of a critical section.

Another latency problem was that the stock kernel, even with CONFIG_PREEMPT enabled, didn't have any spin-nicely preemption logic for the following, commonly used SMP locking primitives: read_lock(), spin_lock_irqsave(), spin_lock_irq(), spin_lock_bh(), read_lock_irqsave(), read_lock_irq(), read_lock_bh(), write_lock_irqsave(), write_lock_irq(), write_lock_bh(). Only spin_lock() and write_lock() [the two simplest cases] were covered. In addition to the preemption latency problems, the _irq() variants in the above list didn't do any IRQ-enabling while spinning - possibly resulting in excessive irqs-off sections of code!

preempt-smp.patch fixes all these latency problems by spinning irq-nicely (if possible) and by requesting lock-breaks if needed. Two architecture-level changes were necessary for this: the addition of the break_lock field to spinlock_t and rwlock_t, and the addition of the _raw_read_trylock() function.

Testing done by Mark H Johnson and myself indicate SMP latencies comparable to the UP kernel - while they were basically indefinitely high without this patch.

I successfully test-compiled and test-booted this patch on top of BK-curr using the following .config combinations: SMP && PREEMPT, !SMP && PREEMPT, SMP && !PREEMPT and !SMP && !PREEMPT on x86, !SMP && !PREEMPT and SMP && PREEMPT on x64. I also test-booted x86 with the generic_read_trylock function to check that it works fine. Essentially the same patch has been in testing as part of the voluntary-preempt patches for some time already.

NOTE to architecture maintainers: generic_raw_read_trylock() is a crude version that should be replaced with the proper arch-optimized version ASAP.

From: Hugh Dickins <hugh@veritas.com>

The i386 and x86_64 _raw_read_trylocks in preempt-smp.patch are too successful: atomic_read() returns a signed integer.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-01-04  [PATCH] SELinux scalability: add spin_trylock_irq and spin_trylock_irqsave  (James Morris)
This patch from Kaigai Kohei <kaigai@ak.jp.nec.com> adds irq and irqsave trylock spinlock variants for use by the SELinux AVC RCU patch. Signed-off-by: Kaigai Kohei <kaigai@ak.jp.nec.com> Signed-off-by: Stephen Smalley <sds@epoch.ncsc.mil> Signed-off-by: James Morris <jmorris@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
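A minimal usage sketch (hypothetical caller; the original user was the SELinux AVC code):

	#include <linux/errno.h>
	#include <linux/spinlock.h>

	static int try_update(spinlock_t *lock)
	{
		unsigned long flags;

		/* fail fast instead of spinning with IRQs disabled */
		if (!spin_trylock_irqsave(lock, flags))
			return -EBUSY;

		/* ... */

		spin_unlock_irqrestore(lock, flags);
		return 0;
	}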
2004-10-30  Annotate UP spinlock stubs with lock annotations.  (Linus Torvalds)
This way you can do a checking run on UP too - even if the locks don't actually _matter_, they should still be right, I'd hope.
2004-10-30  Add sparse checker rules for conditional lock functions: trylock and atomic_dec_and_lock  (Linus Torvalds)
This means that we now have all of the spinlock context counting infrastructure in place, and you can check-compile the kernel with sparse -Wcontext.
2004-10-30  Make "atomic_dec_and_lock()" a macro.  (Linus Torvalds)
We rename the actual architecture-specific low-level implementation to have a prepended underscore: "_atomic_dec_and_lock()". This extra macro indirection is so that we can make the macro do the lock context counting. That will be the next patch.
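A sketch of the resulting shape and a typical caller (struct obj, list_lock and obj_put() are hypothetical; the __cond_lock() wrapping is what the companion sparse patch adds):

	#include <linux/atomic.h>
	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	/*
	 * The indirection ends up roughly as:
	 *
	 *	#define atomic_dec_and_lock(atomic, lock) \
	 *		__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
	 */

	struct obj {				/* hypothetical refcounted object */
		atomic_t refcount;
		struct list_head node;
	};

	static DEFINE_SPINLOCK(list_lock);	/* hypothetical list lock */

	static void obj_put(struct obj *obj)
	{
		/* returns nonzero, with list_lock held, only when the count hits zero */
		if (atomic_dec_and_lock(&obj->refcount, &list_lock)) {
			list_del(&obj->node);
			spin_unlock(&list_lock);
			kfree(obj);
		}
	}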
2004-10-28  Make x86 semaphore routines use register calling convention.  (Linus Torvalds)
This avoids a bug where the compiler would overwrite the stackframe that the caller also considered to be a register save area. It also shrinks the code segment by a tiny amount by moving the failure case argument setup into the slow path. This not only makes the fast path smaller, but it makes it easier on gcc (gcc is not very good at generating code that uses fixed register names).
2004-10-24  Un-inline the big kernel lock.  (Linus Torvalds)
Now that spinlocks are uninlined, it is silly to keep the BKL inlined. And this should make it a lot easier for people to play around with variations on the locking (ie Ingo's semaphores etc).
2004-10-23  Annotate the trivial unconditional lock/unlock functions on SMP.  (Linus Torvalds)
This does _not_ handle the conditional ones (lock_kernel and the trylock variants), so there will be a fair number of context error warnings with this. However, the warnings are disabled by default in sparse - you have to use "-Wcontext" to see them.
2004-09-09  [PATCH] Move __preempt_*lock into kernel_spinlock, clean up.  (Anton Blanchard)
- create in_lock_functions() to match in_sched_functions(). Export it for use in oprofile.
- use char __lock_text_start[] instead of long __lock_text_start when declaring linker symbols. Rusty fixed a number of these a while ago based on advice from rth.
- Move __preempt_*_lock into kernel/spinlock.c and make it inline. This means locks are only one deep.
- Make in_sched_functions() check in_lock_functions()

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-09-04  [PATCH] out-of-line locks / generic  (Zwane Mwaikambo)
This patch achieves out of line spinlocks by creating kernel/spinlock.c and using the _raw_* inline locking functions. Now, as much as this is supposed to be arch agnostic, there was still a fair amount of rummaging about in archs, mostly for the cases where the arch already has out of line locks and I wanted to avoid the extra call; saving that extra call also makes lock profiling easier. PPC32/64 was an example of such an arch and I have added the necessary profile_pc() function as an example. Size differences are with CONFIG_PREEMPT enabled since we wanted to determine how much could be saved by moving that lot out of line too.

ppc64 = 259897 bytes:
	   text    data     bss     dec     hex filename
	5489808 1962724  709064 8161596  7c893c vmlinux-after
	5749577 1962852  709064 8421493  808075 vmlinux-before

sparc64 = 193368 bytes:
	   text    data     bss     dec     hex filename
	3472037  633712  308920 4414669  435ccd vmlinux-after
	3665285  633832  308920 4608037  465025 vmlinux-before

i386 = 416075 bytes:
	   text    data     bss     dec     hex filename
	5808371  867442  326864 7002677  6ada35 vmlinux-after
	6221254  870634  326864 7418752  713380 vmlinux-before

x86-64 = 282446 bytes:
	   text    data     bss     dec     hex filename
	4598025 1450644  523632 6572301  64490d vmlinux-after
	4881679 1449436  523632 6854747  68985b vmlinux-before

It has been compile tested (UP, SMP, PREEMPT) on i386, x86-64, sparc, sparc64, ppc64, ppc32 and runtime tested on i386, x86-64 and sparc64. Signed-off-by: Zwane Mwaikambo <zwane@fsmlabs.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2004-05-10  [PATCH] Allow architectures to reenable interrupts on contended spinlocks  (Andrew Morton)
From: Keith Owens <kaos@sgi.com>

As requested by Linus, update all architectures to add the common infrastructure. Tested on ia64 and i386. Enable interrupts while waiting for a disabled spinlock, but only if interrupts were enabled before issuing spin_lock_irqsave().

This patch consists of three sections:

* An architecture independent change to call _raw_spin_lock_flags() instead of _raw_spin_lock() when the flags are available.
* An ia64 specific change to implement _raw_spin_lock_flags() and to define _raw_spin_lock(lock) as _raw_spin_lock_flags(lock, 0) for the ASM_SUPPORTED case.
* Patches for all other architectures and for ia64 with !ASM_SUPPORTED to map _raw_spin_lock_flags(lock, flags) to _raw_spin_lock(lock).

Architecture maintainers can define _raw_spin_lock_flags() to do something useful if they want to enable interrupts while waiting for a disabled spinlock.
2003-06-25  [PATCH] GCC 2.95.4 needs the spinlock workaround  (Andrew Morton)
From: Mikael Pettersson <mikpe@csd.uu.se> 2.5.73 removed the workaround needed to prevent gcc-2.95.x from miscompiling spinlocks on UP. It turns out that some versions of gcc-2.95 still do have problems with empty structs, so re-introduce the workaround.
2003-06-20  [PATCH] Remove spinlock workaround for pre 2.95 gccs  (Andi Kleen)
Remove the empty initializer workaround that was added for egcs 1.1. Only 2.95+ is supported now, so all compilers should support empty structures. The if just checked for __GNUC__, which means that 2.95 got the workaround (and the incompatibility) too even though it didn't need it. Advantage is that gcc 2.95 and 3.x compiled kernels are now potentially binary compatible. Module loading still checks the compiler version, but it might be removable.
2003-06-17  [PATCH] JBD: fix race over access to b_committed_data  (Andrew Morton)
From: Alex Tomas <bzzz@tmi.comex.ru> We have a race wherein the block allocator can decide that journal_head.b_committed_data is present and then will use it. But kjournald can concurrently free it and set the pointer to NULL. It goes oops. We introduce per-buffer_head "spinlocking" based on a bit in b_state. To do this we abstract out pte_chain_lock() and reuse the implementation. The bit-based spinlocking is pretty inefficient CPU-wise (hence the warning in there) and we may move this to a hashed spinlock later.
2003-06-06  [PATCH] Make spinlock debugging compile on x86-64  (Andi Kleen)
cpu_relax is on i386 and x86-64 in processor.h, not system.h. This makes CONFIG_DEBUG_SPINLOCK compile for x86-64.
2003-05-14  [PATCH] Make debugging variant of spinlocks a bit more robust  (Andrew Morton)
From: Petr Vandrovec <vandrove@vc.cvut.cz> While I was trying to hunt down a problem with spin_lock_irq in send_sig_info, I noticed that the debugging spinlocks are a bit unusable. The problem is that these spinlocks first print a warning, and then decrement babble. So if the lock is used by printk code (like the runqueue lock was), we get nothing, just a lockup or a double fault... When we first decrement babble and then print the error message, we can break this unfortunate situation and the error message (5 of the same...) appears on screen.
2003-02-11  [PATCH] provide uniproc write_trylock()  (Andrew Morton)
Patch from Oleg Drokin <green@namesys.com>, Nikita Danilov <Nikita@Namesys.COM> There is no uniprocessor definition of _raw_write_trylock(), so write_trylock() doesn't work on UP.
2003-02-05  [PATCH] spinlock debugging on uniprocessors  (Andrew Morton)
Patch from Manfred Spraul <manfred@colorfullife.com> This enables spinlock debugging on uniprocessor builds, under CONFIG_DEBUG_SPINLOCK. The reason I want this is that one day we'll need to pull out the debugging support from the timer code which detects uninitialised timers. And once that has gone, uniprocessor developers and testers have no way of detecting uninitialised timers - there will be mysterious deadlocks on SMP machines. And there will surely be more uninitialised timers. The patch also removes the last pieces of the support for including <asm/spinlock.h> directly, which hasn't worked since (IIRC) 2.3.x.
2003-01-10  [PATCH] Fix an SMP+preempt latency problem  (Andrew Morton)
Here is spin_lock():

	#define spin_lock(lock) \
	do { \
		preempt_disable(); \
		_raw_spin_lock(lock); \
	} while(0)

Here is the scenario:

	CPU0:
	spin_lock(some_lock);
	do_very_long_thing();	/* This has cond_resched()s in it */

	CPU1:
	spin_lock(some_lock);

Now suppose that the scheduler tries to schedule a task on CPU1. Nothing happens, because CPU1 is spinning on the lock with preemption disabled. CPU0 will happily hold the lock for a long time because nobody has set need_resched() against CPU0.

This problem can cause scheduling latencies of many tens of milliseconds on SMP on kernels which handle UP quite happily.

This patch fixes the problem by changing the spin_lock() and write_lock() contended slowpath to spin on the lock by hand, while polling for preemption requests. I would have done read_lock() too, but we don't seem to have read_trylock() primitives.

The patch also shrinks the kernel by 30k due to not having separate out-of-line spinning code for each spin_lock() callsite.