path: root/include/linux/spinlock_rt.h
2026-01-28  compiler-context-analysis: Remove __assume_ctx_lock from initializers  (Marco Elver)
Remove __assume_ctx_lock() from lock initializers. Implicitly asserting
an active context during initialization caused false-positive
double-lock errors when acquiring a lock immediately after its
initialization.

Moving forward, guarded member initialization must either:

 1. Use guard(type_init)(&lock) or scoped_guard(type_init, ...).
 2. Use context_unsafe() for simple initialization.

Reported-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/57062131-e79e-42c2-aa0b-8f931cb8cac2@acm.org/
Link: https://patch.msgid.link/20260119094029.1344361-7-elver@google.com

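A minimal sketch of the two patterns for a spinlock-guarded member; the
scoped_guard() init-guard type and the __guarded_by() spelling follow
the series' conventions and are illustrative here, not quoted from this
commit:

    struct foo {
            spinlock_t lock;
            int counter __guarded_by(&lock);        /* annotation name assumed */
    };

    static void foo_init(struct foo *f)
    {
            spin_lock_init(&f->lock);

            /* Pattern 1: an init-guard asserts the context for the block. */
            scoped_guard(spinlock_init, &f->lock)   /* guard type illustrative */
                    f->counter = 0;

            /* Pattern 2: opt out of the analysis for trivial initialization. */
            context_unsafe(f->counter = 0);
    }
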
2026-01-05  compiler-context-analysis: Remove __cond_lock() function-like helper  (Marco Elver)
As discussed in [1], removing __cond_lock() will improve the
readability of trylock code. Now that Sparse context tracking support
has been removed, we can also remove __cond_lock().

Change existing APIs to either drop __cond_lock() completely, or make
use of the __cond_acquires() function attribute instead. In particular,
spinlock and rwlock implementations required switching over to inline
helpers rather than statement-expressions for their trylock_* variants.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/20250207082832.GU7145@noisy.programming.kicks-ass.net/ [1]
Link: https://patch.msgid.link/20251219154418.3592607-25-elver@google.com

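The shape of such a conversion, sketched for a hypothetical my_lock
type; the two-argument __cond_acquires(ret, lock) form is an assumption
based on the series, not quoted from this commit:

    /* Before: a statement-expression hidden behind __cond_lock(). */
    #define my_trylock(l)   __cond_lock(l, __my_trylock(l))

    /* After: an inline helper the analysis can see through directly;
     * a non-zero return value means the lock was acquired.
     */
    static inline int my_trylock(struct my_lock *l)
            __cond_acquires(1, l)
    {
            return __my_trylock(l);
    }
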
2026-01-05  locking/rwlock, spinlock: Support Clang's context analysis  (Marco Elver)
Add support for Clang's context analysis for raw_spinlock_t,
spinlock_t, and rwlock. This wholesale conversion is required because
all three are interdependent.

To avoid warnings in constructors, the initialization functions mark a
lock as acquired when it is initialized before the variables it guards.

The test verifies that common patterns do not generate false positives.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20251219154418.3592607-9-elver@google.com

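The kind of mistake the analysis is meant to catch, in a minimal sketch
(the __guarded_by() spelling follows the series and is assumed here):

    struct counter {
            raw_spinlock_t lock;
            u64 value __guarded_by(&lock);
    };

    static void counter_inc(struct counter *c)
    {
            raw_spin_lock(&c->lock);
            c->value++;             /* OK: lock is held */
            raw_spin_unlock(&c->lock);

            c->value++;             /* analysis warning: 'lock' not held */
    }
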
2024-11-11  rust: helpers: Avoid raw_spin_lock initialization for PREEMPT_RT  (Eder Zulian)
When PREEMPT_RT=y, spin locks are mapped to rt_mutex types, so using
spinlock_check() + __raw_spin_lock_init() to initialize spin locks is
incorrect and would cause build errors.

Introduce __spin_lock_init() to initialize a spin lock with the
lockdep-required information for PREEMPT_RT builds, and use it in the
Rust helper.

Fixes: d2d6422f8bd1 ("x86: Allow to enable PREEMPT_RT.")
Closes: https://lore.kernel.org/oe-kbuild-all/202409251238.vetlgXE9-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Eder Zulian <ezulian@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20241107163223.2092690-2-ezulian@redhat.com

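On RT, spin_lock_init() in this header boils down to initializing the
embedded rt_mutex plus the lockdep map, roughly as below (simplified
from the header, not the exact hunk of this commit); the new helper
factors this out so the Rust side can call it:

    #define spin_lock_init(slock)                              \
    do {                                                       \
            static struct lock_class_key __key;                \
                                                               \
            rt_mutex_base_init(&(slock)->lock);                \
            __rt_spin_lock_init(slock, #slock, &__key, false); \
    } while (0)
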
2024-10-24  locking/rt: Remove one __cond_lock() in RT's spin_trylock_irqsave()  (Sebastian Andrzej Siewior)
spin_trylock_irqsave() has a __cond_lock() wrapper which points to
__spin_trylock_irqsave(). The function then invokes spin_trylock(),
which has another __cond_lock() finally pointing to rt_spin_trylock().
The compiler has no problem parsing this, but sparse does not recognise
that users of spin_trylock_irqsave() acquire a conditional lock, and
complains.

Remove one layer of __cond_lock() so that sparse recognises the
conditional locking.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240812104200.2239232-3-bigeasy@linutronix.de

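Schematically (simplified, not the exact kernel macros), the double
wrapping and the fix look like this:

    /* Before: __cond_lock() stacked on a helper that itself expands
     * to another __cond_lock() via spin_trylock().
     */
    #define spin_trylock(lock) __cond_lock(lock, rt_spin_trylock(lock))
    #define spin_trylock_irqsave(lock, flags) \
            __cond_lock(lock, __spin_trylock_irqsave(lock, flags))

    /* After: only the inner __cond_lock() layer remains. */
    #define spin_trylock_irqsave(lock, flags)       \
    ({                                              \
            typecheck(unsigned long, flags);        \
            flags = 0;                              \
            spin_trylock(lock);                     \
    })
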
2024-10-24  locking/rt: Add sparse annotation for PREEMPT_RT's sleeping locks  (Sebastian Andrzej Siewior)
The sleeping locks on PREEMPT_RT (rt_spin_lock() and friends) lack
sparse annotation. Therefore a missing spin_unlock() won't be spotted
by sparse in a PREEMPT_RT build, while it is noticed on a !PREEMPT_RT
build.

Add the __acquires/__releases macros to the lock/unlock functions. The
trylock functions already use the __cond_lock() wrapper.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240812104200.2239232-2-bigeasy@linutronix.de

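The shape of the added annotations on the declarations in this header,
e.g. (abridged):

    extern void rt_spin_lock(spinlock_t *lock)      __acquires(lock);
    extern void rt_spin_unlock(spinlock_t *lock)    __releases(lock);
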
2022-09-15  locking: Detect includes of rwlock.h outside of spinlock.h  (Sebastian Andrzej Siewior)
From: Michael S. Tsirkin <mst@redhat.com>

The check for __LINUX_SPINLOCK_H within rwlock.h (and other files)
detects a direct include of the header file only if it is at the very
beginning of the include section. If it is listed later, chances are
high that spinlock.h was already included (including rwlock.h) and the
additional listing of rwlock.h will not cause any failure. On
PREEMPT_RT this additional rwlock.h will lead to compile failures,
since it uses a different rwlock implementation.

Add __LINUX_INSIDE_SPINLOCK_H to spinlock.h and check for this instead
of __LINUX_SPINLOCK_H to detect wrong includes. This will help detect
direct includes of rwlock.h with and without PREEMPT_RT enabled.

[ bigeasy: add remaining __LINUX_SPINLOCK_H user and rewrite commit
  description. ]

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/YweemHxJx7O8rjBx@linutronix.de

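The mechanism, reduced to its essentials:

    /* linux/spinlock.h */
    #define __LINUX_INSIDE_SPINLOCK_H
    #include <linux/rwlock.h>       /* ... and the other sub-headers */
    #undef __LINUX_INSIDE_SPINLOCK_H

    /* linux/rwlock.h */
    #ifndef __LINUX_INSIDE_SPINLOCK_H
    # error "please don't include this file directly"
    #endif
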
2021-08-17  locking/spinlock/rt: Prepare for RT local_lock  (Thomas Gleixner)
Add the static and runtime initializer mechanics to support the RT
variant of local_lock, which requires the lock type in the lockdep map
to be set to LD_LOCK_PERCPU.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211305.967526724@linutronix.de

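With lockdep enabled, the per-CPU lock class ends up in the static
initializer roughly as follows (a simplified sketch of the dep_map
part):

    #define LOCAL_SPIN_DEP_MAP_INIT(lockname)               \
            .dep_map = {                                    \
                    .name = #lockname,                      \
                    .wait_type_inner = LD_WAIT_CONFIG,      \
                    .lock_type = LD_LOCK_PERCPU,            \
            }
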
2021-08-17  locking/rwlock: Provide RT variant  (Thomas Gleixner)
Similar to rw_semaphores, the rwlock substitution on RT is not writer
fair, because it is not feasible to propagate a writer's priority to
multiple readers. Readers blocked on a writer follow the normal rules
of priority inheritance. Like RT spinlocks, RT rwlocks are state
preserving across the slow lock operations (contended case).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211303.882793524@linutronix.de

2021-08-17  locking/spinlock: Provide RT variant header: <linux/spinlock_rt.h>  (Thomas Gleixner)
Provide the necessary wrappers around the actual rtmutex-based spinlock
implementation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211303.712897671@linutronix.de

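The wrappers are thin forwarding inlines around the rtmutex-backed
primitives, e.g. (abridged from the header):

    static __always_inline void spin_lock(spinlock_t *lock)
    {
            rt_spin_lock(lock);
    }

    static __always_inline void spin_unlock(spinlock_t *lock)
    {
            rt_spin_unlock(lock);
    }
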