<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel/locking, branch next/HEAD</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=next%2FHEAD</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=next%2FHEAD'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2026-04-09T03:07:36Z</updated>
<entry>
<title>Merge branch 'sched/core' into tip/master</title>
<updated>2026-04-09T03:07:36Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2026-04-09T03:07:36Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f102465ff34d28ccd2b59819d2eaffb130fb510b'/>
<id>urn:sha1:f102465ff34d28ccd2b59819d2eaffb130fb510b</id>
<content type='text'>
 # New commits in sched/core:
    985215804dcb ("sched/rt: Cleanup global RT bandwidth functions")
    4f70a0456d09 ("sched/rt: Move group schedulability check to sched_rt_global_validate()")
    8b016dcec936 ("sched/rt: Skip group schedulable check with rt_group_sched=0")
    556146ce5e94 ("sched/fair: Avoid overflow in enqueue_entity()")
    c6e80201e057 ("sched: Use u64 for bandwidth ratio calculations")
    059258b0d424 ("sched/fair: Prevent negative lag increase during delayed dequeue")
    2d4cc371baa5 ("sched/fair: Use sched_energy_enabled()")
    b049b81bdff6 ("sched: Handle blocked-waiter migration (and return migration)")
    dec9554dc036 ("sched: Move attach_one_task and attach_task helpers to sched.h")
    48fda62de67a ("sched: Add logic to zap balance callbacks if we pick again")
    f9530b318335 ("sched: Add assert_balance_callbacks_empty helper")
    2d7622669836 ("sched/locking: Add special p-&gt;blocked_on==PROXY_WAKING value for proxy return-migration")
    56f4b24267a6 ("sched: Fix modifying donor-&gt;blocked on without proper locking")
    fa4a1ff8ab23 ("locking: Add task::blocked_lock to serialize blocked_on state")
    f4fe6be82e6d ("sched: Fix potentially missing balancing with Proxy Exec")
    37341ec573da ("sched: Minimise repeated sched_proxy_exec() checking")
    e0ca8991b2de ("sched: Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()")

Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/locking: Add special p-&gt;blocked_on==PROXY_WAKING value for proxy return-migration</title>
<updated>2026-04-03T12:23:40Z</updated>
<author>
<name>John Stultz</name>
<email>jstultz@google.com</email>
</author>
<published>2026-03-24T19:13:21Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2d7622669836dcbbb449741b4e6c503ffe005c25'/>
<id>urn:sha1:2d7622669836dcbbb449741b4e6c503ffe005c25</id>
<content type='text'>
As we add functionality to proxy execution, we may migrate a
donor task to a runqueue where it can't run due to cpu affinity.
Thus, we must be careful to ensure we return-migrate the task
back to a cpu in its cpumask when it becomes unblocked.

Peter helpfully provided the following example with pictures:
"Suppose we have a ww_mutex cycle:

                  ,-+-* Mutex-1 &lt;-.
        Task-A ---' |             | ,-- Task-B
                    `-&gt; Mutex-2 *-+-'

Where Task-A holds Mutex-1 and tries to acquire Mutex-2, and
where Task-B holds Mutex-2 and tries to acquire Mutex-1.

Then the blocked_on-&gt;owner chain will go in circles.

        Task-A  -&gt; Mutex-2
          ^          |
          |          v
        Mutex-1 &lt;- Task-B

We need two things:

 - find_proxy_task() to stop iterating the circle;

 - the woken task to 'unblock' and run, such that it can
   back off and retry the transaction.

Now, the current code [without this patch] does:
        __clear_task_blocked_on();
        wake_q_add();

And surely clearing -&gt;blocked_on is sufficient to break the
cycle.

Suppose it is Task-B that is made to back off, then we have:

  Task-A -&gt; Mutex-2 -&gt; Task-B (no further blocked_on)

and it would attempt to run Task-B. Or worse, it could directly
pick Task-B and run it, without ever getting into
find_proxy_task().

Now, here is a problem because Task-B might not be runnable on
the CPU it is currently on; and because !task_is_blocked() we
don't get into the proxy paths, so nobody is going to fix this
up.

Ideally we would have dequeued Task-B alongside clearing
-&gt;blocked_on, but alas, [the lock ordering prevents us from
getting the task_rq_lock() and] spoils things."

Thus we need more than just a binary concept of the task being
blocked on a mutex or not.

So allow setting blocked_on to PROXY_WAKING as a special value
which specifies the task is no longer blocked, but needs to
be evaluated for return migration *before* it can be run.

This will then be used in a later patch to handle proxy
return-migration.
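The three-state idea above (not blocked / blocked on a mutex / woken but pending a return-migration check) can be sketched with a sentinel value. This is an illustrative model only, not kernel code; the names Task, can_run, wake_from_proxy and return_migrate_if_needed are invented for the sketch.

```python
# Sentinel distinct from None ("not blocked") and from any mutex object.
PROXY_WAKING = object()

class Task:
    def __init__(self, name, allowed_cpus):
        self.name = name
        self.allowed_cpus = set(allowed_cpus)
        self.blocked_on = None  # None, a mutex object, or PROXY_WAKING
        self.cpu = 0

def can_run(task):
    # Runnable only when neither blocked on a mutex nor still
    # awaiting its return-migration check.
    return task.blocked_on is None

def wake_from_proxy(task):
    # Clearing blocked_on outright would let the scheduler run the
    # task on whatever CPU proxy execution left it on; mark it
    # PROXY_WAKING so migration is evaluated first.
    task.blocked_on = PROXY_WAKING

def return_migrate_if_needed(task):
    if task.blocked_on is PROXY_WAKING:
        if task.cpu not in task.allowed_cpus:
            task.cpu = min(task.allowed_cpus)  # pick any allowed CPU
        task.blocked_on = None  # now genuinely runnable

t = Task("Task-B", allowed_cpus=[2, 3])
t.cpu = 5                # proxy execution migrated it to CPU 5
wake_from_proxy(t)
assert not can_run(t)    # must not run before the migration check
return_migrate_if_needed(t)
assert can_run(t) and t.cpu in t.allowed_cpus
```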

Signed-off-by: John Stultz &lt;jstultz@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: K Prateek Nayak &lt;kprateek.nayak@amd.com&gt;
Link: https://patch.msgid.link/20260324191337.1841376-7-jstultz@google.com
</content>
</entry>
<entry>
<title>locking: Add task::blocked_lock to serialize blocked_on state</title>
<updated>2026-04-03T12:23:39Z</updated>
<author>
<name>John Stultz</name>
<email>jstultz@google.com</email>
</author>
<published>2026-03-24T19:13:19Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=fa4a1ff8ab235a308d8c983827657a69649185fd'/>
<id>urn:sha1:fa4a1ff8ab235a308d8c983827657a69649185fd</id>
<content type='text'>
So far, we have been able to utilize the mutex::wait_lock
for serializing the blocked_on state, but when we move to
proxying across runqueues, we will need to add more state
and a way to serialize changes to this state in contexts
where we don't hold the mutex::wait_lock.

So introduce the task::blocked_lock, which nests under the
mutex::wait_lock in the locking order, and rework the locking
to use it.
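"Nests under" means task::blocked_lock may only be taken while mutex::wait_lock is already held (or alone), never the reverse. A minimal sketch of that ordering rule, with an invented runtime check (OrderedLock, LOCK_ORDER and acquire/release are illustrative, not kernel APIs):

```python
import threading

LOCK_ORDER = {"wait_lock": 0, "blocked_lock": 1}  # lower rank taken first

class OrderedLock:
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()

held = threading.local()  # per-thread list of ranks currently held

def acquire(lock):
    ranks = getattr(held, "ranks", [])
    # A new lock must rank strictly above everything already held,
    # otherwise the documented order is inverted.
    assert all(LOCK_ORDER[lock.name] > r for r in ranks), "lock order violation"
    lock._lock.acquire()
    held.ranks = ranks + [LOCK_ORDER[lock.name]]

def release(lock):
    lock._lock.release()
    held.ranks = [r for r in held.ranks if r != LOCK_ORDER[lock.name]]

wait_lock = OrderedLock("wait_lock")
blocked_lock = OrderedLock("blocked_lock")

acquire(wait_lock)      # mutex::wait_lock first...
acquire(blocked_lock)   # ...then task::blocked_lock nested under it
release(blocked_lock)
release(wait_lock)
```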

Signed-off-by: John Stultz &lt;jstultz@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: K Prateek Nayak &lt;kprateek.nayak@amd.com&gt;
Link: https://patch.msgid.link/20260324191337.1841376-5-jstultz@google.com
</content>
</entry>
<entry>
<title>locking: Add lock context annotations in the spinlock implementation</title>
<updated>2026-03-16T12:16:50Z</updated>
<author>
<name>Bart Van Assche</name>
<email>bvanassche@acm.org</email>
</author>
<published>2026-03-13T17:15:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b06e988c4c52ce8750616ea9b23c8bd3b611b931'/>
<id>urn:sha1:b06e988c4c52ce8750616ea9b23c8bd3b611b931</id>
<content type='text'>
Make the spinlock implementation compatible with lock context analysis
(CONTEXT_ANALYSIS := 1) by adding lock context annotations to the
_raw_##op##_...() macros.

Signed-off-by: Bart Van Assche &lt;bvanassche@acm.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260313171510.230998-4-bvanassche@acm.org
</content>
</entry>
<entry>
<title>locking/rwsem: Fix logic error in rwsem_del_waiter()</title>
<updated>2026-03-16T12:16:48Z</updated>
<author>
<name>Andrei Vagin</name>
<email>avagin@google.com</email>
</author>
<published>2026-03-14T18:26:07Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=68bcd8b6e0b10d902f7fc8bf3f08f335f5d1640e'/>
<id>urn:sha1:68bcd8b6e0b10d902f7fc8bf3f08f335f5d1640e</id>
<content type='text'>
Commit 1ea4b473504b ("locking/rwsem: Remove the list_head from struct
rw_semaphore") introduced a logic error in rwsem_del_waiter().

The root cause of this issue is an inconsistency in the return values of
__rwsem_del_waiter() and rwsem_del_waiter(). Specifically,
__rwsem_del_waiter() returns true when the wait list becomes empty,
whereas rwsem_del_waiter() is supposed to return true if the wait list
is NOT empty.

This caused a null pointer dereference in rwsem_mark_wake() because it
was being called when sem-&gt;first_waiter was NULL.
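The bug pattern is two helpers whose boolean results mean opposite things: the inner one answers "did the wait list become empty?", while callers of the outer one expect "is the wait list still non-empty?", so the outer helper must negate. A toy model (Python lists standing in for the waiter list; the IndexError plays the role of the NULL dereference):

```python
def __rwsem_del_waiter(waiters, waiter):
    waiters.remove(waiter)
    return len(waiters) == 0            # True when the list became empty

def rwsem_del_waiter_buggy(waiters, waiter):
    # Wrong: passes the inner result through with the same sense.
    return __rwsem_del_waiter(waiters, waiter)

def rwsem_del_waiter_fixed(waiters, waiter):
    # Correct: True means waiters remain and a wakeup pass is needed.
    return not __rwsem_del_waiter(waiters, waiter)

def mark_wake_if_waiters(waiters, del_waiter):
    w = waiters[0]
    if del_waiter(waiters, w):
        # Caller assumes a first waiter exists; the buggy version takes
        # this branch on an empty list, i.e. the NULL dereference.
        return waiters[0]
    return None
```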

Fixes: 1ea4b473504b ("locking/rwsem: Remove the list_head from struct rw_semaphore")
Reported-by: syzbot+3d2ff92c67127d337463@syzkaller.appspotmail.com
Signed-off-by: Andrei Vagin &lt;avagin@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Tested-by: syzbot+3d2ff92c67127d337463@syzkaller.appspotmail.com
Link: https://patch.msgid.link/20260314182607.3343346-1-avagin@google.com
</content>
</entry>
<entry>
<title>locking/rwsem: Add context analysis</title>
<updated>2026-03-08T10:06:53Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-03-06T09:43:56Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=739690915ce1f017223ef4e6f3cc966ccfa3c861'/>
<id>urn:sha1:739690915ce1f017223ef4e6f3cc966ccfa3c861</id>
<content type='text'>
Add compiler context analysis annotations.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260306101417.GT1282955@noisy.programming.kicks-ass.net
</content>
</entry>
<entry>
<title>locking/rtmutex: Add context analysis</title>
<updated>2026-03-08T10:06:53Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-01-20T17:17:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=90bb681dcdf7e69c90b56a18f06c0389a0810b92'/>
<id>urn:sha1:90bb681dcdf7e69c90b56a18f06c0389a0810b92</id>
<content type='text'>
Add compiler context analysis annotations.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260121111213.851599178@infradead.org
</content>
</entry>
<entry>
<title>locking/mutex: Add context analysis</title>
<updated>2026-03-08T10:06:53Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-01-20T09:06:08Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5c4326231cde36fd5e90c41e403df9fac6238f4b'/>
<id>urn:sha1:5c4326231cde36fd5e90c41e403df9fac6238f4b</id>
<content type='text'>
Add compiler context analysis annotations.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260121111213.745353747@infradead.org
</content>
</entry>
<entry>
<title>locking/mutex: Remove the list_head from struct mutex</title>
<updated>2026-03-08T10:06:52Z</updated>
<author>
<name>Matthew Wilcox (Oracle)</name>
<email>willy@infradead.org</email>
</author>
<published>2026-03-05T19:55:43Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=25500ba7e77ce9d3d9b5a1929d41a2ee2e23f6fe'/>
<id>urn:sha1:25500ba7e77ce9d3d9b5a1929d41a2ee2e23f6fe</id>
<content type='text'>
Instead of embedding a list_head in struct mutex, store a pointer to
the first waiter.  The list of waiters remains a doubly linked list so
we can efficiently add to the tail of the list and remove from the front
(or middle) of the list.

Some of the list manipulation becomes more complicated, but it's a
reasonable tradeoff on the slow paths to shrink data structures which
embed a mutex like struct file.

Some of the debug checks have to be deleted because there's no equivalent
to checking them in the new scheme (eg an empty waiter-&gt;list now means
that it is the only waiter, not that the waiter is no longer on the list).
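One way such a scheme can work (an assumption for illustration, not the actual kernel list code): keep only the first-waiter pointer and let the first node's prev point at the last node, so tail insertion stays O(1), while the last node's next is None. The extra branches in del_waiter show the "more complicated" manipulation the message mentions, including why an empty waiter list check no longer exists per-node:

```python
class Waiter:
    def __init__(self, name):
        self.name = name
        self.next = None
        self.prev = None

class Mutex:
    def __init__(self):
        self.first_waiter = None  # replaces the embedded list_head

    def add_waiter_tail(self, w):
        if self.first_waiter is None:
            self.first_waiter = w
            w.prev = w              # sole node: first.prev is the last node
            return
        last = self.first_waiter.prev
        last.next = w
        w.prev = last
        self.first_waiter.prev = w  # first.prev always tracks the tail

    def del_waiter(self, w):
        if w is self.first_waiter:
            nxt = w.next
            if nxt is not None:
                nxt.prev = w.prev   # new first inherits the tail pointer
            self.first_waiter = nxt
        else:
            w.prev.next = w.next
            # Fix prev of the successor; when w was last, that is the
            # first node's tail pointer instead.
            target = w.next if w.next is not None else self.first_waiter
            target.prev = w.prev
```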

Signed-off-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260305195545.3707590-4-willy@infradead.org
</content>
</entry>
<entry>
<title>locking/semaphore: Remove the list_head from struct semaphore</title>
<updated>2026-03-08T10:06:52Z</updated>
<author>
<name>Matthew Wilcox (Oracle)</name>
<email>willy@infradead.org</email>
</author>
<published>2026-03-05T19:55:42Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b9bdd4b6840454ef87f61b6506c9635c57a81650'/>
<id>urn:sha1:b9bdd4b6840454ef87f61b6506c9635c57a81650</id>
<content type='text'>
Instead of embedding a list_head in struct semaphore, store a pointer to
the first waiter.  The list of waiters remains a doubly linked list so
we can efficiently add to the tail of the list and remove from the front
(or middle) of the list.

Some of the list manipulation becomes more complicated, but it's a
reasonable tradeoff on the slow paths to shrink data structures
which embed a semaphore.

Signed-off-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260305195545.3707590-3-willy@infradead.org
</content>
</entry>
</feed>
