<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel/rcu/tree.h, branch v5.4.55</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.4.55</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.4.55'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2019-08-13T21:38:24Z</updated>
<entry>
<title>rcu/nocb: Print no-CBs diagnostics when rcutorture writer unduly delayed</title>
<updated>2019-08-13T21:38:24Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-06-25T20:32:51Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f7a81b12d6af42a9d09be1e5f041169f04b0b67a'/>
<id>urn:sha1:f7a81b12d6af42a9d09be1e5f041169f04b0b67a</id>
<content type='text'>
This commit causes locking, sleeping, and callback state to be printed
for no-CBs CPUs when the rcutorture writer is delayed sufficiently for
rcutorture to complain.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Add bypass callback queueing</title>
<updated>2019-08-13T21:37:32Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-07-02T23:03:33Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d1b222c6be1f8bfc77099e034219732ecaeaaf96'/>
<id>urn:sha1:d1b222c6be1f8bfc77099e034219732ecaeaaf96</id>
<content type='text'>
Use of the rcu_data structure's segmented -&gt;cblist for no-CBs CPUs
takes advantage of unrelated grace periods, thus reducing the memory
footprint in the face of floods of call_rcu() invocations.  However,
the -&gt;cblist field is a more-complex rcu_segcblist structure which must
be protected via locking.  Even though there are only three entities
which can acquire this lock (the CPU invoking call_rcu(), the no-CBs
grace-period kthread, and the no-CBs callbacks kthread), the contention
on this lock is excessive under heavy stress.

This commit therefore greatly reduces contention by provisioning
an rcu_cblist structure field named -&gt;nocb_bypass within the
rcu_data structure.  Each no-CBs CPU is permitted only a limited
number of enqueues onto the -&gt;cblist per jiffy, controlled by a new
nocb_nobypass_lim_per_jiffy kernel boot parameter that defaults to
about 16 enqueues per millisecond (16 * 1000 / HZ).  When that limit is
exceeded, the CPU instead enqueues onto the new -&gt;nocb_bypass.

The -&gt;nocb_bypass is flushed into the -&gt;cblist every jiffy or when
the number of callbacks on -&gt;nocb_bypass exceeds qhimark, whichever
happens first.  During call_rcu() floods, this flushing is carried out
by the CPU during the course of its call_rcu() invocations.  However,
a CPU could simply stop invoking call_rcu() at any time.  The no-CBs
grace-period kthread therefore carries out less-aggressive flushing
(every few jiffies or when the number of callbacks on -&gt;nocb_bypass
exceeds (2 * qhimark), whichever comes first).  This means that the
no-CBs grace-period kthread cannot be permitted to do unbounded waits
while there are callbacks on -&gt;nocb_bypass.  A -&gt;nocb_bypass_timer is
used to provide the needed wakeups.
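
The per-jiffy limit and the flush thresholds described above can be
sketched as a small userspace model.  This is an illustrative
assumption, not the kernel implementation: the struct, function names,
and the HZ and qhimark values here are invented for the sketch; only
nocb_nobypass_lim_per_jiffy and qhimark correspond to real parameters.

```c
/* Toy model of the per-jiffy bypass decision; illustrative only. */
enum { HZ = 1000, QHIMARK = 10000 };
static const int nocb_nobypass_lim_per_jiffy = 16 * 1000 / HZ;

struct cpu_model {
	unsigned long last_jiffy;	/* jiffy of most recent enqueue */
	int ncalls_this_jiffy;		/* enqueues seen in that jiffy */
	int nbypass;			/* callbacks on the bypass list */
};

/* Return 1 if this enqueue should go to the bypass, 0 for the cblist. */
static int use_bypass(struct cpu_model *c, unsigned long now)
{
	if (now != c->last_jiffy) {	/* new jiffy: reset the counter */
		c->last_jiffy = now;
		c->ncalls_this_jiffy = 0;
	}
	c->ncalls_this_jiffy++;
	return c->ncalls_this_jiffy > nocb_nobypass_lim_per_jiffy;
}

/* The bypass is flushed every jiffy or when qhimark is exceeded. */
static int must_flush(const struct cpu_model *c, unsigned long now)
{
	return c->nbypass > QHIMARK || now != c->last_jiffy;
}
```

With HZ=1000 the limit works out to 16 enqueues per jiffy, so the
seventeenth call_rcu() within one jiffy is diverted to the bypass.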

[ paulmck: Apply Coverity feedback reported by Colin Ian King. ]
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Reduce -&gt;nocb_lock contention with separate -&gt;nocb_gp_lock</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-06-02T20:41:08Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4fd8c5f153bc41ae847b9ddb1539b34f70c18278'/>
<id>urn:sha1:4fd8c5f153bc41ae847b9ddb1539b34f70c18278</id>
<content type='text'>
Each no-CBs grace-period kthread's sleep/wakeup is synchronized
using the -&gt;nocb_lock of the first CPU corresponding to that kthread.
This commit provides a separate -&gt;nocb_gp_lock for this purpose, thus
reducing contention on -&gt;nocb_lock.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Avoid -&gt;nocb_lock capture by corresponding CPU</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-28T14:18:08Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=81c0b3d724f419c0524f432c1ac22b9f518c2899'/>
<id>urn:sha1:81c0b3d724f419c0524f432c1ac22b9f518c2899</id>
<content type='text'>
A given rcu_data structure's -&gt;nocb_lock can be acquired very frequently
by the corresponding CPU and occasionally by the corresponding no-CBs
grace-period and callbacks kthreads.  In particular, these two kthreads
will have frequent gaps between -&gt;nocb_lock acquisitions that are roughly
a grace period in duration.  This means that any excessive -&gt;nocb_lock
contention will be due to the CPU's acquisitions, and this in turn
enables a very naive contention-avoidance strategy to be quite effective.

This commit therefore modifies rcu_nocb_lock() to first attempt a
raw_spin_trylock() and, upon failure, to atomically increment a
separate -&gt;nocb_lock_contended across a raw_spin_lock().  This new
-&gt;nocb_lock_contended field is checked in __call_rcu_nocb_wake() when
interrupts are enabled, with a spin-wait for contending acquisitions
to complete, thus allowing the kthreads a chance to acquire the lock.
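
The trylock-first strategy above can be sketched as a single-threaded
userspace model.  The struct and function names here are illustrative
assumptions; only -&gt;nocb_lock_contended corresponds to the real field,
and the real code uses a raw spinlock and atomics rather than plain ints.

```c
/* Toy model of the contention-avoiding lock; illustrative only. */
struct nocb_model {
	int locked;		/* stands in for the raw spinlock */
	int lock_contended;	/* stands in for nocb_lock_contended */
};

static int try_lock(struct nocb_model *m)	/* trylock stand-in */
{
	if (m->locked)
		return 0;
	m->locked = 1;
	return 1;
}

static void nocb_lock(struct nocb_model *m)
{
	if (try_lock(m))
		return;			/* fast path: uncontended */
	m->lock_contended++;		/* advertise contention */
	while (!try_lock(m))
		;			/* slow path: spin until free
					   (never reached in this
					   single-threaded model when
					   the lock is already held) */
	m->lock_contended--;
}
```

The point of the counter is that __call_rcu_nocb_wake() can observe it
and back off, giving the kthreads a window to take the lock.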

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Remove obsolete nocb_gp_head and nocb_gp_tail fields</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-21T16:20:10Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4f9c1bc727f917c8c32ee1decc88e89057e0dffc'/>
<id>urn:sha1:4f9c1bc727f917c8c32ee1decc88e89057e0dffc</id>
<content type='text'>
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Remove obsolete nocb_cb_tail and nocb_cb_head fields</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-21T16:10:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2a777de757f4c7050997c6232a585eff59c5ea36'/>
<id>urn:sha1:2a777de757f4c7050997c6232a585eff59c5ea36</id>
<content type='text'>
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Remove obsolete nocb_q_count and nocb_q_count_lazy fields</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-21T15:28:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c035280f1761b3336f4dad336906c19735d7ba5f'/>
<id>urn:sha1:c035280f1761b3336f4dad336906c19735d7ba5f</id>
<content type='text'>
This commit removes the obsolete nocb_q_count and nocb_q_count_lazy
fields, also removing rcu_get_n_cbs_nocb_cpu(), adjusting
rcu_get_n_cbs_cpu(), and making rcutree_migrate_callbacks() once again
disable the -&gt;cblist fields of offline CPUs.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Remove obsolete nocb_head and nocb_tail fields</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-21T14:18:00Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e7f4c5b3998a3cf1bd8dbf110948075b47ac9b78'/>
<id>urn:sha1:e7f4c5b3998a3cf1bd8dbf110948075b47ac9b78</id>
<content type='text'>
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Use rcu_segcblist for no-CBs CPUs</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-15T16:56:40Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5d6742b37727e111f4755155e59c5319cf5caa7b'/>
<id>urn:sha1:5d6742b37727e111f4755155e59c5319cf5caa7b</id>
<content type='text'>
Currently the RCU callbacks for no-CBs CPUs are queued on a series of
ad-hoc linked lists, which means that these callbacks cannot benefit
from "drive-by" grace periods, thus suffering needless delays prior
to invocation.  In addition, the no-CBs grace-period kthreads first
wait for callbacks to appear and later wait for a new grace period,
which means that callbacks appearing during a grace-period wait can
be delayed.  These delays increase memory footprint, and could even
result in an out-of-memory condition.

This commit therefore enqueues RCU callbacks from no-CBs CPUs on the
rcu_segcblist structure that is already used by non-no-CBs CPUs.  It also
restructures the no-CBs grace-period kthread to check for incoming
callbacks while waiting for grace periods.  In addition, instead of waiting
for a new grace period, it waits for the closest grace period that will
cause some of the callbacks to be safe to invoke.  All of these changes
reduce callback latency and thus the number of outstanding callbacks,
in turn reducing the probability of an out-of-memory condition.
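
The segment mechanics that let queued callbacks benefit from in-flight
grace periods can be sketched as a toy model.  The segment names below
follow the kernel's rcu_segcblist; the per-segment counts and the
one-step-per-grace-period advance are illustrative simplifications.

```c
/* Toy model of a segmented callback list; illustrative only. */
enum seg {
	RCU_DONE_TAIL,		/* invokable now */
	RCU_WAIT_TAIL,		/* waiting on the current grace period */
	RCU_NEXT_READY_TAIL,	/* waiting on the next grace period */
	RCU_NEXT_TAIL,		/* not yet associated with any GP */
	RCU_CBLIST_NSEGS
};

struct seglist { int n[RCU_CBLIST_NSEGS]; };

/* call_rcu(): new callbacks always enter the NEXT segment. */
static void enqueue(struct seglist *s, int ncbs)
{
	s->n[RCU_NEXT_TAIL] += ncbs;
}

/* A grace period completed: every segment's callbacks move one step
 * closer to invocation, so even unrelated grace periods make progress
 * for already-queued callbacks. */
static void advance(struct seglist *s)
{
	s->n[RCU_DONE_TAIL] += s->n[RCU_WAIT_TAIL];
	s->n[RCU_WAIT_TAIL] = s->n[RCU_NEXT_READY_TAIL];
	s->n[RCU_NEXT_READY_TAIL] = s->n[RCU_NEXT_TAIL];
	s->n[RCU_NEXT_TAIL] = 0;
}
```

With ad-hoc lists, callbacks had to wait for a grace period started on
their behalf; with numbered segments, each completed grace period
advances whatever is queued.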

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu/nocb: Leave -&gt;cblist enabled for no-CBs CPUs</title>
<updated>2019-08-13T21:35:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2019-05-14T16:50:49Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e83e73f5b0f8de6a8978ba64185e80fdf48a2a63'/>
<id>urn:sha1:e83e73f5b0f8de6a8978ba64185e80fdf48a2a63</id>
<content type='text'>
As a first step towards making no-CBs CPUs use the -&gt;cblist, this commit
leaves the -&gt;cblist enabled for these CPUs.  The main reason to make
no-CBs CPUs use -&gt;cblist is to take advantage of callback numbering,
which will reduce the effects of missed grace periods, which in turn
will reduce forward-progress problems for no-CBs CPUs.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
</content>
</entry>
</feed>
