<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/lib/percpu-refcount.c, branch v5.0.12</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.0.12</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.0.12'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2018-11-27T17:21:45Z</updated>
<entry>
<title>percpu-refcount: Replace call_rcu_sched() with call_rcu()</title>
<updated>2018-11-27T17:21:45Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.ibm.com</email>
</author>
<published>2018-11-07T03:22:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=36bd1a8e91c66e9def12958547548aa549de9cbf'/>
<id>urn:sha1:36bd1a8e91c66e9def12958547548aa549de9cbf</id>
<content type='text'>
Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched().  This commit therefore makes that change.
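
A minimal sketch of the shape of the change (the callback named below is
the one this file already uses for the percpu -&gt; atomic switch):

    /* before: deferred only via sched-RCU */
    call_rcu_sched(&amp;ref-&gt;rcu, percpu_ref_switch_to_atomic_rcu);

    /* after: consolidated RCU now provides the same guarantee */
    call_rcu(&amp;ref-&gt;rcu, percpu_ref_switch_to_atomic_rcu);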

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
Cc: Ming Lei &lt;ming.lei@redhat.com&gt;
Cc: Bart Van Assche &lt;bvanassche@acm.org&gt;
Cc: Jens Axboe &lt;axboe@kernel.dk&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>percpu-refcount: Introduce percpu_ref_resurrect()</title>
<updated>2018-09-26T21:11:29Z</updated>
<author>
<name>Bart Van Assche</name>
<email>bvanassche@acm.org</email>
</author>
<published>2018-09-26T21:01:07Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=18c9a6bbe0645a05172a900740b9d2d379d54320'/>
<id>urn:sha1:18c9a6bbe0645a05172a900740b9d2d379d54320</id>
<content type='text'>
This function will be used in a later patch to switch the struct
request_queue q_usage_counter from killed back to live. In contrast
to percpu_ref_reinit(), this new function does not require that the
refcount is zero.
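
A rough usage sketch (ref here is an arbitrary struct percpu_ref pointer,
not the later q_usage_counter change itself):

    percpu_ref_kill(ref);        /* switch to atomic mode and start draining */
    /* ... later, decide to undo the kill while references are still held ... */
    percpu_ref_resurrect(ref);   /* back to a live ref; unlike
                                  * percpu_ref_reinit(), the refcount does
                                  * not have to be zero */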

Signed-off-by: Bart Van Assche &lt;bvanassche@acm.org&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
Reviewed-by: Ming Lei &lt;ming.lei@redhat.com&gt;
Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Jianchao Wang &lt;jianchao.w.wang@oracle.com&gt;
Cc: Hannes Reinecke &lt;hare@suse.com&gt;
Cc: Johannes Thumshirn &lt;jthumshirn@suse.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>percpu_ref: Update doc to dissuade users from depending on internal RCU grace periods</title>
<updated>2018-03-19T17:09:44Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2018-03-14T19:45:12Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b3a5d111994450909158929560906f2c1c6c1d85'/>
<id>urn:sha1:b3a5d111994450909158929560906f2c1c6c1d85</id>
<content type='text'>
percpu_ref internally uses sched-RCU to implement the percpu -&gt; atomic
mode switching and the documentation suggested that this could be
depended upon.  This doesn't seem like a good idea.

* percpu_ref uses sched-RCU, which has different grace periods than
  regular RCU.  Users may combine percpu_ref with regular RCU usage and
  incorrectly believe that regular RCU grace periods are performed by
  percpu_ref.  This can lead to, for example, use-after-free due to
  premature freeing.

* percpu_ref has a grace period when switching from percpu to atomic
  mode.  It doesn't have one between the last put and release.  This
  distinction is subtle and can lead to surprising bugs.

* percpu_ref allows starting in and switching to atomic mode manually
  for debugging and other purposes.  This means that there may not be
  any grace periods from kill to release.

This patch makes it clear that the grace periods are percpu_ref's
internal implementation detail and can't be depended upon by the
users.
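
For example (a sketch; struct obj and obj_free_rcu() are hypothetical
caller-side names), a release callback that frees an RCU-protected object
must provide its own grace period instead of relying on percpu_ref's
internal one:

    static void obj_release(struct percpu_ref *ref)
    {
        struct obj *obj = container_of(ref, struct obj, ref);

        /* readers may still hold rcu_read_lock() references to obj */
        call_rcu(&amp;obj-&gt;rcu_head, obj_free_rcu);
    }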

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Kent Overstreet &lt;kent.overstreet@gmail.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>percpu: READ_ONCE() now implies smp_read_barrier_depends()</title>
<updated>2017-12-04T18:52:53Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2017-10-09T17:20:44Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b393e8b33efd2ee08576ceddc10c2b4bfb3b5435'/>
<id>urn:sha1:b393e8b33efd2ee08576ceddc10c2b4bfb3b5435</id>
<content type='text'>
Because READ_ONCE() now implies smp_read_barrier_depends(), this commit
removes the now-redundant smp_read_barrier_depends() following the
READ_ONCE() in __ref_is_percpu().
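
The affected pattern, simplified from __ref_is_percpu():

    percpu_ptr = READ_ONCE(ref-&gt;percpu_count_ptr);
    /* the smp_read_barrier_depends() that used to follow is no longer
     * needed: READ_ONCE() now orders the dependent accesses itself */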

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
</content>
</entry>
<entry>
<title>percpu-refcount: support synchronous switch to atomic mode.</title>
<updated>2017-03-23T02:18:43Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.com</email>
</author>
<published>2017-03-15T03:05:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=210f7cdcf088c304ee0533ffd33d6f71a8821862'/>
<id>urn:sha1:210f7cdcf088c304ee0533ffd33d6f71a8821862</id>
<content type='text'>
percpu_ref_switch_to_atomic_sync() schedules the switch to atomic mode, then
waits for it to complete.

Also export percpu_ref_switch_to_* so they can be used from modules.

This will be used in md/raid to count the number of pending write
requests to an array.
We occasionally need to check if the count is zero, but most often
we don't care.
We always want updates to the counter to be fast, as in some cases
we count every 4K page.
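
A rough sketch of that usage pattern (writes_pending stands in for the md
counter; the surrounding code is illustrative only):

    bool quiesced;

    /* fast path: per-CPU get/put around each write */
    percpu_ref_get(&amp;writes_pending);
    /* ... submit the write ... */
    percpu_ref_put(&amp;writes_pending);

    /* slow path: when we actually need to know whether the count is zero */
    percpu_ref_switch_to_atomic_sync(&amp;writes_pending);
    quiesced = percpu_ref_is_zero(&amp;writes_pending);
    percpu_ref_switch_to_percpu(&amp;writes_pending);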

Signed-off-by: NeilBrown &lt;neilb@suse.com&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Shaohua Li &lt;shli@fb.com&gt;
</content>
</entry>
<entry>
<title>percpu-refcount: init -&gt;confirm_switch member properly</title>
<updated>2016-08-11T17:52:23Z</updated>
<author>
<name>Roman Pen</name>
<email>roman.penyaev@profitbricks.com</email>
</author>
<published>2016-08-11T17:27:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a67823c1ed1092160da94c31e6da5aeb35dca81c'/>
<id>urn:sha1:a67823c1ed1092160da94c31e6da5aeb35dca81c</id>
<content type='text'>
This patch targets two things which are related to -&gt;confirm_switch:

 1. Init the -&gt;confirm_switch pointer with NULL in percpu_ref_init();
    otherwise the kernel complains with WARN_ON_ONCE(ref-&gt;confirm_switch)
    in __percpu_ref_switch_to_atomic() if the memory chunk was not
    properly zeroed.

 2. Warn if an RCU callback is still in progress on percpu_ref_exit().
    The race still exists, because percpu_ref_call_confirm_rcu()
    drops -&gt;confirm_switch to NULL early, but this is only a warning
    and the caller remains responsible for ensuring the ref is no
    longer in active use.  Hopefully this helps catch incorrect usage
    of percpu-refcount.
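
The two changes, in sketch form:

    /* 1. in percpu_ref_init(): don't trust the chunk to be zeroed */
    ref-&gt;confirm_switch = NULL;

    /* 2. in percpu_ref_exit(): catch callers exiting with a switch in flight */
    WARN_ON_ONCE(ref-&gt;confirm_switch);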

Signed-off-by: Roman Pen &lt;roman.penyaev@profitbricks.com&gt;
Cc: Tejun Heo &lt;tj@kernel.org&gt;
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>percpu_ref: allow operation mode switching operations to be called concurrently</title>
<updated>2016-08-10T19:02:58Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2015-09-29T21:47:20Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=33e465ce7cb30b71c113a26f36d293b545a28e12'/>
<id>urn:sha1:33e465ce7cb30b71c113a26f36d293b545a28e12</id>
<content type='text'>
percpu_ref initially didn't have explicit mode switching operations.
It started out in percpu mode and switched to atomic mode on kill and
then released.  Ensuring that the kill operation is initiated only
after init completes was naturally the caller's responsibility.

percpu_ref_reinit() was introduced later but it didn't shift the
synchronization responsibility.  Reinit can't be performed until kill
is confirmed, so there was nothing to worry about
synchronization-wise.  Also, as both reinit and kill manipulate the
base reference, invocations of the same function couldn't be allowed
to race each other.

The latest additions of percpu_ref_switch_to_atomic/percpu() changed
the situation.  These two functions can be called any time as long as
the percpu_ref is between init and exit, and thus there are valid
usage scenarios where these new functions race with each other or
against reinit/kill.  Mostly from inertia, f47ad4578461 ("percpu_ref:
decouple switching to percpu mode and reinit") still left
synchronization among percpu mode switching operations to its users.

That the new switch functions can be freely mixed with kill/reinit but
the operations themselves should be synchronized is too subtle a
requirement and led to a very subtle race condition in the blk-mq
freezing path.

This patch fixes the situation by introducing percpu_ref_switch_lock
to protect mode switching operations.  This ensures that percpu-ref
users don't have to worry about mode changing operations racing
against each other, e.g. switch_to_percpu against kill, as long as the
sequence of operations is valid.
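
In shape, each mode-changing entry point now serializes on the new lock,
roughly like this (a sketch of one entry point, using this file's existing
names):

    static DEFINE_SPINLOCK(percpu_ref_switch_lock);

    void percpu_ref_switch_to_atomic(struct percpu_ref *ref,
                                     percpu_ref_func_t *confirm_switch)
    {
        unsigned long flags;

        spin_lock_irqsave(&amp;percpu_ref_switch_lock, flags);
        ref-&gt;force_atomic = true;
        __percpu_ref_switch_mode(ref, confirm_switch);
        spin_unlock_irqrestore(&amp;percpu_ref_switch_lock, flags);
    }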

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: Akinobu Mita &lt;akinobu.mita@gmail.com&gt;
Link: http://lkml.kernel.org/g/1443287365-4244-7-git-send-email-akinobu.mita@gmail.com
Fixes: f47ad4578461 ("percpu_ref: decouple switching to percpu mode and reinit")
</content>
</entry>
<entry>
<title>percpu_ref: restructure operation mode switching</title>
<updated>2016-08-10T19:02:58Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2015-09-29T21:47:19Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3f49bdd95855a33eea749304d2e10530a869218b'/>
<id>urn:sha1:3f49bdd95855a33eea749304d2e10530a869218b</id>
<content type='text'>
Restructure atomic/percpu mode switching.

* The users of __percpu_ref_switch_to_atomic/percpu() now call a new
  function __percpu_ref_switch_mode() which calls either of the
  original switching functions depending on the current state of
  ref-&gt;force_atomic and the __PERCPU_REF_DEAD flag.  The callers no
  longer check whether switching is necessary but always invoke
  __percpu_ref_switch_mode().

* !ref-&gt;confirm_switch waiting is collected into
  __percpu_ref_switch_mode().

This patch doesn't cause any behavior differences.
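
For reference, the resulting dispatch helper looks roughly like this (a
sketch using this file's existing field and flag names):

    static void __percpu_ref_switch_mode(struct percpu_ref *ref,
                                         percpu_ref_func_t *confirm_switch)
    {
        /* the collected !ref-&gt;confirm_switch waiting */
        wait_event(percpu_ref_switch_waitq, !ref-&gt;confirm_switch);

        if (ref-&gt;force_atomic || (ref-&gt;percpu_count_ptr &amp; __PERCPU_REF_DEAD))
            __percpu_ref_switch_to_atomic(ref, confirm_switch);
        else
            __percpu_ref_switch_to_percpu(ref);
    }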

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>percpu_ref: unify staggered atomic switching wait behavior</title>
<updated>2016-08-10T19:02:58Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2015-09-29T21:47:18Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=18808354b79622ed11857e41f9044ba17aec5b01'/>
<id>urn:sha1:18808354b79622ed11857e41f9044ba17aec5b01</id>
<content type='text'>
When an atomic or percpu switching starts before the previous atomic
switching finishes, the behavior is as follows:

* If the new atomic switching has confirmation callback, it waits
  for the previous atomic switching to complete.

* If the new percpu switching is the first percpu switching following
  the previous atomic switching, it waits for the previous atomic
  switching to complete.

No percpu_ref user depends on these subtleties.  The only meaningful
part is that, if the caller ensures that atomic switching isn't in
progress, mode switching operations can be issued from any context.

This patch pulls the wait logic to the top of both switching functions
so that they always wait for the previous atomic switching to
complete.  This makes the behavior simpler and consistent for both
directions and will help allowing concurrent invocations of mode
switching functions.
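
Roughly, both __percpu_ref_switch_to_atomic() and
__percpu_ref_switch_to_percpu() now begin with the same wait:

    /* always wait for the previous atomic switching to complete */
    wait_event(percpu_ref_switch_waitq, !ref-&gt;confirm_switch);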

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>percpu_ref: reorganize __percpu_ref_switch_to_atomic() and relocate percpu_ref_switch_to_atomic()</title>
<updated>2016-08-10T19:02:58Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2015-09-29T21:47:17Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b2302c7fdc654d249c546aac6228b8e10969bc1e'/>
<id>urn:sha1:b2302c7fdc654d249c546aac6228b8e10969bc1e</id>
<content type='text'>
Reorganize __percpu_ref_switch_to_atomic() so that it looks
structurally similar to __percpu_ref_switch_to_percpu() and relocate
percpu_ref_switch_to_atomic() so that the two internal functions are
co-located.

This patch doesn't introduce any functional differences.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
</feed>
