<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel/rcutree.c, branch v3.0.50</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.50</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.0.50'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2012-10-12T20:28:11Z</updated>
<entry>
<title>rcu: Fix day-one dyntick-idle stall-warning bug</title>
<updated>2012-10-12T20:28:11Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2012-09-22T20:55:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3f6ea7b4b5adbb6ee9271d48dd63dd98645e505b'/>
<id>urn:sha1:3f6ea7b4b5adbb6ee9271d48dd63dd98645e505b</id>
<content type='text'>
commit a10d206ef1a83121ab7430cb196e0376a7145b22 upstream.

Each grace period is supposed to have at least one callback waiting
for that grace period to complete.  However, if CONFIG_NO_HZ=n, an
extra callback-free grace period is no big problem -- it will chew up
a tiny bit of CPU time, but it will complete normally.  In contrast,
CONFIG_NO_HZ=y kernels have the potential for all the CPUs to go to
sleep indefinitely, in turn indefinitely delaying completion of the
callback-free grace period.  Given that nothing is waiting on this grace
period, this is also not a problem.

That is, unless RCU CPU stall warnings are also enabled, as they are
in recent kernels.  In this case, if a CPU wakes up after at least one
minute of inactivity, an RCU CPU stall warning will result.  The reason
that no one noticed until quite recently is that most systems have enough
OS noise that they will never remain absolutely idle for a full minute.
But there are some embedded systems with cut-down userspace configurations
that consistently get into this situation.

All this raises the question of exactly how a callback-free grace period
gets started in the first place.  This can happen due to the fact that
CPUs do not necessarily agree on which grace period is in progress.
If a CPU believes that the grace period that just completed is still
ongoing, it will conclude that it has callbacks that need to wait for
another grace period, never mind that the grace period they were
waiting for has just completed.  This CPU can therefore erroneously
decide to start a new grace period.  Note that this can happen in
TREE_RCU and TREE_PREEMPT_RCU even on a single-CPU system:  Deadlock
considerations mean that the CPU that detected the end of the grace
period is not necessarily officially informed of this fact for some time.

Once this CPU notices that the earlier grace period completed, it will
invoke its callbacks.  It then won't have any callbacks left.  If no
other CPU has any callbacks, we now have a callback-free grace period.

This commit therefore makes CPUs check more carefully before starting a
new grace period.  This new check relies on an array of tail pointers
into each CPU's list of callbacks.  If the CPU is up to date on which
grace periods have completed, it checks to see if any callbacks follow
the RCU_DONE_TAIL segment, otherwise it checks to see if any callbacks
follow the RCU_WAIT_TAIL segment.  The reason that this works is that
the RCU_WAIT_TAIL segment will be promoted to the RCU_DONE_TAIL segment
as soon as the CPU is officially notified that the old grace period
has ended.
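
The check described above can be sketched in miniature (Python here for
brevity; the kernel code is C).  The segment names mirror the kernel's,
but the data layout is a simplification: the kernel keeps one linked
list per CPU with tail pointers into it, whereas this sketch uses one
list per segment, and it omits the companion check for a grace period
already being in progress.

```python
# Simplified model of the per-CPU segmented callback list.
RCU_DONE_TAIL = 0        # callbacks ready to invoke
RCU_WAIT_TAIL = 1        # callbacks waiting for the current grace period
RCU_NEXT_READY_TAIL = 2  # callbacks waiting for the next grace period
RCU_NEXT_TAIL = 3        # newly registered callbacks

class CpuState:
    def __init__(self):
        # One list per segment; the kernel uses a single linked list
        # with an array of tail pointers instead.
        self.segments = [[] for _ in range(4)]
        self.completed = 0  # last grace period this CPU saw complete

def callbacks_after(cpu, seg):
    """Are there any callbacks in segments following 'seg'?"""
    return any(cpu.segments[i] for i in range(seg + 1, 4))

def cpu_needs_another_gp(cpu, global_completed):
    """The more careful check: if this CPU is up to date on which
    grace periods have completed, look past RCU_DONE_TAIL; otherwise
    look past RCU_WAIT_TAIL, because RCU_WAIT_TAIL will be promoted
    to RCU_DONE_TAIL as soon as the CPU is officially notified that
    the old grace period ended."""
    if cpu.completed == global_completed:
        return callbacks_after(cpu, RCU_DONE_TAIL)
    return callbacks_after(cpu, RCU_WAIT_TAIL)

cpu = CpuState()
cpu.completed = 4
assert not cpu_needs_another_gp(cpu, 4)       # no callbacks at all
cpu.segments[RCU_WAIT_TAIL].append("cb")      # one cb awaiting current GP
assert cpu_needs_another_gp(cpu, 4)           # up-to-date view: needs a GP
assert not cpu_needs_another_gp(cpu, 5)       # stale view: do not start one
```

With an up-to-date view, a callback anywhere past the DONE segment
justifies a new grace period; with a stale view, only callbacks past the
WAIT segment do, which is exactly what prevents the spurious
callback-free grace period.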

This change is to cpu_needs_another_gp(), which is called in a number
of places.  The only one that really matters is in rcu_start_gp(), where
the root rcu_node structure's -&gt;lock is held, which prevents any
other CPU from starting or completing a grace period, so that the
comparison that determines whether the CPU is missing the completion
of a grace period is stable.

Reported-by: Becky Bruce &lt;bgillbruce@gmail.com&gt;
Reported-by: Subodh Nijsure &lt;snijsure@grid-net.com&gt;
Reported-by: Paul Walmsley &lt;paul@pwsan.com&gt;
Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Paul Walmsley &lt;paul@pwsan.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>rcu: Prevent RCU callbacks from executing before scheduler initialized</title>
<updated>2011-07-13T15:17:56Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2011-07-10T22:57:35Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b0d304172f49061b4ff78f9e2b02719ac69c8a7e'/>
<id>urn:sha1:b0d304172f49061b4ff78f9e2b02719ac69c8a7e</id>
<content type='text'>
Under some rare but real combinations of configuration parameters, RCU
callbacks are posted during early boot that use kernel facilities that
are not yet initialized.  Therefore, when these callbacks are invoked,
hard hangs and crashes ensue.  This commit therefore prevents RCU
callbacks from being invoked until after the scheduler is fully up and
running -- that is, after multiple tasks have been spawned.

It might well turn out that a better approach is to identify the specific
RCU callbacks that are causing this problem, but that discussion will
wait until such time as someone really needs an RCU callback to be invoked
(as opposed to merely registered) during early boot.

Reported-by: julie Sullivan &lt;kernelmail.jms@gmail.com&gt;
Reported-by: RKK &lt;kulkarni.ravi4@gmail.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Tested-by: julie Sullivan &lt;kernelmail.jms@gmail.com&gt;
Tested-by: RKK &lt;kulkarni.ravi4@gmail.com&gt;
</content>
</entry>
<entry>
<title>rcu: Move RCU_BOOST #ifdefs to header file</title>
<updated>2011-06-16T23:12:05Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2011-06-16T15:26:32Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f8b7fc6b514f34a51875dd48dff70d4d17a54f38'/>
<id>urn:sha1:f8b7fc6b514f34a51875dd48dff70d4d17a54f38</id>
<content type='text'>
The commit "use softirq instead of kthreads except when RCU_BOOST=y"
just applied #ifdef in place.  This commit is a cleanup that moves
the newly #ifdef'ed code to the header file kernel/rcutree_plugin.h.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: use softirq instead of kthreads except when RCU_BOOST=y</title>
<updated>2011-06-16T06:07:21Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2011-06-15T22:47:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a46e0899eec7a3069bcadd45dfba7bf67c6ed016'/>
<id>urn:sha1:a46e0899eec7a3069bcadd45dfba7bf67c6ed016</id>
<content type='text'>
This patch #ifdefs RCU kthreads out of the kernel unless RCU_BOOST=y,
thus eliminating context-switch overhead if RCU priority boosting has
not been configured.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Use softirq to address performance regression</title>
<updated>2011-06-14T22:25:39Z</updated>
<author>
<name>Shaohua Li</name>
<email>shaohua.li@intel.com</email>
</author>
<published>2011-06-14T05:26:25Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=09223371deac67d08ca0b70bd18787920284c967'/>
<id>urn:sha1:09223371deac67d08ca0b70bd18787920284c967</id>
<content type='text'>
Commit a26ac2455ffcf3 (rcu: move TREE_RCU from softirq to kthread)
introduced a performance regression. In an AIM7 test, this commit
degraded performance by about 40%.

The commit runs rcu callbacks in a kthread instead of softirq. We observed
a high rate of context switches caused by this. Our test system has
64 CPUs and HZ is 1000, so we saw more than 64k context switches per
second caused by RCU's per-CPU kthreads.  A trace showed that most of
the time the RCU per-CPU kthread doesn't actually handle any callbacks,
but instead just does a very small amount of work handling grace periods.
This means that RCU's per-CPU kthreads are making the scheduler do quite
a bit of work in order to allow a very small amount of RCU-related
processing to be done.

Alex Shi's analysis determined that this slowdown is due to lock
contention within the scheduler.  Unfortunately, as Peter Zijlstra points
out, the scheduler's real-time semantics require global action, which
means that this contention is inherent in real-time scheduling.  (Yes,
perhaps someone will come up with a workaround -- otherwise, -rt is not
going to do well on large SMP systems -- but this patch will work around
this issue in the meantime.  And "the meantime" might well be forever.)

This patch therefore re-introduces softirq processing to RCU, but only
for core RCU work.  RCU callbacks are still executed in kthread context,
so that only a small amount of RCU work runs in softirq context in the
common case.  This should minimize ksoftirqd execution, allowing us to
skip boosting of ksoftirqd for CONFIG_RCU_BOOST=y kernels.

Signed-off-by: Shaohua Li &lt;shaohua.li@intel.com&gt;
Tested-by: "Alex,Shi" &lt;alex.shi@intel.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Simplify curing of load woes</title>
<updated>2011-06-14T22:25:15Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2011-05-31T03:38:55Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9a432736904d386cda28b987b38ba14dae960ecc'/>
<id>urn:sha1:9a432736904d386cda28b987b38ba14dae960ecc</id>
<content type='text'>
Make the functions creating the kthreads wake them up.  Leverage the
fact that the per-node and boost kthreads can run anywhere, thus
dispensing with the need to wake them up once the incoming CPU has
gone fully online.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Daniel J Blueman &lt;daniel.blueman@gmail.com&gt;
</content>
</entry>
<entry>
<title>rcu: Cure load woes</title>
<updated>2011-05-31T08:01:48Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2011-05-30T11:34:51Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d72bce0e67e8afc6eb959f656013cbb577426f1e'/>
<id>urn:sha1:d72bce0e67e8afc6eb959f656013cbb577426f1e</id>
<content type='text'>
Commit cc3ce5176d83 (rcu: Start RCU kthreads in TASK_INTERRUPTIBLE
state) fudges a sleeping task's state, resulting in the scheduler seeing
a TASK_UNINTERRUPTIBLE task going to sleep, but a TASK_INTERRUPTIBLE
task waking up. The result is an unbalanced load calculation.

The problem that patch tried to address is that the RCU threads could
stay in UNINTERRUPTIBLE state for quite a while, triggering the hung
task detector because they are woken only on demand.

Cure the problem differently by always giving the tasks at least one
wake-up once the CPU is fully up and running; this will kick them out of
the initial UNINTERRUPTIBLE state and into the regular INTERRUPTIBLE
wait state.

[ The alternative would be teaching kthread_create() to start threads as
  INTERRUPTIBLE but that needs a tad more thought. ]

Reported-by: Damien Wyart &lt;damien.wyart@free.fr&gt;
Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Acked-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Link: http://lkml.kernel.org/r/1306755291.1200.2872.camel@twins
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>rcu: Start RCU kthreads in TASK_INTERRUPTIBLE state</title>
<updated>2011-05-28T15:41:56Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2011-05-25T20:42:06Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cc3ce5176d83cd8ae1134f86e208ea758d6cb78e'/>
<id>urn:sha1:cc3ce5176d83cd8ae1134f86e208ea758d6cb78e</id>
<content type='text'>
Upon creation, kthreads are in TASK_UNINTERRUPTIBLE state, which can
result in softlockup warnings.  Because some of RCU's kthreads can
legitimately be idle indefinitely, start them in TASK_INTERRUPTIBLE
state in order to avoid those warnings.

Suggested-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Yinghai Lu &lt;yinghai@kernel.org&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>rcu: Remove waitqueue usage for cpu, node, and boost kthreads</title>
<updated>2011-05-28T15:41:52Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2011-05-20T23:06:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=08bca60a6912ad225254250c0a9c3a05b4152cfa'/>
<id>urn:sha1:08bca60a6912ad225254250c0a9c3a05b4152cfa</id>
<content type='text'>
It is not necessary to use waitqueues for the RCU kthreads because
we always know exactly which thread is to be awakened.  In addition,
wake_up() only issues an actual wakeup when there is a thread waiting on
the queue, which was why there was an extra explicit wake_up_process()
to get the RCU kthreads started.

Eliminating the waitqueues (and wake_up()) in favor of wake_up_process()
eliminates the need for the initial wake_up_process() and also shrinks
the data structure size a bit.  The wakeup logic is placed in a new
rcu_wait() macro.
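
The pattern can be mimicked in userspace (Python threading here;
threading.Event stands in for set_current_state()/schedule(), and the
class name is illustrative): each kthread sleeps on its own per-thread
signal, and the waker always targets that specific thread, so no
waitqueue is needed.

```python
import threading

class RcuKthread:
    """One RCU kthread with its own direct wakeup -- no shared waitqueue."""

    def __init__(self):
        self.wakeup = threading.Event()  # per-thread wakeup signal
        self.have_work = False
        self.processed = 0

    def rcu_wait(self):
        # Analogue of the new rcu_wait() macro: sleep until there is work.
        while not self.have_work:
            self.wakeup.wait()
            self.wakeup.clear()
        self.have_work = False

    def kick(self):
        # Analogue of wake_up_process(): we know exactly whom to awaken.
        self.have_work = True
        self.wakeup.set()

    def run(self):
        self.rcu_wait()
        self.processed += 1

kt = RcuKthread()
worker = threading.Thread(target=kt.run)
worker.start()
kt.kick()       # direct wakeup of the one known thread
worker.join()
assert kt.processed == 1
```

Because the wakeup always targets one known thread, the extra bootstrap
wake_up_process() that the old waitqueue scheme needed simply disappears.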

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>rcu: Avoid acquiring rcu_node locks in timer functions</title>
<updated>2011-05-28T15:41:49Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2011-05-11T12:41:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8826f3b0397562eee6f8785d548be9dfdb169100'/>
<id>urn:sha1:8826f3b0397562eee6f8785d548be9dfdb169100</id>
<content type='text'>
This commit switches manipulations of the rcu_node -&gt;wakemask field
to atomic operations, which allows rcu_cpu_kthread_timer() to avoid
acquiring the rcu_node lock.  This should avoid the following lockdep
splat reported by Valdis Kletnieks:

[   12.872150] usb 1-4: new high speed USB device number 3 using ehci_hcd
[   12.986667] usb 1-4: New USB device found, idVendor=413c, idProduct=2513
[   12.986679] usb 1-4: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   12.987691] hub 1-4:1.0: USB hub found
[   12.987877] hub 1-4:1.0: 3 ports detected
[   12.996372] input: PS/2 Generic Mouse as /devices/platform/i8042/serio1/input/input10
[   13.071471] udevadm used greatest stack depth: 3984 bytes left
[   13.172129]
[   13.172130] =======================================================
[   13.172425] [ INFO: possible circular locking dependency detected ]
[   13.172650] 2.6.39-rc6-mmotm0506 #1
[   13.172773] -------------------------------------------------------
[   13.172997] blkid/267 is trying to acquire lock:
[   13.173009]  (&amp;p-&gt;pi_lock){-.-.-.}, at: [&lt;ffffffff81032d8f&gt;] try_to_wake_up+0x29/0x1aa
[   13.173009]
[   13.173009] but task is already holding lock:
[   13.173009]  (rcu_node_level_0){..-...}, at: [&lt;ffffffff810901cc&gt;] rcu_cpu_kthread_timer+0x27/0x58
[   13.173009]
[   13.173009] which lock already depends on the new lock.
[   13.173009]
[   13.173009]
[   13.173009] the existing dependency chain (in reverse order) is:
[   13.173009]
[   13.173009] -&gt; #2 (rcu_node_level_0){..-...}:
[   13.173009]        [&lt;ffffffff810679b9&gt;] check_prevs_add+0x8b/0x104
[   13.173009]        [&lt;ffffffff81067da1&gt;] validate_chain+0x36f/0x3ab
[   13.173009]        [&lt;ffffffff8106846b&gt;] __lock_acquire+0x369/0x3e2
[   13.173009]        [&lt;ffffffff81068a0f&gt;] lock_acquire+0xfc/0x14c
[   13.173009]        [&lt;ffffffff815697f1&gt;] _raw_spin_lock+0x36/0x45
[   13.173009]        [&lt;ffffffff81090794&gt;] rcu_read_unlock_special+0x8c/0x1d5
[   13.173009]        [&lt;ffffffff8109092c&gt;] __rcu_read_unlock+0x4f/0xd7
[   13.173009]        [&lt;ffffffff81027bd3&gt;] rcu_read_unlock+0x21/0x23
[   13.173009]        [&lt;ffffffff8102cc34&gt;] cpuacct_charge+0x6c/0x75
[   13.173009]        [&lt;ffffffff81030cc6&gt;] update_curr+0x101/0x12e
[   13.173009]        [&lt;ffffffff810311d0&gt;] check_preempt_wakeup+0xf7/0x23b
[   13.173009]        [&lt;ffffffff8102acb3&gt;] check_preempt_curr+0x2b/0x68
[   13.173009]        [&lt;ffffffff81031d40&gt;] ttwu_do_wakeup+0x76/0x128
[   13.173009]        [&lt;ffffffff81031e49&gt;] ttwu_do_activate.constprop.63+0x57/0x5c
[   13.173009]        [&lt;ffffffff81031e96&gt;] scheduler_ipi+0x48/0x5d
[   13.173009]        [&lt;ffffffff810177d5&gt;] smp_reschedule_interrupt+0x16/0x18
[   13.173009]        [&lt;ffffffff815710f3&gt;] reschedule_interrupt+0x13/0x20
[   13.173009]        [&lt;ffffffff810b66d1&gt;] rcu_read_unlock+0x21/0x23
[   13.173009]        [&lt;ffffffff810b739c&gt;] find_get_page+0xa9/0xb9
[   13.173009]        [&lt;ffffffff810b8b48&gt;] filemap_fault+0x6a/0x34d
[   13.173009]        [&lt;ffffffff810d1a25&gt;] __do_fault+0x54/0x3e6
[   13.173009]        [&lt;ffffffff810d447a&gt;] handle_pte_fault+0x12c/0x1ed
[   13.173009]        [&lt;ffffffff810d48f7&gt;] handle_mm_fault+0x1cd/0x1e0
[   13.173009]        [&lt;ffffffff8156cfee&gt;] do_page_fault+0x42d/0x5de
[   13.173009]        [&lt;ffffffff8156a75f&gt;] page_fault+0x1f/0x30
[   13.173009]
[   13.173009] -&gt; #1 (&amp;rq-&gt;lock){-.-.-.}:
[   13.173009]        [&lt;ffffffff810679b9&gt;] check_prevs_add+0x8b/0x104
[   13.173009]        [&lt;ffffffff81067da1&gt;] validate_chain+0x36f/0x3ab
[   13.173009]        [&lt;ffffffff8106846b&gt;] __lock_acquire+0x369/0x3e2
[   13.173009]        [&lt;ffffffff81068a0f&gt;] lock_acquire+0xfc/0x14c
[   13.173009]        [&lt;ffffffff815697f1&gt;] _raw_spin_lock+0x36/0x45
[   13.173009]        [&lt;ffffffff81027e19&gt;] __task_rq_lock+0x8b/0xd3
[   13.173009]        [&lt;ffffffff81032f7f&gt;] wake_up_new_task+0x41/0x108
[   13.173009]        [&lt;ffffffff810376c3&gt;] do_fork+0x265/0x33f
[   13.173009]        [&lt;ffffffff81007d02&gt;] kernel_thread+0x6b/0x6d
[   13.173009]        [&lt;ffffffff8153a9dd&gt;] rest_init+0x21/0xd2
[   13.173009]        [&lt;ffffffff81b1db4f&gt;] start_kernel+0x3bb/0x3c6
[   13.173009]        [&lt;ffffffff81b1d29f&gt;] x86_64_start_reservations+0xaf/0xb3
[   13.173009]        [&lt;ffffffff81b1d393&gt;] x86_64_start_kernel+0xf0/0xf7
[   13.173009]
[   13.173009] -&gt; #0 (&amp;p-&gt;pi_lock){-.-.-.}:
[   13.173009]        [&lt;ffffffff81067788&gt;] check_prev_add+0x68/0x20e
[   13.173009]        [&lt;ffffffff810679b9&gt;] check_prevs_add+0x8b/0x104
[   13.173009]        [&lt;ffffffff81067da1&gt;] validate_chain+0x36f/0x3ab
[   13.173009]        [&lt;ffffffff8106846b&gt;] __lock_acquire+0x369/0x3e2
[   13.173009]        [&lt;ffffffff81068a0f&gt;] lock_acquire+0xfc/0x14c
[   13.173009]        [&lt;ffffffff815698ea&gt;] _raw_spin_lock_irqsave+0x44/0x57
[   13.173009]        [&lt;ffffffff81032d8f&gt;] try_to_wake_up+0x29/0x1aa
[   13.173009]        [&lt;ffffffff81032f3c&gt;] wake_up_process+0x10/0x12
[   13.173009]        [&lt;ffffffff810901e9&gt;] rcu_cpu_kthread_timer+0x44/0x58
[   13.173009]        [&lt;ffffffff81045286&gt;] call_timer_fn+0xac/0x1e9
[   13.173009]        [&lt;ffffffff8104556d&gt;] run_timer_softirq+0x1aa/0x1f2
[   13.173009]        [&lt;ffffffff8103e487&gt;] __do_softirq+0x109/0x26a
[   13.173009]        [&lt;ffffffff8157144c&gt;] call_softirq+0x1c/0x30
[   13.173009]        [&lt;ffffffff81003207&gt;] do_softirq+0x44/0xf1
[   13.173009]        [&lt;ffffffff8103e8b9&gt;] irq_exit+0x58/0xc8
[   13.173009]        [&lt;ffffffff81017f5a&gt;] smp_apic_timer_interrupt+0x79/0x87
[   13.173009]        [&lt;ffffffff81570fd3&gt;] apic_timer_interrupt+0x13/0x20
[   13.173009]        [&lt;ffffffff810bd51a&gt;] get_page_from_freelist+0x2aa/0x310
[   13.173009]        [&lt;ffffffff810bdf03&gt;] __alloc_pages_nodemask+0x178/0x243
[   13.173009]        [&lt;ffffffff8101fe2f&gt;] pte_alloc_one+0x1e/0x3a
[   13.173009]        [&lt;ffffffff810d27fe&gt;] __pte_alloc+0x22/0x14b
[   13.173009]        [&lt;ffffffff810d48a8&gt;] handle_mm_fault+0x17e/0x1e0
[   13.173009]        [&lt;ffffffff8156cfee&gt;] do_page_fault+0x42d/0x5de
[   13.173009]        [&lt;ffffffff8156a75f&gt;] page_fault+0x1f/0x30
[   13.173009]
[   13.173009] other info that might help us debug this:
[   13.173009]
[   13.173009] Chain exists of:
[   13.173009]   &amp;p-&gt;pi_lock --&gt; &amp;rq-&gt;lock --&gt; rcu_node_level_0
[   13.173009]
[   13.173009]  Possible unsafe locking scenario:
[   13.173009]
[   13.173009]        CPU0                    CPU1
[   13.173009]        ----                    ----
[   13.173009]   lock(rcu_node_level_0);
[   13.173009]                                lock(&amp;rq-&gt;lock);
[   13.173009]                                lock(rcu_node_level_0);
[   13.173009]   lock(&amp;p-&gt;pi_lock);
[   13.173009]
[   13.173009]  *** DEADLOCK ***
[   13.173009]
[   13.173009] 3 locks held by blkid/267:
[   13.173009]  #0:  (&amp;mm-&gt;mmap_sem){++++++}, at: [&lt;ffffffff8156cdb4&gt;] do_page_fault+0x1f3/0x5de
[   13.173009]  #1:  (&amp;yield_timer){+.-...}, at: [&lt;ffffffff810451da&gt;] call_timer_fn+0x0/0x1e9
[   13.173009]  #2:  (rcu_node_level_0){..-...}, at: [&lt;ffffffff810901cc&gt;] rcu_cpu_kthread_timer+0x27/0x58
[   13.173009]
[   13.173009] stack backtrace:
[   13.173009] Pid: 267, comm: blkid Not tainted 2.6.39-rc6-mmotm0506 #1
[   13.173009] Call Trace:
[   13.173009]  &lt;IRQ&gt;  [&lt;ffffffff8154a529&gt;] print_circular_bug+0xc8/0xd9
[   13.173009]  [&lt;ffffffff81067788&gt;] check_prev_add+0x68/0x20e
[   13.173009]  [&lt;ffffffff8100c861&gt;] ? save_stack_trace+0x28/0x46
[   13.173009]  [&lt;ffffffff810679b9&gt;] check_prevs_add+0x8b/0x104
[   13.173009]  [&lt;ffffffff81067da1&gt;] validate_chain+0x36f/0x3ab
[   13.173009]  [&lt;ffffffff8106846b&gt;] __lock_acquire+0x369/0x3e2
[   13.173009]  [&lt;ffffffff81032d8f&gt;] ? try_to_wake_up+0x29/0x1aa
[   13.173009]  [&lt;ffffffff81068a0f&gt;] lock_acquire+0xfc/0x14c
[   13.173009]  [&lt;ffffffff81032d8f&gt;] ? try_to_wake_up+0x29/0x1aa
[   13.173009]  [&lt;ffffffff810901a5&gt;] ? rcu_check_quiescent_state+0x82/0x82
[   13.173009]  [&lt;ffffffff815698ea&gt;] _raw_spin_lock_irqsave+0x44/0x57
[   13.173009]  [&lt;ffffffff81032d8f&gt;] ? try_to_wake_up+0x29/0x1aa
[   13.173009]  [&lt;ffffffff81032d8f&gt;] try_to_wake_up+0x29/0x1aa
[   13.173009]  [&lt;ffffffff810901a5&gt;] ? rcu_check_quiescent_state+0x82/0x82
[   13.173009]  [&lt;ffffffff81032f3c&gt;] wake_up_process+0x10/0x12
[   13.173009]  [&lt;ffffffff810901e9&gt;] rcu_cpu_kthread_timer+0x44/0x58
[   13.173009]  [&lt;ffffffff810901a5&gt;] ? rcu_check_quiescent_state+0x82/0x82
[   13.173009]  [&lt;ffffffff81045286&gt;] call_timer_fn+0xac/0x1e9
[   13.173009]  [&lt;ffffffff810451da&gt;] ? del_timer+0x75/0x75
[   13.173009]  [&lt;ffffffff810901a5&gt;] ? rcu_check_quiescent_state+0x82/0x82
[   13.173009]  [&lt;ffffffff8104556d&gt;] run_timer_softirq+0x1aa/0x1f2
[   13.173009]  [&lt;ffffffff8103e487&gt;] __do_softirq+0x109/0x26a
[   13.173009]  [&lt;ffffffff8106365f&gt;] ? tick_dev_program_event+0x37/0xf6
[   13.173009]  [&lt;ffffffff810a0e4a&gt;] ? time_hardirqs_off+0x1b/0x2f
[   13.173009]  [&lt;ffffffff8157144c&gt;] call_softirq+0x1c/0x30
[   13.173009]  [&lt;ffffffff81003207&gt;] do_softirq+0x44/0xf1
[   13.173009]  [&lt;ffffffff8103e8b9&gt;] irq_exit+0x58/0xc8
[   13.173009]  [&lt;ffffffff81017f5a&gt;] smp_apic_timer_interrupt+0x79/0x87
[   13.173009]  [&lt;ffffffff81570fd3&gt;] apic_timer_interrupt+0x13/0x20
[   13.173009]  &lt;EOI&gt;  [&lt;ffffffff810bd384&gt;] ? get_page_from_freelist+0x114/0x310
[   13.173009]  [&lt;ffffffff810bd51a&gt;] ? get_page_from_freelist+0x2aa/0x310
[   13.173009]  [&lt;ffffffff812220e7&gt;] ? clear_page_c+0x7/0x10
[   13.173009]  [&lt;ffffffff810bd1ef&gt;] ? prep_new_page+0x14c/0x1cd
[   13.173009]  [&lt;ffffffff810bd51a&gt;] get_page_from_freelist+0x2aa/0x310
[   13.173009]  [&lt;ffffffff810bdf03&gt;] __alloc_pages_nodemask+0x178/0x243
[   13.173009]  [&lt;ffffffff810d46b9&gt;] ? __pmd_alloc+0x87/0x99
[   13.173009]  [&lt;ffffffff8101fe2f&gt;] pte_alloc_one+0x1e/0x3a
[   13.173009]  [&lt;ffffffff810d46b9&gt;] ? __pmd_alloc+0x87/0x99
[   13.173009]  [&lt;ffffffff810d27fe&gt;] __pte_alloc+0x22/0x14b
[   13.173009]  [&lt;ffffffff810d48a8&gt;] handle_mm_fault+0x17e/0x1e0
[   13.173009]  [&lt;ffffffff8156cfee&gt;] do_page_fault+0x42d/0x5de
[   13.173009]  [&lt;ffffffff810d915f&gt;] ? sys_brk+0x32/0x10c
[   13.173009]  [&lt;ffffffff810a0e4a&gt;] ? time_hardirqs_off+0x1b/0x2f
[   13.173009]  [&lt;ffffffff81065c4f&gt;] ? trace_hardirqs_off_caller+0x3f/0x9c
[   13.173009]  [&lt;ffffffff812235dd&gt;] ? trace_hardirqs_off_thunk+0x3a/0x3c
[   13.173009]  [&lt;ffffffff8156a75f&gt;] page_fault+0x1f/0x30
[   14.010075] usb 5-1: new full speed USB device number 2 using uhci_hcd

Reported-by: Valdis Kletnieks &lt;Valdis.Kletnieks@vt.edu&gt;
Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
</feed>
