<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel, branch v3.2.47</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.2.47</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.2.47'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2013-06-19T01:17:00Z</updated>
<entry>
<title>audit: wait_for_auditd() should use TASK_UNINTERRUPTIBLE</title>
<updated>2013-06-19T01:17:00Z</updated>
<author>
<name>Oleg Nesterov</name>
<email>oleg@redhat.com</email>
</author>
<published>2013-06-12T21:04:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4d839b14d2091a224a6d0a6fa1cffa58fc00d8a7'/>
<id>urn:sha1:4d839b14d2091a224a6d0a6fa1cffa58fc00d8a7</id>
<content type='text'>
commit f000cfdde5de4fc15dead5ccf524359c07eadf2b upstream.

audit_log_start() does wait_for_auditd() in a loop until
audit_backlog_wait_time passes or audit_skb_queue has room.

If signal_pending() is true, this becomes a busy-wait loop: schedule() in
TASK_INTERRUPTIBLE won't block.

Thanks to Guy for fully investigating and explaining the problem.

(akpm: that'll cause the system to lock up on a non-preemptible
uniprocessor kernel)

(Guy: "Our customer was in fact running a uniprocessor machine, and they
reported a system hang.")
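The busy-wait can be modeled in a few lines of Python; schedule_timeout() and the TASK_* names below are stand-ins borrowed for the analogy, not the real kernel API:

```python
# Toy model of the wait loop: in TASK_INTERRUPTIBLE, a pending signal
# makes schedule_timeout() return immediately with the timeout unconsumed,
# so the caller spins; TASK_UNINTERRUPTIBLE actually consumes the timeout.
TASK_INTERRUPTIBLE = "interruptible"
TASK_UNINTERRUPTIBLE = "uninterruptible"

def schedule_timeout(state, timeout, signal_pending):
    if state == TASK_INTERRUPTIBLE and signal_pending:
        return timeout          # returned without sleeping: nothing consumed
    return 0                    # slept for the full timeout

def wait_for_auditd(state, timeout=5, max_iters=1000):
    iters = 0
    while timeout != 0 and iters != max_iters:
        iters += 1
        timeout = schedule_timeout(state, timeout, signal_pending=True)
    return iters
```

With TASK_INTERRUPTIBLE and a pending signal, the modeled timeout is never consumed, so the loop only stops at the iteration cap.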

Signed-off-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Reported-by: Guy Streeter &lt;streeter@redhat.com&gt;
Cc: Eric Paris &lt;eparis@redhat.com&gt;
Cc: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
[bwh: Backported to 3.2: adjust context, indentation]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>reboot: migrate shutdown/reboot to boot cpu</title>
<updated>2013-06-19T01:17:00Z</updated>
<author>
<name>Robin Holt</name>
<email>holt@sgi.com</email>
</author>
<published>2013-06-12T21:04:37Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c32d723b2c7aa214602b626a5af32f8fc4ec5571'/>
<id>urn:sha1:c32d723b2c7aa214602b626a5af32f8fc4ec5571</id>
<content type='text'>
commit cf7df378aa4ff7da3a44769b7ff6e9eef1a9f3db upstream.

We recently noticed that rebooting a 1024-cpu machine takes approximately
16 minutes just stopping the cpus.  The slowdown was tracked to commit
f96972f2dc63 ("kernel/sys.c: call disable_nonboot_cpus() in
kernel_restart()").

The current implementation does all the work of hot removing the cpus
before halting the system.  We are switching to just migrating to the
boot cpu and then continuing with shutdown/reboot.

This also has the effect of not breaking x86's command line parameter
for specifying the reboot cpu.  Note, this code was shamelessly copied
from arch/x86/kernel/reboot.c with bits removed pertaining to the
reboot_cpu command line parameter.

Signed-off-by: Robin Holt &lt;holt@sgi.com&gt;
Tested-by: Shawn Guo &lt;shawn.guo@linaro.org&gt;
Cc: "Srivatsa S. Bhat" &lt;srivatsa.bhat@linux.vnet.ibm.com&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Russ Anderson &lt;rja@sgi.com&gt;
Cc: Robin Holt &lt;holt@sgi.com&gt;
Cc: Russell King &lt;linux@arm.linux.org.uk&gt;
Cc: Guan Xuetao &lt;gxt@mprc.pku.edu.cn&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>CPU hotplug: provide a generic helper to disable/enable CPU hotplug</title>
<updated>2013-06-19T01:16:59Z</updated>
<author>
<name>Srivatsa S. Bhat</name>
<email>srivatsa.bhat@linux.vnet.ibm.com</email>
</author>
<published>2013-06-12T21:04:36Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1b3b08bec4f7bdf1c8b3a822439b070862179415'/>
<id>urn:sha1:1b3b08bec4f7bdf1c8b3a822439b070862179415</id>
<content type='text'>
commit 16e53dbf10a2d7e228709a7286310e629ede5e45 upstream.

There are instances in the kernel where we would like to disable CPU
hotplug (from sysfs) during some important operation.  Today the freezer
code depends on this, and the code to do it was somewhat tailor-made for
that purpose.

Restructure the code and make it generic enough to be useful for other
usecases too.
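As a sketch of the generic helper this restructuring aims at, a refcounted disable/enable pair can be modeled in Python (HotplugState and cpu_up_allowed are illustrative names, not the kernel interface):

```python
# Toy model of a refcounted "disable hotplug" helper: multiple users can
# disable concurrently, and hotplug is allowed again only when every
# disable has been paired with an enable.
class HotplugState:
    def __init__(self):
        self.disabled_refcount = 0

    def cpu_hotplug_disable(self):
        self.disabled_refcount += 1

    def cpu_hotplug_enable(self):
        self.disabled_refcount = max(0, self.disabled_refcount - 1)

    def cpu_up_allowed(self):
        return self.disabled_refcount == 0
```

Hotplug is permitted again only once every disable call has been paired with an enable.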

Signed-off-by: Srivatsa S. Bhat &lt;srivatsa.bhat@linux.vnet.ibm.com&gt;
Signed-off-by: Robin Holt &lt;holt@sgi.com&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Russ Anderson &lt;rja@sgi.com&gt;
Cc: Robin Holt &lt;holt@sgi.com&gt;
Cc: Russell King &lt;linux@arm.linux.org.uk&gt;
Cc: Guan Xuetao &lt;gxt@mprc.pku.edu.cn&gt;
Cc: Shawn Guo &lt;shawn.guo@linaro.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>ftrace: Move ftrace_filter_lseek out of CONFIG_DYNAMIC_FTRACE section</title>
<updated>2013-06-19T01:16:47Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-04-12T20:40:13Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=57321c3df5c8c3f1a485db064282eefb06504ead'/>
<id>urn:sha1:57321c3df5c8c3f1a485db064282eefb06504ead</id>
<content type='text'>
commit 7f49ef69db6bbf756c0abca7e9b65b32e999eec8 upstream.

As ftrace_filter_lseek is now used with ftrace_pid_fops, it needs to
be moved out of the #ifdef CONFIG_DYNAMIC_FTRACE section, as
ftrace_pid_fops is defined even when DYNAMIC_FTRACE is not.

Cc: Namhyung Kim &lt;namhyung@kernel.org&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
[bwh: Backported to 3.2:
 - ftrace_filter_lseek() is static and not declared in ftrace.h
 - 'whence' parameter was called 'origin']
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>sched/debug: Fix sd-&gt;*_idx limit range avoiding overflow</title>
<updated>2013-05-30T13:35:09Z</updated>
<author>
<name>libin</name>
<email>huawei.libin@huawei.com</email>
</author>
<published>2013-04-08T06:39:12Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=ac93bc6a8ab8b760423ca0c4d42311f0d1f5cfcd'/>
<id>urn:sha1:ac93bc6a8ab8b760423ca0c4d42311f0d1f5cfcd</id>
<content type='text'>
commit fd9b86d37a600488dbd80fe60cca46b822bff1cd upstream.

Commit 201c373e8e ("sched/debug: Limit sd-&gt;*_idx range on
sysctl") was an incomplete bug fix.

This patch limits the sd-&gt;*_idx range to [0, CPU_LOAD_IDX_MAX-1],
avoiding the array overflow caused by setting sd-&gt;*_idx to
CPU_LOAD_IDX_MAX via sysctl.
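The clamp itself is simple; a Python sketch (CPU_LOAD_IDX_MAX is assumed to be 5 here, purely for illustration):

```python
# Sketch of the sysctl clamp: an index written via sysctl must end up in
# [0, CPU_LOAD_IDX_MAX - 1], since cpu_load[] has CPU_LOAD_IDX_MAX slots.
CPU_LOAD_IDX_MAX = 5   # assumed value for illustration

def clamp_idx(value):
    return max(0, min(value, CPU_LOAD_IDX_MAX - 1))
```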

Signed-off-by: Libin &lt;huawei.libin@huawei.com&gt;
Cc: &lt;jiang.liu@huawei.com&gt;
Cc: &lt;guohanjun@huawei.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Link: http://lkml.kernel.org/r/51626610.2040607@huawei.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>sched/debug: Limit sd-&gt;*_idx range on sysctl</title>
<updated>2013-05-30T13:35:09Z</updated>
<author>
<name>Namhyung Kim</name>
<email>namhyung.kim@lge.com</email>
</author>
<published>2012-08-16T08:03:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=077c9f651e2d46f374ed2103b95ed492c4f4b52b'/>
<id>urn:sha1:077c9f651e2d46f374ed2103b95ed492c4f4b52b</id>
<content type='text'>
commit 201c373e8e4823700d3160d5c28e1ab18fd1193e upstream.

Various sd-&gt;*_idx values are used for referring to the rq's load average
table when selecting a cpu to run.  However, they can be set to any
number via sysctl knobs, so a bad value can crash the kernel. Fix this
by limiting them to the actual range.

Signed-off-by: Namhyung Kim &lt;namhyung@kernel.org&gt;
Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Link: http://lkml.kernel.org/r/1345104204-8317-1-git-send-email-namhyung@kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
[bwh: Backported to 3.2:
 - Adjust filename
 - s/umode_t/mode_t/]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>usermodehelper: check subprocess_info-&gt;path != NULL</title>
<updated>2013-05-30T13:35:00Z</updated>
<author>
<name>Oleg Nesterov</name>
<email>oleg@redhat.com</email>
</author>
<published>2013-05-16T15:43:55Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b5bcd909bef8caf59cfc85f02ac2879419c89ab3'/>
<id>urn:sha1:b5bcd909bef8caf59cfc85f02ac2879419c89ab3</id>
<content type='text'>
commit 264b83c07a84223f0efd0d1db9ccc66d6f88288f upstream.

argv_split(empty_or_all_spaces) happily succeeds; it simply returns
argc == 0 and argv[0] == NULL. Change call_usermodehelper_exec() to
check sub_info-&gt;path != NULL to avoid the crash.
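The failure mode and the added check can be modeled in Python (argv_split and call_usermodehelper_exec below are illustrative stand-ins, with None playing the role of NULL):

```python
# argv_split() on an empty or all-space string succeeds with argc == 0
# and argv[0] == None, so the exec path must reject a missing path itself.
def argv_split(s):
    return s.split()            # empty input gives an empty argv

def call_usermodehelper_exec(argv):
    path = argv[0] if argv else None
    if path is None:
        return "ENOENT"         # models the added NULL-path check
    return "exec " + path
```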

This is the minimal fix, todo:

 - perhaps we should change argv_split() to return NULL or change the
   callers.

 - kill or justify -&gt;path[0] check

 - narrow the scope of helper_lock()

Signed-off-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Acked-By: Lucas De Marchi &lt;lucas.demarchi@intel.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>tracing: Fix leaks of filter preds</title>
<updated>2013-05-30T13:35:00Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-05-14T19:40:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=67544c758f9d9fbc27915ea34d6cd70b27f56104'/>
<id>urn:sha1:67544c758f9d9fbc27915ea34d6cd70b27f56104</id>
<content type='text'>
commit 60705c89460fdc7227f2d153b68b3f34814738a4 upstream.

Special preds are created when folding a series of preds that
can be evaluated in series. They are allocated in an ops field of
the pred structure, but they were never freed, causing memory
leaks.

This was discovered using the kmemleak checker:

unreferenced object 0xffff8800797fd5e0 (size 32):
  comm "swapper/0", pid 1, jiffies 4294690605 (age 104.608s)
  hex dump (first 32 bytes):
    00 00 01 00 03 00 05 00 07 00 09 00 0b 00 0d 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [&lt;ffffffff814b52af&gt;] kmemleak_alloc+0x73/0x98
    [&lt;ffffffff8111ff84&gt;] kmemleak_alloc_recursive.constprop.42+0x16/0x18
    [&lt;ffffffff81120e68&gt;] __kmalloc+0xd7/0x125
    [&lt;ffffffff810d47eb&gt;] kcalloc.constprop.24+0x2d/0x2f
    [&lt;ffffffff810d4896&gt;] fold_pred_tree_cb+0xa9/0xf4
    [&lt;ffffffff810d3781&gt;] walk_pred_tree+0x47/0xcc
    [&lt;ffffffff810d5030&gt;] replace_preds.isra.20+0x6f8/0x72f
    [&lt;ffffffff810d50b5&gt;] create_filter+0x4e/0x8b
    [&lt;ffffffff81b1c30d&gt;] ftrace_test_event_filter+0x5a/0x155
    [&lt;ffffffff8100028d&gt;] do_one_initcall+0xa0/0x137
    [&lt;ffffffff81afbedf&gt;] kernel_init_freeable+0x14d/0x1dc
    [&lt;ffffffff814b24b7&gt;] kernel_init+0xe/0xdb
    [&lt;ffffffff814d539c&gt;] ret_from_fork+0x7c/0xb0
    [&lt;ffffffffffffffff&gt;] 0xffffffffffffffff

Cc: Tom Zanussi &lt;tzanussi@gmail.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>timer: Don't reinitialize the cpu base lock during CPU_UP_PREPARE</title>
<updated>2013-05-30T13:34:59Z</updated>
<author>
<name>Tirupathi Reddy</name>
<email>tirupath@codeaurora.org</email>
</author>
<published>2013-05-14T08:29:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9ea470782e37a58e3fc2e592648248fb8fb6ca31'/>
<id>urn:sha1:9ea470782e37a58e3fc2e592648248fb8fb6ca31</id>
<content type='text'>
commit 42a5cf46cd56f46267d2a9fcf2655f4078cd3042 upstream.

An inactive timer's base can refer to an offline cpu's base.

In the current code, the cpu_base's lock is blindly reinitialized each
time a CPU is brought up. If a CPU is brought online while another
thread is modifying an inactive timer on that CPU while holding its
timer base lock, the lock will be reinitialized under its feet. This
leads to the following SPIN_BUG().

&lt;0&gt; BUG: spinlock already unlocked on CPU#3, kworker/u:3/1466
&lt;0&gt; lock: 0xe3ebe000, .magic: dead4ead, .owner: kworker/u:3/1466, .owner_cpu: 1
&lt;4&gt; [&lt;c0013dc4&gt;] (unwind_backtrace+0x0/0x11c) from [&lt;c026e794&gt;] (do_raw_spin_unlock+0x40/0xcc)
&lt;4&gt; [&lt;c026e794&gt;] (do_raw_spin_unlock+0x40/0xcc) from [&lt;c076c160&gt;] (_raw_spin_unlock+0x8/0x30)
&lt;4&gt; [&lt;c076c160&gt;] (_raw_spin_unlock+0x8/0x30) from [&lt;c009b858&gt;] (mod_timer+0x294/0x310)
&lt;4&gt; [&lt;c009b858&gt;] (mod_timer+0x294/0x310) from [&lt;c00a5e04&gt;] (queue_delayed_work_on+0x104/0x120)
&lt;4&gt; [&lt;c00a5e04&gt;] (queue_delayed_work_on+0x104/0x120) from [&lt;c04eae00&gt;] (sdhci_msm_bus_voting+0x88/0x9c)
&lt;4&gt; [&lt;c04eae00&gt;] (sdhci_msm_bus_voting+0x88/0x9c) from [&lt;c04d8780&gt;] (sdhci_disable+0x40/0x48)
&lt;4&gt; [&lt;c04d8780&gt;] (sdhci_disable+0x40/0x48) from [&lt;c04bf300&gt;] (mmc_release_host+0x4c/0xb0)
&lt;4&gt; [&lt;c04bf300&gt;] (mmc_release_host+0x4c/0xb0) from [&lt;c04c7aac&gt;] (mmc_sd_detect+0x90/0xfc)
&lt;4&gt; [&lt;c04c7aac&gt;] (mmc_sd_detect+0x90/0xfc) from [&lt;c04c2504&gt;] (mmc_rescan+0x7c/0x2c4)
&lt;4&gt; [&lt;c04c2504&gt;] (mmc_rescan+0x7c/0x2c4) from [&lt;c00a6a7c&gt;] (process_one_work+0x27c/0x484)
&lt;4&gt; [&lt;c00a6a7c&gt;] (process_one_work+0x27c/0x484) from [&lt;c00a6e94&gt;] (worker_thread+0x210/0x3b0)
&lt;4&gt; [&lt;c00a6e94&gt;] (worker_thread+0x210/0x3b0) from [&lt;c00aad9c&gt;] (kthread+0x80/0x8c)
&lt;4&gt; [&lt;c00aad9c&gt;] (kthread+0x80/0x8c) from [&lt;c000ea80&gt;] (kernel_thread_exit+0x0/0x8)

As an example, this particular crash occurred when CPU #3 was executing
mod_timer() on an inactive timer whose base refers to the offlined CPU
#2.  The code locked the timer_base corresponding to CPU #2. Before it
could proceed, CPU #2 came online and reinitialized the spinlock
corresponding to its base, so CPU #3 was now holding a lock that had
been reinitialized. When CPU #3 finally unlocked the old cpu_base
corresponding to CPU #2, we hit the above SPIN_BUG().

CPU #0		CPU #3				       CPU #2
------		-------				       -------
.....		 ......				      &lt;Offline&gt;
		mod_timer()
		 lock_timer_base
		   spin_lock_irqsave(&amp;base-&gt;lock)

cpu_up(2)	 .....				        ......
							init_timers_cpu()
....		 .....				    	spin_lock_init(&amp;base-&gt;lock)
.....		   spin_unlock_irqrestore(&amp;base-&gt;lock)  ......
		   &lt;spin_bug&gt;

Allocation of the per-cpu timer vector bases is done only once, under
the "tvec_base_done[]" check. In the current code, the spinlock
initialization of base-&gt;lock is not under this check, so the base lock
is reinitialized each time a CPU comes up. Move the base spinlock
initialization under the check.
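The once-only pattern can be modeled in Python (tvec_base_done and init_timers_cpu mirror the names above; lock_init_count is an illustrative counter):

```python
# Toy model: per-cpu base setup must initialize the lock only the first
# time a CPU comes up, so a later online cannot reinitialize a lock that
# another CPU currently holds.
tvec_base_done = {}
lock_init_count = {}

def init_timers_cpu(cpu):
    if cpu not in tvec_base_done:
        # one-time allocation and lock init live under the same check
        lock_init_count[cpu] = lock_init_count.get(cpu, 0) + 1
        tvec_base_done[cpu] = True
```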

Signed-off-by: Tirupathi Reddy &lt;tirupath@codeaurora.org&gt;
Link: http://lkml.kernel.org/r/1368520142-4136-1-git-send-email-tirupath@codeaurora.org
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>tick: Cleanup NOHZ per cpu data on cpu down</title>
<updated>2013-05-30T13:34:56Z</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2013-05-03T13:02:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d9202d65aa6b0378fd833a5098e4dcb855d38f44'/>
<id>urn:sha1:d9202d65aa6b0378fd833a5098e4dcb855d38f44</id>
<content type='text'>
commit 4b0c0f294f60abcdd20994a8341a95c8ac5eeb96 upstream.

Prarit reported a crash on CPU offline/online. The reason is that on
CPU down, the NOHZ-related per-cpu data of the dead cpu is not cleaned
up. If an interrupt happens at cpu online, before the per-cpu tick
device is registered, the irq_enter() check potentially sees stale data
and dereferences a NULL pointer.

Clean up the data after the cpu is dead.
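A Python model of the cleanup (the per-cpu dict and field names are illustrative, not the real tick_sched layout):

```python
# Toy model: per-cpu NOHZ state must be reset once the CPU is dead, so a
# later online cannot observe stale fields before its tick device exists.
per_cpu_tick_sched = {}

def cpu_dying(cpu):
    per_cpu_tick_sched[cpu] = {"idle_active": True, "evtdev": "stale"}

def tick_cleanup_dead_cpu(cpu):
    per_cpu_tick_sched[cpu] = {}    # models the added cleanup on CPU_DEAD
```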

Reported-by: Prarit Bhargava &lt;prarit@redhat.com&gt;
Cc: Mike Galbraith &lt;bitbucket@online.de&gt;
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1305031451561.2886@ionos
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
</feed>
