<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git, branch v3.4.56</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.4.56</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.4.56'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2013-08-04T08:35:23Z</updated>
<entry>
<title>Linux 3.4.56</title>
<updated>2013-08-04T08:35:23Z</updated>
<author>
<name>Greg Kroah-Hartman</name>
<email>gregkh@linuxfoundation.org</email>
</author>
<published>2013-08-04T08:35:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7b17c57914788c370756ddc7ed8ca1b43394ef0f'/>
<id>urn:sha1:7b17c57914788c370756ddc7ed8ca1b43394ef0f</id>
<content type='text'>
</content>
</entry>
<entry>
<title>mm/memory-hotplug: fix lowmem count overflow when offline pages</title>
<updated>2013-08-04T08:26:07Z</updated>
<author>
<name>Wanpeng Li</name>
<email>liwanp@linux.vnet.ibm.com</email>
</author>
<published>2013-07-03T22:02:40Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=19d22ea89933ca48bcb10fc7919ed7bbefd52362'/>
<id>urn:sha1:19d22ea89933ca48bcb10fc7919ed7bbefd52362</id>
<content type='text'>
commit cea27eb2a202959783f81254c48c250ddd80e129 upstream.

The memory-remove logic fails to account correctly for total high memory
when a memory block containing high memory is offlined, as shown in the
example below.  The following patch fixes it.

Before logical memory removal:

MemTotal:        7603740 kB
MemFree:         6329612 kB
Buffers:           94352 kB
Cached:           872008 kB
SwapCached:            0 kB
Active:           626932 kB
Inactive:         519216 kB
Active(anon):     180776 kB
Inactive(anon):   222944 kB
Active(file):     446156 kB
Inactive(file):   296272 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:       7294672 kB
HighFree:        5704696 kB
LowTotal:         309068 kB
LowFree:          624916 kB

After logical memory removal:

MemTotal:        7079452 kB
MemFree:         5805976 kB
Buffers:           94372 kB
Cached:           872000 kB
SwapCached:            0 kB
Active:           626936 kB
Inactive:         519236 kB
Active(anon):     180780 kB
Inactive(anon):   222944 kB
Active(file):     446156 kB
Inactive(file):   296292 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:       7294672 kB
HighFree:        5181024 kB
LowTotal:       4294752076 kB
LowFree:          624952 kB
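
The bogus LowTotal above is an unsigned wraparound: low memory is derived
as total memory minus high memory, and because totalhigh_pages was not
decremented along with totalram_pages when the block went offline, the
subtraction goes negative and wraps.  A minimal standalone illustration of
the arithmetic (plain C, made-up variable names, not kernel code):

    /* MemTotal and HighTotal after the offline, in kB (from above);
     * HighTotal is stale because totalhigh_pages was never adjusted. */
    unsigned int total_kb = 7079452;
    unsigned int high_kb  = 7294672;
    /* 7079452 - 7294672 = -215220, which wraps to
     * 2^32 - 215220 = 4294752076 in 32-bit unsigned arithmetic,
     * exactly the LowTotal reported. */
    unsigned int low_kb = total_kb - high_kb;

The fix is to shrink the high-memory counter as well when the offlined
pages sit in a highmem zone, guarded by CONFIG_HIGHMEM - which is what
the build-fix note below addresses.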

[mhocko@suse.cz: fix CONFIG_HIGHMEM=n build]
Signed-off-by: Wanpeng Li &lt;liwanp@linux.vnet.ibm.com&gt;
Reviewed-by: Michal Hocko &lt;mhocko@suse.cz&gt;
Cc: KAMEZAWA Hiroyuki &lt;kamezawa.hiroyu@jp.fujitsu.com&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;	[2.6.24+]
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Zhouping Liu &lt;zliu@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>virtio_net: fix race in RX VQ processing</title>
<updated>2013-08-04T08:26:06Z</updated>
<author>
<name>Michael S. Tsirkin</name>
<email>mst@redhat.com</email>
</author>
<published>2013-08-04T08:26:06Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=40a017c96a98a29c9d39bf0ca34651288984e9ce'/>
<id>urn:sha1:40a017c96a98a29c9d39bf0ca34651288984e9ce</id>
<content type='text'>
commit cbdadbbf0c790f79350a8f36029208944c5487d0 upstream.

virtio net called virtqueue_enable_cb on the RX path after napi_complete,
i.e. with NAPI_STATE_SCHED clear - outside the implicit napi lock.
This violates the requirement to synchronize virtqueue_enable_cb with
virtqueue_add_buf.  In particular, the used event can move backwards,
causing us to lose interrupts.
In a debug build, this can trigger a panic within START_USE.

Jason Wang reports that he can trigger the race artificially by adding
udelay() in virtqueue_enable_cb() after virtio_mb().

However, we must call napi_complete to clear NAPI_STATE_SCHED before
polling the virtqueue for used buffers, otherwise napi_schedule_prep in
a callback will fail, causing us to lose RX events.

To fix, call virtqueue_enable_cb_prepare with NAPI_STATE_SCHED
set (under napi lock), later call virtqueue_poll with
NAPI_STATE_SCHED clear (outside the lock).
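
A sketch of the resulting ordering in the NAPI poll routine (condensed;
the vi-&gt;rvq field name is assumed for illustration):

    if (received &lt; budget) {
            unsigned idx;

            /* re-arm callbacks while NAPI_STATE_SCHED is still set */
            idx = virtqueue_enable_cb_prepare(vi-&gt;rvq);
            napi_complete(napi);    /* clears NAPI_STATE_SCHED */

            /* recheck outside the napi lock: if buffers slipped in
             * meanwhile, reschedule instead of losing the event */
            if (virtqueue_poll(vi-&gt;rvq, idx) &amp;&amp;
                napi_schedule_prep(napi)) {
                    virtqueue_disable_cb(vi-&gt;rvq);
                    __napi_schedule(napi);
            }
    }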

Reported-by: Jason Wang &lt;jasowang@redhat.com&gt;
Tested-by: Jason Wang &lt;jasowang@redhat.com&gt;
Acked-by: Jason Wang &lt;jasowang@redhat.com&gt;
Signed-off-by: Michael S. Tsirkin &lt;mst@redhat.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
[wg: Backported to 3.2]
Signed-off-by: Wolfram Gloger &lt;wmglo@dent.med.uni-muenchen.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>virtio: support unlocked queue poll</title>
<updated>2013-08-04T08:26:03Z</updated>
<author>
<name>Michael S. Tsirkin</name>
<email>mst@redhat.com</email>
</author>
<published>2013-07-09T10:19:18Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2010fa31f8f5c5fa8385459dff03cb844cf79dd1'/>
<id>urn:sha1:2010fa31f8f5c5fa8385459dff03cb844cf79dd1</id>
<content type='text'>
commit cc229884d3f77ec3b1240e467e0236c3e0647c0c upstream.

This adds a way to check ring empty state after enable_cb outside any
locks. Will be used by virtio_net.
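
A usage sketch of the split API this introduces: the prepare step
re-enables callbacks and snapshots the ring position under whatever lock
serializes virtqueue_add_buf (the lock name here is illustrative), and
the poll step checks that snapshot lock-free:

    unsigned idx;

    spin_lock(&amp;vq_lock);
    idx = virtqueue_enable_cb_prepare(vq);  /* re-enable + snapshot */
    spin_unlock(&amp;vq_lock);

    if (virtqueue_poll(vq, idx)) {
            /* buffers became used in the meantime; process the ring
             * again rather than waiting for a callback */
    }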

Note: there's room for more optimization: caller is likely to have a
memory barrier already, which means we might be able to get rid of a
barrier here.  Deferring this optimization until we do some
benchmarking.

Signed-off-by: Michael S. Tsirkin &lt;mst@redhat.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
[wg: Backported to 3.2]
Signed-off-by: Wolfram Gloger &lt;wmglo@dent.med.uni-muenchen.de&gt;
[bwh: Backported to 3.4: adjust context]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>s390: move dummy io_remap_pfn_range() to asm/pgtable.h</title>
<updated>2013-08-04T08:26:03Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2013-04-17T15:46:19Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c733e1a2c459c09fe2510bfa49e2571b532be97c'/>
<id>urn:sha1:c733e1a2c459c09fe2510bfa49e2571b532be97c</id>
<content type='text'>
commit 4f2e29031e6c67802e7370292dd050fd62f337ee upstream.

Commit b4cbb197c7e7 ("vm: add vm_iomap_memory() helper function") added
a helper function wrapper around io_remap_pfn_range(), and every other
architecture defined it in &lt;asm/pgtable.h&gt;.

The s390 choice of &lt;asm/io.h&gt; may make sense, but is not very convenient
for this case, and gratuitous differences like that cause unexpected errors like this:

   mm/memory.c: In function 'vm_iomap_memory':
   mm/memory.c:2439:2: error: implicit declaration of function 'io_remap_pfn_range' [-Werror=implicit-function-declaration]

Glory be to the kbuild test robot, which noticed this, bisected it, and
reported it to the guilty parties (i.e. me).
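
The essence of the change is the one-line definition the other
architectures already carry in &lt;asm/pgtable.h&gt; (sketch; s390 needs no
special handling here, so it simply wraps remap_pfn_range):

    #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
            remap_pfn_range(vma, vaddr, pfn, size, prot)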

Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
[bwh: Backported to 3.2: the macro was not defined, so this is an addition
 and not a move]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>zfcp: status read buffers on first adapter open with link down</title>
<updated>2013-08-04T08:26:03Z</updated>
<author>
<name>Steffen Maier</name>
<email>maier@linux.vnet.ibm.com</email>
</author>
<published>2013-04-26T15:34:54Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=17a9afa3e0039e9f0d0caa3a1a5a92b2b7671c3e'/>
<id>urn:sha1:17a9afa3e0039e9f0d0caa3a1a5a92b2b7671c3e</id>
<content type='text'>
commit 9edf7d75ee5f21663a0183d21f702682d0ef132f upstream.

Commit 64deb6efdc5504ce97b5c1c6f281fffbc150bd93
"[SCSI] zfcp: Use status_read_buf_num provided by FCP channel"
started using a value returned by the channel but only evaluated the value
if the fabric link is up.
Commit 8d88cf3f3b9af4713642caeb221b6d6a42019001
"[SCSI] zfcp: Update status read mempool"
introduced mempool resizings based on the above value.
On setting an FCP device online for the very first time since boot, a new
zeroed adapter object is allocated. If the link is down, the number of
status read requests remains zero. Since only the config data exchange is
incomplete, we proceed with adapter open recovery; however, we then
unconditionally call mempool_resize with adapter-&gt;stat_read_buf_num == 0.

This causes a kernel message "kernel BUG at mm/mempool.c:131!" in process
"zfcperp&lt;FCP-device-bus-ID&gt;" with last function mempool_resize in Krnl PSW
and zfcp_erp_thread in the Call Trace.

Don't evaluate channel values that are invalid on link down.  The number of
status read requests is always valid; it is evaluated and set to a positive
minimum.  The adapter open recovery can then proceed, and the channel has
status read buffers with which to inform us of a future link-up event.
While we are not aware of any other code path that could result in mempool
resize attempts of size zero, we still also initialize the number of status
read buffers to be posted to a static minimum number on adapter object
allocation.
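
A sketch of the two places involved (identifiers follow the zfcp driver;
treat the exact expressions as illustrative): a static minimum at adapter
object allocation, and a floor when evaluating exchange-config data:

    /* on adapter object allocation */
    adapter-&gt;stat_read_buf_num = FSF_STATUS_READS_RECOM;

    /* on exchange config data: never accept less than the minimum */
    adapter-&gt;stat_read_buf_num = max(bottom-&gt;status_read_buf_num,
                                     (u16)FSF_STATUS_READS_RECOM);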

Backported to 3.4-stable.  Commit a53c8fa (merged in v3.6-rc1) unified
copyright messages, e.g. revising 'Copyright IBM Corporation' to
'Copyright IBM Corp.', so the messages here were updated the same way.

Signed-off-by: Steffen Maier &lt;maier@linux.vnet.ibm.com&gt;
Cc: &lt;stable@vger.kernel.org&gt; #2.6.35+
Signed-off-by: James Bottomley &lt;JBottomley@Parallels.com&gt;
Signed-off-by: Zhouping Liu &lt;zliu@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>firewire: fix libdc1394/FlyCap2 iso event regression</title>
<updated>2013-08-04T08:26:02Z</updated>
<author>
<name>Clemens Ladisch</name>
<email>clemens@ladisch.de</email>
</author>
<published>2013-07-22T19:32:09Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=a90a3adeda28c4b701b11770817cf86d92db3228'/>
<id>urn:sha1:a90a3adeda28c4b701b11770817cf86d92db3228</id>
<content type='text'>
commit 0699a73af3811b66b1ab5650575acee5eea841ab upstream.

Commit 18d627113b83 (firewire: prevent dropping of completed iso packet
header data) was intended to be an obvious bug fix, but libdc1394 and
FlyCap2 depend on the old behaviour by ignoring all returned information
and thus not noticing that not all packets have been received yet.  The
result was that the video frame buffers would be saved before they
contained the correct data.

Reintroduce the old behaviour for old clients.

Tested-by: Stepan Salenikovich &lt;stepan.salenikovich@gmail.com&gt;
Tested-by: Josep Bosch &lt;jep250@gmail.com&gt;
Cc: &lt;stable@vger.kernel.org&gt; # 3.4+
Signed-off-by: Clemens Ladisch &lt;clemens@ladisch.de&gt;
Signed-off-by: Stefan Richter &lt;stefanr@s5r6.in-berlin.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>xen/evtchn: avoid a deadlock when unbinding an event channel</title>
<updated>2013-08-04T08:26:00Z</updated>
<author>
<name>David Vrabel</name>
<email>david.vrabel@citrix.com</email>
</author>
<published>2013-07-19T14:51:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8924588cf02036c239cfec4302f8b44ca6e2c6bb'/>
<id>urn:sha1:8924588cf02036c239cfec4302f8b44ca6e2c6bb</id>
<content type='text'>
commit 179fbd5a45f0d4034cc6fd37b8d367a3b79663c4 upstream.

Unbinding an event channel (either with the ioctl or when the evtchn
device is closed) may deadlock because disable_irq() is called with
port_user_lock held which is also locked by the interrupt handler.

Consider IOCTL_EVTCHN_UNBIND being serviced: the routine has just taken
the lock, and an interrupt happens. evtchn_interrupt is invoked, tries
to take the lock, and spins forever.

A quick glance at the code shows that the spinlock is a local IRQ
variant. Unfortunately that does not help as "disable_irq() waits for
the interrupt handler on all CPUs to stop running.  If the irq occurs
on another VCPU, it tries to take port_user_lock and can't because
the unbind ioctl is holding it." (from David). Hence we cannot
depend on said spinlock to protect us.  We could make it a system-wide
IRQ-disabling spinlock, but there is a better way.

We can piggyback on the fact that the spinlock exists to keep the
get_port_user() checks up to date, and we can alter those checks so that
they do not depend on the spinlock (they are protected by u-&gt;bind_mutex
in the ioctl), removing the now-unnecessary locking on this
(IOCTL_EVTCHN_UNBIND) path.

In the interrupt handler we cannot use the mutex, but we do not
need it.

"The unbind disables the irq before making the port user stale, so when
you clear it you are guaranteed that the interrupt handler that might
use that port cannot be running." (from David).

Hence this patch removes the spinlock usage on the teardown path
and piggybacks on disable_irq happening before we muck with the
get_port_user() data. This ensures that the interrupt handler will
never run on stale data.
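
In outline, the teardown path now relies on that ordering instead of the
spinlock (a simplified sketch; the real function in drivers/xen/evtchn.c
differs in detail):

    static void evtchn_unbind_from_user(struct per_user_data *u, int port)
    {
            int irq = irq_from_evtchn(port);

            /* disables the irq and waits for any running handler, so
             * nothing can still be using the port when we clear it */
            unbind_from_irqhandler(irq, (void *)(unsigned long)port);

            set_port_user(port, NULL);      /* no spinlock needed */
    }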

Signed-off-by: David Vrabel &lt;david.vrabel@citrix.com&gt;
Signed-off-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
[v1: Expanded the commit description a bit]
Signed-off-by: Jonghwan Choi &lt;jhbird.choi@samsung.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>md/raid10: remove use-after-free bug.</title>
<updated>2013-08-04T08:26:00Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2013-07-24T05:37:42Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=aa2f8abe27dd7058fd7bfe52401121037a285406'/>
<id>urn:sha1:aa2f8abe27dd7058fd7bfe52401121037a285406</id>
<content type='text'>
commit 0eb25bb027a100f5a9df8991f2f628e7d851bc1e upstream.

We always need to be careful when calling generic_make_request, as it
can start a chain of events which might free something that we are
using.

Here is one place I wasn't careful enough.  If the wbio2 is not in
use, then it might get freed at the first generic_make_request call.
So perform all necessary tests first.
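
The shape of the fix, as a sketch rather than the literal diff: everything
that touches wbio2 moves ahead of the first submission, because that
submission may free it:

    /* do all the work that depends on wbio2 first ... */
    if (wbio2)
            atomic_inc(&amp;conf-&gt;mirrors[d].replacement-&gt;nr_pending);
    /* ... because this call can start a chain that frees wbio2 */
    generic_make_request(wbio);
    if (wbio2)
            generic_make_request(wbio2);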

This bug was introduced in 3.3-rc3 (24afd80d99) and can cause an
oops, so the fix is suitable for any -stable kernel since then.

Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>md/raid5: fix interaction of 'replace' and 'recovery'.</title>
<updated>2013-08-04T08:26:00Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2013-07-22T02:57:21Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0761d079bbc23ffd309c586ddd142b3cb5f11e0d'/>
<id>urn:sha1:0761d079bbc23ffd309c586ddd142b3cb5f11e0d</id>
<content type='text'>
commit f94c0b6658c7edea8bc19d13be321e3860a3fa54 upstream.

If a device in a RAID4/5/6 is being replaced while another is being
recovered, then the writes to the replacement device currently don't
happen, resulting in corruption when the replacement completes and the
new drive takes over.

This is because the replacement writes are only triggered when
's.replacing' is set and not when the similar 's.sync' is set (which
is the case during resync and recovery - it means all devices need to
be read).

So schedule those writes when s.replacing is set as well.

In this case we cannot use "STRIPE_INSYNC" to record that the
replacement has happened as that is needed for recording that any
parity calculation is complete.  So introduce STRIPE_REPLACED to
record if the replacement has happened.

For safety we should also check that STRIPE_COMPUTE_RUN is not set.
This has a similar effect to the "s.locked == 0" test.  The latter
ensures that no IO has been flagged but not yet started; the former
checks that no parity calculation has been flagged but not yet started.
We must wait for both of these to complete before triggering the
'replace'.
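
Putting the pieces together, the scheduling test becomes roughly the
following (a simplified sketch of the handle_stripe logic, not the exact
hunk; 's.sync' as referred to above):

    if ((s.replacing || s.sync) &amp;&amp; s.locked == 0 &amp;&amp;
        !test_bit(STRIPE_COMPUTE_RUN, &amp;sh-&gt;state) &amp;&amp;
        !test_bit(STRIPE_REPLACED, &amp;sh-&gt;state)) {
            /* write out to replacement devices where possible */
            for (i = 0; i &lt; conf-&gt;raid_disks; i++)
                    if (test_bit(R5_NeedReplace, &amp;sh-&gt;dev[i].flags) &amp;&amp;
                        test_bit(R5_UPTODATE, &amp;sh-&gt;dev[i].flags)) {
                            set_bit(R5_WantReplace, &amp;sh-&gt;dev[i].flags);
                            set_bit(R5_LOCKED, &amp;sh-&gt;dev[i].flags);
                            s.locked++;
                    }
            if (s.replacing)
                    set_bit(STRIPE_INSYNC, &amp;sh-&gt;state);
            set_bit(STRIPE_REPLACED, &amp;sh-&gt;state);
    }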

Add a similar test to the subsequent check for "are we finished yet".
This possibly isn't needed (it is subsumed in the STRIPE_INSYNC test),
but it makes it more obvious that the REPLACE will happen before we
think we are finished.

Finally if a NeedReplace device is not UPTODATE then that is an
error.  We really must trigger a warning.

This bug was introduced in commit 9a3e1101b827a59ac9036a672f5fa8d5279d0fe2
(md/raid5: detect and handle replacements during recovery.), which
introduced replacement for raid5.
That was in 3.3-rc3, so any stable kernel since then would benefit
from this fix.

Reported-by: qindehua &lt;13691222965@163.com&gt;
Tested-by: qindehua &lt;qindehua@163.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
