<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux, branch v4.2.5</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.2.5</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.2.5'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2015-10-27T00:53:35Z</updated>
<entry>
<title>skbuff: Fix skb checksum partial check.</title>
<updated>2015-10-27T00:53:35Z</updated>
<author>
<name>Pravin B Shelar</name>
<email>pshelar@nicira.com</email>
</author>
<published>2015-09-29T00:24:25Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=dbce2220803222ccdaca8a22a4df9688738dbbc6'/>
<id>urn:sha1:dbce2220803222ccdaca8a22a4df9688738dbbc6</id>
<content type='text'>
[ Upstream commit 31b33dfb0a144469dd805514c9e63f4993729a48 ]

The earlier patch 6ae459bda tried to detect an invalid checksum-partial
skb by comparing the pull length to the checksum offset. But that does
not work in all cases, since the checksum offset depends on
updates to skb-&gt;data.

This patch fixes it by validating the checksum start offset
after the skb-&gt;data pointer has been updated. A negative checksum
start offset means there is nothing left to checksum.
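
A minimal sketch of the resulting check in skb_postpull_rcsum()
(include/linux/skbuff.h), reconstructed from the description above
rather than quoted from the patch:

    static inline void skb_postpull_rcsum(struct sk_buff *skb,
                                          const void *start, unsigned int len)
    {
            if (skb-&gt;ip_summed == CHECKSUM_COMPLETE)
                    skb-&gt;csum = csum_sub(skb-&gt;csum,
                                         csum_partial(start, len, 0));
            else if (skb-&gt;ip_summed == CHECKSUM_PARTIAL &amp;&amp;
                     skb_checksum_start_offset(skb) &lt; 0)
                    /* csum start now lies before skb-&gt;data: nothing to do */
                    skb-&gt;ip_summed = CHECKSUM_NONE;
    }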

Fixes: 6ae459bda ("skbuff: Fix skb checksum flag on skb pull")
Reported-by: Andrew Vagin &lt;avagin@odin.com&gt;
Signed-off-by: Pravin B Shelar &lt;pshelar@nicira.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>skbuff: Fix skb checksum flag on skb pull</title>
<updated>2015-10-27T00:53:35Z</updated>
<author>
<name>Pravin B Shelar</name>
<email>pshelar@nicira.com</email>
</author>
<published>2015-09-22T19:57:53Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d20c97ca3264c3c5b70eb25cd126ee77e4cae851'/>
<id>urn:sha1:d20c97ca3264c3c5b70eb25cd126ee77e4cae851</id>
<content type='text'>
[ Upstream commit 6ae459bdaaeebc632b16e54dcbabb490c6931d61 ]

A VXLAN device can receive an skb with checksum partial set. But the
checksum offset may lie in the outer header, which is pulled on receive.
This results in a negative checksum offset for the skb. Such an skb can
trigger the assert failure in skb_checksum_help(). This patch fixes the
bug by setting the checksum to none while pulling the outer header.
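
A minimal sketch of the check this patch adds in skb_postpull_rcsum(),
reconstructed from this description and from the follow-up commit above
(which later replaces the length comparison with a negative-offset test):

    else if (skb-&gt;ip_summed == CHECKSUM_PARTIAL &amp;&amp;
             skb_checksum_start_offset(skb) &lt;= len)
            /* the pulled outer header carried the checksum start */
            skb-&gt;ip_summed = CHECKSUM_NONE;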

Following is the kernel panic message from an old kernel hitting the bug.

------------[ cut here ]------------
kernel BUG at net/core/dev.c:1906!
RIP: 0010:[&lt;ffffffff81518034&gt;] skb_checksum_help+0x144/0x150
Call Trace:
&lt;IRQ&gt;
[&lt;ffffffffa0164c28&gt;] queue_userspace_packet+0x408/0x470 [openvswitch]
[&lt;ffffffffa016614d&gt;] ovs_dp_upcall+0x5d/0x60 [openvswitch]
[&lt;ffffffffa0166236&gt;] ovs_dp_process_packet_with_key+0xe6/0x100 [openvswitch]
[&lt;ffffffffa016629b&gt;] ovs_dp_process_received_packet+0x4b/0x80 [openvswitch]
[&lt;ffffffffa016c51a&gt;] ovs_vport_receive+0x2a/0x30 [openvswitch]
[&lt;ffffffffa0171383&gt;] vxlan_rcv+0x53/0x60 [openvswitch]
[&lt;ffffffffa01734cb&gt;] vxlan_udp_encap_recv+0x8b/0xf0 [openvswitch]
[&lt;ffffffff8157addc&gt;] udp_queue_rcv_skb+0x2dc/0x3b0
[&lt;ffffffff8157b56f&gt;] __udp4_lib_rcv+0x1cf/0x6c0
[&lt;ffffffff8157ba7a&gt;] udp_rcv+0x1a/0x20
[&lt;ffffffff8154fdbd&gt;] ip_local_deliver_finish+0xdd/0x280
[&lt;ffffffff81550128&gt;] ip_local_deliver+0x88/0x90
[&lt;ffffffff8154fa7d&gt;] ip_rcv_finish+0x10d/0x370
[&lt;ffffffff81550365&gt;] ip_rcv+0x235/0x300
[&lt;ffffffff8151ba1d&gt;] __netif_receive_skb+0x55d/0x620
[&lt;ffffffff8151c360&gt;] netif_receive_skb+0x80/0x90
[&lt;ffffffff81459935&gt;] virtnet_poll+0x555/0x6f0
[&lt;ffffffff8151cd04&gt;] net_rx_action+0x134/0x290
[&lt;ffffffff810683d8&gt;] __do_softirq+0xa8/0x210
[&lt;ffffffff8162fe6c&gt;] call_softirq+0x1c/0x30
[&lt;ffffffff810161a5&gt;] do_softirq+0x65/0xa0
[&lt;ffffffff810687be&gt;] irq_exit+0x8e/0xb0
[&lt;ffffffff81630733&gt;] do_IRQ+0x63/0xe0
[&lt;ffffffff81625f2e&gt;] common_interrupt+0x6e/0x6e

Reported-by: Anupam Chanda &lt;achanda@vmware.com&gt;
Signed-off-by: Pravin B Shelar &lt;pshelar@nicira.com&gt;
Acked-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sched/preempt: Fix cond_resched_lock() and cond_resched_softirq()</title>
<updated>2015-10-22T21:49:35Z</updated>
<author>
<name>Konstantin Khlebnikov</name>
<email>khlebnikov@yandex-team.ru</email>
</author>
<published>2015-07-15T09:52:04Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=29b8eaffcba906afc07374022f43351903b94cee'/>
<id>urn:sha1:29b8eaffcba906afc07374022f43351903b94cee</id>
<content type='text'>
commit fe32d3cd5e8eb0f82e459763374aa80797023403 upstream.

These functions check should_resched() before unlocking the spinlock /
re-enabling bottom halves: preempt_count is still non-zero at that point,
so should_resched() always returns false. cond_resched_lock() only ever
rescheduled when spin_needbreak was set.

This patch adds a "preempt_offset" argument to should_resched().

The matching preempt_count offset constants are:

  PREEMPT_DISABLE_OFFSET  - offset after preempt_disable()
  PREEMPT_LOCK_OFFSET     - offset after spin_lock()
  SOFTIRQ_DISABLE_OFFSET  - offset after local_bh_disable()
  SOFTIRQ_LOCK_OFFSET     - offset after spin_lock_bh()
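
A minimal sketch of the reworked helper and one caller, assuming the
asm-generic implementation and the __cond_resched_lock() call site; this
is a reconstruction, not a quote of the patch:

    /* include/asm-generic/preempt.h (sketch) */
    static __always_inline bool should_resched(int preempt_offset)
    {
            return unlikely(preempt_count() == preempt_offset &amp;&amp;
                            tif_need_resched());
    }

    /* kernel/sched/core.c (sketch): the lock is still held here, so the
     * "safe to reschedule" count is PREEMPT_LOCK_OFFSET, not zero. */
    int __cond_resched_lock(spinlock_t *lock)
    {
            int resched = should_resched(PREEMPT_LOCK_OFFSET);
            int ret = 0;

            if (spin_needbreak(lock) || resched) {
                    spin_unlock(lock);
                    if (resched)
                            preempt_schedule_common();
                    else
                            cpu_relax();
                    ret = 1;
                    spin_lock(lock);
            }
            return ret;
    }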

Signed-off-by: Konstantin Khlebnikov &lt;khlebnikov@yandex-team.ru&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Alexander Graf &lt;agraf@suse.de&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: David Vrabel &lt;david.vrabel@citrix.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Fixes: bdb438065890 ("sched: Extract the basic add/sub preempt_count modifiers")
Link: http://lkml.kernel.org/r/20150715095204.12246.98268.stgit@buzz
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>security: fix typo in security_task_prctl</title>
<updated>2015-10-22T21:49:29Z</updated>
<author>
<name>Jann Horn</name>
<email>jann@thejh.net</email>
</author>
<published>2015-09-18T21:41:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e716af1ed89ba0a44094e149939810dca2172226'/>
<id>urn:sha1:e716af1ed89ba0a44094e149939810dca2172226</id>
<content type='text'>
commit b7f76ea2ef6739ee484a165ffbac98deb855d3d3 upstream.

Signed-off-by: Jann Horn &lt;jann@thejh.net&gt;
Reviewed-by: Andy Lutomirski &lt;luto@kernel.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>memcg: fix dirty page migration</title>
<updated>2015-10-22T21:49:20Z</updated>
<author>
<name>Greg Thelen</name>
<email>gthelen@google.com</email>
</author>
<published>2015-10-01T22:37:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=aba3953d525672ee7f6c59d340e0ad59a8838275'/>
<id>urn:sha1:aba3953d525672ee7f6c59d340e0ad59a8838275</id>
<content type='text'>
commit 0610c25daa3e76e38ad5a8fae683a89ff9f71798 upstream.

The problem starts with a file-backed dirty page which is charged to a
memcg.  Then page migration is used to move oldpage to newpage.

Migration:
 - copies the oldpage's data to newpage
 - clears oldpage.PG_dirty
 - sets newpage.PG_dirty
 - uncharges oldpage from memcg
 - charges newpage to memcg

Clearing oldpage.PG_dirty decrements the charged memcg's dirty page
count.

However, because newpage is not yet charged, setting newpage.PG_dirty
does not increment the memcg's dirty page count.  After migration
completes newpage.PG_dirty is eventually cleared, often in
account_page_cleaned().  At this time newpage is charged to a memcg so
the memcg's dirty page count is decremented which causes underflow
because the count was not previously incremented by migration.  This
underflow causes balance_dirty_pages() to see a very large unsigned
number of dirty memcg pages which leads to aggressive throttling of
buffered writes by processes in non root memcg.

This issue:
 - can harm performance of non root memcg buffered writes.
 - can report too small (even negative) values in
   memory.stat[(total_)dirty] counters of all memcg, including the root.

To avoid polluting migrate.c with #ifdef CONFIG_MEMCG checks, introduce
page_memcg() and set_page_memcg() helpers.
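
A minimal sketch of the two helpers, assuming they simply wrap the
page-&gt;mem_cgroup field; the CONFIG_MEMCG=n stubs are what keep the
ifdefs out of migrate.c:

    #ifdef CONFIG_MEMCG
    static inline struct mem_cgroup *page_memcg(struct page *page)
    {
            return page-&gt;mem_cgroup;
    }
    static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg)
    {
            page-&gt;mem_cgroup = memcg;
    }
    #else
    static inline struct mem_cgroup *page_memcg(struct page *page)
    {
            return NULL;
    }
    static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg)
    {
    }
    #endif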

Test:
    0) setup and enter limited memcg
    mkdir /sys/fs/cgroup/test
    echo 1G &gt; /sys/fs/cgroup/test/memory.limit_in_bytes
    echo $$ &gt; /sys/fs/cgroup/test/cgroup.procs

    1) buffered writes baseline
    dd if=/dev/zero of=/data/tmp/foo bs=1M count=1k
    sync
    grep ^dirty /sys/fs/cgroup/test/memory.stat

    2) buffered writes with compaction antagonist to induce migration
    yes 1 &gt; /proc/sys/vm/compact_memory &amp;
    rm -rf /data/tmp/foo
    dd if=/dev/zero of=/data/tmp/foo bs=1M count=1k
    kill %
    sync
    grep ^dirty /sys/fs/cgroup/test/memory.stat

    3) buffered writes without antagonist, should match baseline
    rm -rf /data/tmp/foo
    dd if=/dev/zero of=/data/tmp/foo bs=1M count=1k
    sync
    grep ^dirty /sys/fs/cgroup/test/memory.stat

                       (speed, dirty residue)
             unpatched                       patched
    1) 841 MB/s 0 dirty pages          886 MB/s 0 dirty pages
    2) 611 MB/s -33427456 dirty pages  793 MB/s 0 dirty pages
    3) 114 MB/s -33427456 dirty pages  891 MB/s 0 dirty pages

    Notice that unpatched baseline performance (1) fell after
    migration (3): 841 -&gt; 114 MB/s.  In the patched kernel, post
    migration performance matches baseline.

Fixes: c4843a7593a9 ("memcg: add per cgroup dirty page accounting")
Signed-off-by: Greg Thelen &lt;gthelen@google.com&gt;
Reported-by: Dave Hansen &lt;dave.hansen@intel.com&gt;
Acked-by: Michal Hocko &lt;mhocko@suse.com&gt;
Acked-by: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>Revert "sched, cgroup: replace signal_struct-&gt;group_rwsem with a global percpu_rwsem"</title>
<updated>2015-10-22T21:49:19Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2015-09-16T15:51:12Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=205f67610791fcbad0b8f61e43d2f2bd3c5188ae'/>
<id>urn:sha1:205f67610791fcbad0b8f61e43d2f2bd3c5188ae</id>
<content type='text'>
commit 0c986253b939cc14c69d4adbe2b4121bdf4aa220 upstream.

This reverts commit d59cfc09c32a2ae31f1c3bc2983a0cd79afb3f14.

d59cfc09c32a ("sched, cgroup: replace signal_struct-&gt;group_rwsem with
a global percpu_rwsem") and b5ba75b5fc0e ("cgroup: simplify
threadgroup locking") changed how cgroup synchronizes against task
fork and exits so that it uses global percpu_rwsem instead of
per-process rwsem; unfortunately, the write [un]lock paths of
percpu_rwsem always involve synchronize_rcu_expedited() which turned
out to be too expensive.

Improvements to percpu_rwsem are scheduled to be merged in the coming
v4.4-rc1 merge window, which should alleviate this issue.  For now, revert
the two commits to restore the per-process rwsem.  They will be re-applied
for the v4.4-rc1 merge window.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Link: http://lkml.kernel.org/g/55F8097A.7000206@de.ibm.com
Reported-by: Christian Borntraeger &lt;borntraeger@de.ibm.com&gt;
Cc: Oleg Nesterov &lt;oleg@redhat.com&gt;
Cc: "Paul E. McKenney" &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>jbd2: avoid infinite loop when destroying aborted journal</title>
<updated>2015-09-29T17:33:37Z</updated>
<author>
<name>Jan Kara</name>
<email>jack@suse.com</email>
</author>
<published>2015-07-28T18:57:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b01684edc737c22add3dcd471da6371cd4914ada'/>
<id>urn:sha1:b01684edc737c22add3dcd471da6371cd4914ada</id>
<content type='text'>
commit 841df7df196237ea63233f0f9eaa41db53afd70f upstream.

Commit 6f6a6fda2945 ("jbd2: fix ocfs2 corrupt when updating journal
superblock fails") changed jbd2_cleanup_journal_tail() to return EIO
when the journal is aborted. That makes the logic in
jbd2_log_do_checkpoint() bail out, which is fine, except that
jbd2_journal_destroy() expects jbd2_log_do_checkpoint() to always make
progress in cleaning the journal. Without that, jbd2_journal_destroy()
just spins in an infinite loop.

Fix jbd2_journal_destroy() to clean up the journal checkpoint lists if
jbd2_log_do_checkpoint() fails with an error.
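
A minimal sketch of the changed shutdown loop in jbd2_journal_destroy();
the cleanup helper name and the locking details are assumptions based on
the description above, not a quote of the patch:

    spin_lock(&amp;journal-&gt;j_list_lock);
    while (journal-&gt;j_checkpoint_transactions != NULL) {
            spin_unlock(&amp;journal-&gt;j_list_lock);
            mutex_lock(&amp;journal-&gt;j_checkpoint_mutex);
            err = jbd2_log_do_checkpoint(journal);
            mutex_unlock(&amp;journal-&gt;j_checkpoint_mutex);
            /*
             * If checkpointing failed (e.g. the journal is aborted), drop
             * the remaining checkpoint lists instead of looping forever.
             */
            if (err) {
                    jbd2_journal_destroy_checkpoint(journal);
                    spin_lock(&amp;journal-&gt;j_list_lock);
                    break;
            }
            spin_lock(&amp;journal-&gt;j_list_lock);
    }
    spin_unlock(&amp;journal-&gt;j_list_lock);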

Reported-by: Eryu Guan &lt;guaneryu@gmail.com&gt;
Tested-by: Eryu Guan &lt;guaneryu@gmail.com&gt;
Fixes: 6f6a6fda2945 ("jbd2: fix ocfs2 corrupt when updating journal superblock fails")
Signed-off-by: Jan Kara &lt;jack@suse.com&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>SUNRPC: Ensure that we wait for connections to complete before retrying</title>
<updated>2015-09-29T17:33:31Z</updated>
<author>
<name>Trond Myklebust</name>
<email>trond.myklebust@primarydata.com</email>
</author>
<published>2015-09-17T03:43:17Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=bd6f589293244cf406585cfac2b6e9a5742181ae'/>
<id>urn:sha1:bd6f589293244cf406585cfac2b6e9a5742181ae</id>
<content type='text'>
commit 0fdea1e8a2853f79d39b8555cc9de16a7e0ab26f upstream.

Commit 718ba5b87343 moved the responsibility for unlocking the socket to
xs_tcp_setup_socket, meaning that the socket will be unlocked before we
know that it has finished trying to connect. This patch, based on
an initial patch by Russell King, ensures that we delay clearing the
XPRT_CONNECTING flag until we know either that we failed to initiate
a connection attempt or that the connection attempt itself failed.

Fixes: 718ba5b87343 ("SUNRPC: Add helpers to prevent socket create from racing")
Reported-by: Russell King &lt;linux@arm.linux.org.uk&gt;
Reported-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Tested-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Tested-by: Benjamin Coddington &lt;bcodding@redhat.com&gt;
Signed-off-by: Trond Myklebust &lt;trond.myklebust@primarydata.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>svcrdma: Change maximum server payload back to RPCSVC_MAXPAYLOAD</title>
<updated>2015-09-29T17:33:30Z</updated>
<author>
<name>Chuck Lever</name>
<email>chuck.lever@oracle.com</email>
</author>
<published>2015-08-07T20:55:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=8fd1f7a1c4bbfb43f3f17d12f5c60925ebd9dd6c'/>
<id>urn:sha1:8fd1f7a1c4bbfb43f3f17d12f5c60925ebd9dd6c</id>
<content type='text'>
commit cc9a903d915c21626b6b2fbf8ed0ff16a7f82210 upstream.

Both commit 0380a3f375 ("svcrdma: Add a separate "max data segs"
macro for svcrdma") and commit 7e5be28827bf ("svcrdma: advertise
the correct max payload") are incorrect. This commit reverts both
changes, restoring the server's maximum payload size to 1MB.

Commit 7e5be28827bf based the server's maximum payload on the
_client's_ RPCRDMA_MAX_DATA_SEGS value. That was wrong.

Commit 0380a3f375 tried to fix this so that the client maximum
payload size could be raised without affecting the server, but
managed to confuse matters more on the server side.

More importantly, limiting the advertised maximum payload size was
meant to be a workaround, not the actual fix. We need to revisit

  https://bugzilla.linux-nfs.org/show_bug.cgi?id=270

A Linux client on a platform with 64KB pages can overrun and crash
an x86_64 NFS/RDMA server when the r/wsize is 1MB. An x86_64 Linux
client seems to work fine using 1MB reads and writes when the Linux
server's maximum payload size is restored to 1MB.

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=270
Fixes: 0380a3f375 ("svcrdma: Add a separate "max data segs" macro")
Signed-off-by: Chuck Lever &lt;chuck.lever@oracle.com&gt;
Signed-off-by: J. Bruce Fields &lt;bfields@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>nfc: st-nci: Remove duplicate file platform_data/st_nci.h</title>
<updated>2015-09-29T17:33:14Z</updated>
<author>
<name>Christophe Ricard</name>
<email>christophe.ricard@gmail.com</email>
</author>
<published>2015-08-14T20:33:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d0d7862a23b17851f6fe7a1b20c4f4e3b702e126'/>
<id>urn:sha1:d0d7862a23b17851f6fe7a1b20c4f4e3b702e126</id>
<content type='text'>
commit 76b733d15874128ee2d0365b4cbe7d51decd8d37 upstream.

commit "nfc: st-nci: Rename st21nfcb to st-nci" adds
include/linux/platform_data/st_nci.h duplicated with
include/linux/platform_data/st-nci.h.

Only drivers/nfc/st-nci/i2c.c uses platform_data/st_nci.h.

Reported-by: Hauke Mehrtens &lt;hauke@hauke-m.de&gt;
Signed-off-by: Christophe Ricard &lt;christophe-h.ricard@st.com&gt;
Signed-off-by: Samuel Ortiz &lt;sameo@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
