<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include, branch v5.7.17</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.7.17</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.7.17'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2020-08-21T11:07:36Z</updated>
<entry>
<title>iommu/vt-d: Enforce PASID devTLB field mask</title>
<updated>2020-08-21T11:07:36Z</updated>
<author>
<name>Liu Yi L</name>
<email>yi.l.liu@intel.com</email>
</author>
<published>2020-07-24T01:49:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=31f383287c93399c798cd0e707c69ca4df7ea312'/>
<id>urn:sha1:31f383287c93399c798cd0e707c69ca4df7ea312</id>
<content type='text'>
[ Upstream commit 5f77d6ca5ca74e4b4a5e2e010f7ff50c45dea326 ]

Set proper masks so that invalid input does not spill over into reserved bits.

Signed-off-by: Liu Yi L &lt;yi.l.liu@intel.com&gt;
Signed-off-by: Jacob Pan &lt;jacob.jun.pan@linux.intel.com&gt;
Signed-off-by: Lu Baolu &lt;baolu.lu@linux.intel.com&gt;
Reviewed-by: Eric Auger &lt;eric.auger@redhat.com&gt;
Link: https://lore.kernel.org/r/20200724014925.15523-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel &lt;jroedel@suse.de&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>crypto: algif_aead - Only wake up when ctx-&gt;more is zero</title>
<updated>2020-08-21T11:07:31Z</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2020-05-29T14:23:49Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e26cdc36de7b1bb0c75e9e46df2fb55c2b24e713'/>
<id>urn:sha1:e26cdc36de7b1bb0c75e9e46df2fb55c2b24e713</id>
<content type='text'>
[ Upstream commit f3c802a1f30013f8f723b62d7fa49eb9e991da23 ]

AEAD does not support partial requests, so we must not wake up
while ctx-&gt;more is set.  In order to distinguish between the case
of no data sent yet and a zero-length request, a new init flag has
been added to ctx.

SKCIPHER has also been modified to ensure that at least a block
of data is available if there is more data to come.

Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...")
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>hugetlbfs: remove call to huge_pte_alloc without i_mmap_rwsem</title>
<updated>2020-08-21T11:07:27Z</updated>
<author>
<name>Mike Kravetz</name>
<email>mike.kravetz@oracle.com</email>
</author>
<published>2020-08-12T01:31:38Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=216944c62eacec737a260829e4ba8de3dac3c6e3'/>
<id>urn:sha1:216944c62eacec737a260829e4ba8de3dac3c6e3</id>
<content type='text'>
commit 34ae204f18519f0920bd50a644abd6fefc8dbfcf upstream.

Commit c0d0381ade79 ("hugetlbfs: use i_mmap_rwsem for more pmd sharing
synchronization") requires callers of huge_pte_alloc to hold i_mmap_rwsem
in at least read mode.  This is because the explicit locking in
huge_pmd_share (called by huge_pte_alloc) was removed.  When restructuring
the code, the call to huge_pte_alloc in the else block at the beginning of
hugetlb_fault was missed.

Unfortunately, that else clause is exercised when there is no page table
entry.  This will likely lead to a call to huge_pmd_share.  If
huge_pmd_share thinks pmd sharing is possible, it will traverse the
mapping tree (i_mmap) without holding i_mmap_rwsem.  If someone else is
modifying the tree, bad things such as addressing exceptions or worse
could happen.

Simply remove the else clause.  It should have been removed previously.
The code following the else will call huge_pte_alloc with the appropriate
locking.

To prevent this type of issue in the future, add routines to assert that
i_mmap_rwsem is held, and call these routines in huge pmd sharing
routines.
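
As a minimal sketch of one such assertion helper (assuming lockdep is
available; the helper name is illustrative, not necessarily the one the
patch uses):

  #include &lt;linux/fs.h&gt;
  #include &lt;linux/lockdep.h&gt;

  /* assert that the caller holds mapping-&gt;i_mmap_rwsem (read or write) */
  static inline void i_mmap_assert_locked(struct address_space *mapping)
  {
          lockdep_assert_held(&amp;mapping-&gt;i_mmap_rwsem);
  }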

Fixes: c0d0381ade79 ("hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization")
Suggested-by: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Mike Kravetz &lt;mike.kravetz@oracle.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Michal Hocko &lt;mhocko@kernel.org&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Naoya Horiguchi &lt;n-horiguchi@ah.jp.nec.com&gt;
Cc: "Aneesh Kumar K.V" &lt;aneesh.kumar@linux.vnet.ibm.com&gt;
Cc: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc: "Kirill A.Shutemov" &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Cc: Prakash Sangappa &lt;prakash.sangappa@oracle.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Link: http://lkml.kernel.org/r/e670f327-5cf9-1959-96e4-6dc7cc30d3d5@oracle.com
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>net/compat: Add missing sock updates for SCM_RIGHTS</title>
<updated>2020-08-21T11:07:25Z</updated>
<author>
<name>Kees Cook</name>
<email>keescook@chromium.org</email>
</author>
<published>2020-06-09T23:11:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=342a9b03f7944c0164bb4352ba7544d22af07654'/>
<id>urn:sha1:342a9b03f7944c0164bb4352ba7544d22af07654</id>
<content type='text'>
commit d9539752d23283db4692384a634034f451261e29 upstream.

Add the missing sock updates to the compat path via a new helper, which
will be used more in coming patches. (The net/core/scm.c code is left
as-is here to assist with -stable backports for the compat path.)

Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Sargun Dhillon &lt;sargun@sargun.me&gt;
Cc: Jakub Kicinski &lt;kuba@kernel.org&gt;
Cc: stable@vger.kernel.org
Fixes: 48a87cc26c13 ("net: netprio: fd passed in SCM_RIGHTS datagram not set correctly")
Fixes: d84295067fc7 ("net: net_cls: fd passed in SCM_RIGHTS datagram not set correctly")
Acked-by: Christian Brauner &lt;christian.brauner@ubuntu.com&gt;
Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>btrfs: pass checksum type via BTRFS_IOC_FS_INFO ioctl</title>
<updated>2020-08-21T11:07:19Z</updated>
<author>
<name>Johannes Thumshirn</name>
<email>johannes.thumshirn@wdc.com</email>
</author>
<published>2020-07-13T12:28:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=71d24eb39872d4b285fbf69f2a7282d81951a223'/>
<id>urn:sha1:71d24eb39872d4b285fbf69f2a7282d81951a223</id>
<content type='text'>
commit 137c541821a83debb63b3fa8abdd1cbc41bdf3a1 upstream.

With the recent addition of filesystem checksum types other than CRC32c,
the checksum type a btrfs filesystem uses is no longer hard-coded.

Up to now there has been no good way to read that checksum type, apart
from reading the filesystem UUID and then querying sysfs for it.

Add new csum_type and csum_size fields to the BTRFS_IOC_FS_INFO ioctl
command, which is usually used to query filesystem features. Also add a
flags member indicating that the kernel responded with valid csum_type
and csum_size fields.

For compatibility reasons, only return the csum_type and csum_size if
the BTRFS_FS_INFO_FLAG_CSUM_INFO flag was passed to the kernel. Also
clear any unknown flags so we don't pass false positives to user-space
newer than the kernel.

The csum members are added before flags, which might look odd, but this
is done to keep the alignment requirements and not introduce holes in
the structure.

To simplify further additions to the ioctl, also switch the padding to a
u8 array. Pahole was used to verify the result of this switch:

  $ pahole -C btrfs_ioctl_fs_info_args fs/btrfs/btrfs.ko
  struct btrfs_ioctl_fs_info_args {
	  __u64                      max_id;               /*     0     8 */
	  __u64                      num_devices;          /*     8     8 */
	  __u8                       fsid[16];             /*    16    16 */
	  __u32                      nodesize;             /*    32     4 */
	  __u32                      sectorsize;           /*    36     4 */
	  __u32                      clone_alignment;      /*    40     4 */
	  __u16                      csum_type;            /*    44     2 */
	  __u16                      csum_size;            /*    46     2 */
	  __u64                      flags;                /*    48     8 */
	  __u8                       reserved[968];        /*    56   968 */

	  /* size: 1024, cachelines: 16, members: 10 */
  };
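
A minimal user-space sketch of the intended query, assuming the updated
uapi header (flag and field names as described above; error handling
omitted):

  #include &lt;stdio.h&gt;
  #include &lt;sys/ioctl.h&gt;
  #include &lt;linux/btrfs.h&gt;

  /* fd is an open descriptor on any file of the mounted filesystem */
  struct btrfs_ioctl_fs_info_args args = { 0 };

  args.flags = BTRFS_FS_INFO_FLAG_CSUM_INFO;    /* request csum info */

  if (ioctl(fd, BTRFS_IOC_FS_INFO, &amp;args) == 0 &amp;&amp;
      (args.flags &amp; BTRFS_FS_INFO_FLAG_CSUM_INFO))
          printf("csum_type %u, csum_size %u\n",
                 (unsigned)args.csum_type, (unsigned)args.csum_size);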

Fixes: 3951e7f050ac ("btrfs: add xxhash64 to checksumming algorithms")
Fixes: 3831bf0094ab ("btrfs: add sha256 to checksumming algorithm")
CC: stable@vger.kernel.org # 5.5+
Signed-off-by: Johannes Thumshirn &lt;johannes.thumshirn@wdc.com&gt;
Reviewed-by: David Sterba &lt;dsterba@suse.com&gt;
Signed-off-by: David Sterba &lt;dsterba@suse.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>PCI/ATS: Add pci_pri_supported() to check device or associated PF</title>
<updated>2020-08-21T11:07:17Z</updated>
<author>
<name>Ashok Raj</name>
<email>ashok.raj@intel.com</email>
</author>
<published>2020-07-23T22:37:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=91944d34c5179616e799685b5af31bca721f48df'/>
<id>urn:sha1:91944d34c5179616e799685b5af31bca721f48df</id>
<content type='text'>
commit 3f9a7a13fe4cb6e119e4e4745fbf975d30bfac9b upstream.

For SR-IOV, the PF PRI is shared between the PF and any associated VFs, and
the PRI Capability is allowed for PFs but not for VFs.  Searching for the
PRI Capability on a VF always fails, even if its associated PF supports
PRI.

Add pci_pri_supported() to check whether a device or its associated PF
supports PRI.
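
As a hypothetical caller-side sketch (only the new helper is assumed;
everything else is illustrative):

  /* a VF now reports PRI support via its associated PF */
  if (!pci_pri_supported(pdev))
          return -EINVAL;   /* neither the device nor its PF has PRI */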

[bhelgaas: commit log, avoid "!!"]
Fixes: b16d0cb9e2fc ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS")
Link: https://lore.kernel.org/r/1595543849-19692-1-git-send-email-ashok.raj@intel.com
Signed-off-by: Ashok Raj &lt;ashok.raj@intel.com&gt;
Signed-off-by: Bjorn Helgaas &lt;bhelgaas@google.com&gt;
Reviewed-by: Lu Baolu &lt;baolu.lu@linux.intel.com&gt;
Acked-by: Joerg Roedel &lt;jroedel@suse.de&gt;
Cc: stable@vger.kernel.org	# v4.4+
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>genirq/affinity: Make affinity setting if activated opt-in</title>
<updated>2020-08-21T11:07:17Z</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2020-07-24T20:44:41Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=af0506d7904eb9530da48dd0b60aa58d0d857ac8'/>
<id>urn:sha1:af0506d7904eb9530da48dd0b60aa58d0d857ac8</id>
<content type='text'>
commit f0c7baca180046824e07fc5f1326e83a8fd150c7 upstream.

John reported that on a RK3288 system the perf per CPU interrupts are all
affine to CPU0 and provided the analysis:

 "It looks like what happens is that because the interrupts are not per-CPU
  in the hardware, armpmu_request_irq() calls irq_force_affinity() while
  the interrupt is deactivated and then request_irq() with IRQF_PERCPU |
  IRQF_NOBALANCING.

  Now when irq_startup() runs with IRQ_STARTUP_NORMAL, it calls
  irq_setup_affinity() which returns early because IRQF_PERCPU and
  IRQF_NOBALANCING are set, leaving the interrupt on its original CPU."

This was broken by the recent commit which blocked interrupt affinity
setting in hardware before activation of the interrupt. While this works
in general, it does not work for this particular case. Since, contrary to
the initial analysis, not all interrupt chip drivers implement an activate
callback, the safe cure is to make the deferred interrupt affinity setting
at activation time opt-in.

Implement the necessary core logic and make the two irqchip implementations
for which this is required opt-in. In hindsight this would have been the
right thing to do, but ...

Fixes: baedb87d1b53 ("genirq/affinity: Handle affinity setting on inactive interrupts correctly")
Reported-by: John Keeping &lt;john@metanate.com&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Tested-by: Marc Zyngier &lt;maz@kernel.org&gt;
Acked-by: Marc Zyngier &lt;maz@kernel.org&gt;
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/87blk4tzgm.fsf@nanos.tec.linutronix.de
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>include/asm-generic/vmlinux.lds.h: align ro_after_init</title>
<updated>2020-08-19T06:24:16Z</updated>
<author>
<name>Romain Naour</name>
<email>romain.naour@gmail.com</email>
</author>
<published>2020-08-15T00:31:57Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=123d7603cbcaceaa4f513be5b7c60ba16746d73e'/>
<id>urn:sha1:123d7603cbcaceaa4f513be5b7c60ba16746d73e</id>
<content type='text'>
commit 7f897acbe5d57995438c831670b7c400e9c0dc00 upstream.

Since patch [1], building the kernel with a toolchain based on binutils
2.33.1 prevents a sh4 system from booting under QEMU. Apply the patch
provided by Alan Modra [2] that fixes the alignment of rodata.

[1] https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=ebd2263ba9a9124d93bbc0ece63d7e0fae89b40e
[2] https://www.sourceware.org/ml/binutils/2019-12/msg00112.html
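
A simplified sketch of the kind of change, assuming the generic linker
script macro for this section (the exact alignment and surrounding macro
contents are illustrative):

  #define RO_AFTER_INIT_DATA						\
	. = ALIGN(8);							\
	__start_ro_after_init = .;					\
	*(.data..ro_after_init)						\
	__end_ro_after_init = .;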

Signed-off-by: Romain Naour &lt;romain.naour@gmail.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Alan Modra &lt;amodra@gmail.com&gt;
Cc: Bin Meng &lt;bin.meng@windriver.com&gt;
Cc: Chen Zhou &lt;chenzhou10@huawei.com&gt;
Cc: Geert Uytterhoeven &lt;geert+renesas@glider.be&gt;
Cc: John Paul Adrian Glaubitz &lt;glaubitz@physik.fu-berlin.de&gt;
Cc: Krzysztof Kozlowski &lt;krzk@kernel.org&gt;
Cc: Kuninori Morimoto &lt;kuninori.morimoto.gx@renesas.com&gt;
Cc: Rich Felker &lt;dalias@libc.org&gt;
Cc: Sam Ravnborg &lt;sam@ravnborg.org&gt;
Cc: Yoshinori Sato &lt;ysato@users.sourceforge.jp&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Link: https://marc.info/?l=linux-sh&amp;m=158429470221261
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>bitfield.h: don't compile-time validate _val in FIELD_FIT</title>
<updated>2020-08-19T06:24:13Z</updated>
<author>
<name>Jakub Kicinski</name>
<email>kuba@kernel.org</email>
</author>
<published>2020-08-10T18:21:11Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=73cec4c441bf4ab7b1d8974162f487c5bb252143'/>
<id>urn:sha1:73cec4c441bf4ab7b1d8974162f487c5bb252143</id>
<content type='text'>
commit 444da3f52407d74c9aa12187ac6b01f76ee47d62 upstream.

When ur_load_imm_any() is inlined into jeq_imm(), it's possible for the
compiler to deduce a case where _val can only have the value of -1 at
compile time. Specifically,

/* struct bpf_insn: __s32 imm */
u64 imm = insn-&gt;imm; /* sign extend */
if (imm &gt;&gt; 32) { /* non-zero only if insn-&gt;imm is negative */
  /* inlined from ur_load_imm_any */
  u32 __imm = imm &gt;&gt; 32; /* therefore, always 0xffffffff */
  if (__builtin_constant_p(__imm) &amp;&amp; __imm &gt; 255)
    compiletime_assert_XXX()

This can result in tripping a BUILD_BUG_ON() in __BF_FIELD_CHECK() that
checks that a given value is representable in one byte (interpreted as
unsigned).

FIELD_FIT() should return true or false at runtime for whether a value
can fit or not. Don't break the build over a value that's too large for
the mask. We'd prefer to keep the inlining and compiler optimizations
though we know this case will always return false.
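
A minimal sketch of the intended run-time behaviour, using a hypothetical
8-bit field mask:

  #include &lt;linux/bitfield.h&gt;
  #include &lt;linux/bits.h&gt;
  #include &lt;linux/errno.h&gt;

  #define MY_FIELD GENMASK(7, 0)   /* hypothetical 8-bit field */

  /* an oversized value no longer trips a BUILD_BUG_ON();
   * FIELD_FIT() simply reports it at run time
   */
  if (!FIELD_FIT(MY_FIELD, val))
          return -ERANGE;

  reg |= FIELD_PREP(MY_FIELD, val);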

Cc: stable@vger.kernel.org
Fixes: 1697599ee301a ("bitfield.h: add FIELD_FIT() helper")
Link: https://lore.kernel.org/kernel-hardening/CAK7LNASvb0UDJ0U5wkYYRzTAdnEs64HjXpEUL7d=V0CXiAXcNw@mail.gmail.com/
Reported-by: Masahiro Yamada &lt;masahiroy@kernel.org&gt;
Debugged-by: Sami Tolvanen &lt;samitolvanen@google.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
Signed-off-by: Nick Desaulniers &lt;ndesaulniers@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>tpm: Unify the mismatching TPM space buffer sizes</title>
<updated>2020-08-19T06:24:12Z</updated>
<author>
<name>Jarkko Sakkinen</name>
<email>jarkko.sakkinen@linux.intel.com</email>
</author>
<published>2020-07-02T22:55:59Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=46f06b626bfd6a61b66ff0fdb9cf11540b221ef9'/>
<id>urn:sha1:46f06b626bfd6a61b66ff0fdb9cf11540b221ef9</id>
<content type='text'>
commit 6c4e79d99e6f42b79040f1a33cd4018f5425030b upstream.

The size of the buffers for storing contexts and sessions can vary from
arch to arch, as PAGE_SIZE can be anything between 4 kB and 256 kB (the
maximum for PPC64). Define a fixed buffer size set to 16 kB. This should
be enough for most uses with three handles (that is how many we allow at
the moment). Parametrize the buffer size while doing this, so that it is
easier to revisit later on if required.
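
A rough sketch of the idea (the constant name and allocation shown here
are illustrative, not necessarily what the patch uses):

  #define TPM2_SPACE_BUFFER_SIZE	16384	/* 16 kB, fixed across arches */

  space-&gt;context_buf = kzalloc(TPM2_SPACE_BUFFER_SIZE, GFP_KERNEL);
  if (!space-&gt;context_buf)
          return -ENOMEM;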

Cc: stable@vger.kernel.org
Reported-by: Stefan Berger &lt;stefanb@linux.ibm.com&gt;
Fixes: 745b361e989a ("tpm: infrastructure for TPM spaces")
Reviewed-by: Jerry Snitselaar &lt;jsnitsel@redhat.com&gt;
Tested-by: Stefan Berger &lt;stefanb@linux.ibm.com&gt;
Signed-off-by: Jarkko Sakkinen &lt;jarkko.sakkinen@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
