<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/page-flags.h, branch v5.15.2</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.15.2</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v5.15.2'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2021-10-29T00:18:55Z</updated>
<entry>
<title>mm: filemap: check if THP has hwpoisoned subpage for PMD page fault</title>
<updated>2021-10-29T00:18:55Z</updated>
<author>
<name>Yang Shi</name>
<email>shy828301@gmail.com</email>
</author>
<published>2021-10-28T21:36:11Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=eac96c3efdb593df1a57bb5b95dbe037bfa9a522'/>
<id>urn:sha1:eac96c3efdb593df1a57bb5b95dbe037bfa9a522</id>
<content type='text'>
When handling a shmem page fault, a THP with a corrupted subpage could
be PMD-mapped if certain conditions are satisfied.  But the kernel is
supposed to send SIGBUS when trying to map a hwpoisoned page.

There are two paths which may PMD-map the THP: fault-around and the
regular fault path.

Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault()
codepaths") the situation was even worse in the fault-around path: the
THP could be PMD-mapped as long as the VMA fit, regardless of which
subpage was accessed and corrupted.  After that commit, the THP could
be PMD-mapped as long as the head page is not corrupted.

In the regular fault path, the THP could be PMD-mapped as long as the
corrupted subpage is not accessed and the VMA fits.

This loophole could be closed by iterating over every subpage to check
whether any of them is hwpoisoned, but that is somewhat costly in the
page fault path.

So introduce a new page flag, HasHWPoisoned, on the first tail page.
It indicates that the THP has one or more hwpoisoned subpages.  It is
set if any subpage of the THP is found hwpoisoned by memory failure,
after the refcount has been bumped successfully, and cleared when the
THP is freed or split.

The soft offline path doesn't need this since the soft offline handler
just marks a subpage hwpoisoned when the subpage is migrated
successfully.  But shmem THPs don't get split and then migrated at all.
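
Roughly, the new flag is wired up like this (a sketch; the call sites
are simplified and their exact placement is an assumption):

    /* stored on the first tail page, like other compound-wide flags */
    PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
            TESTSCFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)

    /* memory failure: flag the THP once a subpage is known poisoned */
    if (PageTransHuge(hpage))
            SetPageHasHWPoisoned(hpage);

    /* PMD fault path: refuse to PMD-map such a THP; falling back to
     * PTE mapping lets the poisoned subpage raise SIGBUS
     */
    if (unlikely(PageHasHWPoisoned(page)))
            return VM_FAULT_FALLBACK;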

Link: https://lkml.kernel.org/r/20211020210755.23964-3-shy828301@gmail.com
Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
Signed-off-by: Yang Shi &lt;shy828301@gmail.com&gt;
Reviewed-by: Naoya Horiguchi &lt;naoya.horiguchi@nec.com&gt;
Suggested-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Oscar Salvador &lt;osalvador@suse.de&gt;
Cc: Peter Xu &lt;peterx@redhat.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'akpm' (patches from Andrew)</title>
<updated>2021-09-08T19:55:35Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2021-09-08T19:55:35Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2d338201d5311bcd79d42f66df4cecbcbc5f4f2c'/>
<id>urn:sha1:2d338201d5311bcd79d42f66df4cecbcbc5f4f2c</id>
<content type='text'>
Merge more updates from Andrew Morton:
 "147 patches, based on 7d2a07b769330c34b4deabeed939325c77a7ec2f.

  Subsystems affected by this patch series: mm (memory-hotplug, rmap,
  ioremap, highmem, cleanups, secretmem, kfence, damon, and vmscan),
  alpha, percpu, procfs, misc, core-kernel, MAINTAINERS, lib,
  checkpatch, epoll, init, nilfs2, coredump, fork, pids, criu, kconfig,
  selftests, ipc, and scripts"

* emailed patches from Andrew Morton &lt;akpm@linux-foundation.org&gt;: (94 commits)
  scripts: check_extable: fix typo in user error message
  mm/workingset: correct kernel-doc notations
  ipc: replace costly bailout check in sysvipc_find_ipc()
  selftests/memfd: remove unused variable
  Kconfig.debug: drop selecting non-existing HARDLOCKUP_DETECTOR_ARCH
  configs: remove the obsolete CONFIG_INPUT_POLLDEV
  prctl: allow to setup brk for et_dyn executables
  pid: cleanup the stale comment mentioning pidmap_init().
  kernel/fork.c: unexport get_{mm,task}_exe_file
  coredump: fix memleak in dump_vma_snapshot()
  fs/coredump.c: log if a core dump is aborted due to changed file permissions
  nilfs2: use refcount_dec_and_lock() to fix potential UAF
  nilfs2: fix memory leak in nilfs_sysfs_delete_snapshot_group
  nilfs2: fix memory leak in nilfs_sysfs_create_snapshot_group
  nilfs2: fix memory leak in nilfs_sysfs_delete_##name##_group
  nilfs2: fix memory leak in nilfs_sysfs_create_##name##_group
  nilfs2: fix NULL pointer in nilfs_##name##_attr_release
  nilfs2: fix memory leak in nilfs_sysfs_create_device_group
  trap: cleanup trap_init()
  init: move usermodehelper_enable() to populate_rootfs()
  ...
</content>
</entry>
<entry>
<title>Merge tag 'mm-slub-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux</title>
<updated>2021-09-08T19:36:00Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2021-09-08T19:36:00Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=cc09ee80c3b18ae1a897a30a17fe710b2b2f620a'/>
<id>urn:sha1:cc09ee80c3b18ae1a897a30a17fe710b2b2f620a</id>
<content type='text'>
Pull SLUB updates from Vlastimil Babka:
 "SLUB: reduce irq disabled scope and make it RT compatible

  This series was initially inspired by Mel's pcplist local_lock
  rewrite, and also by an interest in better understanding SLUB's
  locking, the new locking primitives, and their RT variants and
  implications. It makes SLUB compatible with PREEMPT_RT and generally
  more preemption-friendly, apparently without significant regressions,
  as the fast paths are not affected.

  The main changes to SLUB by this series:

   - irq disabling is now only done for minimum amount of time needed to
     protect the strict kmem_cache_cpu fields, and as part of spin lock,
     local lock and bit lock operations to make them irq-safe

   - SLUB is fully PREEMPT_RT compatible

  The series should now be sufficiently tested in both RT and !RT
  configs, mainly thanks to Mike.

  The RFC/v1 version also got basic performance screening by Mel that
  didn't show major regressions. Mike's testing with hackbench of v2 on
  !RT reported negligible differences [6]:

    virgin(ish) tip
    5.13.0.g60ab3ed-tip
              7,320.67 msec task-clock                #    7.792 CPUs utilized            ( +-  0.31% )
               221,215      context-switches          #    0.030 M/sec                    ( +-  3.97% )
                16,234      cpu-migrations            #    0.002 M/sec                    ( +-  4.07% )
                13,233      page-faults               #    0.002 M/sec                    ( +-  0.91% )
        27,592,205,252      cycles                    #    3.769 GHz                      ( +-  0.32% )
         8,309,495,040      instructions              #    0.30  insn per cycle           ( +-  0.37% )
         1,555,210,607      branches                  #  212.441 M/sec                    ( +-  0.42% )
             5,484,209      branch-misses             #    0.35% of all branches          ( +-  2.13% )

               0.93949 +- 0.00423 seconds time elapsed  ( +-  0.45% )
               0.94608 +- 0.00384 seconds time elapsed  ( +-  0.41% ) (repeat)
               0.94422 +- 0.00410 seconds time elapsed  ( +-  0.43% )

    5.13.0.g60ab3ed-tip +slub-local-lock-v2r3
              7,343.57 msec task-clock                #    7.776 CPUs utilized            ( +-  0.44% )
               223,044      context-switches          #    0.030 M/sec                    ( +-  3.02% )
                16,057      cpu-migrations            #    0.002 M/sec                    ( +-  4.03% )
                13,164      page-faults               #    0.002 M/sec                    ( +-  0.97% )
        27,684,906,017      cycles                    #    3.770 GHz                      ( +-  0.45% )
         8,323,273,871      instructions              #    0.30  insn per cycle           ( +-  0.28% )
         1,556,106,680      branches                  #  211.901 M/sec                    ( +-  0.31% )
             5,463,468      branch-misses             #    0.35% of all branches          ( +-  1.33% )

               0.94440 +- 0.00352 seconds time elapsed  ( +-  0.37% )
               0.94830 +- 0.00228 seconds time elapsed  ( +-  0.24% ) (repeat)
               0.93813 +- 0.00440 seconds time elapsed  ( +-  0.47% ) (repeat)

  RT configs showed some throughput regressions, but that's an expected
  tradeoff for the preemption improvements through the RT mutex. It
  didn't prevent v2 from being incorporated into the 5.13 RT tree [7],
  leading to testing exposure and bugfixes.

  Before the series, SLUB is lockless in both the allocation and free
  fast paths, but elsewhere it disables irqs for considerable periods
  of time - especially in the allocation slowpath and bulk allocation,
  where irqs are re-enabled only when a new page from the page
  allocator is needed and the context allows blocking. The irq disabled
  sections can then include deactivate_slab(), which walks a full
  freelist and frees the slab back to the page allocator, or
  unfreeze_partials(), which goes through a list of percpu partial
  slabs. The RT tree currently has some patches mitigating these, but
  we can do much better in mainline too.

  Patches 1-6 are straightforward improvements or cleanups that could
  exist outside of this series too, but are prerequisites.

  Patches 7-9 are also preparatory code changes without functional
  changes, but not so useful without the rest of the series.

  Patch 10 simplifies the fast paths on systems with preemption, based
  on (hopefully correct) observation that the current loops to verify
  tid are unnecessary.

  Patches 11-20 focus on reducing irq disabled scope in the allocation
  slowpath:

   - patch 11 moves disabling of irqs into ___slab_alloc() from its
     callers, which are the allocation slowpath and bulk allocation.
     Instead these callers only disable preemption to stabilize the cpu.

   - The following patches then gradually reduce the scope of disabled
     irqs in ___slab_alloc() and the functions called from there. As of
     patch 14, the re-enabling of irqs based on gfp flags before calling
     the page allocator is removed from allocate_slab(). As of patch 17,
     it's possible to reach the page allocator (in case existing slabs
     are depleted) without disabling and re-enabling irqs a single time.

  Patches 21-26 reduce the scope of disabled irqs in functions related
  to unfreezing percpu partial slabs.

  Patch 27 is preparatory. Patch 28 is adopted from the RT tree and
  converts the flushing of percpu slabs on all cpus from using IPI to
  workqueue, so that the processing isn't happening with irqs disabled
  in the IPI handler. The flushing is not performance critical so it
  should be acceptable.

  Patch 29 also comes from the RT tree and makes object_map_lock RT
  compatible.

  Patch 30 makes slab_lock irq-safe on RT, where we cannot rely on irqs
  being disabled from the list_lock spin lock usage.

  Patch 31 changes kmem_cache_cpu-&gt;partial handling in put_cpu_partial()
  from a cmpxchg loop to a short irq disabled section, which is used by
  all other code modifying the field. This addresses a theoretical race
  scenario pointed out by Jann, and makes the critical section safe
  with respect to RT local_lock semantics after the conversion in patch
  33.
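
  Schematically (a sketch omitting the partial list overflow handling):

    static void put_cpu_partial(struct kmem_cache *s, struct page *page,
                                int drain)
    {
            unsigned long flags;

            /* short irq disabled section instead of a cmpxchg retry loop */
            local_irq_save(flags);
            page-&gt;next = this_cpu_read(s-&gt;cpu_slab-&gt;partial);
            this_cpu_write(s-&gt;cpu_slab-&gt;partial, page);
            local_irq_restore(flags);
    }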

  Patch 32 changes preempt disable to migrate disable, so that the
  nested list_lock spinlock is safe to take on RT. Because
  migrate_disable() is a function call even on !RT, a small set of
  private wrappers is introduced to keep using the cheaper
  preempt_disable() on !PREEMPT_RT configurations, as sketched below.
  As of this patch, SLUB should already be compatible with RT's lock
  semantics.
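
  The wrappers could look roughly like this (a sketch):

    #ifdef CONFIG_PREEMPT_RT
    #define slub_get_cpu_ptr(var)       \
    ({                                  \
            migrate_disable();          \
            this_cpu_ptr(var);          \
    })
    #define slub_put_cpu_ptr(var)       \
    do {                                \
            (void)(var);                \
            migrate_enable();           \
    } while (0)
    #else
    /* keep the cheaper preempt_disable() on !PREEMPT_RT */
    #define slub_get_cpu_ptr(var)       get_cpu_ptr(var)
    #define slub_put_cpu_ptr(var)       put_cpu_ptr(var)
    #endif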

  Finally, patch 33 replaces the irq disabled sections that protect
  kmem_cache_cpu fields in the slow paths with a local lock. However,
  on PREEMPT_RT this means the lockless fast paths can now preempt slow
  paths which don't expect that, so the local lock has to be taken also
  in the fast paths and they are no longer lockless. RT folks seem to
  not mind this tradeoff. The patch also updates the locking
  documentation in the file's comment"
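
For illustration, the final conversion amounts to roughly this (a
sketch; field placement and call sites are simplified):

    struct kmem_cache_cpu {
            void **freelist;    /* still accessed locklessly on !RT */
            unsigned long tid;
            struct page *page;
            local_lock_t lock;  /* protects the fields above in slow paths */
    };

    /* slow paths, instead of plain local_irq_save()/local_irq_restore() */
    local_lock_irqsave(&amp;s-&gt;cpu_slab-&gt;lock, flags);
    /* ... manipulate c-&gt;page and c-&gt;freelist ... */
    local_unlock_irqrestore(&amp;s-&gt;cpu_slab-&gt;lock, flags);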

Mike Galbraith and Mel Gorman verified that their earlier testing
observations still hold for the final series:

Link: https://lore.kernel.org/lkml/89ba4f783114520c167cc915ba949ad2c04d6790.camel@gmx.de/
Link: https://lore.kernel.org/lkml/20210907082010.GB3959@techsingularity.net/

* tag 'mm-slub-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux: (33 commits)
  mm, slub: convert kmem_cpu_slab protection to local_lock
  mm, slub: use migrate_disable() on PREEMPT_RT
  mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg
  mm, slub: make slab_lock() disable irqs with PREEMPT_RT
  mm: slub: make object_map_lock a raw_spinlock_t
  mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
  mm, slab: split out the cpu offline variant of flush_slab()
  mm, slub: don't disable irqs in slub_cpu_dead()
  mm, slub: only disable irq with spin_lock in __unfreeze_partials()
  mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing
  mm, slub: detach whole partial list at once in unfreeze_partials()
  mm, slub: discard slabs in unfreeze_partials() without irqs disabled
  mm, slub: move irq control into unfreeze_partials()
  mm, slub: call deactivate_slab() without disabling irqs
  mm, slub: make locking in deactivate_slab() irq-safe
  mm, slub: move reset of c-&gt;page and freelist out of deactivate_slab()
  mm, slub: stop disabling irqs around get_partial()
  mm, slub: check new pages with restored irqs
  mm, slub: validate slab from partial list or page allocator before making it cpu slab
  mm, slub: restore irqs around calling new_slab()
  ...
</content>
</entry>
<entry>
<title>mm/idle_page_tracking: make PG_idle reusable</title>
<updated>2021-09-08T18:50:24Z</updated>
<author>
<name>SeongJae Park</name>
<email>sjpark@amazon.de</email>
</author>
<published>2021-09-08T02:56:40Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1c676e0d9b1a59b98885b24a0e16a81fe4cc8301'/>
<id>urn:sha1:1c676e0d9b1a59b98885b24a0e16a81fe4cc8301</id>
<content type='text'>
PG_idle and PG_young allow the two PTE Accessed bit users, Idle Page
Tracking and the reclaim logic, to work concurrently without
interfering with each other.  That is, when one of them needs to clear
the Accessed bit, it sets PG_young to preserve the previous state of
the bit.  And when one of them needs to read the bit, if the bit is
cleared, it further reads PG_young to know whether the other has
cleared the bit in the meantime or not.
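
Schematically, the handshake looks like this (a sketch with simplified
call sites, using the existing page_idle helpers):

    /* clearing side: clear the PTE Accessed bit, remember it was set */
    if (ptep_clear_young_notify(vma, addr, pte))
            set_page_young(page);

    /* reading side: if the PTE bit is clear, the other user may have
     * consumed it meanwhile; PG_young preserves that information
     */
    bool referenced = pte_young(*pte) || test_and_clear_page_young(page);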

For yet another user of the PTE Accessed bit, we could add another page
flag, or extend the mechanism to use the flags.  For the DAMON usecase,
however, we don't need to do that just yet.  IDLE_PAGE_TRACKING and DAMON
are mutually exclusive, so there's only ever going to be one user of the
current set of flags.

In this commit, we split out the CONFIG options to allow for the use of
PG_young and PG_idle outside of idle page tracking.

In the next commit, DAMON's reference implementation of the virtual memory
address space monitoring primitives will use it.
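
The split could look roughly like this (a sketch of the Kconfig shape;
exact dependencies are left out):

    config PAGE_IDLE_FLAG
            bool
            select PAGE_EXTENSION if !64BIT

    config IDLE_PAGE_TRACKING
            bool "Enable idle page tracking"
            select PAGE_IDLE_FLAG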

[sjpark@amazon.de: set PAGE_EXTENSION for non-64BIT]
  Link: https://lkml.kernel.org/r/20210806095153.6444-1-sj38.park@gmail.com
[akpm@linux-foundation.org: tweak Kconfig text]
[sjpark@amazon.de: hide PAGE_IDLE_FLAG from users]
  Link: https://lkml.kernel.org/r/20210813081238.34705-1-sj38.park@gmail.com

Link: https://lkml.kernel.org/r/20210716081449.22187-5-sj38.park@gmail.com
Signed-off-by: SeongJae Park &lt;sjpark@amazon.de&gt;
Reviewed-by: Shakeel Butt &lt;shakeelb@google.com&gt;
Reviewed-by: Fernand Sieber &lt;sieberf@amazon.com&gt;
Cc: Alexander Shishkin &lt;alexander.shishkin@linux.intel.com&gt;
Cc: Amit Shah &lt;amit@kernel.org&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: Brendan Higgins &lt;brendanhiggins@google.com&gt;
Cc: David Hildenbrand &lt;david@redhat.com&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: David Woodhouse &lt;dwmw@amazon.com&gt;
Cc: Fan Du &lt;fan.du@intel.com&gt;
Cc: Greg Kroah-Hartman &lt;greg@kroah.com&gt;
Cc: Greg Thelen &lt;gthelen@google.com&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: Joe Perches &lt;joe@perches.com&gt;
Cc: Jonathan Cameron &lt;Jonathan.Cameron@huawei.com&gt;
Cc: Jonathan Corbet &lt;corbet@lwn.net&gt;
Cc: Leonard Foerster &lt;foersleo@amazon.de&gt;
Cc: Marco Elver &lt;elver@google.com&gt;
Cc: Markus Boehme &lt;markubo@amazon.de&gt;
Cc: Maximilian Heyne &lt;mheyne@amazon.de&gt;
Cc: Mel Gorman &lt;mgorman@suse.de&gt;
Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Namhyung Kim &lt;namhyung@kernel.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Rik van Riel &lt;riel@surriel.com&gt;
Cc: Shuah Khan &lt;shuah@kernel.org&gt;
Cc: Steven Rostedt (VMware) &lt;rostedt@goodmis.org&gt;
Cc: Vladimir Davydov &lt;vdavydov.dev@gmail.com&gt;
Cc: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: introduce PAGEFLAGS_MASK to replace ((1UL &lt;&lt; NR_PAGEFLAGS) - 1)</title>
<updated>2021-09-08T18:50:24Z</updated>
<author>
<name>Muchun Song</name>
<email>songmuchun@bytedance.com</email>
</author>
<published>2021-09-08T02:56:15Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=41c961b9013ee9b6d0491f6926df546e37964b1f'/>
<id>urn:sha1:41c961b9013ee9b6d0491f6926df546e37964b1f</id>
<content type='text'>
Instead of hard-coding ((1UL &lt;&lt; NR_PAGEFLAGS) - 1) everywhere,
introduce PAGEFLAGS_MASK to make it clearer that the code is extracting
the page flags.
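
A sketch of the new macro and a typical use:

    #define PAGEFLAGS_MASK      ((1UL &lt;&lt; NR_PAGEFLAGS) - 1)

    /* e.g., extracting only the flag bits of page-&gt;flags */
    unsigned long flags = page-&gt;flags &amp; PAGEFLAGS_MASK;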

Link: https://lkml.kernel.org/r/20210819150712.59948-1-songmuchun@bytedance.com
Signed-off-by: Muchun Song &lt;songmuchun@bytedance.com&gt;
Reviewed-by: Roman Gushchin &lt;guro@fb.com&gt;
Acked-by: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Reviewed-by: Shakeel Butt &lt;shakeelb@google.com&gt;
Cc: Michal Hocko &lt;mhocko@kernel.org&gt;
Cc: Vladimir Davydov &lt;vdavydov.dev@gmail.com&gt;
Cc: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, slub: do initial checks in ___slab_alloc() with irqs enabled</title>
<updated>2021-09-03T23:12:21Z</updated>
<author>
<name>Vlastimil Babka</name>
<email>vbabka@suse.cz</email>
</author>
<published>2021-05-08T00:28:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0b303fb402862dcb7948eeeed2439bd8c99948b5'/>
<id>urn:sha1:0b303fb402862dcb7948eeeed2439bd8c99948b5</id>
<content type='text'>
As another step of shortening irq disabled sections in ___slab_alloc(),
delay disabling irqs until we pass the initial checks of whether there
is a cached percpu slab and whether it's suitable for our allocation.

Now we have to recheck c-&gt;page after actually disabling irqs, as an
allocation in an irq handler might have replaced it.

Because we call pfmemalloc_match() as one of the checks, we might hit
VM_BUG_ON_PAGE(!PageSlab(page)) in PageSlabPfmemalloc if we get
interrupted and the page is freed.  Thus introduce a
pfmemalloc_match_unsafe() variant that lacks the PageSlab check.
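
The variant could look like this (a sketch; __PageSlabPfmemalloc() is
assumed to be the accessor without the PageSlab sanity check):

    static inline bool pfmemalloc_match_unsafe(struct page *page,
                                               gfp_t gfpflags)
    {
            if (unlikely(__PageSlabPfmemalloc(page)))
                    return gfp_pfmemalloc_allowed(gfpflags);

            return true;
    }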

Signed-off-by: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Acked-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
</content>
</entry>
<entry>
<title>KVM: Remove kvm_is_transparent_hugepage() and PageTransCompoundMap()</title>
<updated>2021-08-02T13:05:58Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2021-07-26T15:35:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=205d76ff0684a0b4fe3ff3a283d143a47439d191'/>
<id>urn:sha1:205d76ff0684a0b4fe3ff3a283d143a47439d191</id>
<content type='text'>
Now that arm64 has stopped using kvm_is_transparent_hugepage(),
we can remove it, as well as PageTransCompoundMap() which was
only used by the former.

Acked-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/r/20210726153552.1535838-5-maz@kernel.org
</content>
</entry>
<entry>
<title>Merge branch 'akpm' (patches from Andrew)</title>
<updated>2021-07-02T19:08:10Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2021-07-02T19:08:10Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=71bd9341011f626d692aabe024f099820f02c497'/>
<id>urn:sha1:71bd9341011f626d692aabe024f099820f02c497</id>
<content type='text'>
Merge more updates from Andrew Morton:
 "190 patches.

  Subsystems affected by this patch series: mm (hugetlb, userfaultfd,
  vmscan, kconfig, proc, z3fold, zbud, ras, mempolicy, memblock,
  migration, thp, nommu, kconfig, madvise, memory-hotplug, zswap,
  zsmalloc, zram, cleanups, kfence, and hmm), procfs, sysctl, misc,
  core-kernel, lib, lz4, checkpatch, init, kprobes, nilfs2, hfs,
  signals, exec, kcov, selftests, compress/decompress, and ipc"

* emailed patches from Andrew Morton &lt;akpm@linux-foundation.org&gt;: (190 commits)
  ipc/util.c: use binary search for max_idx
  ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
  ipc: use kmalloc for msg_queue and shmid_kernel
  ipc sem: use kvmalloc for sem_undo allocation
  lib/decompressors: remove set but not used variabled 'level'
  selftests/vm/pkeys: exercise x86 XSAVE init state
  selftests/vm/pkeys: refill shadow register after implicit kernel write
  selftests/vm/pkeys: handle negative sys_pkey_alloc() return code
  selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random
  kcov: add __no_sanitize_coverage to fix noinstr for all architectures
  exec: remove checks in __register_bimfmt()
  x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned
  hfsplus: report create_date to kstat.btime
  hfsplus: remove unnecessary oom message
  nilfs2: remove redundant continue statement in a while-loop
  kprobes: remove duplicated strong free_insn_page in x86 and s390
  init: print out unknown kernel parameters
  checkpatch: do not complain about positive return values starting with EPOLL
  checkpatch: improve the indented label test
  checkpatch: scripts/spdxcheck.py now requires python3
  ...
</content>
</entry>
<entry>
<title>mm: introduce page_offline_(begin|end|freeze|thaw) to synchronize setting PageOffline()</title>
<updated>2021-07-01T03:47:28Z</updated>
<author>
<name>David Hildenbrand</name>
<email>david@redhat.com</email>
</author>
<published>2021-07-01T01:50:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=82840451936f0301781ece80322230fd8edfc648'/>
<id>urn:sha1:82840451936f0301781ece80322230fd8edfc648</id>
<content type='text'>
A driver might set a page logically offline -- PageOffline() -- and
turn the page inaccessible in the hypervisor; after that, access to the
page content can be fatal.  One example is virtio-mem; while unplugged
memory -- marked as PageOffline() -- can currently be read in the
hypervisor, this will no longer be the case in the future; for example,
when a virtio-mem device is backed by huge pages in the hypervisor.

Some special PFN walkers -- i.e., /proc/kcore -- read the content of
random pages after checking PageOffline(); however, these PFN walkers
can race with drivers that set PageOffline().

Let's introduce page_offline_(begin|end|freeze|thaw) for synchronizing.

page_offline_freeze()/page_offline_thaw() allow a subsystem to
synchronize with such drivers, ensuring that a page cannot be set
PageOffline() while frozen.

page_offline_begin()/page_offline_end() is used by drivers that care about
such races when setting a page PageOffline().

For simplicity, use a rwsem for now; neither drivers nor users are
performance sensitive.
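
A sketch of the rwsem-based implementation:

    static DECLARE_RWSEM(page_offline_rwsem);

    /* PFN walkers: while frozen, no page can become PageOffline() */
    void page_offline_freeze(void)
    {
            down_read(&amp;page_offline_rwsem);
    }

    void page_offline_thaw(void)
    {
            up_read(&amp;page_offline_rwsem);
    }

    /* drivers: taken around setting a page PageOffline() */
    void page_offline_begin(void)
    {
            down_write(&amp;page_offline_rwsem);
    }

    void page_offline_end(void)
    {
            up_write(&amp;page_offline_rwsem);
    }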

Link: https://lkml.kernel.org/r/20210526093041.8800-5-david@redhat.com
Signed-off-by: David Hildenbrand &lt;david@redhat.com&gt;
Acked-by: Michal Hocko &lt;mhocko@suse.com&gt;
Reviewed-by: Mike Rapoport &lt;rppt@linux.ibm.com&gt;
Reviewed-by: Oscar Salvador &lt;osalvador@suse.de&gt;
Cc: Aili Yao &lt;yaoaili@kingsoft.com&gt;
Cc: Alexey Dobriyan &lt;adobriyan@gmail.com&gt;
Cc: Alex Shi &lt;alex.shi@linux.alibaba.com&gt;
Cc: Haiyang Zhang &lt;haiyangz@microsoft.com&gt;
Cc: Jason Wang &lt;jasowang@redhat.com&gt;
Cc: Jiri Bohac &lt;jbohac@suse.cz&gt;
Cc: "K. Y. Srinivasan" &lt;kys@microsoft.com&gt;
Cc: "Matthew Wilcox (Oracle)" &lt;willy@infradead.org&gt;
Cc: "Michael S. Tsirkin" &lt;mst@redhat.com&gt;
Cc: Mike Kravetz &lt;mike.kravetz@oracle.com&gt;
Cc: Naoya Horiguchi &lt;naoya.horiguchi@nec.com&gt;
Cc: Roman Gushchin &lt;guro@fb.com&gt;
Cc: Stephen Hemminger &lt;sthemmin@microsoft.com&gt;
Cc: Steven Price &lt;steven.price@arm.com&gt;
Cc: Wei Liu &lt;wei.liu@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>fs/proc/kcore: don't read offline sections, logically offline pages and hwpoisoned pages</title>
<updated>2021-07-01T03:47:28Z</updated>
<author>
<name>David Hildenbrand</name>
<email>david@redhat.com</email>
</author>
<published>2021-07-01T01:50:10Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0daa322b8ff94d8ee4081c2c6868a1aaf1309642'/>
<id>urn:sha1:0daa322b8ff94d8ee4081c2c6868a1aaf1309642</id>
<content type='text'>
Let's avoid reading:

1) Offline memory sections: the content of offline memory sections is
   stale as the memory is effectively unused by the kernel.  On s390x with
   standby memory, offline memory sections (belonging to offline storage
   increments) are not accessible.  With virtio-mem and the hyper-v
   balloon, we can have unavailable memory chunks that should not be
   accessed inside offline memory sections.  Last but not least, offline
   memory sections might contain hwpoisoned pages which we can no longer
   identify because the memmap is stale.

2) PG_offline pages: logically offline pages that are documented as
   "The content of these pages is effectively stale.  Such pages should
   not be touched (read/write/dump/save) except by their owner.".
   Examples include pages inflated in a balloon or unavailable memory
   ranges inside hotplugged memory sections with virtio-mem or the
   hyper-v balloon.

3) PG_hwpoison pages: Reading pages marked as hwpoisoned can be fatal.
   As documented: "Accessing is not safe since it may cause another
   machine check.  Don't touch!"

Introduce is_page_hwpoison(), adding a comment that it is inherently
racy but the best we can really do.
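
The helper could look like this (a sketch; the hugetlb case reflects
that hugetlb keeps the poison flag on the head page):

    static inline bool is_page_hwpoison(struct page *page)
    {
            if (PageHWPoison(page))
                    return true;
            /* inherently racy, but the best we can really do */
            return PageHuge(page) &amp;&amp; PageHWPoison(compound_head(page));
    }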

Reading /proc/kcore now performs similar checks as when reading
/proc/vmcore for kdump via makedumpfile: problematic pages are
excluded.  It's also similar to the hibernation code; however, we don't
skip hwpoisoned pages when processing pages in
kernel/power/snapshot.c:saveable_page() yet.

Note 1: we can race against memory offlining code, especially memory
going offline and getting unplugged; however, we will properly tear
down the identity mapping and handle faults gracefully when accessing
this memory from kcore code.

Note 2: we can race against drivers setting PageOffline() and turning
memory inaccessible in the hypervisor.  We'll handle this in a follow-up
patch.

Link: https://lkml.kernel.org/r/20210526093041.8800-4-david@redhat.com
Signed-off-by: David Hildenbrand &lt;david@redhat.com&gt;
Reviewed-by: Mike Rapoport &lt;rppt@linux.ibm.com&gt;
Reviewed-by: Oscar Salvador &lt;osalvador@suse.de&gt;
Cc: Aili Yao &lt;yaoaili@kingsoft.com&gt;
Cc: Alexey Dobriyan &lt;adobriyan@gmail.com&gt;
Cc: Alex Shi &lt;alex.shi@linux.alibaba.com&gt;
Cc: Haiyang Zhang &lt;haiyangz@microsoft.com&gt;
Cc: Jason Wang &lt;jasowang@redhat.com&gt;
Cc: Jiri Bohac &lt;jbohac@suse.cz&gt;
Cc: "K. Y. Srinivasan" &lt;kys@microsoft.com&gt;
Cc: "Matthew Wilcox (Oracle)" &lt;willy@infradead.org&gt;
Cc: "Michael S. Tsirkin" &lt;mst@redhat.com&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: Mike Kravetz &lt;mike.kravetz@oracle.com&gt;
Cc: Naoya Horiguchi &lt;naoya.horiguchi@nec.com&gt;
Cc: Roman Gushchin &lt;guro@fb.com&gt;
Cc: Stephen Hemminger &lt;sthemmin@microsoft.com&gt;
Cc: Steven Price &lt;steven.price@arm.com&gt;
Cc: Wei Liu &lt;wei.liu@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
</feed>
