<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/kernel/events, branch next/HEAD</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=next%2FHEAD</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=next%2FHEAD'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2026-04-09T13:29:03Z</updated>
<entry>
<title>Merge branch 'master' of https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git</title>
<updated>2026-04-09T13:29:03Z</updated>
<author>
<name>Mark Brown</name>
<email>broonie@kernel.org</email>
</author>
<published>2026-04-09T13:29:02Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b9a032f6e43b3f04fa76856831e36154d0fb6418'/>
<id>urn:sha1:b9a032f6e43b3f04fa76856831e36154d0fb6418</id>
<content type='text'>
</content>
</entry>
<entry>
<title>Merge branch 'fs-next' of linux-next</title>
<updated>2026-04-09T13:09:24Z</updated>
<author>
<name>Mark Brown</name>
<email>broonie@kernel.org</email>
</author>
<published>2026-04-09T13:09:23Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=85315ec393d8fe7bd0e9543d18cbc3f4ae61b01d'/>
<id>urn:sha1:85315ec393d8fe7bd0e9543d18cbc3f4ae61b01d</id>
<content type='text'>
</content>
</entry>
<entry>
<title>Merge branch into tip/master: 'perf/core'</title>
<updated>2026-04-09T03:07:35Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2026-04-09T03:07:35Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=e78a8bbf916cb209843bd59a49300029efb1724b'/>
<id>urn:sha1:e78a8bbf916cb209843bd59a49300029efb1724b</id>
<content type='text'>
 # New commits in perf/core:
    5a84b600050c ("perf/events: Replace READ_ONCE() with standard pgtable accessors")
    9805ed3c9147 ("perf/x86/msr: Make SMI and PPERF on by default")
    6ee26b7a224b ("perf/x86/intel/p4: Fix unused variable warning in p4_pmu_init()")
    b191aa32be2c ("perf/x86/intel: Only check GP counters for PEBS constraints validation")
    73cee0aad1ee ("perf/x86/amd/ibs: Fix comment typo in ibs_op_data")
    b2ea0f541d35 ("perf/amd/ibs: Advertise remote socket capability")
    8ae68bfec975 ("perf/amd/ibs: Enable streaming store filter")
    8c63c4af92ac ("perf/amd/ibs: Enable RIP bit63 hardware filtering")
    35247fa60b74 ("perf/amd/ibs: Enable fetch latency filtering")
    efa5700ec0da ("perf/amd/ibs: Support IBS_{FETCH|OP}_CTL2[Dis] to eliminate RMW race")
    e267b4178134 ("perf/amd/ibs: Add new MSRs and CPUID bits definitions")
    f9d55ccf0199 ("perf/amd/ibs: Define macro for ldlat mask and shift")
    1b044ff3c17e ("perf/amd/ibs: Avoid race between event add and NMI")
    b0a09142622a ("perf/amd/ibs: Avoid calling perf_allow_kernel() from the IBS NMI handler")
    723a290326e0 ("perf/amd/ibs: Preserve PhyAddrVal bit when clearing PhyAddr MSR")
    898138efc990 ("perf/amd/ibs: Limit ldlat-&gt;l3missonly dependency to Zen5")
    01336b555978 ("perf/amd/ibs: Account interrupt for discarded samples")
    da45c8d5f051 ("perf/core: Simplify __detach_global_ctx_data()")
    bec2ee2390c9 ("perf/core: Try to allocate task_ctx_data quickly")
    28c75fbfec8f ("perf/core: Pass GFP flags to attach_task_ctx_data()")

Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>perf/events: Replace READ_ONCE() with standard pgtable accessors</title>
<updated>2026-04-08T11:11:46Z</updated>
<author>
<name>Anshuman Khandual</name>
<email>anshuman.khandual@arm.com</email>
</author>
<published>2026-02-27T06:27:44Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=5a84b600050c5f16b8bba25dd0e7aea845880407'/>
<id>urn:sha1:5a84b600050c5f16b8bba25dd0e7aea845880407</id>
<content type='text'>
Replace raw READ_ONCE() dereferences of page table entries in
perf_get_pgtable_size() with the corresponding standard accessors,
pxdp_get(). These accessors default to READ_ONCE() on platforms that do
not override them, so there is no functional change on such platforms.

However, arm64 is being extended to support 128-bit page table entries
via a new architecture feature, FEAT_D128, for which READ_ONCE() cannot
provide the required single-copy atomic access. The pxdp_get() accessors
can later be overridden on arm64 to provide that single-copy atomicity
for 128-bit entries.

Signed-off-by: Anshuman Khandual &lt;anshuman.khandual@arm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260227062744.2215491-1-anshuman.khandual@arm.com
</content>
</entry>
<entry>
<title>mm: rename zap_page_range_single() to zap_vma_range()</title>
<updated>2026-04-05T20:53:15Z</updated>
<author>
<name>David Hildenbrand (Arm)</name>
<email>david@kernel.org</email>
</author>
<published>2026-02-27T20:08:45Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0326440c3545c86b6501c7c636fcf018d6e87b8c'/>
<id>urn:sha1:0326440c3545c86b6501c7c636fcf018d6e87b8c</id>
<content type='text'>
Let's rename it to make it better match our new naming scheme.

While at it, polish the kerneldoc.

[akpm@linux-foundation.org: fix rustfmtcheck]
Link: https://lkml.kernel.org/r/20260227200848.114019-15-david@kernel.org
Signed-off-by: David Hildenbrand (Arm) &lt;david@kernel.org&gt;
Reviewed-by: Lorenzo Stoakes (Oracle) &lt;ljs@kernel.org&gt;
Acked-by: Puranjay Mohan &lt;puranjay@kernel.org&gt;
Cc: Alexander Gordeev &lt;agordeev@linux.ibm.com&gt;
Cc: Alexei Starovoitov &lt;ast@kernel.org&gt;
Cc: Alice Ryhl &lt;aliceryhl@google.com&gt;
Cc: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Cc: Andy Lutomirski &lt;luto@kernel.org&gt;
Cc: Arnaldo Carvalho de Melo &lt;acme@kernel.org&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Arve &lt;arve@android.com&gt;
Cc: "Borislav Petkov (AMD)" &lt;bp@alien8.de&gt;
Cc: Carlos Llamas &lt;cmllamas@google.com&gt;
Cc: Christian Borntraeger &lt;borntraeger@linux.ibm.com&gt;
Cc: Christian Brauner &lt;brauner@kernel.org&gt;
Cc: Claudio Imbrenda &lt;imbrenda@linux.ibm.com&gt;
Cc: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Cc: Dave Airlie &lt;airlied@gmail.com&gt;
Cc: David Ahern &lt;dsahern@kernel.org&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: David S. Miller &lt;davem@davemloft.net&gt;
Cc: Dimitri Sivanich &lt;dimitri.sivanich@hpe.com&gt;
Cc: Eric Dumazet &lt;edumazet@google.com&gt;
Cc: Gerald Schaefer &lt;gerald.schaefer@linux.ibm.com&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Hartley Sweeten &lt;hsweeten@visionengravers.com&gt;
Cc: Heiko Carstens &lt;hca@linux.ibm.com&gt;
Cc: Ian Abbott &lt;abbotti@mev.co.uk&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: Jakub Kicinski &lt;kuba@kernel.org&gt;
Cc: Jani Nikula &lt;jani.nikula@linux.intel.com&gt;
Cc: Jann Horn &lt;jannh@google.com&gt;
Cc: Janosch Frank &lt;frankja@linux.ibm.com&gt;
Cc: Jarkko Sakkinen &lt;jarkko@kernel.org&gt;
Cc: Jason Gunthorpe &lt;jgg@ziepe.ca&gt;
Cc: Joonas Lahtinen &lt;joonas.lahtinen@linux.intel.com&gt;
Cc: Leon Romanovsky &lt;leon@kernel.org&gt;
Cc: Liam Howlett &lt;liam.howlett@oracle.com&gt;
Cc: Madhavan Srinivasan &lt;maddy@linux.ibm.com&gt;
Cc: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: Miguel Ojeda &lt;ojeda@kernel.org&gt;
Cc: Mike Rapoport &lt;rppt@kernel.org&gt;
Cc: Namhyung Kim &lt;namhyung@kernel.org&gt;
Cc: Neal Cardwell &lt;ncardwell@google.com&gt;
Cc: Paolo Abeni &lt;pabeni@redhat.com&gt;
Cc: Pedro Falcato &lt;pfalcato@suse.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Rodrigo Vivi &lt;rodrigo.vivi@intel.com&gt;
Cc: Shakeel Butt &lt;shakeel.butt@linux.dev&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Cc: Todd Kjos &lt;tkjos@android.com&gt;
Cc: Tvrtko Ursulin &lt;tursulin@ursulin.net&gt;
Cc: Vasily Gorbik &lt;gor@linux.ibm.com&gt;
Cc: Vincenzo Frascino &lt;vincenzo.frascino@arm.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm/memory: remove "zap_details" parameter from zap_page_range_single()</title>
<updated>2026-04-05T20:53:13Z</updated>
<author>
<name>David Hildenbrand (Arm)</name>
<email>david@kernel.org</email>
</author>
<published>2026-02-27T20:08:33Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=de008c9ba5684f14e83bcf86cd45fb0e4e6c4d82'/>
<id>urn:sha1:de008c9ba5684f14e83bcf86cd45fb0e4e6c4d82</id>
<content type='text'>
Nobody except memory.c should really set that parameter to non-NULL.  So
let's just drop it and make unmap_mapping_range_vma() use
zap_page_range_single_batched() instead.

[david@kernel.org: format on a single line]
  Link: https://lkml.kernel.org/r/8a27e9ac-2025-4724-a46d-0a7c90894ba7@kernel.org
Link: https://lkml.kernel.org/r/20260227200848.114019-3-david@kernel.org
Signed-off-by: David Hildenbrand (Arm) &lt;david@kernel.org&gt;
Reviewed-by: Lorenzo Stoakes (Oracle) &lt;ljs@kernel.org&gt;
Acked-by: Puranjay Mohan &lt;puranjay@kernel.org&gt;
Cc: Alexander Gordeev &lt;agordeev@linux.ibm.com&gt;
Cc: Alexei Starovoitov &lt;ast@kernel.org&gt;
Cc: Alice Ryhl &lt;aliceryhl@google.com&gt;
Cc: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Cc: Andy Lutomirski &lt;luto@kernel.org&gt;
Cc: Arnaldo Carvalho de Melo &lt;acme@kernel.org&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Arve &lt;arve@android.com&gt;
Cc: "Borislav Petkov (AMD)" &lt;bp@alien8.de&gt;
Cc: Carlos Llamas &lt;cmllamas@google.com&gt;
Cc: Christian Borntraeger &lt;borntraeger@linux.ibm.com&gt;
Cc: Christian Brauner &lt;brauner@kernel.org&gt;
Cc: Claudio Imbrenda &lt;imbrenda@linux.ibm.com&gt;
Cc: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Cc: Dave Airlie &lt;airlied@gmail.com&gt;
Cc: David Ahern &lt;dsahern@kernel.org&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: David S. Miller &lt;davem@davemloft.net&gt;
Cc: Dimitri Sivanich &lt;dimitri.sivanich@hpe.com&gt;
Cc: Eric Dumazet &lt;edumazet@google.com&gt;
Cc: Gerald Schaefer &lt;gerald.schaefer@linux.ibm.com&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Hartley Sweeten &lt;hsweeten@visionengravers.com&gt;
Cc: Heiko Carstens &lt;hca@linux.ibm.com&gt;
Cc: Ian Abbott &lt;abbotti@mev.co.uk&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: Jakub Kicinski &lt;kuba@kernel.org&gt;
Cc: Jani Nikula &lt;jani.nikula@linux.intel.com&gt;
Cc: Jann Horn &lt;jannh@google.com&gt;
Cc: Janosch Frank &lt;frankja@linux.ibm.com&gt;
Cc: Jarkko Sakkinen &lt;jarkko@kernel.org&gt;
Cc: Jason Gunthorpe &lt;jgg@ziepe.ca&gt;
Cc: Joonas Lahtinen &lt;joonas.lahtinen@linux.intel.com&gt;
Cc: Leon Romanovsky &lt;leon@kernel.org&gt;
Cc: Liam Howlett &lt;liam.howlett@oracle.com&gt;
Cc: Madhavan Srinivasan &lt;maddy@linux.ibm.com&gt;
Cc: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: Miguel Ojeda &lt;ojeda@kernel.org&gt;
Cc: Mike Rapoport &lt;rppt@kernel.org&gt;
Cc: Namhyung Kim &lt;namhyung@kernel.org&gt;
Cc: Neal Cardwell &lt;ncardwell@google.com&gt;
Cc: Paolo Abeni &lt;pabeni@redhat.com&gt;
Cc: Pedro Falcato &lt;pfalcato@suse.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Rodrigo Vivi &lt;rodrigo.vivi@intel.com&gt;
Cc: Shakeel Butt &lt;shakeel.butt@linux.dev&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Cc: Todd Kjos &lt;tkjos@android.com&gt;
Cc: Tvrtko Ursulin &lt;tursulin@ursulin.net&gt;
Cc: Vasily Gorbik &lt;gor@linux.ibm.com&gt;
Cc: Vincenzo Frascino &lt;vincenzo.frascino@arm.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'vfs-7.1.kino' into vfs.all</title>
<updated>2026-03-31T11:59:06Z</updated>
<author>
<name>Christian Brauner</name>
<email>brauner@kernel.org</email>
</author>
<published>2026-03-31T11:59:06Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=eccd9d47e1457f136e648732013e1c42b2b2540a'/>
<id>urn:sha1:eccd9d47e1457f136e648732013e1c42b2b2540a</id>
<content type='text'>
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;

# Conflicts:
#	fs/affs/inode.c
</content>
</entry>
<entry>
<title>perf: Make sure to use pmu_ctx-&gt;pmu for groups</title>
<updated>2026-03-12T10:29:16Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-03-09T12:55:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4b9ce671960627b2505b3f64742544ae9801df97'/>
<id>urn:sha1:4b9ce671960627b2505b3f64742544ae9801df97</id>
<content type='text'>
Oliver reported that x86_pmu_del() ended up doing an out-of-bound memory access
when group_sched_in() fails and needs to roll back.

This *should* be handled by the transaction callbacks, but he found that
when the group leader is a software event, the transaction handlers of the
wrong PMU are used, despite the move_group case in perf_event_open() and
group_sched_in() using pmu_ctx-&gt;pmu.

Turns out, inherit uses event-&gt;pmu to clone the events, effectively undoing the
move_group case for all inherited contexts. Fix this by also making inherit use
pmu_ctx-&gt;pmu, ensuring all inherited counters end up in the same pmu context.

Similarly, __perf_event_read() should equally use pmu_ctx-&gt;pmu for the
group case.

Fixes: bd2756811766 ("perf: Rewrite core context handling")
Reported-by: Oliver Rosenberg &lt;olrose55@gmail.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Ian Rogers &lt;irogers@google.com&gt;
Link: https://patch.msgid.link/20260309133713.GB606826@noisy.programming.kicks-ass.net
</content>
</entry>
<entry>
<title>treewide: change inode-&gt;i_ino from unsigned long to u64</title>
<updated>2026-03-06T13:31:28Z</updated>
<author>
<name>Jeff Layton</name>
<email>jlayton@kernel.org</email>
</author>
<published>2026-03-04T15:32:42Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0b2600f81cefcdfcda58d50df7be8fd48ada8ce2'/>
<id>urn:sha1:0b2600f81cefcdfcda58d50df7be8fd48ada8ce2</id>
<content type='text'>
On 32-bit architectures, unsigned long is only 32 bits wide, which
causes 64-bit inode numbers to be silently truncated. Several
filesystems (NFS, XFS, BTRFS, etc.) can generate inode numbers that
exceed 32 bits, and this truncation can lead to inode number collisions
and other subtle bugs on 32-bit systems.

Change the type of inode-&gt;i_ino from unsigned long to u64 to ensure that
inode numbers are always represented as 64-bit values regardless of
architecture. Update all format specifiers treewide from %lu/%lx to
%llu/%llx to match the new type, along with corresponding local variable
types.

This is the bulk treewide conversion. Earlier patches in this series
handled trace events separately to allow trace field reordering for
better struct packing on 32-bit.

Signed-off-by: Jeff Layton &lt;jlayton@kernel.org&gt;
Link: https://patch.msgid.link/20260304-iino-u64-v3-12-2257ad83d372@kernel.org
Acked-by: Damien Le Moal &lt;dlemoal@kernel.org&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Reviewed-by: Chuck Lever &lt;chuck.lever@oracle.com&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>perf/core: Simplify __detach_global_ctx_data()</title>
<updated>2026-02-27T15:40:22Z</updated>
<author>
<name>Namhyung Kim</name>
<email>namhyung@kernel.org</email>
</author>
<published>2026-02-11T22:32:21Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=da45c8d5f051434a3c68397e66ae2d3b3c97cdec'/>
<id>urn:sha1:da45c8d5f051434a3c68397e66ae2d3b3c97cdec</id>
<content type='text'>
Like attach_global_ctx_data(), it has an O(N^2) loop to delete the task
context data for each thread.  But perf_free_ctx_data_rcu() can be
called under the RCU read lock, so just call it directly rather than
iterating the whole thread list again.

Signed-off-by: Namhyung Kim &lt;namhyung@kernel.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260211223222.3119790-4-namhyung@kernel.org
</content>
</entry>
</feed>
