author    Mike Rapoport (Microsoft) <rppt@kernel.org>  2025-08-18 09:46:14 +0300
committer Mike Rapoport (Microsoft) <rppt@kernel.org>  2025-09-14 08:48:59 +0300
commit 219f624d0690459440c5c4d4ebfc54f0d440d615 (patch)
tree   de914c4c9f4cc4211be89253afa5cbc9b489aaee /tools/lib/bpf/libbpf_utils.c
parent f1f86187fd72332ef214716a3c5b71616c0d340e (diff)
mm/mm_init: drop deferred_init_maxorder()
deferred_init_memmap_chunk() calls deferred_init_maxorder() to initialize struct pages MAX_ORDER_NR_PAGES at a time because, according to commit 0e56acae4b4d ("mm: initialize MAX_ORDER_NR_PAGES at a time instead of doing larger sections"), this provides better cache locality than initializing the memory map in larger sections.

The looping through free memory ranges is quite cumbersome in the current implementation, as it is split between deferred_init_memmap_chunk() and deferred_init_maxorder(). Besides, the latter has two loops: one that initializes struct pages and another that frees them.

There is no need for two loops, because it is safe to free pages in groups smaller than MAX_ORDER_NR_PAGES. Even if a lookup for a buddy page accesses a struct page ahead of the pages being initialized, that page is guaranteed to be initialized either by memmap_init_reserved_pages() or by init_unavailable_range().

Simplify the code by moving the initialization and freeing of the pages into deferred_init_memmap_chunk() and dropping deferred_init_maxorder().

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Diffstat (limited to 'tools/lib/bpf/libbpf_utils.c')
0 files changed, 0 insertions, 0 deletions