|
Update the hugetlb tunable default in elf/dl-tunables.c so it is shown as 1
with /lib/ld-linux-aarch64.so.1 --list-tunables.
Move the initialization of thp_mode/thp_pagesize to do_set_hugetlb() and
avoid accessing /sys/kernel/mm if DEFAULT_THP_PAGESIZE > 0. Switch off THP if
glibc.malloc.hugetlb=0 is used - this behaves as if DEFAULT_THP_PAGESIZE==0.
Fix the --list-tunables testcase.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
|
|
First off, apologies for my misunderstanding on how madvise(MADV_HUGEPAGE)
works. I had the misconception that doing madvise(p, 1, MADV_HUGEPAGE) will set
VM_HUGEPAGE on the entire VMA - it does not; it aligns the size to
PAGE_SIZE (4k) and then *splits* the VMA. Only the first page-length of the
virtual space will be VM_HUGEPAGE'd; the rest of it will stay the same.
The above is the semantics for all madvise() calls - which makes sense from a
UABI perspective. madvise() should do the proposed thing to only the length
(page-aligned) which it was asked to do, doing any more than that is not
something the user is expecting.
Commit 6e8f32d39a57 tries to optimize around the madvise() call by determining
whether the VMA got madvise'd before. This will work for most cases except
the following: if check_may_shrink_heap() is true, shrink_heap() re-maps the
shrunk portion, giving us a new VMA altogether. That VMA won't have the
VM_HUGEPAGE flag.
Reverting this commit, we will again mark the new VMA with VM_HUGEPAGE, and
the kernel will merge the two into a single VMA marked with VM_HUGEPAGE.
This may be the only case where we lose VM_HUGEPAGE, and we could micro-optimize
by extending the current if-condition with !check_may_shrink_heap. But let us
not do this - this is very difficult to reason about, and I am soon going
to propose mmap(MAP_HUGEPAGE) in Linux to do away with all these workarounds.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
|
|
From 0ea9ebe48ad624919d579dbe651293975fb6a699.
|
|
Clean up warnings - malloc builds with -Os and -Og without needing any
complex warning-avoidance defines.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Use generic stdc_bit_width to safely adapt to input types. Move rounding up of
alignments that are not powers of 2 to __libc_memalign. Simplify alignment
handling of aligned_alloc and __posix_memalign. Add a testcase for non-power
of 2 memalign and fix malloc-debug.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
On AArch64 malloc always checks /sys/kernel/mm/transparent_hugepage/enabled to
set the THP mode. However, this check is quite expensive and the file may not
be accessible in containers. If DEFAULT_THP_PAGESIZE is non-zero, use
malloc_thp_mode_madvise so that we take advantage of THP in all cases. Since
madvise is a fast system call, it adds only a small overhead compared to the
cost of mmap and populating the pages.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
|
|
Currently malloc has various assumptions, some documented, some implicit.
Add a few asserts to check the most fundamental assumptions using verify().
Remove some odd #define void.
Reviewed-by: Paul Eggert <eggert@cs.ucla.edu>
|
|
Now that fastbins have been removed, there is no need to add chunks
to tcache during an unsorted scan. Small blocks can only be added
to unsorted as a result of a remainder chunk split off a larger block,
so there is no point in checking for additional chunks to place in
tcache. The last remainder is checked first, and will be used if it
is large enough or an exact fit. The unsorted bin scan becomes simpler
as a result. Remove the tcache_unsorted_limit tunable and manual entries.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
clang 21 optimizes out memalign.
Checked on x86_64-linux-gnu with gcc-15 and clang-21.
|
|
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The change to cap valid sizes to PTRDIFF_MAX inadvertently dropped the
overflow check for alignment in memalign functions, _mid_memalign and
_int_memalign. Reinstate the overflow check in _int_memalign, aligned
with the PTRDIFF_MAX change since that is directly responsible for the
CVE. The missing _mid_memalign check is not relevant (and does not have
a security impact) and may need a different approach to fully resolve,
so it has been omitted.
CVE-Id: CVE-2026-0861
Vulnerable-Commit: 9bf8e29ca136094f73f69f725f15c51facc97206
Reported-by: Igor Morgenstern, Aisle Research
Fixes: BZ #33796
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Signed-off-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
|
|
Commit 244c404ae85003f45aa491a50b6902655ee2df15 added -threaded-main and
-threaded-worker variants of several malloc tests with some exceptions.
tst-mallocfork calls fork from a signal handler, leading to sporadic
deadlocks when multi-threaded, since fork is not async-signal-safe in
multi-threaded processes. This commit therefore adds tst-mallocfork to the
appropriate exception list.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
I've updated copyright dates in glibc for 2026. This is the patch for
the changes not generated by scripts/update-copyrights.
|
|
|
|
Fix regression in commit 7447efa9622cb33a567094833f6c4000b3ed2e23
("malloc: remove fastbin code from malloc_info") where the closing
`sizes` tag had a typo, missing the '/'.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
|
|
clang issues:
malloc.c:1909:8: error: converting the result of '<<' to a boolean always evaluates to true [-Werror,-Wtautological-constant-compare]
1909 | if (!DEFAULT_THP_PAGESIZE || mp_.thp_mode != malloc_thp_mode_not_supported)
| ^
../sysdeps/unix/sysv/linux/aarch64/malloc-hugepages.h:19:35: note: expanded from macro 'DEFAULT_THP_PAGESIZE'
19 | #define DEFAULT_THP_PAGESIZE (1UL << 21)
Checked on aarch64-linux-gnu.
|
|
Cleanup thp_init, change it so that the DEFAULT_THP_PAGESIZE
setting can be overridden with glibc.malloc.hugetlb=0 tunable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Now that the fastbins are gone, set the default per size class length of
tcache to 16. We observe that doing this retains the original performance
of malloc.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
Now that all the fastbin code is gone, remove the remaining comments
referencing fastbins.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
Now that all users of the fastbin code are gone, remove the fastbin
infrastructure.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
do_check_remalloced_chunk checks properties of fastbin chunks. But, it is also
used to check properties of other chunks. Hence, remove it and merge the body
of the function into do_check_malloced_chunk.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
In preparation for removal of fastbins, remove all fastbin code from
malloc_info.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
In preparation for removal of fastbins, remove all fastbin code from
do_check_malloc_state.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
In preparation for removal of fastbins, remove all fastbin code from
mallopt.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
In preparation for removal of fastbins, remove the fastbin allocation
path, and remove the TRIM_FASTBINS code.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
In preparation for removal of fastbins, remove the consolidation
infrastructure of fastbins.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
Remove all the fastbin tests in preparation for removing the fastbins.
Reviewed-by: DJ Delorie <dj@redhat.com>
|
|
Linux supports multi-sized Transparent Huge Pages (mTHP). For the purpose
of this patch description, we call the block size mapped by a non-last
level pagetable level, the traditional THP size (2M for 4K basepage,
512M for 64K basepage). Linux now also supports intermediate THP sizes
mapped by the last level pagetable - we call that the mTHP size.
The support for mTHP in Linux has matured and stabilized over time -
applications can benefit from reduced page faults and reduced kernel
memory management overhead, albeit at the cost of internal fragmentation.
We have observed consistent performance boosts with mTHP with little
variance.
As a result, enable 2M THP by default on AArch64. This enables THP even if
the user hasn't passed glibc.malloc.hugetlb=1. If the user has passed it, we
avoid making the system call to check the hugepage size from sysfs, and
override it with the hardcoded 2MB.
There are two additional benefits of this patch, if the transparent
hugepage sysctl is set to madvise or always:
1) The THP size is now hardcoded to 2MB for AArch64. This avoids a
syscall for fetching the THP size from sysfs.
2) On 64K basepage size systems, the traditional THP size is 512M, which
is unusable and impractical. We can instead benefit from the mTHP size of
2M. Apart from the usual benefit of THPs/mTHPs as described above, Aarch64
systems benefit from reduced TLB pressure on this mTHP size, commonly
known as the "contpte" size. If the application takes a pagefault, and
either the THP sysctl settings is "always", or the virtual memory area
has been madvise(MADV_HUGEPAGE)'d along with sysctl being "madvise", then
Linux will fault in a 2M mTHP, mapping contiguous pages into the pagetable,
and painting the pagetable entries with the cont-bit. This bit is a hint to
the hardware that the concerned pagetable entry maps a page which is part
of a set of contiguous pages - the TLB then only remembers a single entry
for this set of 2M/64K = 32 pages, because the physical address of any
other page in this contiguous set is computable by the TLB cached physical
address via a linear offset. Hence, what was only possible with the
traditional THP size, is now possible with the mTHP size.
We see a 6.25% performance improvement on SPEC.
If the sysctl is set to never, no transparent hugepages will be created by
the kernel. But, this patch still sets thp_pagesize = 2MB. The benefit is
that on MORECORE() invocation, we extend the heap by 2MB instead of 4KB,
potentially reducing the frequency of this syscall's invocation by 512x.
Note that, there is no difference in cost between an sbrk(2M) and sbrk(4K);
the kernel only does a virtual reservation and does not touch user physical
memory.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Currently, if the initial program break is not aligned to the system page
size, then we align the pointer down to the page size. If there is a gap
before the heap VMA, then such an adjustment means that the madvise() range
now contains a gap. The behaviour in the upstream kernel is currently this:
madvise() will return -ENOMEM, even though the operation will still succeed
in the sense that the VM_HUGEPAGE flag will be set on the heap VMA. We
*must not* depend on this behaviour - this is an internal kernel
implementation, and earlier kernels may possibly abort the operation
altogether.
The other case is that there is no gap, and as a result we may end up
setting the VM_HUGEPAGE flag on that other VMA too, which is an
unnecessary side effect.
Let us fix this by aligning the pointer up to the page size. We should
also subtract the pointer difference from the size, because if we don't,
since the pointer is now aligned up, the size may cross the heap VMA, thus
leading to the same problem but at the other end.
There is no need to check this new size against mp_.thp_pagesize to decide
whether to make the madvise() call. The reason we make this check at the
start of madvise_thp() is to check whether the size of the VMA is enough
to map THPs into it. Since that check has passed, all that we need to
ensure now is that q + size does not cross the heap VMA.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
clang 20 optimizes out reallocarray.
Reviewed-by: Sam James <sam@gentoo.org>
|
|
clang 21 optimizes out reallocarray.
Reviewed-by: Sam James <sam@gentoo.org>
|
|
The allocation_index was being incremented before checking if mmap()
succeeds. If mmap() fails, allocation_index would still be incremented,
creating a gap in the allocations tracking array and making
allocation_index inconsistent with the actual number of successful
allocations.
This fix moves the allocation_index increment to after the mmap()
success check, ensuring it only increments when an allocation actually
succeeds. This maintains proper tracking for leak detection and
prevents gaps in the allocations array.
Signed-off-by: Osama Abdelkader <osama.abdelkader@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Single-threaded malloc tests exercise only the SINGLE_THREAD_P paths in
the malloc implementation. This commit runs variants of these tests in
a multi-threaded environment in order to exercise the alternate code
paths in the same test scenarios, thus potentially improving coverage.
$(test)-threaded-main and $(test)-threaded-worker variants are
introduced for most single-threaded malloc tests (with a small number of
exceptions). The -main variants run the base test in a main thread
while the test environment has an alternate thread running, whereas the
-worker variants run the test in an alternate thread while the main
thread waits on it.
The tests themselves are unmodified, and the change is accomplished by
using -DTEST_IN_THREAD at compile time, which instructs support/
infrastructure to run the test while an alternate thread waits on it.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Directly call _int_free_chunk during tcache shutdown to avoid recursion.
Calling __libc_free on a block from tcache gets flagged as a double free,
and tcache_double_free_verify checks every tcache chunk (quadratic
overhead).
Reviewed-by: Arjun Shankar <arjun@redhat.com>
|
|
Signed-off-by: Justin King <jcking@google.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
The Linux specific test-case in tst-free-errno was backing up malloc
metadata for a large mmap'd block, overwriting the block with its own
mmap, then restoring malloc metadata and calling free to force an munmap
failure. However, the backed up pages containing metadata can
occasionally be overlapped by the overwriting mmap, leading to a
metadata corruption.
This commit replaces this Linux specific test case with a simpler,
generic, three block allocation, expecting the kernel to coalesce the
VMAs, and then causes fragmentation to trigger the same failure.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
|
|
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
|
clang defaults to warning for missing fall-through and it does not
support all the comment-like annotations that gcc does. Use the C23
[[fallthrough]] attribute instead.
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
|
|
clang warns that this function is not used.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
|
|
The tcache is used for allocation only if an exact match is found. In the
large tcache code added in commit cbfd7988107b, we currently extract a
chunk of size greater than or equal to the size we need, but don't check
strict equality. This patch fixes that behaviour.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
Avoid needing to check for tcache == NULL by initializing it
to a dummy read-only tcache structure. This dummy is all zeros,
so logically it is both full (when you want to put) and empty (when
you want to get). Also, there are two dummies, one used for
"not yet initialized" and one for "tunables say we shouldn't have
a tcache".
The net result is twofold:
1. Checks for tcache == NULL may be removed from the fast path.
Whether this makes the fast path faster when tcache is
disabled is TBD, but the normal case is tcache enabled.
2. no memory for tcache is allocated if tunables disable caching.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
|
|
The warning is not supported by clang.
Reviewed-by: Sam James <sam@gentoo.org>
|
|
Linux handles virtual memory in Virtual Memory Areas (VMAs). The
madvise(MADV_HUGEPAGE) call works on a VMA granularity, which sets the
VM_HUGEPAGE flag on the VMA. This flag is unaffected by the mprotect()
syscall which is used in growing the secondary heaps. Therefore, we
need to call madvise() only when we are sure that VM_HUGEPAGE was not
previously set, which is only in the case when h->size < mp_.thp_pagesize.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
clang does not support the __builtin_*_overflow_p builtins; on gcc
the macros will call __builtin_*_overflow_p.
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
|
|
Clean up _int_memalign. Simplify the logic. Add a separate check
for mmap. Only release the tail chunk if it is at least MINSIZE.
Use the new mmap abstractions.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Linux handles virtual memory in Virtual Memory Areas (VMAs). The
madvise(MADV_HUGEPAGE) call works on a VMA granularity, which sets the
VM_HUGEPAGE flag on the VMA. If this VMA or a portion of it is mremapped
to a different location, Linux will create a new VMA, which will have
the same flags as the old one. This implies that the VM_HUGEPAGE flag
will be retained. Therefore, if we can guarantee that the old VMA was
marked with VM_HUGEPAGE, then there is no need to call madvise_thp() in
mremap_chunk().
The old chunk comes from a heap or non-heap allocation, both of which
have already been enlightened for THP. This implies that, if THP is on,
and the size of the old chunk is greater than or equal to thp_pagesize,
the VMA to which this chunk belongs has the VM_HUGEPAGE flag set.
Hence in this case we can avoid invoking the madvise() syscall.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|
|
Add mmap_set_chunk() to create a new chunk from an mmap block.
Remove set_mmap_is_hp() since it is done inside mmap_set_chunk().
Rename prev_size_mmap() to mmap_base_offset(). Cleanup comments.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
|