| author | Harry Yoo <harry.yoo@oracle.com> | 2026-02-23 22:33:22 +0900 |
|---|---|---|
| committer | Vlastimil Babka (SUSE) <vbabka@kernel.org> | 2026-02-26 17:30:06 +0100 |
| commit | 021ca6b670bebebc409d43845efcfe8c11c1dd54 | |
| tree | ded2d103ed6a8a9be34596b95a1a3c77035eb277 | |
| parent | 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f | |
mm/slab: pass __GFP_NOWARN to refill_sheaf() if fallback is available
When refill_sheaf() is called, a failure to refill the sheaf does not
necessarily mean the allocation will fail, because a fallback path may
be available to serve the allocation request.
Suppress spurious warnings by passing __GFP_NOWARN along with
__GFP_NOMEMALLOC whenever a fallback path is available.
When the caller is alloc_full_sheaf() or __pcs_replace_empty_main(),
the kernel always falls back to the slowpath (__slab_alloc_node()).
For __prefill_sheaf_pfmemalloc(), the fallback path is available
only when gfp_pfmemalloc_allowed() returns true.
Reported-and-tested-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Closes: https://lore.kernel.org/linux-mm/aZt2-oS9lkmwT7Ch@debian.local
Fixes: 1ce20c28eafd ("slab: handle pfmemalloc slabs properly with sheaves")
Link: https://lore.kernel.org/linux-mm/aZwSreGj9-HHdD-j@hyeyoo
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20260223133322.16705-1-harry.yoo@oracle.com
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
 mm/slub.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
```diff
diff --git a/mm/slub.c b/mm/slub.c
index 862642c165ed..4ce24d9ee7e4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2822,7 +2822,7 @@ static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp)
 	if (!sheaf)
 		return NULL;
 
-	if (refill_sheaf(s, sheaf, gfp | __GFP_NOMEMALLOC)) {
+	if (refill_sheaf(s, sheaf, gfp | __GFP_NOMEMALLOC | __GFP_NOWARN)) {
 		free_empty_sheaf(s, sheaf);
 		return NULL;
 	}
@@ -4575,7 +4575,7 @@ __pcs_replace_empty_main(struct kmem_cache *s, struct slub_percpu_sheaves *pcs,
 		return NULL;
 
 	if (empty) {
-		if (!refill_sheaf(s, empty, gfp | __GFP_NOMEMALLOC)) {
+		if (!refill_sheaf(s, empty, gfp | __GFP_NOMEMALLOC | __GFP_NOWARN)) {
 			full = empty;
 		} else {
 			/*
@@ -4890,9 +4890,14 @@ EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
 
 static int __prefill_sheaf_pfmemalloc(struct kmem_cache *s,
 		struct slab_sheaf *sheaf, gfp_t gfp)
 {
-	int ret = 0;
+	gfp_t gfp_nomemalloc;
+	int ret;
+
+	gfp_nomemalloc = gfp | __GFP_NOMEMALLOC;
+	if (gfp_pfmemalloc_allowed(gfp))
+		gfp_nomemalloc |= __GFP_NOWARN;
 
-	ret = refill_sheaf(s, sheaf, gfp | __GFP_NOMEMALLOC);
+	ret = refill_sheaf(s, sheaf, gfp_nomemalloc);
 	if (likely(!ret || !gfp_pfmemalloc_allowed(gfp)))
 		return ret;
```
