author	Andrew Cooper <andrew.cooper3@citrix.com>	2026-01-26 21:10:46 +0000
committer	Andrew Morton <akpm@linux-foundation.org>	2026-02-02 18:43:55 -0800
commit	16459fe7e0ca6520a6e8f603de4ccd52b90fd765 (patch)
tree	5a0226aa8e3de83494e89fd93125ed74e7be3af7 /arch
parent	bd58782995a2e6a07fd07255f3cc319f40b131c9 (diff)
x86/kfence: fix booting on 32bit non-PAE systems
The original patch inverted the PTE unconditionally to avoid L1TF-vulnerable
PTEs, but Linux doesn't make this adjustment in 2-level paging.  Adjust the
logic to use the flip_protnone_guard() helper, which is a nop on 2-level
paging but inverts the address bits in all other paging modes.

This doesn't matter for the Xen aspect of the original change.  Linux no
longer supports running 32bit PV under Xen, and Xen doesn't support running
any 32bit PV guests without using PAE paging.

Link: https://lkml.kernel.org/r/20260126211046.2096622-1-andrew.cooper3@citrix.com
Fixes: b505f1944535 ("x86/kfence: avoid writing L1TF-vulnerable PTEs")
Reported-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Closes: https://lore.kernel.org/lkml/CAKFNMokwjw68ubYQM9WkzOuH51wLznHpEOMSqtMoV1Rn9JV_gw@mail.gmail.com/
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
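For readers unfamiliar with the helper, here is a minimal sketch of the two flavours described above (a paraphrase for illustration, not a verbatim copy of the x86 pgtable headers; the exact guard conditions in the kernel may differ). On 2-level (non-PAE) paging no PFN inversion is performed, so the helper simply returns the new value; on PAE and 64-bit paging it flips the address bits whenever the entry transitions between present and not-present, so a non-present PTE never carries a real physical address:

/* 2-level (non-PAE) paging: PTEs are never inverted, so this is a nop. */
static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
{
	return val;
}

/* PAE / 64-bit paging: sketch of the inverting variant. */
static inline bool __pte_needs_invert(u64 val)
{
	/* Non-zero but not Present: L1TF-vulnerable unless inverted. */
	return val && !(val & _PAGE_PRESENT);
}

static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
{
	/* Invert the PFN bits when crossing the present/not-present boundary. */
	if (__pte_needs_invert(oldval) != __pte_needs_invert(val))
		val = (val & ~mask) | (~val & mask);
	return val;
}

With the fix, kfence_protect_page() only toggles _PAGE_PRESENT and leaves the decision about inverting the PFN bits to this helper, which is what makes the same code correct on both 2-level and PAE/64-bit configurations.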
Diffstat (limited to 'arch')
-rw-r--r--	arch/x86/include/asm/kfence.h	7
1 file changed, 4 insertions, 3 deletions
diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
index acf9ffa1a171..dfd5c74ba41a 100644
--- a/arch/x86/include/asm/kfence.h
+++ b/arch/x86/include/asm/kfence.h
@@ -42,7 +42,7 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
 	unsigned int level;
 	pte_t *pte = lookup_address(addr, &level);
-	pteval_t val;
+	pteval_t val, new;
 
 	if (WARN_ON(!pte || level != PG_LEVEL_4K))
 		return false;
@@ -57,11 +57,12 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 		return true;
 
 	/*
-	 * Otherwise, invert the entire PTE. This avoids writing out an
+	 * Otherwise, flip the Present bit, taking care to avoid writing an
 	 * L1TF-vulnerable PTE (not present, without the high address bits
 	 * set).
 	 */
-	set_pte(pte, __pte(~val));
+	new = val ^ _PAGE_PRESENT;
+	set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
 
 	/*
 	 * If the page was protected (non-present) and we're making it