| author | Peter Zijlstra <peterz@infradead.org> | 2025-09-01 12:49:58 +0200 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2025-09-04 21:59:09 +0200 |
| commit | 85a2d4a890dce3cfc9c14aa91afc3dd7af8e3bf5 (patch) | |
| tree | ccb87339dc61ef503743d6112c1237f4e13ecc57 /rust/kernel/processor.rs | |
| parent | 0b815825b1b0bd6762ca028e9b6631b002efb7ca (diff) | |
x86,ibt: Use UDB instead of 0xEA
A while ago [0] FineIBT started using the 0xEA instruction to raise #UD.
All existing parts will generate #UD in 64bit mode on that instruction.
However, Intel/AMD have not blessed using this instruction; it is on
their 'reserved' opcode list for future use.
Peter Anvin worked the committees and got use of 0xD6 blessed; it
shall be called UDB (per the next SDM or so), and since it is a
single-byte instruction it is easy to slip into a single-byte
immediate -- as is done by this very patch.
Reworking the FineIBT code to use UDB wasn't entirely trivial. Notably
the FineIBT-BHI1 case ran out of bytes. In order to condense the
encoding some it was required to move the hash register from R10D to
EAX (thanks hpa!).
Per the x86_64 ABI, RAX is used to pass the number of vector registers
for vararg function calls -- something that should not happen in the
kernel. More so, the kernel is built with -mskip-rax-setup, which
should leave RAX completely unused, allowing its re-use.
 [ For BPF; while the bpf2bpf tail-call uses RAX in its calling
   convention, that does not use CFI and is unaffected. Only the
   'regular' C->BPF transition is covered by CFI. ]
The ENDBR poison value is changed from 'OSP NOP3' to 'NOPL -42(%RAX)',
this is basically NOP4 but with UDB as its immediate. As such it is
still a non-standard NOP value unique to prior ENDBR sites, but now
also provides UDB.
Per Agner Fog's optimization guide, Jcc is assumed not-taken. That is,
the expected path should be the fallthrough case for improved
throughput.
Since the preamble now relies on the ENDBR poison to provide UDB, the
code is changed to write the poison right along with the initial
preamble -- this is possible because the ITS mitigation already
disabled IBT while rewriting the CFI scheme.
The scheme in detail:
Preamble:
  FineIBT			FineIBT-BHI1			FineIBT-BHI
  __cfi_\func:			__cfi_\func:			__cfi_\func:
    endbr			  endbr				  endbr
    subl       $0x12345678, %eax  subl      $0x12345678, %eax	  subl       $0x12345678, %eax
    jne.d32,np \func+3		  cmovne    %rax, %rdi		  cs cs call __bhi_args_N
                                  jne.d8,np \func+3
  \func:			\func:				\func:
    nopl       -42(%rax)	  nopl      -42(%rax)		  nopl       -42(%rax)
Notably there are 7 bytes available after the SUBL; this enables the
BHI1 case to fit without the nasty overlapping case it had previously.
The !BHI case uses Jcc.d32,np to consume all 7 bytes without the need
for an additional NOP, while the BHI case uses CS padding to align the
CALL with the end of the preamble such that it returns to \func+0.
Caller:
  FineIBT				Paranoid-FineIBT
  fineibt_caller:			fineibt_caller:
    mov     $0x12345678, %eax		  mov    $0x12345678, %eax
    lea     -10(%r11), %r11		  cmp    -0x11(%r11), %eax
    nop5				  cs lea -0x10(%r11), %r11
  retpoline:				retpoline:
    cs call __x86_indirect_thunk_r11	  jne    fineibt_caller+0xd
					  call   *%r11
					  nop
Notably this is before apply_retpolines() which will fix up the
retpoline call -- since all parts with IBT also have eIBRS (let's
ignore ITS). Typically the retpoline site is rewritten (when still
intact) into:
    call *%r11
    nop3
[0] 06926c6cdb95 ("x86/ibt: Optimize the FineIBT instruction sequence")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20250901191307.GI4067720@noisy.programming.kicks-ass.net
