author	Linus Torvalds <torvalds@linux-foundation.org>	2025-12-02 18:24:35 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2025-12-02 18:24:35 -0800
commit	8f4c9978de91a9a3b37df1e74d6201acfba6cefd (patch)
tree	ab4ab2ba48e3890805c82a7b60f910d0491fbdcf /net/unix/af_unix.c
parent	db425f7a0b158d0dbb07c4f4653795aaad3a7a15 (diff)
parent	0e253e250ed0e46f5ff6962c840157da9dab48cd (diff)
Merge tag 'aes-gcm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull AES-GCM optimizations from Eric Biggers:
 "More optimizations and cleanups for the x86_64 AES-GCM code:

  - Add a VAES+AVX2 optimized implementation of AES-GCM. This is very
    helpful on CPUs that have VAES but not AVX512, such as AMD Zen 3.

  - Make the VAES+AVX512 optimized implementation of AES-GCM handle
    large amounts of associated data efficiently.

  - Remove the "avx10_256" implementation of AES-GCM. It's superseded
    by the VAES+AVX2 optimized implementation.

  - Rename the "avx10_512" implementation to "avx512"

  Overall, this fills in a gap where AES-GCM wasn't fully optimized on
  some recent CPUs. It also drops code that won't be as useful as
  initially expected due to AVX10/256 being dropped from the AVX10 spec"

* tag 'aes-gcm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
  crypto: x86/aes-gcm-vaes-avx2 - initialize full %rax return register
  crypto: x86/aes-gcm - optimize long AAD processing with AVX512
  crypto: x86/aes-gcm - optimize AVX512 precomputation of H^2 from H^1
  crypto: x86/aes-gcm - revise some comments in AVX512 code
  crypto: x86/aes-gcm - reorder AVX512 precompute and aad_update functions
  crypto: x86/aes-gcm - clean up AVX512 code to assume 512-bit vectors
  crypto: x86/aes-gcm - rename avx10 and avx10_512 to avx512
  crypto: x86/aes-gcm - remove VAES+AVX10/256 optimized code
  crypto: x86/aes-gcm - add VAES+AVX2 optimized code
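The shortlog entries about precomputing H^2 from H^1 and about long-AAD processing both concern GHASH, the GF(2^128) polynomial hash inside AES-GCM. As a hedged illustration of the math those patches vectorize (not the kernel's SIMD code; function names here are illustrative), below is a minimal pure-Python GHASH sketch in the bit ordering of NIST SP 800-38D, plus a two-block-stride variant that uses a precomputed H^2 the same way the wide-vector code uses higher powers of H:

```python
def gf128_mul(x: int, y: int) -> int:
    """Multiply two GF(2^128) elements in GCM's bit-reflected convention
    (reduction polynomial x^128 + x^7 + x^2 + x + 1)."""
    R = 0xE1 << 120  # reduction constant from NIST SP 800-38D
    z, v = 0, x
    for i in range(128):
        if (y >> (127 - i)) & 1:  # bit 0 is the most significant bit
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

ONE = 1 << 127  # the field's multiplicative identity in this bit order

def ghash(h: int, data: bytes) -> int:
    """Sequential Horner evaluation: Y_i = (Y_{i-1} XOR block_i) * H."""
    y = 0
    for i in range(0, len(data), 16):
        block = int.from_bytes(data[i:i + 16].ljust(16, b"\0"), "big")
        y = gf128_mul(y ^ block, h)
    return y

def ghash_stride2(h: int, data: bytes) -> int:
    """Same hash, two blocks per step via the precomputed power H^2:
    Y_i = (Y_{i-1} XOR b0) * H^2  XOR  b1 * H.
    Assumes an even number of 16-byte blocks, for simplicity."""
    h2 = gf128_mul(h, h)  # precompute H^2 from H^1
    y = 0
    for i in range(0, len(data), 32):
        b0 = int.from_bytes(data[i:i + 16].ljust(16, b"\0"), "big")
        b1 = int.from_bytes(data[i + 16:i + 32].ljust(16, b"\0"), "big")
        y = gf128_mul(y ^ b0, h2) ^ gf128_mul(b1, h)
    return y
```

Because the independent per-block multiplications in the strided form can run in parallel, widening this pattern to H^1..H^8 (or more) is what lets VAES+AVX2/AVX512 code absorb many AAD and ciphertext blocks per loop iteration.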
Diffstat (limited to 'net/unix/af_unix.c')
0 files changed, 0 insertions, 0 deletions