|
This is needed so that we can keep the in_place assignment outside the
inner loop. Without this, in pathological situations we can start out
with walk_out different from walk_in, but when walk_out crosses a page
it may converge with walk_in.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rather than taking a branch on the fast path, we might as well split
cbc_process into encrypt and decrypt since they don't share anything
in common.
We can get rid of the cryptfn argument too. I'll do that next.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Here are some more optimisations, plus a bug fix for a pathological case
where in_place might not be set correctly (which can't happen with any
of the current users). Here is the first one:
We have long since stopped using a null cit_iv as a means of doing null
encryption. In fact it doesn't work here anyway since we need to copy
src into dst to achieve null encryption.
No user of cbc_encrypt_iv/cbc_decrypt_iv does this either so let's just
get rid of this check which is sitting in the fast path.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The problem is that walk->data wasn't being incremented anymore
after my last change. This patch should fix it up.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Perform kmap once (or twice if the buffer is not aligned correctly)
per page in crypt() instead of the current code which does it once
per block. Consequently it will yield once per page instead of once
per block.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Only call scatterwalk_copychunks when the block straddles a page boundary.
This allows crypt() to skip the out-of-line call most of the time.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move src/dst handling from crypt() into the helpers prepare_src,
prepare_dst, complete_src and complete_dst. complete_src doesn't
actually do anything at the moment but is included for completeness.
This sets the stage for further optimisations down the track without
polluting crypt() itself.
These helpers don't belong in scatterwalk.[ch] since they only help
the particular way that crypt() is walking the scatter lists.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move the handling of in_place into crypt() itself. This means that we only
need two temporary buffers instead of three. It also allows us to simplify
the check in scatterwalk_samebuf.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
scatterwalk_whichbuf is called once for each block which could be as
small as 8/16 bytes. So it makes sense to do that work inline.
It's also a bit inflexible since we may want to use the temporary buffer
even if the block doesn't cross page boundaries. In particular, we want
to do that when the source and destination are the same.
So let's replace it with scatterwalk_across_pages.
I've also simplified the check in scatterwalk_across_pages. It is
sufficient to only check len_this_page.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Domen Puncer <domen@coderock.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Signed-off-by: Domen Puncer <domen@coderock.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Signed-off-by: Domen Puncer <domen@coderock.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Signed-off-by: Domen Puncer <domen@coderock.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Drop the cryptolib SHA implementation and use the faster and much smaller SHA
implementation from lib/. Saves about 5K. This also saves time by doing one
memset per update call rather than one per SHA block.
Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Move users of private rotl/rotr functions to rol32/ror32. Crypto bits
verified with tcrypt.
Signed-off-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Looks like a cleanup broke the test vectors:
http://linux.bkbits.net:8080/linux-2.5/gnupatch@41ad5cd9EXGuUhmmotTFBIZdIkTm0A
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Here are some Kconfig fixes:
- typo fixes
- removal of unused tokens (empty or duplicated 'help')
- replacement of non-ASCII characters
- e-mail address and URL format corrections
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch makes some needlessly global code static.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch moves the large temporary u64 W[80] from the stack to the ctx struct:
* reduces stack usage by 640 bytes
* saves one 640-byte memset() per sha512_transform()
(we still do it once after *all* iterations are done)
* quite unexpectedly saves 1.6k of code on i386,
because stack offsets now fit into 8 bits
and many stack-addressing insns got 3 bytes smaller:
# size sha512.o.org sha512.o
text data bss dec hex filename
8281 372 0 8653 21cd sha512.o.org
6649 372 0 7021 1b6d sha512.o
# objdump -d sha512.o.org | cut -b9- >sha512.d.org
# objdump -d sha512.o | cut -b9- >sha512.d
# diff -u sha512.d.org sha512.d
[snip]
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Looks like open-coded be_to_cpu. GCC produces rather poor code for this.
be_to_cpu produces asm()s which are ~4 times shorter.
Compile-tested only.
I am not sure whether input can be 64bit-unaligned.
If it indeed can be, replace:
((u64*)(input))[I] -> get_unaligned( ((u64*)(input))+I )
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Looks like open-coded be_to_cpu. GCC produces rather poor code for this.
be_to_cpu produces asm()s which are ~4 times shorter.
Compile-tested only.
I am not sure whether input can be 32bit-unaligned.
If it indeed can be, replace:
((u32*)(input))[I] -> get_unaligned( ((u32*)(input))+I )
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds support for the kerneli 'Tnepres' cipher, a reversed form
of Serpent which was implemented due to problems with the specification.
This allows people to maintain compatibility between old kerneli and
current kernels.
Signed-off-by: Ruben Garcia <ruben@ugr.es>
Signed-off-by: Fruhwirth Clemens <clemens@endorphin.org>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch from Herbert V. Riedel <hvr@gnu.org> adds __initdata to the
generic AES code where appropriate. I also added __init to f_mult().
Signed-off-by: Herbert V. Riedel <hvr@gnu.org>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Based upon discussions with Ulrich Kuehn
(ukuehn@acm.org)
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
xtea_encrypt() should use XTEA_DELTA instead of TEA_DELTA.
Signed-off-by: Thor Kooda <tkooda-patch-kernel@devsec.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Since the irq handling rework in 2.5 lots of code in the individual
<asm/hardirq.h> files is the same. This patch moves that common code
to <linux/hardirq.h>. The following differences existed:
- alpha, m68k, m68knommu and v850 were missing the ~PREEMPT_ACTIVE
masking in the CONFIG_PREEMPT case of in_atomic(). These
architectures don't support CONFIG_PREEMPT, else this would have been
an easily-spottable bug
- S390 didn't provide synchronize_irq as it doesn't fit into their
I/O model. They now get a spurious prototype/macro
- ppc added a new preemptible() macro that is provided for all
architectures now.
Most drivers were using <linux/interrupt.h> as they should, but a few
drivers and lots of architecture code have been updated to use
<linux/hardirq.h> instead of <asm/hardirq.h>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
MODULE_PARM() was marked obsolete. Remove it from everything except
drivers/ and arch/.
Naturally, such a widespread change may introduce bugs for some of the
non-trivial cases, and where in doubt I used "0" as permissions arg (ie.
won't appear in sysfs). Individual authors should think about whether that
would be useful.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Given the recent potential weaknesses in the SHA and MD families,
I thought it might not be a bad idea to include another hash/digest
algorithm in the kernel.
So here is Whirlpool. I chose it for a couple of reasons.
o - It is by the same people who did Khazad. I feel pretty good about their work.
o - It has been evaluated by NESSIE
https://www.cosic.esat.kuleuven.ac.be/nessie/reports/phase1/sagwp3-037_1.pdf
o - NESSIE has accepted it as one of the cryptographic primitives
o - It will be part of the revised ISO/IEC 10118-3:2003(E) standard,
thanks to NESSIE
o - It is patent free and has an implementation in the public domain.
Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
|
|
From Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
From Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
From Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
From Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
From Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
From Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
This code is a rework of the original Gladman AES code, and does not
include any supposed BSD licensed work by Jari Ruusu.
Linus converted the Intel asm to Gas format, and made some minor
alterations.
Fruhwirth's glue module has also been retained, although I rebased the
table generation and key scheduling back to Gladman's code. I've tested
this code with some standard FIPS test vectors, and large FTP transfers
over IPSec (both locally and over the wire to a system running the
generic AES implementation).
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
This patch reverts the i586 AES module. A new one should be ready soon.
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Below is an updated version of the patch from Fruhwirth which integrates
the Gladman AES code into the crypto API.
I've tried to ensure that this is done as simply as possible: the user
gets the asm version by default if it's suitable.
I've also now added the alternate GPL licensing provided by Brian Gladman,
and licensed the code as GPL.
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
The stack allocation in crypt() is bogus as whether tmp_src/tmp_dst
is used is determined by factors unrelated to nbytes and
src->length/dst->length.
Since the condition for whether tmp_src/tmp_dst are used is very
complex, let's allocate them always instead of guessing.
This fixes a number of weird crashes including those AES crashes
that people have been seeing with the 2.4 backport + ipt_conntrack.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
|
|
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
The following is a patch against 2.6.7 (should apply cleanly to 2.6.5 or
above). It implements the Tiny Encryption Algorithm (TEA) and the
eXtended TEA (XTEA) algorithms. TEA goes back to 1994 and is a good
algorithm, especially for memory-constrained systems. It is similar in
concept to the IDEA cipher. It does NOT have any patent restrictions
and has been put in the public domain by Wheeler and Needham. TEA is used
in quite a few products, such as filesafe and even Microsoft's Xbox.
Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|