This patch ensures that kernel.h and slab.h are included for
the setkey_unaligned function. It also breaks a couple of
long lines.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
setkey_unaligned(), committed in ca7c39385ce1a7b44894a4b225a4608624e90730,
overwrites unallocated memory in the following memset() because
I used the wrong buffer length.
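A hedged reconstruction of the fix: alignbuffer is offset into the
kmalloc'd buffer, so zeroing absize bytes starting from it runs past the
end of the allocation; only the keylen bytes of key material need wiping.

    -	memset(alignbuffer, 0, absize);
    +	memset(alignbuffer, 0, keylen);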
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the
alignment requested by the algorithm. This patch fixes it. The extra
memory is allocated by kmalloc() with the GFP_ATOMIC flag.
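A minimal sketch of the pattern, following the cipher.c variant (field
and helper names as I recall them from that era): copy the key into a
kmalloc'd buffer aligned up to cra_alignmask, then call the algorithm's
setkey on the aligned copy.

    static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
                                unsigned int keylen)
    {
            struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher;
            unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
            unsigned long absize = keylen + alignmask;
            u8 *buffer, *alignbuffer;
            int ret;

            /* over-allocate so the key can be copied to an aligned address */
            buffer = kmalloc(absize, GFP_ATOMIC);
            if (!buffer)
                    return -ENOMEM;

            alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
            memcpy(alignbuffer, key, keylen);
            ret = cia->cia_setkey(tfm, alignbuffer, keylen);
            memset(alignbuffer, 0, keylen);        /* wipe the key copy */
            kfree(buffer);
            return ret;
    }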
Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch removes the old cipher interface and related code.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Mark the parts of the cipher interface that have been replaced by
block ciphers as deprecated. Thanks to Andrew Morton for suggesting
doing this before removing them completely.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch prepares the scatterwalk code for use by the new block cipher
type.
Firstly it halves the size of scatter_walk on 32-bit platforms. This
is important as we allocate at least two of these objects on the stack
for each block cipher operation.
It also exports the symbols since the block cipher code can be built as
a module.
Finally there is a hack in scatterwalk_unmap that relies on progress
being made. Unfortunately, for hardware crypto we can't guarantee
that progress will be made, since the hardware can fail.
So this also gets rid of the hack by not advancing the address returned
by scatterwalk_map.
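The size reduction comes from dropping cached per-page state and
deriving it from the scatterlist on demand; a sketch of the shape of
the change (the old field names are my best recollection):

    /* before: cached mapping state carried in every walker */
    struct scatter_walk {
            struct scatterlist *sg;
            struct page *page;
            void *data;
            unsigned int len_this_page;
            unsigned int len_this_segment;
            unsigned int offset;
    };

    /* after: everything else is recomputed from sg and offset */
    struct scatter_walk {
            struct scatterlist *sg;
            unsigned int offset;
    };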
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds two new operations for the simple cipher that encrypt or
decrypt a single block at a time. This will be the main interface after
the existing block operations have moved over to the new block ciphers.
It also adds the crypto_cipher type which is currently only used on the
new operations but will be extended to setkey as well once existing users
have been converted to use block ciphers where applicable.
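Intended use, sketched against the eventual crypto_cipher API (the
allocation helper shown here arrived later; key and keylen are assumed
to be in scope):

    struct crypto_cipher *tfm;
    u8 block[16];        /* one block, e.g. AES */
    int err;

    tfm = crypto_alloc_cipher("aes", 0, 0);
    if (IS_ERR(tfm))
            return PTR_ERR(tfm);

    err = crypto_cipher_setkey(tfm, key, keylen);
    if (!err)
            /* encrypt exactly one block, in place */
            crypto_cipher_encrypt_one(tfm, block, block);
    crypto_free_cipher(tfm);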
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The sleeping flag used to determine whether crypto_yield can actually
yield is really a per-operation flag rather than a per-tfm flag. This
patch changes crypto_yield to take a flag directly so that we can start
using a per-operation flag instead of the tfm flag.
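The resulting helper is tiny; presumably something like:

    static inline void crypto_yield(u32 flags)
    {
            if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                    cond_resched();
    }

Callers pass whatever flags word governs the current operation, which
is what lets this become per-operation later.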
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch makes IV operations on ECB fail through nocrypt_iv rather than
calling BUG(). This is needed to generalise CBC/ECB using the template
mechanism.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the tfm is passed directly to setkey instead of the ctx, we no
longer need to pass the &tfm->crt_flags pointer.
This patch also gets rid of a few unnecessary checks on the key length
for ciphers as the cipher layer guarantees that the key length is within
the bounds specified by the algorithm.
Rather than testing dia_setkey every time, this patch does it only once
during crypto_alloc_tfm. The redundant check from crypto_digest_setkey
is also removed.
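Taking dia_setkey as the example (the cipher hooks change the same
way), the signature change is roughly:

    /* before: flags passed separately even though the tfm carries them */
    int (*dia_setkey)(struct crypto_tfm *tfm, const u8 *key,
                      unsigned int keylen, u32 *flags);

    /* after: implementations read and set tfm->crt_flags directly */
    int (*dia_setkey)(struct crypto_tfm *tfm, const u8 *key,
                      unsigned int keylen);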
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Up until now algorithms have been happy to get a context pointer since
they know everything that's in the tfm already (e.g., alignment, block
size).
However, once we have parameterised algorithms, such information will
be specific to each tfm. So the algorithm API needs to be changed to
pass the tfm structure instead of the context pointer.
This patch is basically a text substitution. The only tricky bit is
the assembly routines that need to get the context pointer offset
through asm-offsets.h.
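Using AES as an illustration (the context type name here is
illustrative), the substitution looks like:

    /* before */
    static void aes_encrypt(void *ctx, u8 *dst, const u8 *src);

    /* after: implementations fetch their context from the tfm */
    static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
    {
            struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
            ...
    }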
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since the temporary buffer is used as an argument to cia_decrypt, it must be
aligned by cra_alignmask. This bug was found by linux@horizon.com.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The boundary check in the standard multi-block cipher processors is
broken when nbytes is not a multiple of bsize. In those cases it will
always process an extra block.
This patch corrects the check so that it processes at most nbytes of
data.
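A hedged reconstruction of the loop fix: with nbytes = 20 and bsize = 8,
the old test runs three iterations and touches 24 bytes, while the new
one stops after two iterations, i.e. 16 bytes.

    do {
            fn(desc, dst, src);
            dst += bsize;
            src += bsize;
    -	} while ((done += bsize) < nbytes);
    +	} while ((done += bsize) <= nbytes - bsize);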
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The crypto layer currently uses in_atomic() to determine whether it is
allowed to sleep. This is incorrect since spin locks don't always cause
in_atomic() to return true.
Instead, this patch returns to an earlier idea of a per-tfm flag
which determines whether sleeping is allowed. Unlike the earlier version,
the default is to not allow sleeping. This ensures that no existing code
can break.
As usual, this flag may either be set through crypto_alloc_tfm(), or
just before a specific crypto operation.
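For example (using the historical crypto_alloc_tfm flags argument):

    /* at allocation time */
    tfm = crypto_alloc_tfm("aes", CRYPTO_TFM_REQ_MAY_SLEEP);

    /* or just before a specific operation */
    tfm->crt_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;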
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Noticed by Ken-ichirou MATSUZAWA.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Even though cit_iv is now always aligned, the user can still supply an
unaligned iv through crypto_cipher_encrypt_iv/crypto_cipher_decrypt_iv.
This patch will check the alignment of the user-supplied iv and copy
it if necessary.
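A sketch of the check, with helper names as in the API of that era:

    unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);

    if ((unsigned long)iv & alignmask) {
            /* unaligned user iv: fall back to the aligned cit_iv copy */
            memcpy(tfm->crt_cipher.cit_iv, iv,
                   crypto_tfm_alg_ivsize(tfm));
            iv = tfm->crt_cipher.cit_iv;
    }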
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch ensures that cit_iv is aligned according to cra_alignmask
by allocating it as part of the tfm structure. As a side effect the
crypto layer will also guarantee that the tfm ctx area has enough space
to be aligned by cra_alignmask. This allows us to remove the extra
space reservation from the Padlock driver.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The VIA Padlock device requires the input and output buffers to
be aligned on 16-byte boundaries. This patch adds the alignmask
attribute for low-level cipher implementations to indicate their
alignment requirements.
The mid-level crypt() function will copy the input/output buffers
if they are not aligned correctly before they are passed to the
low-level implementation.
Strictly speaking, some of the software implementations require
the buffers to be aligned on 4-byte boundaries as they do 32-bit
loads. However, it is not clear whether it is better to copy
the buffers or pay the penalty for unaligned loads/stores.
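Roughly how a driver would advertise the requirement (Padlock works on
16-byte-aligned buffers, so the mask is 15):

    static struct crypto_alg aes_alg = {
            .cra_name       = "aes",
            .cra_blocksize  = AES_BLOCK_SIZE,
            /* input/output must be aligned on 16-byte boundaries */
            .cra_alignmask  = PADLOCK_ALIGNMENT - 1,
            ...
    };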
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds hooks for cipher algorithms to implement multi-block
ECB/CBC operations directly. This is expected to provide significant
performance boosts to the VIA Padlock.
It could also be used for improving software implementations such as
AES where operating on multiple blocks at a time may enable certain
optimisations.
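The hooks sit next to the single-block cia_encrypt/cia_decrypt in
struct cipher_alg; a sketch of the ECB pair (CBC is analogous), with
the return value being the number of bytes actually processed:

    unsigned int (*cia_encrypt_ecb)(const struct cipher_desc *desc,
                                    u8 *dst, const u8 *src,
                                    unsigned int nbytes);
    unsigned int (*cia_decrypt_ecb)(const struct cipher_desc *desc,
                                    u8 *dst, const u8 *src,
                                    unsigned int nbytes);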
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The VIA Padlock device is able to perform much better when multiple
blocks are fed to it at once. As this device offers an exceptional
throughput rate it is worthwhile to optimise the infrastructure
specifically for it.
We shift the existing page-sized fast path down to the CBC/ECB functions.
We can then replace the CBC/ECB functions with functions provided by the
underlying algorithm that perform the multi-block operations.
As a side-effect this improves the performance of large cipher operations
for all existing algorithm implementations. I've measured the gain to be
around 5% for 3DES and 15% for AES.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Checking a pointer for NULL before calling kfree() on it is redundant.
This patch removes such checks from crypto/.
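A typical instance (the field here is illustrative); kfree(NULL) is
defined to be a no-op, so the guard buys nothing:

    -	if (ctx->iv)
    -		kfree(ctx->iv);
    +	kfree(ctx->iv);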
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is needed so that we can keep the in_place assignment outside the
inner loop. Without this, in pathological situations we can start out
with walk_out being different from walk_in, but when walk_out crosses
a page it may converge with walk_in.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rather than taking a branch on the fast path, we might as well split
cbc_process into encrypt and decrypt since they don't share anything
in common.
We can get rid of the cryptfn argument too. I'll do that next.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Here are some more optimisations, plus a bug fix for a pathological case
where in_place might not be set correctly (which can't happen with any
of the current users). Here is the first one:
We have long since stopped using a null cit_iv as a means of doing null
encryption. In fact it doesn't work here anyway since we need to copy
src into dst to achieve null encryption.
No user of cbc_encrypt_iv/cbc_decrypt_iv does this either so let's just
get rid of this check which is sitting in the fast path.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Perform kmap once (or twice if the buffer is not aligned correctly)
per page in crypt() instead of the current code which does it once
per block. Consequently it will yield once per page instead of once
per block.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Only call scatterwalk_copychunks when the block straddles a page boundary.
This allows crypt() to skip the out-of-line call most of the time.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move src/dst handling from crypt() into the helpers prepare_src,
prepare_dst, complete_src and complete_dst. complete_src doesn't
actually do anything at the moment but is included for completeness.
This sets the stage for further optimisations down the track without
polluting crypt() itself.
These helpers don't belong in scatterwalk.[ch] since they only help
the particular way that crypt() is walking the scatter lists.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move the handling of in_place into crypt() itself. This means that we only
need two temporary buffers instead of three. It also allows us to simplify
the check in scatterwalk_samebuf.
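With in_place handled in crypt() itself, the check reduces to a
page/offset comparison; presumably:

    static inline int scatterwalk_samebuf(struct scatter_walk *walk_in,
                                          struct scatter_walk *walk_out)
    {
            return walk_in->page == walk_out->page &&
                   walk_in->offset == walk_out->offset;
    }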
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
scatterwalk_whichbuf is called once for each block which could be as
small as 8/16 bytes. So it makes sense to do that work inline.
It's also a bit inflexible since we may want to use the temporary buffer
even if the block doesn't cross page boundaries. In particular, we want
to do that when the source and destination are the same.
So let's replace it with scatterwalk_across_pages.
I've also simplified the check in scatterwalk_across_pages. It is
sufficient to only check len_this_page.
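Given the simplified check, the helper is presumably just:

    static inline int scatterwalk_across_pages(struct scatter_walk *walk,
                                               unsigned int nbytes)
    {
            /* len_this_page already caps the segment at the page end,
             * so crossing a page boundary means nbytes exceeds it */
            return nbytes > walk->len_this_page;
    }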
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The stack allocation in crypt() is bogus as whether tmp_src/tmp_dst
is used is determined by factors unrelated to nbytes and
src->length/dst->length.
Since the condition for whether tmp_src/tmp_dst are used is very
complex, let's allocate them always instead of guessing.
This fixes a number of weird crashes including those AES crashes
that people have been seeing with the 2.4 backport + ipt_conntrack.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@redhat.com>
|
|
- After calling scatterwalk_copychunks walk_in might point to the next
page which will break scatterwalk_samebuf (in this case src_p should
point to tmp_src anyway and scatterwalk_samebuf returns 0).
- scatterwalk_samebuf should also check for equal offsets inside the page
(just bad for performance in some cases).
|
|
From: Christophe Saout <christophe@saout.de>
This patch fixes the bug where in-place encryption was not detected when
the same highmem page is mapped twice to different virtual addresses.
This adds a parameter to xxx_process to indicate whether this is an
in-place encryption and moves the responsibility to the caller, using a
helper function in scatterwalk.h.
|
|
From: Christophe Saout <christophe@saout.de>
I've cleaned up the latest patches and adjusted the header files.
This patch moves the scatterwalk functions from cipher.c to
scatterwalk.c and adds a header file.
|
|
|
- Merge scatterwalk patch from Adam J. Richter <adam@yggdrasil.com>
API change: cipher methods now take in/out scatterlists and nbytes
params.
- Merge gss_krb5_crypto update from Adam J. Richter <adam@yggdrasil.com>
- Add KM_SOFTIRQn (instead of KM_CRYPTO_IN etc).
- Add asm/kmap_types.h to crypto/internal.h
- Update cipher.c credits.
- Update cipher.c documentation.
|
|
- Changed unsigned to unsigned int in algos.
- Consistent use of u32 for flags throughout api.
- Use of unsigned int rather than int for counting things which must
be positive; also replaced size_t usage to keep code simpler and lessen
bloat on some archs.
- Got rid of some unneeded returns.
- Const correctness.
|
|
- Removed local_bh_disable() from kmap wrapper, not needed now with
two atomic kmaps.
- Nuked atomic flag, use in_softirq() instead.
- Converted crypto_kmap() and crypto_yield() to check in_softirq()
(see the sketch after this list).
- Check CRYPTO_MAX_CIPHER_BLOCK_SIZE during alg init.
- Try to initialize as much at compile time as possible
(feedback from Christoph Hellwig).
- Clean up list handling a bit (feedback from Christoph Hellwig).
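A sketch of the resulting helpers, as I recall them: two kmap slots per
context, one for the input mapping and one for the output, so nested
maps don't collide; the old two-argument kmap_atomic is assumed.

    static inline void *crypto_kmap(struct page *page, int out)
    {
            enum km_type type = in_softirq() ?
                    (out ? KM_SOFTIRQ1 : KM_SOFTIRQ0) :
                    (out ? KM_USER1 : KM_USER0);

            return kmap_atomic(page, type);
    }

    static inline void crypto_yield(void)
    {
            if (!in_softirq())
                    cond_resched();
    }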
|
|
- API change: implemented simplest version of algorithm lookup
by name (feedback from Rusty Russell and Herbert Valerio Riedel).
- Now need to add the following line to /etc/modules.conf for
dynamic module loading:
alias des3_ede des
|
|
- try_inc_mod_count() already does what crypto_alg_get() was trying to do.
(feedback from Andrew Morton).
- Moved the BUG_ON() in crypto_unregister_alg() further up, no need to
bother iterating over the list.
- Always use kmap_atomic (feedback from Andrew Morton). Implemented two
atomic kmaps, KM_USER for user context and KM_SOFTIRQ for softirq
context.
- Fixup KM_CRYPTO_ placement so Dave does not go crazy.
|
|
- s/__u/u/
- s/char/u8/
- Fixed bug in cipher.c: the page remapping was off by one block
|
|