| author | Anton Blanchard <anton@samba.org> | 2004-08-30 20:35:11 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@ppc970.osdl.org> | 2004-08-30 20:35:11 -0700 |
| commit | 4c746d407a010eca7eb964d7b5548a812993ac73 | |
| tree | 5f1eebca940935e43958e2b7a51bee537705286e | |
| parent | 60b292cab32ba482d8119906fbd4f73c7117c70b | |
[PATCH] Using get_cycles for add_timer_randomness
I tested how long it took to do a dd from /dev/random on ppc64 before and
after this patch, while doing a ping flood from another machine.
before:
```
# /usr/bin/time dd if=/dev/random of=/dev/zero count=1k
0+51 records in
Command terminated by signal 2
0.00user 0.00system 19:18.46elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
```
I gave up after 19 minutes.
after:
```
# /usr/bin/time dd if=/dev/random of=/dev/zero count=1k
0+1024 records in
0.00user 0.00system 0:33.38elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
```
Just over 33 seconds. Better.
From: Arnd Bergmann <arnd@arndb.de>
I noticed that only i386 and x86-64 are currently using a high resolution
timer source when adding randomness. Since many architectures have a
working get_cycles() implementation, it seems rather straightforward to use
that.
Has this been discussed before, or can anyone comment on the implementation
below?
This patch attempts to take into account the size of cycles_t, which is
either 32 or 64 bits wide but independent of the architecture's word size.
The behavior should be nearly identical to the old one on i386, x86-64 and
all architectures without a time stamp counter, while finding more entropy
on the other architectures.
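For context, a minimal sketch of the approach, assuming a per-sample struct along the lines of the one in drivers/char/random.c; the struct and function names here are illustrative, not the literal patch hunk:

```c
#include <linux/jiffies.h>
#include <linux/timex.h>	/* cycles_t, get_cycles() */

/* Illustrative sample record; the real one lives in drivers/char/random.c. */
struct timer_rand_sample {
	cycles_t cycles;	/* 32 or 64 bits wide, per architecture */
	long jiffies;
	unsigned num;
};

static void record_timer_sample(struct timer_rand_sample *s, unsigned num)
{
	s->jiffies = jiffies;
	/*
	 * Portable replacement for the old i386/x86-64-only rdtsc path:
	 * every architecture provides get_cycles(), and those without a
	 * cycle counter typically return 0, leaving jiffies as the only
	 * timing input, as before.
	 */
	s->cycles = get_cycles();
	s->num = num;
}
```

Declaring the field as cycles_t rather than a fixed-width integer is what lets the same code handle both 32-bit and 64-bit counters, matching the note above about taking the size of cycles_t into account.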
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
