<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/init/main.c, branch v4.9.243</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.9.243</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v4.9.243'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2020-08-21T09:02:11Z</updated>
<entry>
<title>mm: Avoid calling build_all_zonelists_init under hotplug context</title>
<updated>2020-08-21T09:02:11Z</updated>
<author>
<name>Oscar Salvador</name>
<email>osalvador@suse.de</email>
</author>
<published>2020-08-18T11:00:46Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=23feab188cb8c1abbd98d18f181287d899d82b22'/>
<id>urn:sha1:23feab188cb8c1abbd98d18f181287d899d82b22</id>
<content type='text'>
Recently a customer of ours experienced a crash when booting the
system while enabling memory-hotplug.

The problem is that Normal zones on different nodes don't get their private
zone-&gt;pageset allocated, and keep sharing the initial boot_pageset.
The sharing between zones is normally safe, as explained by the comment
for boot_pageset: it's a percpu structure, manipulations are done with
interrupts disabled, and boot_pageset is set up in a way that any page
placed on its pcplist is immediately flushed to the zone's shared
freelist, because pcp-&gt;high == 1.
However, the hotplug operation updates pcp-&gt;high to a higher value as it
expects to be operating on a private pageset.
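
For illustration, the relevant part of the page-free fast path looks
roughly like this (abridged sketch of the v4.9-era free_hot_cold_page();
not a verbatim quote):

	pcp = &amp;this_cpu_ptr(zone-&gt;pageset)-&gt;pcp;
	list_add(&amp;page-&gt;lru, &amp;pcp-&gt;lists[migratetype]);
	pcp-&gt;count++;
	if (pcp-&gt;count &gt;= pcp-&gt;high) {
		unsigned long batch = READ_ONCE(pcp-&gt;batch);

		/* boot_pageset has pcp-&gt;high == 1, so this flushes at once */
		free_pcppages_bulk(zone, batch, pcp);
		pcp-&gt;count -= batch;
	}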

The problem is in build_all_zonelists(), which is called when the first range
of pages is onlined for the Normal zone of node X or Y:

	if (system_state == SYSTEM_BOOTING) {
		build_all_zonelists_init();
	} else {
	#ifdef CONFIG_MEMORY_HOTPLUG
		if (zone)
			setup_zone_pageset(zone);
	#endif
		/* we have to stop all cpus to guarantee there is no user
		   of zonelist */
		stop_machine(__build_all_zonelists, pgdat, NULL);
		/* cpuset refresh routine should be here */
	}

When called during hotplug, it should execute setup_zone_pageset(zone),
which allocates the private pageset.
However, with memhp_default_state=online, this happens early while
system_state == SYSTEM_BOOTING is still true, hence this step is skipped.
(and build_all_zonelists_init() is probably unsafe anyway at this point).

Another hotplug operation on the same zone then leads to zone_pcp_update(zone)
called from online_pages(), which updates the pcp-&gt;high for the shared
boot_pageset to a value higher than 1.
At that point, pages freed from the Node X and Y Normal zones can end
up on the same pcplist, and from there they can be freed to the wrong
zone's freelist, leading to corruption and crashes.

Please note that upstream has fixed this differently (and
unintentionally) by adding another boot state (SYSTEM_SCHEDULING),
which is set before smp_init(). That should happen before any memory
hotplug event, even with memhp_default_state=online. Backporting that
would be too intrusive.
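
The shape of the backported fix is then to treat any caller that passes
a zone as a hotplug caller, even while system_state still reads
SYSTEM_BOOTING (a sketch of the idea, not a verbatim diff):

	/* only take the boot-time path when no zone was handed in */
	if (system_state == SYSTEM_BOOTING &amp;&amp; !zone) {
		build_all_zonelists_init();
	} else {
	#ifdef CONFIG_MEMORY_HOTPLUG
		if (zone)
			setup_zone_pageset(zone);
	#endif
		stop_machine(__build_all_zonelists, pgdat, NULL);
	}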

Signed-off-by: Oscar Salvador &lt;osalvador@suse.de&gt;
Debugged-by: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Acked-by: Michal Hocko &lt;mhocko@suse.com&gt; # for stable trees
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>x86: Fix early boot crash on gcc-10, third try</title>
<updated>2020-05-20T06:15:41Z</updated>
<author>
<name>Borislav Petkov</name>
<email>bp@suse.de</email>
</author>
<published>2020-04-22T16:11:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=b263060dba4096b74bf5dbf796c535aa85fb73c6'/>
<id>urn:sha1:b263060dba4096b74bf5dbf796c535aa85fb73c6</id>
<content type='text'>
commit a9a3ed1eff3601b63aea4fb462d8b3b92c7c1e7e upstream.

... or the odyssey of trying to disable the stack protector for the
function which generates the stack canary value.

The whole story started with Sergei reporting a boot crash with a kernel
built with gcc-10:

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
  Call Trace:
    dump_stack
    panic
    ? start_secondary
    __stack_chk_fail
    start_secondary
    secondary_startup_64
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary

This happens because gcc-10 tail-call optimizes the last function call
in start_secondary() - cpu_startup_entry() - and thus emits a stack
canary check which fails because the canary value changes after the
boot_init_stack_canary() call.

To fix that, the initial attempt was to mark the one function which
generates the stack canary with:

  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)

however, using the optimize attribute doesn't work cumulatively:
the attribute does not add to, but rather replaces, the previously
supplied optimization options - roughly all -fxxx options.

The key one among them is -fno-omit-frame-pointer, the loss of which
leaves the code without the frame pointer which the kernel needs.

The next attempt to prevent compilers from tail-call optimizing
the last function call cpu_startup_entry(), shy of carving out
start_secondary() into a separate compilation unit and building it with
-fno-stack-protector, was to add an empty asm("").

That solution was short and sweet, and reportedly supported by both
compilers, but we didn't get very far this time: future (LTO?)
optimization passes could potentially eliminate the empty asm(""),
which leads us to the third attempt: having an actual memory barrier
there which the compiler cannot ignore or move around.

That should hold for a long time, but hey we said that about the other
two solutions too so...
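
For reference, the barrier approach boils down to something like this
(a sketch; the macro follows the upstream commit, the surrounding code
is abridged):

  /* include/linux/compiler.h */
  #define prevent_tail_call_optimization()	mb()

  /* arch/x86/kernel/smpboot.c */
  static void notrace start_secondary(void *unused)
  {
  	...
  	/* the canary value is generated in here */
  	boot_init_stack_canary();
  	...
  	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
  	/* real memory barrier: the call above cannot become a tail call */
  	prevent_tail_call_optimization();
  }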

Reported-by: Sergei Trofimovich &lt;slyfox@gentoo.org&gt;
Signed-off-by: Borislav Petkov &lt;bp@suse.de&gt;
Tested-by: Kalle Valo &lt;kvalo@codeaurora.org&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>init: initialize jump labels before command line option parsing</title>
<updated>2019-05-16T17:43:42Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2019-04-19T00:50:44Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f351f4ae81d738c72bc0934dc32a8b6b2608f78f'/>
<id>urn:sha1:f351f4ae81d738c72bc0934dc32a8b6b2608f78f</id>
<content type='text'>
[ Upstream commit 6041186a32585fc7a1d0f6cfe2f138b05fdc3c82 ]

When a module option, or core kernel argument, toggles a static-key it
requires jump labels to be initialized early.  While x86, PowerPC, and
ARM64 arrange for jump_label_init() to be called before parse_args(),
ARM does not.

  Kernel command line: rdinit=/sbin/init page_alloc.shuffle=1 panic=-1 console=ttyAMA0,115200 page_alloc.shuffle=1
  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 0 at ./include/linux/jump_label.h:303
  page_alloc_shuffle+0x12c/0x1ac
  static_key_enable(): static key 'page_alloc_shuffle_key+0x0/0x4' used
  before call to jump_label_init()
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper Not tainted
  5.1.0-rc4-next-20190410-00003-g3367c36ce744 #1
  Hardware name: ARM Integrator/CP (Device Tree)
  [&lt;c0011c68&gt;] (unwind_backtrace) from [&lt;c000ec48&gt;] (show_stack+0x10/0x18)
  [&lt;c000ec48&gt;] (show_stack) from [&lt;c07e9710&gt;] (dump_stack+0x18/0x24)
  [&lt;c07e9710&gt;] (dump_stack) from [&lt;c001bb1c&gt;] (__warn+0xe0/0x108)
  [&lt;c001bb1c&gt;] (__warn) from [&lt;c001bb88&gt;] (warn_slowpath_fmt+0x44/0x6c)
  [&lt;c001bb88&gt;] (warn_slowpath_fmt) from [&lt;c0b0c4a8&gt;]
  (page_alloc_shuffle+0x12c/0x1ac)
  [&lt;c0b0c4a8&gt;] (page_alloc_shuffle) from [&lt;c0b0c550&gt;] (shuffle_store+0x28/0x48)
  [&lt;c0b0c550&gt;] (shuffle_store) from [&lt;c003e6a0&gt;] (parse_args+0x1f4/0x350)
  [&lt;c003e6a0&gt;] (parse_args) from [&lt;c0ac3c00&gt;] (start_kernel+0x1c0/0x488)

Move the fallback call to jump_label_init() to occur before
parse_args().

The redundant calls to jump_label_init() in other archs are left intact
in case they have static key toggling use cases that are even earlier
than option parsing.
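
In code, the change amounts to pulling the call up in start_kernel()
(abridged sketch of init/main.c; the exact context may differ per tree):

  	pr_notice("Kernel command line: %s\n", boot_command_line);
  	/* parameters may set static keys */
  	jump_label_init();
  	parse_early_param();
  	after_dashes = parse_args("Booting kernel",
  				  static_command_line, __start___param, ...);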

Link: http://lkml.kernel.org/r/155544804466.1032396.13418949511615676665.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Reported-by: Guenter Roeck &lt;groeck@google.com&gt;
Reviewed-by: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Mathieu Desnoyers &lt;mathieu.desnoyers@efficios.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Mike Rapoport &lt;rppt@linux.ibm.com&gt;
Cc: Russell King &lt;rmk@armlinux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>module: fix DEBUG_SET_MODULE_RONX typo</title>
<updated>2018-11-10T15:42:53Z</updated>
<author>
<name>Arnd Bergmann</name>
<email>arnd@arndb.de</email>
</author>
<published>2016-11-28T14:59:13Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=dfdf8be7dcc2fa84be2f6560e414f82d0176baa7'/>
<id>urn:sha1:dfdf8be7dcc2fa84be2f6560e414f82d0176baa7</id>
<content type='text'>
[ Upstream commit 4d217a5adccf5e806790c37c61cc374a08bd7381 ]

The newly added 'rodata_enabled' global variable is protected by
the wrong #ifdef, leading to a link error when CONFIG_DEBUG_SET_MODULE_RONX
is turned on:

kernel/module.o: In function `disable_ro_nx':
module.c:(.text.unlikely.disable_ro_nx+0x88): undefined reference to `rodata_enabled'
kernel/module.o: In function `module_disable_ro':
module.c:(.text.module_disable_ro+0x8c): undefined reference to `rodata_enabled'
kernel/module.o: In function `module_enable_ro':
module.c:(.text.module_enable_ro+0xb0): undefined reference to `rodata_enabled'

CONFIG_SET_MODULE_RONX does not exist, so use the correct one instead.
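
The fix itself is a one-word change to the guard in init/main.c (a
sketch of the diff, assuming the declaration was guarded as described):

 -#if defined(CONFIG_DEBUG_RODATA) || defined(CONFIG_SET_MODULE_RONX)
 +#if defined(CONFIG_DEBUG_RODATA) || defined(CONFIG_DEBUG_SET_MODULE_RONX)
  bool rodata_enabled __ro_after_init = true;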

Fixes: 39290b389ea2 ("module: extend 'rodata=off' boot cmdline parameter to module mappings")
Signed-off-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Jessica Yu &lt;jeyu@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>init: rename and re-order boot_cpu_state_init()</title>
<updated>2018-08-15T16:14:42Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-08-12T19:19:42Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=6bb53ee170c45f44ac80ad8318f72feff9cdee1b'/>
<id>urn:sha1:6bb53ee170c45f44ac80ad8318f72feff9cdee1b</id>
<content type='text'>
commit b5b1404d0815894de0690de8a1ab58269e56eae6 upstream.

This is purely a preparatory patch for upcoming changes during the 4.19
merge window.

We have a function called "boot_cpu_state_init()" that isn't really
about the bootup cpu state: that is done much earlier by the similarly
named "boot_cpu_init()" (note lack of "state" in name).

This function initializes some hotplug CPU state, and needs to run after
the percpu data has been properly initialized.  It even has a comment to
that effect.

Except it _doesn't_ actually run after the percpu data has been properly
initialized.  On x86 it happens to do that, but on at least arm and
arm64, the percpu base pointers are initialized by the arch-specific
'smp_prepare_boot_cpu()' hook, which ran _after_ boot_cpu_state_init().

This had some unexpected results, and in particular we have a patch
pending for the merge window that did the obvious cleanup of using
'this_cpu_write()' in the cpu hotplug init code:

  -       per_cpu_ptr(&amp;cpuhp_state, smp_processor_id())-&gt;state = CPUHP_ONLINE;
  +       this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);

which is obviously the right thing to do.  Except because of the
ordering issue, it actually failed miserably and unexpectedly on arm64.

So this just fixes the ordering, and changes the name of the function to
be 'boot_cpu_hotplug_init()' to make it obvious that it's about cpu
hotplug state, because the core CPU state was supposed to have already
been done earlier.

Marked for stable, since the (not yet merged) patch that will show this
problem is marked for stable.
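
Concretely, the reordering in init/main.c looks roughly like this
(sketch, not the verbatim diff):

  	setup_per_cpu_areas();
  -	boot_cpu_state_init();
  	smp_prepare_boot_cpu();	/* arch-specific boot-cpu hooks */
  +	boot_cpu_hotplug_init();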

Reported-by: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Reported-by: Mian Yousaf Kaukab &lt;yousaf.kaukab@suse.com&gt;
Suggested-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Acked-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>module: extend 'rodata=off' boot cmdline parameter to module mappings</title>
<updated>2018-04-08T10:12:52Z</updated>
<author>
<name>AKASHI Takahiro</name>
<email>takahiro.akashi@linaro.org</email>
</author>
<published>2018-04-03T11:09:03Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7f9a93b30bcb7d78f09c86ed47aa86f7340e42e8'/>
<id>urn:sha1:7f9a93b30bcb7d78f09c86ed47aa86f7340e42e8</id>
<content type='text'>
commit 39290b389ea2 upstream.

The current "rodata=off" parameter disables read-only kernel mappings
under CONFIG_DEBUG_RODATA:
    commit d2aa1acad22f ("mm/init: Add 'rodata=off' boot cmdline parameter
    to disable read-only kernel mappings")

This patch is a logical extension to module mappings, i.e. read-only
mappings at module loading time can be disabled even if
CONFIG_DEBUG_SET_MODULE_RONX is enabled (mainly for debug use). Please
note, however, that it only affects RO/RW permissions, keeping NX set.

This is the first step toward making CONFIG_DEBUG_SET_MODULE_RONX
mandatory (always-on) in the future, as CONFIG_DEBUG_RODATA already is
on x86 and arm64.
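
The mechanics are simple: the module RO setup paths bail out early when
rodata was disabled on the command line, while the NX handling stays
untouched (abridged sketch of kernel/module.c after the patch):

 void module_enable_ro(const struct module *mod)
 {
 	if (!rodata_enabled)
 		return;

 	/* ...set module text and rodata read-only, as before... */
 }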

Suggested-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: AKASHI Takahiro &lt;takahiro.akashi@linaro.org&gt;
Reviewed-by: Kees Cook &lt;keescook@chromium.org&gt;
Acked-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
Link: http://lkml.kernel.org/r/20161114061505.15238-1-takahiro.akashi@linaro.org
Signed-off-by: Jessica Yu &lt;jeyu@redhat.com&gt;
Signed-off-by: Alex Shi &lt;alex.shi@linaro.org&gt; [v4.9 backport]
Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt; [v4.9 backport]
Tested-by: Will Deacon &lt;will.deacon@arm.com&gt;
Tested-by: Greg Hackmann &lt;ghackmann@google.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>kaiser: stack map PAGE_SIZE at THREAD_SIZE-PAGE_SIZE</title>
<updated>2018-01-05T14:46:32Z</updated>
<author>
<name>Hugh Dickins</name>
<email>hughd@google.com</email>
</author>
<published>2017-09-04T01:57:03Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=0994a2cf8fe4e884bad4810681117a7d0096c8e7'/>
<id>urn:sha1:0994a2cf8fe4e884bad4810681117a7d0096c8e7</id>
<content type='text'>
Kaiser only needs to map one page of the stack, and kernel/fork.c did
not build on powerpc (no __PAGE_KERNEL).
It's all cleaner if linux/kaiser.h provides kaiser_map_thread_stack()
and kaiser_unmap_thread_stack() wrappers around asm/kaiser.h's
kaiser_add_mapping() and kaiser_remove_mapping().  And use
linux/kaiser.h in init/main.c to avoid the #ifdefs there.
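
The wrappers then look roughly like this (a sketch of linux/kaiser.h;
only the top page of the stack, at THREAD_SIZE - PAGE_SIZE, is mapped):

 static inline int kaiser_map_thread_stack(void *stack)
 {
 	/* Map that page of kernel stack on which we enter from user context. */
 	return kaiser_add_mapping((unsigned long)stack +
 			THREAD_SIZE - PAGE_SIZE, PAGE_SIZE, __PAGE_KERNEL);
 }

 static inline void kaiser_unmap_thread_stack(void *stack)
 {
 	kaiser_remove_mapping((unsigned long)stack +
 			THREAD_SIZE - PAGE_SIZE, PAGE_SIZE);
 }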

Signed-off-by: Hugh Dickins &lt;hughd@google.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>KAISER: Kernel Address Isolation</title>
<updated>2018-01-05T14:46:32Z</updated>
<author>
<name>Richard Fellner</name>
<email>richard.fellner@student.tugraz.at</email>
</author>
<published>2017-05-04T12:26:50Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=13be4483bb487176c48732b887780630a141ae96'/>
<id>urn:sha1:13be4483bb487176c48732b887780630a141ae96</id>
<content type='text'>
This patch introduces our implementation of KAISER (Kernel Address Isolation to
have Side-channels Efficiently Removed), a kernel isolation technique to close
hardware side channels on kernel address information.

More information about the patch can be found on:

        https://github.com/IAIK/KAISER

From: Richard Fellner &lt;richard.fellner@student.tugraz.at&gt;
From: Daniel Gruss &lt;daniel.gruss@iaik.tugraz.at&gt;
Subject: [RFC, PATCH] x86_64: KAISER - do not map kernel in user mode
Date: Thu, 4 May 2017 14:26:50 +0200
Link: http://marc.info/?l=linux-kernel&amp;m=149390087310405&amp;w=2
Kaiser-4.10-SHA1: c4b1831d44c6144d3762ccc72f0c4e71a0c713e5

To: &lt;linux-kernel@vger.kernel.org&gt;
To: &lt;kernel-hardening@lists.openwall.com&gt;
Cc: &lt;clementine.maurice@iaik.tugraz.at&gt;
Cc: &lt;moritz.lipp@iaik.tugraz.at&gt;
Cc: Michael Schwarz &lt;michael.schwarz@iaik.tugraz.at&gt;
Cc: Richard Fellner &lt;richard.fellner@student.tugraz.at&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: &lt;kirill.shutemov@linux.intel.com&gt;
Cc: &lt;anders.fogh@gdata-adan.de&gt;

After several recent works [1,2,3], KASLR on x86_64 was basically
considered dead by many researchers. We have been working on an
efficient and effective fix for this problem and found that not mapping
the kernel space when running in user mode is the solution [4] (the
corresponding paper [5] will be presented at ESSoS17).

With this RFC patch we allow anybody to configure their kernel with the
flag CONFIG_KAISER to add our defense mechanism.

If there are any questions we would love to answer them.
We also appreciate any comments!

Cheers,
Daniel (+ the KAISER team from Graz University of Technology)

[1] http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf
[2] https://www.blackhat.com/docs/us-16/materials/us-16-Fogh-Using-Undocumented-CPU-Behaviour-To-See-Into-Kernel-Mode-And-Break-KASLR-In-The-Process.pdf
[3] https://www.blackhat.com/docs/us-16/materials/us-16-Jang-Breaking-Kernel-Address-Space-Layout-Randomization-KASLR-With-Intel-TSX.pdf
[4] https://github.com/IAIK/KAISER
[5] https://gruss.cc/files/kaiser.pdf

[patch based also on
https://raw.githubusercontent.com/IAIK/KAISER/master/KAISER/0001-KAISER-Kernel-Address-Isolation.patch]
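
Conceptually, the mechanism keeps two page-table roots per process and
switches between them on every kernel entry and exit (an illustrative
sketch only; the names below are not from the patch):

        mm-&gt;pgd       /* kernel copy: user space + full kernel mapping */
        shadow pgd    /* user copy: user space + minimal entry/exit stubs */

        /* CR3 is switched to the shadow copy on return to user mode and
         * back to the kernel copy on every entry into the kernel. */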

Signed-off-by: Richard Fellner &lt;richard.fellner@student.tugraz.at&gt;
Signed-off-by: Moritz Lipp &lt;moritz.lipp@iaik.tugraz.at&gt;
Signed-off-by: Daniel Gruss &lt;daniel.gruss@iaik.tugraz.at&gt;
Signed-off-by: Michael Schwarz &lt;michael.schwarz@iaik.tugraz.at&gt;
Acked-by: Jiri Kosina &lt;jkosina@suse.cz&gt;
Signed-off-by: Hugh Dickins &lt;hughd@google.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>initramfs: finish fput() before accessing any binary from initramfs</title>
<updated>2017-10-21T15:21:33Z</updated>
<author>
<name>Lokesh Vutla</name>
<email>lokeshvutla@ti.com</email>
</author>
<published>2017-02-27T22:28:12Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=aaf54d40b83fa6ebcdccdd895f97cd86b8f67406'/>
<id>urn:sha1:aaf54d40b83fa6ebcdccdd895f97cd86b8f67406</id>
<content type='text'>
[ Upstream commit 08865514805d2de8e7002fa8149c5de3e391f412 ]

Commit 4a9d4b024a31 ("switch fput to task_work_add") implements a
schedule_work() for completing fput(), but does not guarantee that
__fput() has run after unpacking initramfs.  Because of this, there is
a possibility that during boot a driver sees ETXTBSY when it tries to
load a binary from initramfs, as an fput() is still pending on that
binary.

This patch makes sure that fput() is completed after unpacking initramfs
and removes the call to flush_delayed_fput() in kernel_init() which
happens very late after unpacking initramfs.
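
In effect, the flush moves from kernel_init() to right after unpacking
(a sketch of the two call sites, not the verbatim diff):

  /* init/initramfs.c, once the initramfs image has been unpacked: */
 +	flush_delayed_fput();	/* complete pending __fput() before any exec */

  /* init/main.c, kernel_init(): */
 -	flush_delayed_fput();	/* was here, far too late */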

Link: http://lkml.kernel.org/r/20170201140540.22051-1-lokeshvutla@ti.com
Signed-off-by: Lokesh Vutla &lt;lokeshvutla@ti.com&gt;
Reported-by: Murali Karicheri &lt;m-karicheri2@ti.com&gt;
Cc: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Cc: Tero Kristo &lt;t-kristo@ti.com&gt;
Cc: Sekhar Nori &lt;nsekhar@ti.com&gt;
Cc: Nishanth Menon &lt;nm@ti.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>random: use chacha20 for get_random_int/long</title>
<updated>2017-04-12T10:41:15Z</updated>
<author>
<name>Jason A. Donenfeld</name>
<email>Jason@zx2c4.com</email>
</author>
<published>2017-01-06T18:32:01Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7c03613344663982a27c49d5951c80c575714ab8'/>
<id>urn:sha1:7c03613344663982a27c49d5951c80c575714ab8</id>
<content type='text'>
commit f5b98461cb8167ba362ad9f74c41d126b7becea7 upstream.

Now that our crng uses chacha20, we can rely on its speedy
characteristics for replacing MD5, while simultaneously achieving a
higher security guarantee. Before, the idea was to use these functions
if you wanted random integers that aren't stupidly insecure but aren't
necessarily secure either - a vague gray zone that hopefully was "good
enough" for its users. With chacha20, we can strengthen this claim,
since either we're using an rdrand-like instruction, or we're using the
same crng as /dev/urandom. And it's faster than what we had before.

We could have chosen to replace this with a SipHash-derived function,
which might be slightly faster, but at the cost of having yet another
RNG construction in the kernel. By moving to chacha20, we have a single
RNG to analyze and verify, and we also already get good performance
improvements on all platforms.

Implementation-wise, rather than use a generic buffer for both
get_random_int/long and memcpy based on the size needs, we use a
specific buffer for 32-bit reads and for 64-bit reads. This way, we're
guaranteed to always have aligned accesses on all platforms. While
slightly more verbose in C, the assembly this generates is a lot
simpler than otherwise.

Finally, on 32-bit platforms where longs and ints are the same size,
we simply alias get_random_int to get_random_long.
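
A sketch of the batching scheme described above (abridged, following
the upstream structure; details may differ):

	struct batched_entropy {
		union {
			unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
			unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
		};
		unsigned int position;
	};
	static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);

	unsigned long get_random_long(void)
	{
		unsigned long ret;
		struct batched_entropy *batch = &amp;get_cpu_var(batched_entropy_long);

		if (batch-&gt;position % ARRAY_SIZE(batch-&gt;entropy_long) == 0) {
			extract_crng((u8 *)batch-&gt;entropy_long);	/* refill from the crng */
			batch-&gt;position = 0;
		}
		ret = batch-&gt;entropy_long[batch-&gt;position++];
		put_cpu_var(batched_entropy_long);
		return ret;
	}

	#if BITS_PER_LONG == 32
	unsigned int get_random_int(void)
	{
		return get_random_long();	/* ints and longs are the same size */
	}
	#endif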

Signed-off-by: Jason A. Donenfeld &lt;Jason@zx2c4.com&gt;
Suggested-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Cc: Theodore Ts'o &lt;tytso@mit.edu&gt;
Cc: Hannes Frederic Sowa &lt;hannes@stressinduktion.org&gt;
Cc: Andy Lutomirski &lt;luto@amacapital.net&gt;
Signed-off-by: Theodore Ts'o &lt;tytso@mit.edu&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
