<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/arch/x86/include/asm, branch v3.2.70</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.2.70</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.2.70'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2015-08-06T23:32:03Z</updated>
<entry>
<title>x86/iommu: Fix header comments regarding standard and _FINISH macros</title>
<updated>2015-08-06T23:32:03Z</updated>
<author>
<name>Aravind Gopalakrishnan</name>
<email>Aravind.Gopalakrishnan@amd.com</email>
</author>
<published>2015-04-09T08:51:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1ea590d49d06394e1f073653ba99be404484e54d'/>
<id>urn:sha1:1ea590d49d06394e1f073653ba99be404484e54d</id>
<content type='text'>
commit b44915927ca88084a7292e4ddd4cf91036f365e1 upstream.

The comment line regarding IOMMU_INIT and IOMMU_INIT_FINISH
macros is incorrect:

  "The standard vs the _FINISH differs in that the _FINISH variant
  will continue detecting other IOMMUs in the call list..."

It should be "...the *standard* variant will continue
detecting..."

Fix that. Also, make it readable while at it.
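
For reference, these macros are used by IOMMU drivers roughly as below
(a sketch with made-up detect/init function names, assuming the
IOMMU_INIT*(detect, depend, init, late_init) signature of this tree):

        /* _FINISH variant: stop walking the detection call list here */
        IOMMU_INIT_FINISH(my_iommu_detect, NULL, my_iommu_init, NULL);

        /* standard variant: keep detecting other IOMMUs after a match */
        IOMMU_INIT(other_iommu_detect, NULL, other_iommu_init, NULL);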

Signed-off-by: Aravind Gopalakrishnan &lt;Aravind.Gopalakrishnan@amd.com&gt;
Signed-off-by: Borislav Petkov &lt;bp@suse.de&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: konrad.wilk@oracle.com
Fixes: 6e9636693373 ("x86, iommu: Update header comments with appropriate naming")
Link: http://lkml.kernel.org/r/1428508017-5316-1-git-send-email-Aravind.Gopalakrishnan@amd.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>x86, cpu, amd: Add workaround for family 16h, erratum 793</title>
<updated>2015-02-20T00:49:40Z</updated>
<author>
<name>Borislav Petkov</name>
<email>bp@suse.de</email>
</author>
<published>2014-01-14T23:07:11Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9ec2b3153415ca412de6471baec2e61ec89997e1'/>
<id>urn:sha1:9ec2b3153415ca412de6471baec2e61ec89997e1</id>
<content type='text'>
commit 3b56496865f9f7d9bcb2f93b44c63f274f08e3b6 upstream.

This adds the workaround for erratum 793 as a precaution in case not
every BIOS implements it.  This addresses CVE-2013-6885.

Erratum text:

[Revision Guide for AMD Family 16h Models 00h-0Fh Processors,
document 51810 Rev. 3.04 November 2013]

793 Specific Combination of Writes to Write Combined Memory Types and
Locked Instructions May Cause Core Hang

Description

Under a highly specific and detailed set of internal timing
conditions, a locked instruction may trigger a timing sequence whereby
the write to a write combined memory type is not flushed, causing the
locked instruction to stall indefinitely.

Potential Effect on System

Processor core hang.

Suggested Workaround

BIOS should set MSR
C001_1020[15] = 1b.

Fix Planned

No fix planned

[ hpa: updated description, fixed typo in MSR name ]
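
For illustration, the workaround boils down to something like this in
the AMD CPU setup path, c being the struct cpuinfo_x86 being set up (a
sketch, not the verbatim backport; the *_amd_safe MSR accessors are
the ones the backport note below refers to, and 0xc0011020 is the
C001_1020 register from the erratum text):

        if (c->x86 == 0x16 &amp;&amp; c->x86_model &lt;= 0xf) {
                u64 val;

                /* Set MSR C001_1020[15] if the BIOS has not; the
                   fault-safe accessors keep a missing MSR (e.g.
                   under KVM) from crashing the boot. */
                if (!rdmsrl_amd_safe(0xc0011020, &amp;val) &amp;&amp;
                    !(val &amp; BIT(15)))
                        wrmsrl_amd_safe(0xc0011020, val | BIT(15));
        }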

Signed-off-by: Borislav Petkov &lt;bp@suse.de&gt;
Link: http://lkml.kernel.org/r/20140114230711.GS29865@pd.tnic
Tested-by: Aravind Gopalakrishnan &lt;aravind.gopalakrishnan@amd.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
[bwh: Backported to 3.2:
 - Adjust filename
 - Venkatesh Srinivas pointed out we should use {rd,wr}msrl_safe() to
   avoid crashing on KVM.  This was fixed upstream by commit 8f86a7373a1c
   ("x86, AMD: Convert to the new bit access MSR accessors") but that's too
   much trouble to backport.  Here we must use {rd,wr}msrl_amd_safe().]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Cc: Moritz Muehlenhoff &lt;jmm@debian.org&gt;
Cc: Venkatesh Srinivas &lt;venkateshs@google.com&gt;
</content>
</entry>
<entry>
<title>x86, tls: Interpret an all-zero struct user_desc as "no segment"</title>
<updated>2015-02-20T00:49:39Z</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@amacapital.net</email>
</author>
<published>2015-01-22T19:27:59Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3175b4cb1aa4b1430fada4679be4598f6eb8872b'/>
<id>urn:sha1:3175b4cb1aa4b1430fada4679be4598f6eb8872b</id>
<content type='text'>
commit 3669ef9fa7d35f573ec9c0e0341b29251c2734a7 upstream.

The Witcher 2 did something like this to allocate a TLS segment index:

        struct user_desc u_info;
        bzero(&amp;u_info, sizeof(u_info));
        u_info.entry_number = (uint32_t)-1;

        syscall(SYS_set_thread_area, &amp;u_info);

Strictly speaking, this code was never correct.  It should have set
read_exec_only and seg_not_present to 1 to indicate that it wanted
to find a free slot without putting anything there, or it should
have put something sensible in the TLS slot if it wanted to allocate
a TLS entry for real.  The actual effect of this code was to
allocate a bogus segment that could be used to exploit espfix.
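
For comparison, a request that merely asks for a free slot, without
installing a usable segment, would look roughly like this (sketch):

        struct user_desc u_info;
        bzero(&amp;u_info, sizeof(u_info));
        u_info.entry_number    = (uint32_t)-1;
        u_info.read_exec_only  = 1;     /* "empty" descriptor ... */
        u_info.seg_not_present = 1;     /* ... that maps nothing  */

        syscall(SYS_set_thread_area, &amp;u_info);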

The set_thread_area hardening patches changed the behavior, causing
set_thread_area to return -EINVAL and crashing the game.

This changes set_thread_area to interpret this as a request to find
a free slot and to leave it empty, which isn't *quite* what the game
expects but should be close enough to keep it working.  In
particular, using the code above to allocate two segments will
allocate the same segment both times.

According to FrostbittenKing on Github, this fixes The Witcher 2.

If this somehow still causes problems, we could instead allocate
a limit==0 32-bit data segment, but that seems rather ugly to me.

Fixes: 41bdc78544b8 ("x86/tls: Validate TLS entries to protect espfix")
Signed-off-by: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/0cb251abe1ff0958b8e468a9a9a905b80ae3a746.1421954363.git.luto@amacapital.net
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>x86, tls, ldt: Stop checking lm in LDT_empty</title>
<updated>2015-02-20T00:49:39Z</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@amacapital.net</email>
</author>
<published>2015-01-22T19:27:58Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=f62570cbdcb6dd53d0e2361488f9ea2c4cf17ec9'/>
<id>urn:sha1:f62570cbdcb6dd53d0e2361488f9ea2c4cf17ec9</id>
<content type='text'>
commit e30ab185c490e9a9381385529e0fd32f0a399495 upstream.

32-bit programs don't have an lm bit in their ABI, so they can't
reliably cause LDT_empty to return true without resorting to memset.
They shouldn't need to do this.

This should fix a longstanding, if minor, issue in all 64-bit kernels
as well as a potential regression in the TLS hardening code.
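
After this change, LDT_empty() decides based only on fields that exist
in the 32-bit ABI, roughly (sketch of the resulting check):

        #define LDT_empty(info)                  \
                ((info)->base_addr       == 0 &amp;&amp; \
                 (info)->limit           == 0 &amp;&amp; \
                 (info)->contents        == 0 &amp;&amp; \
                 (info)->read_exec_only  == 1 &amp;&amp; \
                 (info)->seg_32bit       == 0 &amp;&amp; \
                 (info)->limit_in_pages  == 0 &amp;&amp; \
                 (info)->seg_not_present == 1 &amp;&amp; \
                 (info)->useable         == 0)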

Fixes: 41bdc78544b8 ("x86/tls: Validate TLS entries to protect espfix")
Signed-off-by: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/72a059de55e86ad5e2935c80aa91880ddf19d07c.1421954363.git.luto@amacapital.net
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>x86/tls: Don't validate lm in set_thread_area() after all</title>
<updated>2015-02-20T00:49:31Z</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@amacapital.net</email>
</author>
<published>2014-12-17T22:48:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c759a579c902167d656ee303d518cb5eed2af278'/>
<id>urn:sha1:c759a579c902167d656ee303d518cb5eed2af278</id>
<content type='text'>
commit 3fb2f4237bb452eb4e98f6a5dbd5a445b4fed9d0 upstream.

It turns out that there's a lurking ABI issue.  GCC, when
compiling this in a 32-bit program:

struct user_desc desc = {
	.entry_number    = idx,
	.base_addr       = base,
	.limit           = 0xfffff,
	.seg_32bit       = 1,
	.contents        = 0, /* Data, grow-up */
	.read_exec_only  = 0,
	.limit_in_pages  = 1,
	.seg_not_present = 0,
	.useable         = 0,
};

will leave .lm uninitialized.  This means that anything in the
kernel that reads user_desc.lm for 32-bit tasks is unreliable.

Revert the .lm check in set_thread_area().  The value never did
anything in the first place.

Fixes: 0e58af4e1d21 ("x86/tls: Disallow unusual TLS segments")
Signed-off-by: Andy Lutomirski &lt;luto@amacapital.net&gt;
Acked-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/d7875b60e28c512f6a6fc0baf5714d58e7eaadbb.1418856405.git.luto@amacapital.net
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>x86: kvm: use alternatives for VMCALL vs. VMMCALL if kernel text is read-only</title>
<updated>2015-01-01T01:27:52Z</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2014-09-22T11:17:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=9a8d305befe3218c7523179c0d406d876b5cbbed'/>
<id>urn:sha1:9a8d305befe3218c7523179c0d406d876b5cbbed</id>
<content type='text'>
commit c1118b3602c2329671ad5ec8bdf8e374323d6343 upstream.

On x86_64, kernel text mappings are mapped read-only with CONFIG_DEBUG_RODATA.
In that case, KVM will fail to patch VMCALL instructions to VMMCALL
as required on AMD processors.

The failure mode is currently a divide-by-zero exception, which obviously
is a KVM bug that has to be fixed.  However, picking the right instruction
between VMCALL and VMMCALL will be faster and will help if you cannot upgrade
the hypervisor.
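
The alternatives-based approach amounts to roughly the following (a
sketch, assuming a synthetic X86_FEATURE_VMMCALL flag set on AMD CPUs):
the hypercall site is assembled as VMCALL and rewritten to VMMCALL by
the alternatives machinery, which runs before kernel text goes
read-only.

        /* 0f 01 c1 = VMCALL (Intel), 0f 01 d9 = VMMCALL (AMD) */
        #define KVM_HYPERCALL                               \
                ALTERNATIVE(".byte 0x0f,0x01,0xc1",         \
                            ".byte 0x0f,0x01,0xd9",         \
                            X86_FEATURE_VMMCALL)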

Reported-by: Chris Webb &lt;chris@arachsys.com&gt;
Tested-by: Chris Webb &lt;chris@arachsys.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: x86@kernel.org
Acked-by: Borislav Petkov &lt;bp@suse.de&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>x86_64, traps: Stop using IST for #SS</title>
<updated>2014-12-14T16:23:59Z</updated>
<author>
<name>Andy Lutomirski</name>
<email>luto@amacapital.net</email>
</author>
<published>2014-11-23T02:00:32Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=4c414592a79b82ddca76945c7afb4843684aa9a8'/>
<id>urn:sha1:4c414592a79b82ddca76945c7afb4843684aa9a8</id>
<content type='text'>
commit 6f442be2fb22be02cafa606f1769fa1e6f894441 upstream.

On a 32-bit kernel, this has no effect, since there are no IST stacks.

On a 64-bit kernel, #SS can only happen in user code, on a failed iret
to user space, a canonical violation on access via RSP or RBP, or a
genuine stack segment violation in 32-bit kernel code.  The first two
cases don't need IST, and the latter two cases are unlikely fatal bugs,
and promoting them to double faults would be fine.

This fixes a bug in which the espfix64 code mishandles a stack segment
violation.

This saves 4k of memory per CPU and a tiny bit of code.

Signed-off-by: Andy Lutomirski &lt;luto@amacapital.net&gt;
Reviewed-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
[bwh: Backported to 3.2:
 - No need to define trace_stack_segment
 - Use the errorentry macro to generate #SS asm code
 - Adjust context
 - Checked that this matches Luis's backport for Ubuntu]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>kvm: x86: fix stale mmio cache bug</title>
<updated>2014-12-14T16:23:41Z</updated>
<author>
<name>David Matlack</name>
<email>dmatlack@google.com</email>
</author>
<published>2014-08-18T22:46:07Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d9c145180fc42155cd4c97fce75ca3165571a3ea'/>
<id>urn:sha1:d9c145180fc42155cd4c97fce75ca3165571a3ea</id>
<content type='text'>
commit 56f17dd3fbc44adcdbc3340fe3988ddb833a47a7 upstream.

The following events can lead to an incorrect KVM_EXIT_MMIO bubbling
up to userspace:

(1) Guest accesses gpa X without a memory slot. The gfn is cached in
struct kvm_vcpu_arch (mmio_gfn). On Intel EPT-enabled hosts, KVM sets
the SPTE write-execute-noread so that future accesses cause
EPT_MISCONFIGs.

(2) Host userspace creates a memory slot via KVM_SET_USER_MEMORY_REGION
covering the page just accessed.

(3) Guest attempts to read or write to gpa X again. On Intel, this
generates an EPT_MISCONFIG. The memory slot generation number that
was incremented in (2) would normally take care of this but we fast
path mmio faults through quickly_check_mmio_pf(), which only checks
the per-vcpu mmio cache. Since we hit the cache, KVM passes a
KVM_EXIT_MMIO up to userspace.

This patch fixes the issue by using the memslot generation number
to validate the mmio cache.
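
In other words, the cached gfn is tagged with the memslot generation at
fill time, and the fast path only trusts the cache while that generation
is still current, roughly (sketch):

        static bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
        {
                /* stale as soon as KVM_SET_USER_MEMORY_REGION bumps
                   the generation recorded when the cache was filled */
                return vcpu->arch.mmio_gen ==
                       kvm_memslots(vcpu->kvm)->generation;
        }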

Signed-off-by: David Matlack &lt;dmatlack@google.com&gt;
[xiaoguangrong: adjust the code to make it simpler for stable-tree fix.]
Signed-off-by: Xiao Guangrong &lt;xiaoguangrong@linux.vnet.ibm.com&gt;
Reviewed-by: David Matlack &lt;dmatlack@google.com&gt;
Reviewed-by: Xiao Guangrong &lt;xiaoguangrong@linux.vnet.ibm.com&gt;
Tested-by: David Matlack &lt;dmatlack@google.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>kvm: vmx: handle invvpid vm exit gracefully</title>
<updated>2014-11-05T20:27:47Z</updated>
<author>
<name>Petr Matousek</name>
<email>pmatouse@redhat.com</email>
</author>
<published>2014-09-23T18:22:30Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=3f09b1f1033b9a6350b72649c6abdafdf81e5c2d'/>
<id>urn:sha1:3f09b1f1033b9a6350b72649c6abdafdf81e5c2d</id>
<content type='text'>
commit a642fc305053cc1c6e47e4f4df327895747ab485 upstream.

On systems with invvpid instruction support (the corresponding bit in
the IA32_VMX_EPT_VPID_CAP MSR is set), guest invocation of invvpid
causes a vm exit, which is currently not handled and results in an
unknown exit being propagated to userspace.

Fix this by installing an invvpid vm exit handler.

This is CVE-2014-3646.
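
A minimal sketch of such a handler: nested VPID is not exposed to the
guest here, so the instruction is simply reflected back as an
invalid-opcode exception and the guest is resumed.

        static int handle_invvpid(struct kvm_vcpu *vcpu)
        {
                kvm_queue_exception(vcpu, UD_VECTOR);
                return 1;
        }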

Signed-off-by: Petr Matousek &lt;pmatouse@redhat.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
[bwh: Backported to 3.2:
 - Adjust filename
 - Drop inapplicable change to exit reason string array]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>nEPT: Nested INVEPT</title>
<updated>2014-11-05T20:27:47Z</updated>
<author>
<name>Nadav Har'El</name>
<email>nyh@il.ibm.com</email>
</author>
<published>2013-08-05T08:07:17Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=02a988e6e4511b1f6d83525710a12db9c5a45149'/>
<id>urn:sha1:02a988e6e4511b1f6d83525710a12db9c5a45149</id>
<content type='text'>
commit bfd0a56b90005f8c8a004baf407ad90045c2b11e upstream.

If we let L1 use EPT, we should probably also support the INVEPT instruction.

In our current nested EPT implementation, when L1 changes its EPT table
for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02), and in
the course of this modification already calls INVEPT. But if the last
level of the shadow page is unsync, not all of L1's changes to EPT12 are
intercepted, which means the roots need to be synced when L1 calls
INVEPT. Global INVEPT
should not be different since roots are synced by kvm_mmu_load() each
time EPTP02 changes.

Reviewed-by: Xiao Guangrong &lt;xiaoguangrong@linux.vnet.ibm.com&gt;
Signed-off-by: Nadav Har'El &lt;nyh@il.ibm.com&gt;
Signed-off-by: Jun Nakajima &lt;jun.nakajima@intel.com&gt;
Signed-off-by: Xinhao Xu &lt;xinhao.xu@intel.com&gt;
Signed-off-by: Yang Zhang &lt;yang.z.zhang@Intel.com&gt;
Signed-off-by: Gleb Natapov &lt;gleb@redhat.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
[bwh: Backported to 3.2:
 - Adjust context, filename
 - Simplify handle_invept() as recommended by Paolo - nEPT is not
   supported so we always raise #UD]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
</feed>
