<feed xmlns='http://www.w3.org/2005/Atom'>
<title>user/sven/linux.git/include/linux/kvm_host.h, branch v3.2.78</title>
<subtitle>Linux Kernel</subtitle>
<id>https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.2.78</id>
<link rel='self' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/atom?h=v3.2.78'/>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/'/>
<updated>2014-06-09T12:29:03Z</updated>
<entry>
<title>kvm: remove .done from struct kvm_async_pf</title>
<updated>2014-06-09T12:29:03Z</updated>
<author>
<name>Radim Krčmář</name>
<email>rkrcmar@redhat.com</email>
</author>
<published>2013-09-04T20:32:24Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=d244fc2319deec77099d7b4d63d8fe8830ac66ec'/>
<id>urn:sha1:d244fc2319deec77099d7b4d63d8fe8830ac66ec</id>
<content type='text'>
commit 98fda169290b3b28c0f2db2b8f02290c13da50ef upstream.

'.done' is used to mark the completion of 'async_pf_execute()', but
'cancel_work_sync()' already returns true when the work was canceled,
so we can rely on its return value instead and drop '.done'.
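
A rough sketch of the resulting pattern (illustrative only; the
surrounding async-PF teardown is simplified here):

	/* instead of waiting on a '.done' flag, let cancel_work_sync()
	 * report whether the work item was cancelled before
	 * async_pf_execute() ever ran */
	if (cancel_work_sync(&amp;work-&gt;work)) {
		/* the worker never ran: drop its references and free
		 * the request */
		kmem_cache_free(async_pf_cache, work);
	}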

Signed-off-by: Radim Krčmář &lt;rkrcmar@redhat.com&gt;
Reviewed-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Reviewed-by: Gleb Natapov &lt;gleb@redhat.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>KVM: Allow cross page reads and writes from cached translations.</title>
<updated>2013-04-25T19:25:50Z</updated>
<author>
<name>Andrew Honig</name>
<email>ahonig@google.com</email>
</author>
<published>2013-03-29T16:35:21Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c471da1e3f5c6e43397dccf47cefd8edc86aa9f0'/>
<id>urn:sha1:c471da1e3f5c6e43397dccf47cefd8edc86aa9f0</id>
<content type='text'>
commit 8f964525a121f2ff2df948dac908dcc65be21b5b upstream.

This patch adds support to the kvm_gfn_to_hva_cache_init functions for
reads and writes that cross a page boundary.  If the range falls within
the same memslot, then this will be a fast operation.  If the range
is split between two memslots, then the slower kvm_read_guest and
kvm_write_guest are used.
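
For illustration, a cached access now looks roughly like this (sketch;
the 'len' argument to the init function is what allows the cached range
to span a page, and error handling is omitted):

	struct gfn_to_hva_cache ghc;

	/* cache the translation for a range that may cross a page */
	kvm_gfn_to_hva_cache_init(kvm, &amp;ghc, gpa, sizeof(data));

	/* fast path while the cached mapping is usable; otherwise this
	 * falls back to the slower kvm_write_guest() internally */
	kvm_write_guest_cached(kvm, &amp;ghc, &amp;data, sizeof(data));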

Tested: Test against kvm_clock unit tests.

Signed-off-by: Andrew Honig &lt;ahonig@google.com&gt;
Signed-off-by: Gleb Natapov &lt;gleb@redhat.com&gt;
[bwh: Backported to 3.2:
 - Drop change in lapic.c
 - Keep using __gfn_to_memslot() in kvm_gfn_to_hva_cache_init()]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>KVM: Ensure all vcpus are consistent with in-kernel irqchip settings</title>
<updated>2012-05-30T23:43:10Z</updated>
<author>
<name>Avi Kivity</name>
<email>avi@redhat.com</email>
</author>
<published>2012-03-05T12:23:29Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=645b177cbfce6b695bdbe0b4c131de584821840d'/>
<id>urn:sha1:645b177cbfce6b695bdbe0b4c131de584821840d</id>
<content type='text'>
(cherry picked from commit 3e515705a1f46beb1c942bb8043c16f8ac7b1e9e)

If some vcpus are created before KVM_CREATE_IRQCHIP, then
irqchip_in_kernel() and vcpu-&gt;arch.apic will be inconsistent, leading
to potential NULL pointer dereferences.

Fix by:
- ensuring that no vcpus are installed when KVM_CREATE_IRQCHIP is called
- ensuring that a vcpu has an apic if it is installed after KVM_CREATE_IRQCHIP

This is somewhat long-winded because vcpu-&gt;arch.apic is created without
kvm-&gt;lock held.
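
In rough pseudocode the two checks amount to (illustrative; the real
patch also has to handle the unlocked apic creation mentioned above):

	/* KVM_CREATE_IRQCHIP: refuse if any vcpu already exists */
	if (atomic_read(&amp;kvm-&gt;online_vcpus))
		return -EINVAL;

	/* installing a vcpu: after KVM_CREATE_IRQCHIP it must have an
	 * in-kernel local APIC to be consistent with the irqchip */
	if (irqchip_in_kernel(kvm) &amp;&amp; !vcpu-&gt;arch.apic)
		return -EINVAL;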

Based on earlier patch by Michael Ellerman.

Signed-off-by: Michael Ellerman &lt;michael@ellerman.id.au&gt;
Signed-off-by: Avi Kivity &lt;avi@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>KVM: unmap pages from the iommu when slots are removed</title>
<updated>2012-05-11T12:14:07Z</updated>
<author>
<name>Alex Williamson</name>
<email>alex.williamson@redhat.com</email>
</author>
<published>2012-04-27T21:54:08Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=1e57aab4e6c549804298f07fac0b6fc77f10fab2'/>
<id>urn:sha1:1e57aab4e6c549804298f07fac0b6fc77f10fab2</id>
<content type='text'>
commit 32f6daad4651a748a58a3ab6da0611862175722f upstream.

We've been adding new mappings, but not destroying old mappings.
This can lead to a page leak as pages are pinned using
get_user_pages, but only unpinned with put_page if they still
exist in the memslots list on vm shutdown.  A memslot that is
destroyed while an iommu domain is enabled for the guest will
therefore result in an elevated page reference count that is
never cleared.

Additionally, without this fix, the iommu is only programmed
with the first translation for a gpa.  This can result in
peer-to-peer errors if a mapping is destroyed and replaced by a
new mapping at the same gpa as the iommu will still be pointing
to the original, pinned memory address.
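
Conceptually the fix is a call like the following on the slot-removal
path (sketch; kvm_iommu_unmap_pages() is the helper this change
declares in kvm_host.h, and 'old' is the memslot being deleted):

	/* tear down the slot's iommu mappings so the pages pinned via
	 * get_user_pages get their matching put_page again */
	if (!npages)
		kvm_iommu_unmap_pages(kvm, &amp;old);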

Signed-off-by: Alex Williamson &lt;alex.williamson@redhat.com&gt;
Signed-off-by: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Signed-off-by: Jonathan Nieder &lt;jrnieder@gmail.com&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>KVM: Fix simultaneous NMIs</title>
<updated>2011-09-25T16:52:59Z</updated>
<author>
<name>Avi Kivity</name>
<email>avi@redhat.com</email>
</author>
<published>2011-09-20T10:43:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=7460fb4a340033107530df19e7e125bd0969bfb2'/>
<id>urn:sha1:7460fb4a340033107530df19e7e125bd0969bfb2</id>
<content type='text'>
If simultaneous NMIs happen, we're supposed to queue the second
and next (collapsing them), but currently we sometimes collapse
the second into the first.

Fix by using a counter for pending NMIs instead of a bool; since
the counter limit depends on whether the processor is currently
in an NMI handler, which can only be checked in vcpu context
(via the NMI mask), we add a new KVM_REQ_NMI to request recalculation
of the counter.
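
Very roughly, the recalculation requested through KVM_REQ_NMI looks
like this (illustrative sketch following the x86 field names):

	/* collapse queued NMIs into nmi_pending, bounded by what the
	 * CPU can accept right now */
	unsigned int limit = 2;

	/* while inside an NMI handler only one more NMI can be latched */
	if (kvm_x86_ops-&gt;get_nmi_mask(vcpu) || vcpu-&gt;arch.nmi_injected)
		limit = 1;

	vcpu-&gt;arch.nmi_pending += atomic_xchg(&amp;vcpu-&gt;arch.nmi_queued, 0);
	vcpu-&gt;arch.nmi_pending = min(vcpu-&gt;arch.nmi_pending, limit);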

Signed-off-by: Avi Kivity &lt;avi@redhat.com&gt;
Signed-off-by: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
</content>
</entry>
<entry>
<title>KVM: Clean up and extend rate-limited output</title>
<updated>2011-09-25T16:52:43Z</updated>
<author>
<name>Jan Kiszka</name>
<email>jan.kiszka@siemens.com</email>
</author>
<published>2011-09-12T09:26:22Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=bd80158aff71a80292f96d9baea1a65bc0ce87b3'/>
<id>urn:sha1:bd80158aff71a80292f96d9baea1a65bc0ce87b3</id>
<content type='text'>
The use of printk_ratelimit is discouraged; replace it with
pr*_ratelimited or __ratelimit. While at it, convert remaining
guest-triggerable printks to rate-limited variants.
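
The typical conversion is of the form (illustrative; the message text
is a placeholder):

	/* before */
	if (printk_ratelimit())
		printk(KERN_WARNING "kvm: example message\n");

	/* after */
	pr_warn_ratelimited("kvm: example message\n");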

Signed-off-by: Jan Kiszka &lt;jan.kiszka@siemens.com&gt;
Signed-off-by: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
</content>
</entry>
<entry>
<title>KVM: Intelligent device lookup on I/O bus</title>
<updated>2011-09-25T16:17:59Z</updated>
<author>
<name>Sasha Levin</name>
<email>levinsasha928@gmail.com</email>
</author>
<published>2011-07-27T13:00:48Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=743eeb0b01d2fbf4154bf87bff1ebb6fb18aeb7a'/>
<id>urn:sha1:743eeb0b01d2fbf4154bf87bff1ebb6fb18aeb7a</id>
<content type='text'>
Currently the method of dealing with an IO operation on a bus (PIO/MMIO)
is to call the read or write callback for each device registered
on the bus until we find a device which handles it.

Since the number of devices on a bus can be significant due to ioeventfds
and coalesced MMIO zones, this leads to a lot of overhead on each IO
operation.

Instead of registering devices, we now register ranges which point to
a device. Lookup is done using an efficient bsearch instead of a linear
search.
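
The range record added to kvm_host.h looks like this; each bus keeps
its ranges sorted by (addr, len) so a lookup is a bsearch() over them
(sketch; comparator and array handling omitted):

	struct kvm_io_range {
		gpa_t addr;
		int len;
		struct kvm_io_device *dev;
	};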

Performance was tested by comparing the exit count per second with
200 ioeventfds created on one byte while the guest continuously
accesses a different byte (triggering usermode exits).
Before the patch the guest achieved 259k exits per second; after the
patch it achieves 274k exits per second.

Cc: Avi Kivity &lt;avi@redhat.com&gt;
Cc: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;levinsasha928@gmail.com&gt;
Signed-off-by: Avi Kivity &lt;avi@redhat.com&gt;
</content>
</entry>
<entry>
<title>KVM: Make coalesced mmio use a device per zone</title>
<updated>2011-09-25T16:17:57Z</updated>
<author>
<name>Sasha Levin</name>
<email>levinsasha928@gmail.com</email>
</author>
<published>2011-07-20T17:59:00Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=2b3c246a682c50f5415c71fc5387a114a6f0d643'/>
<id>urn:sha1:2b3c246a682c50f5415c71fc5387a114a6f0d643</id>
<content type='text'>
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.

Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
and lookups.
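
After the change each registered zone carries its own device, roughly
(sketch of the per-zone structure; list bookkeeping omitted):

	struct kvm_coalesced_mmio_dev {
		struct kvm_io_device dev;
		struct kvm *kvm;
		struct kvm_coalesced_mmio_zone zone;	/* exactly one zone */
	};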

Suggested-by: Avi Kivity &lt;avi@redhat.com&gt;
Signed-off-by: Sasha Levin &lt;levinsasha928@gmail.com&gt;
Signed-off-by: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
</content>
</entry>
<entry>
<title>KVM: MMU: filter out the mmio pfn from the fault pfn</title>
<updated>2011-07-24T08:50:34Z</updated>
<author>
<name>Xiao Guangrong</name>
<email>xiaoguangrong@cn.fujitsu.com</email>
</author>
<published>2011-07-11T19:28:54Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=fce92dce79dbf5fff39c7ac2fb149729d79b7a39'/>
<id>urn:sha1:fce92dce79dbf5fff39c7ac2fb149729d79b7a39</id>
<content type='text'>
If the page fault is caused by mmio, the gfn cannot be found in the memslots,
and 'bad_pfn' is returned on the gfn_to_hva path, so we can use 'bad_pfn' to
identify the mmio page fault.
Also, to clarify the meaning of the mmio pfn, we return the fault page instead
of the bad page when the gfn is not allowed to be prefetched.
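
In other words, the fault path can key off the returned pfn along these
lines (illustrative only; handle_as_mmio() is a placeholder name):

	if (pfn == bad_pfn)
		/* gfn not covered by any memslot: the access is mmio */
		return handle_as_mmio(vcpu, gfn);	/* placeholder */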

Signed-off-by: Xiao Guangrong &lt;xiaoguangrong@cn.fujitsu.com&gt;
Signed-off-by: Avi Kivity &lt;avi@redhat.com&gt;
</content>
</entry>
<entry>
<title>KVM: Steal time implementation</title>
<updated>2011-07-14T09:59:14Z</updated>
<author>
<name>Glauber Costa</name>
<email>glommer@redhat.com</email>
</author>
<published>2011-07-11T19:28:14Z</published>
<link rel='alternate' type='text/html' href='https://git.stealer.net/cgit.cgi/user/sven/linux.git/commit/?id=c9aaa8957f203bd6df83b002fb40b98390bed078'/>
<id>urn:sha1:c9aaa8957f203bd6df83b002fb40b98390bed078</id>
<content type='text'>
To implement steal time, we need the hypervisor to pass the guest
information about how much time was spent running other processes
outside the VM, while the vcpu had meaningful work to do - halt
time does not count.

This information is acquired through the run_delay field of the
delayacct/schedstats infrastructure, which counts time spent on a
runqueue but not running.

Steal time is per-cpu information, so the traditional MSR-based
infrastructure is used. A new MSR, MSR_KVM_STEAL_TIME, holds the
address of the memory area containing the steal time information.

This patch contains the hypervisor part of the steal time infrastructure,
and can be backported independently of the guest portion.
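
For reference, the per-vcpu record published through that MSR has
roughly this layout (as added to the x86 kvm_para ABI by this series;
the guest writes the area's address to the MSR, and the version field
is bumped around each host-side update so the guest can read a
consistent snapshot):

	struct kvm_steal_time {
		__u64 steal;	/* accumulated run_delay, in nanoseconds */
		__u32 version;	/* even: stable, odd: update in progress */
		__u32 flags;
		__u32 pad[12];
	};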

[avi, yongjie: export delayacct_on, to avoid build failures in some configs]

Signed-off-by: Glauber Costa &lt;glommer@redhat.com&gt;
Tested-by: Eric B Munson &lt;emunson@mgebm.net&gt;
CC: Rik van Riel &lt;riel@redhat.com&gt;
CC: Jeremy Fitzhardinge &lt;jeremy.fitzhardinge@citrix.com&gt;
CC: Peter Zijlstra &lt;peterz@infradead.org&gt;
CC: Anthony Liguori &lt;aliguori@us.ibm.com&gt;
Signed-off-by: Yongjie Ren &lt;yongjie.ren@intel.com&gt;
Signed-off-by: Avi Kivity &lt;avi@redhat.com&gt;
</content>
</entry>
</feed>
