safe/jmp/linux-2.6
15 years agoKVM: Really remove a slot when a user asks us to
Glauber Costa [Wed, 3 Dec 2008 15:40:51 +0000 (13:40 -0200)]
KVM: Really remove a slot when a user asks us to

Right now, KVM does not remove a slot when we do a
register ioctl with size 0 (which would be the expected behaviour).

Instead, we only mark it as empty, but keep all bitmaps
and allocated data structures present. That completely
prevents us from reusing that same slot again
for mapping a different piece of memory.

In this patch, we destroy rmaps, and vfree() the
pointers that used to hold the dirty bitmap, rmap
and lpage_info structures.
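
For illustration, a minimal sketch of the cleanup this implies; the helper name
and the exact call site are assumptions, not the actual kvm_main.c code:

  #include <linux/kvm_host.h>
  #include <linux/vmalloc.h>

  /* Illustrative only: release per-slot metadata so the slot can be reused. */
  static void free_slot_metadata(struct kvm_memory_slot *slot)
  {
          vfree(slot->rmap);              /* reverse-mapping array */
          vfree(slot->dirty_bitmap);      /* dirty-page tracking bitmap */
          vfree(slot->lpage_info);        /* large-page accounting */
          slot->rmap = NULL;
          slot->dirty_bitmap = NULL;
          slot->lpage_info = NULL;
  }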

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: mostly cosmetic updates to the exit timing accounting code
Hollis Blanchard [Tue, 2 Dec 2008 21:51:58 +0000 (15:51 -0600)]
KVM: ppc: mostly cosmetic updates to the exit timing accounting code

The only significant changes were to kvmppc_exit_timing_write() and
kvmppc_exit_timing_show(), both of which were dramatically simplified.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: Implement in-kernel exit timing statistics
Hollis Blanchard [Tue, 2 Dec 2008 21:51:57 +0000 (15:51 -0600)]
KVM: ppc: Implement in-kernel exit timing statistics

Existing KVM statistics are either just counters (kvm_stat) reported for
KVM generally, or trace-based approaches like kvm_trace.
For KVM on powerpc we needed to track the timings of the different exit
types. While this could be achieved by parsing data created with a kvm_trace
extension, that adds too much overhead (at least on embedded PowerPC), slowing
down the workloads we wanted to measure.

Therefore this patch adds an in-kernel exit timing statistic to the powerpc kvm
code. The statistic is available per vm and vcpu under the kvm debugfs directory.
As this statistic adds a small but non-zero overhead, it can be enabled via a
.config entry and should be off by default.

Since this patch touched all powerpc kvm_stat code anyway, that code is now
merged and simplified together with the exit timing statistic code (it still
works with exit timing disabled in .config).

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: save and restore guest mappings on context switch
Hollis Blanchard [Tue, 2 Dec 2008 21:51:56 +0000 (15:51 -0600)]
KVM: ppc: save and restore guest mappings on context switch

Store shadow TLB entries in memory, but only use them on host context switch
(instead of on every guest entry). This improves performance for most workloads on
440 by reducing the guest TLB miss rate.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: directly insert shadow mappings into the hardware TLB
Hollis Blanchard [Tue, 2 Dec 2008 21:51:55 +0000 (15:51 -0600)]
KVM: ppc: directly insert shadow mappings into the hardware TLB

Formerly, we used to maintain a per-vcpu shadow TLB and on every entry to the
guest would load this array into the hardware TLB. This consumed 1280 bytes of
memory (64 entries of 16 bytes plus a struct page pointer each), and also
required some assembly to loop over the array on every entry.

Instead of saving a copy in memory, we can just store shadow mappings directly
into the hardware TLB, accepting that the host kernel will clobber these as
part of the normal 440 TLB round robin. When we do that we need less than half
the memory, and we have decreased the exit handling time for all guest exits,
at the cost of an increased number of TLB misses because the host overwrites some
guest entries.

These savings will be increased on processors with larger TLBs or which
implement intelligent flush instructions like tlbivax (which will avoid the
need to walk arrays in software).

In addition to that and to the code simplification, we have a greater chance of
leaving other host userspace mappings in the TLB, instead of forcing all
subsequent tasks to re-fault all their mappings.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agopowerpc/44x: declare tlb_44x_index for use in C code
Hollis Blanchard [Tue, 2 Dec 2008 21:51:54 +0000 (15:51 -0600)]
powerpc/44x: declare tlb_44x_index for use in C code

KVM currently ignores the host's round robin TLB eviction selection, instead
maintaining its own TLB state and its own round robin index. However, by
participating in the normal 44x TLB selection, we can drop the alternate TLB
processing in KVM. This results in a significant performance improvement,
since that processing currently must be done on *every* guest exit.

Accordingly, KVM needs to be able to access and increment tlb_44x_index.
(KVM on 440 cannot be a module, so there is no need to export this symbol.)

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: support large host pages
Hollis Blanchard [Tue, 2 Dec 2008 21:51:53 +0000 (15:51 -0600)]
KVM: ppc: support large host pages

KVM on 440 has always been able to handle large guest mappings with 4K host
pages -- we must, since the guest kernel uses 256MB mappings.

This patch makes KVM work when the host has large pages too (tested with 64K).

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: split out kvm_free_assigned_irq()
Mark McLoughlin [Mon, 1 Dec 2008 13:57:49 +0000 (13:57 +0000)]
KVM: split out kvm_free_assigned_irq()

Split out the logic corresponding to undoing assign_irq() and
clean it up a bit.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: add KVM_USERSPACE_IRQ_SOURCE_ID assertions
Mark McLoughlin [Mon, 1 Dec 2008 13:57:48 +0000 (13:57 +0000)]
KVM: add KVM_USERSPACE_IRQ_SOURCE_ID assertions

Make sure kvm_request_irq_source_id() never returns
KVM_USERSPACE_IRQ_SOURCE_ID.

Likewise, check that kvm_free_irq_source_id() never accepts
KVM_USERSPACE_IRQ_SOURCE_ID.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: don't free an unallocated irq source id
Mark McLoughlin [Mon, 1 Dec 2008 13:57:47 +0000 (13:57 +0000)]
KVM: don't free an unallocated irq source id

Set assigned_dev->irq_source_id to -1 so that we can avoid freeing
a source ID which we never allocated.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: make kvm_unregister_irq_ack_notifier() safe
Mark McLoughlin [Mon, 1 Dec 2008 13:57:46 +0000 (13:57 +0000)]
KVM: make kvm_unregister_irq_ack_notifier() safe

We never pass a NULL notifier pointer here, but we may well
pass a notifier struct which hasn't previously been
registered.

Guard against this by using hlist_del_init() which will
not do anything if the node hasn't been added to the list
and, when removing the node, will ensure that a subsequent
call to hlist_del_init() will be fine too.

Fixes an oops seen when an assigned device is freed before
an IRQ is assigned to it.
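
A minimal sketch of the resulting pattern; the struct is trimmed and the body
is illustrative rather than a verbatim copy of the kvm code:

  #include <linux/list.h>

  struct kvm_irq_ack_notifier {
          struct hlist_node link;
          /* gsi and callback fields omitted in this sketch */
  };

  /* hlist_del_init() does nothing for a node that was never added to a list
   * (e.g. a zero-initialized notifier), and it reinitializes the node on
   * removal, so calling this more than once is also safe. */
  void kvm_unregister_irq_ack_notifier(struct kvm_irq_ack_notifier *kian)
  {
          hlist_del_init(&kian->link);
  }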

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: remove the IRQ ACK notifier assertions
Mark McLoughlin [Mon, 1 Dec 2008 13:57:45 +0000 (13:57 +0000)]
KVM: remove the IRQ ACK notifier assertions

We will obviously never pass a NULL struct kvm_irq_ack_notifier* to
these functions. They are always embedded in the assigned device
structure, so the assertions add nothing.

The irqchip_in_kernel() assertion is very out of place - clearly
this little abstraction needs to know nothing about the upper
layer details.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: fix sparse warning
Hannes Eder [Fri, 28 Nov 2008 16:02:06 +0000 (17:02 +0100)]
KVM: VMX: fix sparse warning

Impact: make global function static

  arch/x86/kvm/vmx.c:134:3: warning: symbol 'vmx_capability' was not declared. Should it be static?

Signed-off-by: Hannes Eder <hannes@hanneseder.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: fix sparse warning
Hannes Eder [Fri, 28 Nov 2008 16:02:42 +0000 (17:02 +0100)]
KVM: fix sparse warning

Impact: make global function static

  virt/kvm/kvm_main.c:85:6: warning: symbol 'kvm_rebooting' was not declared. Should it be static?

Signed-off-by: Hannes Eder <hannes@hanneseder.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Remove extraneous semicolon after do/while
Avi Kivity [Sat, 29 Nov 2008 18:38:12 +0000 (20:38 +0200)]
KVM: Remove extraneous semicolon after do/while

Noticed by Guillaume Thouvenin.

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: fix popf emulation
Avi Kivity [Sat, 29 Nov 2008 18:36:13 +0000 (20:36 +0200)]
KVM: x86 emulator: fix popf emulation

Set operand type and size to get correct writeback behavior.

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: fix ret emulation
Avi Kivity [Thu, 27 Nov 2008 22:14:07 +0000 (00:14 +0200)]
KVM: x86 emulator: fix ret emulation

'ret' did not set the operand type or size for the destination, so
writeback ignored it.

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: switch 'pop reg' instruction to emulate_pop()
Avi Kivity [Thu, 27 Nov 2008 16:06:33 +0000 (18:06 +0200)]
KVM: x86 emulator: switch 'pop reg' instruction to emulate_pop()

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: allow pop from mmio
Avi Kivity [Thu, 27 Nov 2008 16:00:28 +0000 (18:00 +0200)]
KVM: x86 emulator: allow pop from mmio

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: Extract 'pop' sequence into a function
Avi Kivity [Thu, 27 Nov 2008 15:36:41 +0000 (17:36 +0200)]
KVM: x86 emulator: Extract 'pop' sequence into a function

Switch 'pop r/m' instruction to use the new function.

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Prevent trace call into unloaded module text
Wu Fengguang [Wed, 26 Nov 2008 11:59:06 +0000 (19:59 +0800)]
KVM: Prevent trace call into unloaded module text

Add marker_synchronize_unregister() before module unloading.
This prevents possible trace calls into unloaded module text.

Signed-off-by: Wu Fengguang <wfg@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: s390: Fix memory leak of vcpu->run
Christian Borntraeger [Wed, 26 Nov 2008 13:51:08 +0000 (14:51 +0100)]
KVM: s390: Fix memory leak of vcpu->run

The s390 backend of kvm never calls kvm_vcpu_uninit. This causes
a memory leak of vcpu->run pages.
Let's call kvm_vcpu_uninit in kvm_arch_vcpu_destroy to free
the vcpu->run.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: s390: Fix refcounting and allow module unload
Christian Borntraeger [Wed, 26 Nov 2008 13:50:27 +0000 (14:50 +0100)]
KVM: s390: Fix refcounting and allow module unload

Currently it is impossible to unload the kvm module on s390.
This patch fixes kvm_arch_destroy_vm to release all cpus.
This makes it possible to unload the module.

In addition we stop messing with the module refcount in arch code.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: consolidate emulation of two operand instructions
Avi Kivity [Wed, 26 Nov 2008 13:30:45 +0000 (15:30 +0200)]
KVM: x86 emulator: consolidate emulation of two operand instructions

No need to repeat the same assembly block over and over.

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: reduce duplication in one operand emulation thunks
Avi Kivity [Wed, 26 Nov 2008 13:14:10 +0000 (15:14 +0200)]
KVM: x86 emulator: reduce duplication in one operand emulation thunks

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: MMU: optimize set_spte for page sync
Marcelo Tosatti [Tue, 25 Nov 2008 14:58:07 +0000 (15:58 +0100)]
KVM: MMU: optimize set_spte for page sync

The write protect verification in set_spte is unnecessary for page sync.

It's guaranteed that, if the unsync spte was writable, the target page
does not have a write protected shadow (if it had, the spte would have
been write protected under mmu_lock by rmap_write_protect before).

Same reasoning applies to mark_page_dirty: the gfn has been marked as
dirty via the pagefault path.

The cost of the hash table and memslot lookups is quite significant if the
workload is pagetable-write intensive, resulting in increased mmu_lock
contention.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: MSI to INTx translate
Sheng Yang [Mon, 24 Nov 2008 06:32:57 +0000 (14:32 +0800)]
KVM: MSI to INTx translate

Now we use MSI as the default, and translate MSI to INTx when the guest needs
INTx rather than MSI. For legacy devices, we provide support for a non-shared
host IRQ.

Provide a parameter msi2intx for this method. The value is true by default on
the x86 architecture.

We can't guarantee this mode works on every device, but it works for most of the
ones we tested. If your device encounters trouble with this mode, you can try
setting the msi2intx module parameter to 0. If the device is OK with msi2intx=0,
then please report it to the KVM mailing list or me. We may prepare a blacklist
for the devices that can't work in this mode.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Enable MSI for device assignment
Sheng Yang [Mon, 24 Nov 2008 06:32:56 +0000 (14:32 +0800)]
KVM: Enable MSI for device assignment

We enable guest MSI and host MSI support in this patch. Userspace that wants to
enable MSI should set KVM_DEV_IRQ_ASSIGN_ENABLE_MSI in the assigned_irq's flags.
The function returns -ENOTTY if it can't enable MSI; userspace shouldn't set the
MSI enable bit when KVM_ASSIGN_IRQ returns -ENOTTY with
KVM_DEV_IRQ_ASSIGN_ENABLE_MSI.

Userspace can tell whether MSI device assignment is supported from #ifdef
KVM_CAP_DEVICE_MSI.
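
A hedged userspace-side sketch of that contract; the surrounding device
assignment setup is assumed to have happened already, and error handling is
simplified:

  #include <errno.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Returns 1 if MSI was enabled, 0 if the kernel reported ENOTTY (no MSI). */
  static int try_enable_msi(int vm_fd, __u32 dev_id, __u32 host_irq, __u32 guest_irq)
  {
          struct kvm_assigned_irq irq = {
                  .assigned_dev_id = dev_id,
                  .host_irq        = host_irq,
                  .guest_irq       = guest_irq,
                  .flags           = KVM_DEV_IRQ_ASSIGN_ENABLE_MSI,
          };

          if (ioctl(vm_fd, KVM_ASSIGN_IRQ, &irq) < 0 && errno == ENOTTY)
                  return 0;       /* leave the device's MSI enable bit clear */
          return 1;
  }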

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Add assigned_device_msi_dispatch()
Sheng Yang [Mon, 24 Nov 2008 06:32:55 +0000 (14:32 +0800)]
KVM: Add assigned_device_msi_dispatch()

The function is used to dispatch an MSI to the lapic according to the MSI
message address and message data.
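
A rough sketch of the decode step involved, following the standard x86 MSI
message layout; the helper name and output parameters are illustrative:

  #include <linux/types.h>

  /* Destination ID lives in address bits 19:12; the vector in data bits 7:0;
   * the delivery mode in data bits 10:8 (fixed, lowest priority, NMI, ...). */
  static void msi_decode(u32 address_lo, u32 data,
                         u8 *dest_id, u8 *vector, u8 *delivery_mode)
  {
          *dest_id       = (address_lo >> 12) & 0xff;
          *vector        = data & 0xff;
          *delivery_mode = (data >> 8) & 0x7;
  }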

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Export ioapic_get_delivery_bitmask
Sheng Yang [Mon, 24 Nov 2008 06:32:54 +0000 (14:32 +0800)]
KVM: Export ioapic_get_delivery_bitmask

It will be used for MSI dispatch in device assignment.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Add fields for MSI device assignment
Sheng Yang [Mon, 24 Nov 2008 06:32:53 +0000 (14:32 +0800)]
KVM: Add fields for MSI device assignment

Preparation for kvm_arch_assigned_device_msi_dispatch().

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Clean up assigned_device_update_irq
Sheng Yang [Mon, 24 Nov 2008 06:32:52 +0000 (14:32 +0800)]
KVM: Clean up assigned_device_update_irq

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Replace irq_requested with more generic irq_requested_type
Sheng Yang [Mon, 24 Nov 2008 06:32:51 +0000 (14:32 +0800)]
KVM: Replace irq_requested with more generic irq_requested_type

Separate the guest irq type and host irq type, so that we can support the guest
using INTx with the host using MSI (but not the opposite combination).

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Separate update irq to a single function
Sheng Yang [Mon, 24 Nov 2008 06:32:50 +0000 (14:32 +0800)]
KVM: Separate update irq to a single function

Separate the INTx enabling part into an independent function, so that we can add
the MSI enabling part easily.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Move ack notifier register and IRQ source ID request
Sheng Yang [Mon, 24 Nov 2008 06:32:49 +0000 (14:32 +0800)]
KVM: Move ack notifier register and IRQ source ID request

Distinguish the common device assignment part from the INTx part, preparing for
later refactoring.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: KVM guest: sign kvmclock as paravirt
Glauber Costa [Mon, 24 Nov 2008 17:45:23 +0000 (15:45 -0200)]
x86: KVM guest: sign kvmclock as paravirt

Currently, we only set the KVM paravirt signature in case
of CONFIG_KVM_GUEST. However, it is possible to have it turned
off, while CONFIG_KVM_CLOCK is turned on. This is also a paravirt
case, and should be shown accordingly.

Signed-off-by: Glauber Costa <glommer@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: Conditionally request interrupt window after injecting irq
Avi Kivity [Sun, 23 Nov 2008 16:08:57 +0000 (18:08 +0200)]
KVM: VMX: Conditionally request interrupt window after injecting irq

If we're injecting an interrupt, and another one is pending, request
an interrupt window notification so we don't have excess latency on the
second interrupt.

This shouldn't happen in practice since an EOI will be issued, giving a second
chance to request an interrupt window, but...

Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Clean up vmm_ivt.S using tab to indent every line
Xiantao Zhang [Fri, 21 Nov 2008 13:04:37 +0000 (21:04 +0800)]
KVM: ia64: Clean up vmm_ivt.S using tab to indent every line

Use tabs for indentation in vmm_ivt.S.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Add handler for crashed vmm
Xiantao Zhang [Fri, 21 Nov 2008 09:16:07 +0000 (17:16 +0800)]
KVM: ia64: Add handler for crashed vmm

Since the vmm runs in an isolated address space and is just a copy
of the host's kvm-intel module, once the vmm crashes we just crash all guests
running on it instead of crashing the whole kernel.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Add some debug points to provide crash information
Xiantao Zhang [Fri, 21 Nov 2008 02:46:12 +0000 (10:46 +0800)]
KVM: ia64: Add some debug points to provide crash information

Use the printk infrastructure to print out some debug info once the VM crashes.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Define printk function for kvm-intel module
Xiantao Zhang [Fri, 21 Nov 2008 12:58:11 +0000 (20:58 +0800)]
KVM: ia64: Define printk function for kvm-intel module

The kvm-intel module is relocated to an address space isolated from the
kernel, so it can't call the host kernel's printk for debugging
purposes. In the module, we implement printk to output the vmm's
debug info.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: disable VMX on all CPUs on reboot
Eduardo Habkost [Mon, 17 Nov 2008 21:03:24 +0000 (19:03 -0200)]
x86: disable VMX on all CPUs on reboot

On emergency_restart, we may need to use an NMI to disable virtualization
on all CPUs. We do that using nmi_shootdown_cpus() if VMX is enabled.

Note: With this patch, we will run the NMI stuff only when the CPU where
emergency_restart() was called has VMX enabled. This should work in most
cases because KVM enables VMX on all CPUs, but we may miss the small
window where KVM is doing that. Also, I don't know if all code using
VMX out there always enables VMX on all CPUs like KVM does. We have two
other alternatives for that:

a) Have an API that all code that enables VMX on any CPU should use
   to tell the kernel core that it is going to enable VMX on the CPUs.
b) Always call nmi_shootdown_cpus() if the CPU supports VMX. This is
   a bit intrusive and more risky, as it would run nmi_shootdown_cpus()
   on emergency_reboot() even on systems where virtualization is never
   enabled.

Finding a proper point to hook the nmi_shootdown_cpus() call isn't
trivial, as the non-emergency machine_restart() (that doesn't need the
NMI tricks) uses machine_emergency_restart() directly.

The solution to make this work without adding a new function or argument
to machine_ops was setting a 'reboot_emergency' flag that tells if
native_machine_emergency_restart() needs to do the virt cleanup or not.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agokdump: forcibly disable VMX and SVM on machine_crash_shutdown()
Eduardo Habkost [Mon, 17 Nov 2008 21:03:23 +0000 (19:03 -0200)]
kdump: forcibly disable VMX and SVM on machine_crash_shutdown()

We need to disable virtualization extensions on all CPUs before booting
the kdump kernel, otherwise the kdump kernel booting will fail, and
rebooting after the kdump kernel did its task may also fail.

We do it using cpu_emergency_vmxoff() and cpu_emergency_svm_disable(),
that should always work, because those functions check if the CPUs
support SVM or VMX before doing their tasks.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: cpu_emergency_svm_disable() function
Eduardo Habkost [Mon, 17 Nov 2008 21:03:22 +0000 (19:03 -0200)]
x86: cpu_emergency_svm_disable() function

This function can be used by the reboot or kdump code to forcibly
disable SVM on the CPU.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: SVM: move svm_hardware_disable() code to asm/virtext.h
Eduardo Habkost [Mon, 17 Nov 2008 21:03:21 +0000 (19:03 -0200)]
KVM: SVM: move svm_hardware_disable() code to asm/virtext.h

Create cpu_svm_disable() function.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: SVM: move has_svm() code to asm/virtext.h
Eduardo Habkost [Mon, 17 Nov 2008 21:03:20 +0000 (19:03 -0200)]
KVM: SVM: move has_svm() code to asm/virtext.h

Use a trick to keep the printk()s on has_svm() working as before. gcc
will take care of not generating code for the 'msg' stuff when the
function is called with a NULL msg argument.
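
A minimal illustration of that trick; the function below is a stand-in rather
than the real virtext.h code, and the capability check is a placeholder:

  /* When a call site passes a literal NULL, gcc can prove the 'if (msg)'
   * branches are dead and emits no code (or strings) for the messages. */
  static inline int svm_available(const char **msg)
  {
          int supported = 0;      /* placeholder for the real CPUID checks */

          if (!supported) {
                  if (msg)
                          *msg = "svm not available";
                  return 0;
          }
          return 1;
  }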

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: cpu_emergency_vmxoff() function
Eduardo Habkost [Mon, 17 Nov 2008 21:03:19 +0000 (19:03 -0200)]
x86: cpu_emergency_vmxoff() function

Add cpu_emergency_vmxoff() and its friends: cpu_vmx_enabled() and
__cpu_emergency_vmxoff().
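
A sketch of how these helpers plausibly nest inside asm/virtext.h; the exact
bodies are illustrative, and cpu_vmxoff()/cpu_has_vmx() are assumed from the
companion patches in this series:

  /* Is VMX operation currently enabled on this CPU? */
  static inline int cpu_vmx_enabled(void)
  {
          return read_cr4() & X86_CR4_VMXE;
  }

  /* Disable VMX when it is known to be enabled (skips the capability check). */
  static inline void __cpu_emergency_vmxoff(void)
  {
          if (cpu_vmx_enabled())
                  cpu_vmxoff();
  }

  /* Safe to call anywhere: only acts if the CPU supports VMX at all. */
  static inline void cpu_emergency_vmxoff(void)
  {
          if (cpu_has_vmx())
                  __cpu_emergency_vmxoff();
  }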

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: extract kvm_cpu_vmxoff() from hardware_disable()
Eduardo Habkost [Mon, 17 Nov 2008 21:03:18 +0000 (19:03 -0200)]
KVM: VMX: extract kvm_cpu_vmxoff() from hardware_disable()

Along with some comments on why it is different from the core cpu_vmxoff()
function.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: asm/virtext.h: add cpu_vmxoff() inline function
Eduardo Habkost [Mon, 17 Nov 2008 21:03:17 +0000 (19:03 -0200)]
x86: asm/virtext.h: add cpu_vmxoff() inline function

Unfortunately we can't use exactly the same code from vmx
hardware_disable(), because the KVM function uses the
__kvm_handle_fault_on_reboot() tricks.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: move cpu_has_kvm_support() to an inline on asm/virtext.h
Eduardo Habkost [Mon, 17 Nov 2008 21:03:16 +0000 (19:03 -0200)]
KVM: VMX: move cpu_has_kvm_support() to an inline on asm/virtext.h

It will be used by core code on kdump and reboot, to disable
vmx if needed.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: move ASM_VMX_* definitions from asm/kvm_host.h to asm/vmx.h
Eduardo Habkost [Mon, 17 Nov 2008 21:03:15 +0000 (19:03 -0200)]
KVM: VMX: move ASM_VMX_* definitions from asm/kvm_host.h to asm/vmx.h

Those definitions will be used by code outside KVM, so move them outside
of a KVM-specific source file.

Those definitions are used only in kvm/vmx.c, which already includes
asm/vmx.h, so they can be moved safely.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: SVM: move svm.h to include/asm
Eduardo Habkost [Mon, 17 Nov 2008 21:03:14 +0000 (19:03 -0200)]
KVM: SVM: move svm.h to include/asm

svm.h will be used by core code that is independent of KVM, so I am
moving it outside the arch/x86/kvm directory.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: move vmx.h to include/asm
Eduardo Habkost [Mon, 17 Nov 2008 21:03:13 +0000 (19:03 -0200)]
KVM: VMX: move vmx.h to include/asm

vmx.h will be used by core code that is independent of KVM, so I am
moving it outside the arch/x86/kvm directory.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: fix userspace mapping invalidation on context switch
Hollis Blanchard [Mon, 10 Nov 2008 20:57:36 +0000 (14:57 -0600)]
KVM: ppc: fix userspace mapping invalidation on context switch

We used to defer invalidating userspace TLB entries until jumping out of the
kernel. This was causing MMU weirdness most easily triggered by using a pipe in
the guest, e.g. "dmesg | tail". I believe the problem was that after the guest
kernel changed the PID (part of context switch), the old process's mappings
were still present, and so copy_to_user() on the "return to new process" path
ended up using stale mappings.

Testing with large pages (64K) exposed the problem, probably because with 4K
pages, pressure on the TLB faulted all process A's mappings out before the
guest kernel could insert any for process B.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: use prefetchable mappings for guest memory
Hollis Blanchard [Mon, 10 Nov 2008 20:57:35 +0000 (14:57 -0600)]
KVM: ppc: use prefetchable mappings for guest memory

Bare metal Linux on 440 can "overmap" RAM in the kernel linear map, so that it
can use large (256MB) mappings even if memory isn't a multiple of 256MB. To
prevent the hardware prefetcher from loading from an invalid physical address
through that mapping, it's marked Guarded.

However, KVM must ensure that all guest mappings are backed by real physical
RAM (since a deliberate access through a guarded mapping could still cause a
machine check). Accordingly, we don't need to make our mappings guarded, so
let's allow prefetching as the designers intended.

Curiously this patch didn't affect performance at all on the quick test I
tried, but it's clearly the right thing to do anyways and may improve other
workloads.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: use MMUCR accessor to obtain TID
Hollis Blanchard [Mon, 10 Nov 2008 20:57:34 +0000 (14:57 -0600)]
KVM: ppc: use MMUCR accessor to obtain TID

We have an accessor; might as well use it.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Fix kernel allocated memory slot
Sheng Yang [Tue, 11 Nov 2008 07:30:40 +0000 (15:30 +0800)]
KVM: Fix kernel allocated memory slot

Commit 7fd49de9773fdcb7b75e823b21c1c5dc1e218c14 "KVM: ensure that memslot
userspace addresses are page-aligned" broke kernel-space allocated memory
slots, since their userspace_addr is invalid.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Remove some macro definitions in asm-offsets.c.
Xiantao Zhang [Fri, 24 Oct 2008 03:47:57 +0000 (11:47 +0800)]
KVM: ia64: Remove some macro definitions in asm-offsets.c.

Use the kernel's corresponding macros instead.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: fix Kconfig constraints
Hollis Blanchard [Fri, 7 Nov 2008 19:15:13 +0000 (13:15 -0600)]
KVM: ppc: fix Kconfig constraints

Make sure that CONFIG_KVM cannot be selected without processor support
(currently, 440 is the only processor implementation available).

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ensure that memslot userspace addresses are page-aligned
Hollis Blanchard [Fri, 7 Nov 2008 19:32:12 +0000 (13:32 -0600)]
KVM: ensure that memslot userspace addresses are page-aligned

Bad page translation and silent guest failure ensue if the userspace address is
not page-aligned.  I hit this problem using large (host) pages with qemu,
because qemu currently has a hardcoded 4096-byte alignment for guest memory
allocations.
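
A sketch of the check this amounts to in the memslot setup path; the wrapper
function is illustrative, only the alignment test itself is the point:

  #include <linux/errno.h>
  #include <linux/kvm_host.h>

  /* A misaligned userspace base makes gfn -> hva arithmetic silently wrong. */
  static int check_memslot_alignment(const struct kvm_userspace_memory_region *mem)
  {
          if (mem->userspace_addr & (PAGE_SIZE - 1))
                  return -EINVAL;
          return 0;
  }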

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Fix cpuid iteration on multiple leaves per eax
Nitin A Kamble [Wed, 5 Nov 2008 23:56:21 +0000 (15:56 -0800)]
KVM: Fix cpuid iteration on multiple leaves per eax

The code to traverse the cpuid data array list for counting the types of leaves is
currently broken.

This patch fixes two things in it.

 1. Set the first counting entry's flag KVM_CPUID_FLAG_STATE_READ_NEXT. Without
    it the code will never find a valid entry.

 2. The stop condition in the for loop while looking for the next unflagged
    entry is also broken. It needs to stop when it finds one matching entry;
    in the case of a count of 1, that will be the same entry found in this
    iteration.

Signed-Off-By: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Fix cpuid leaf 0xb loop termination
Nitin A Kamble [Wed, 5 Nov 2008 23:37:36 +0000 (15:37 -0800)]
KVM: Fix cpuid leaf 0xb loop termination

For cpuid leaf 0xb, bits 8-15 of the ECX register define the end of the counting
leaf. The previous code was using bits 0-7 for this purpose, which is a bug.
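
A one-function sketch of the corrected field extraction (bit layout per the
description above; the helper name is made up):

  #include <linux/types.h>

  /* CPUID leaf 0xb: ECX bits 15:8 hold the level type; a value of 0 means
   * "invalid level", i.e. enumeration of further sub-leaves should stop. */
  static inline int cpuid_0xb_level_valid(u32 ecx)
  {
          return ((ecx >> 8) & 0xff) != 0;
  }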

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: improve trap emulation
Hollis Blanchard [Wed, 5 Nov 2008 15:36:24 +0000 (09:36 -0600)]
KVM: ppc: improve trap emulation

Set ESR[PTR] when emulating a guest trap. This allows Linux guests to
properly handle WARN_ON() (i.e. detect that it's a non-fatal trap).

Also remove debugging printk in trap emulation.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: optimize irq delivery path
Hollis Blanchard [Wed, 5 Nov 2008 15:36:23 +0000 (09:36 -0600)]
KVM: ppc: optimize irq delivery path

In kvmppc_deliver_interrupt there is just one case left in the switch, and it is a
rare one (less than 8%) when looking at the exit numbers. Therefore we can
at least drop the switch/case in favour of an if. I inserted an unlikely too, but
that's open for discussion.

In kvmppc_can_deliver_interrupt all frequent cases are in the default case.
I know compilers are smart, but we can make it easier for them: writing
down all options and removing the default case, combined with the fact that
the values are constants 0..15, should allow the compiler to generate a simple
jump table.
Modifying kvmppc_can_deliver_interrupt pointed me to the fact that gcc seems
to be unable to reduce priority_exception[x] to a build time constant.
Therefore I changed the usage of the translation arrays in the interrupt
delivery path completely. It is now using priority without translation to irq
on the full irq delivery path.
To be able to do that ivpr regs are stored by their priority now.

Additionally the decision made in kvmppc_can_deliver_interrupt is already
sufficient to get the value of interrupt_msr_mask[x]. Therefore we can replace
the 16x4byte array used here with a single 4byte variable (there might still be one
cache miss, but the chance of finding this in cache should be better than that of
finding the right entry of the whole array).

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: optimize find first bit
Hollis Blanchard [Wed, 5 Nov 2008 15:36:22 +0000 (09:36 -0600)]
KVM: ppc: optimize find first bit

Since we use a unsigned long here anyway we can use the optimized __ffs.

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: optimize kvm stat handling
Hollis Blanchard [Wed, 5 Nov 2008 15:36:21 +0000 (09:36 -0600)]
KVM: ppc: optimize kvm stat handling

Currently we use an unnecessary if&switch to detect some cases.
To be honest we don't need the light_exits counter anyway, because we can
calculate it from the others. Sum_exits can also be calculated, so we can
remove that too.
MMIO, DCR and INTR can be counted in other places without these
additional control structures (the INTR case was never hit anyway).

The handling of BOOKE_INTERRUPT_EXTERNAL/BOOKE_INTERRUPT_DECREMENTER is
similar, but we can avoid the additional if when copying 3 lines of code.
I thought about a goto there to prevent duplicate lines, but rewriting three
lines should be better style than a goto across switch/case statements (it's
also not enough code to justify a new inline function).

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: fix set regs to take care of msr change
Hollis Blanchard [Wed, 5 Nov 2008 15:36:20 +0000 (09:36 -0600)]
KVM: ppc: fix set regs to take care of msr change

When changing some msr bits e.g. problem state we need to take special
care of that. We call the function in our mtmsr emulation (not needed for
wrtee[i]), but we don't call kvmppc_set_msr if we change msr via set_regs
ioctl.
It's a corner case we have never hit so far, but I assume it should be
kvmppc_set_msr in our arch set_regs function (I found it because it is also
a corner case when using pv support, which would miss the update otherwise).

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: adjust vcpu types to support 64-bit cores
Hollis Blanchard [Wed, 5 Nov 2008 15:36:19 +0000 (09:36 -0600)]
KVM: ppc: adjust vcpu types to support 64-bit cores

However, some of these fields could be split into separate per-core structures
in the future.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: create struct kvm_vcpu_44x and introduce container_of() accessor
Hollis Blanchard [Wed, 5 Nov 2008 15:36:18 +0000 (09:36 -0600)]
KVM: ppc: create struct kvm_vcpu_44x and introduce container_of() accessor

This patch doesn't yet move all 44x-specific data into the new structure, but
is the first step down that path. In the future we may also want to create a
struct kvm_vcpu_booke.

Based on patch from Liu Yu <yu.liu@freescale.com>.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: Move the last bits of 44x code out of booke.c
Hollis Blanchard [Wed, 5 Nov 2008 15:36:17 +0000 (09:36 -0600)]
KVM: ppc: Move the last bits of 44x code out of booke.c

Needed to port to other Book E processors.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: refactor instruction emulation into generic and core-specific pieces
Hollis Blanchard [Wed, 5 Nov 2008 15:36:16 +0000 (09:36 -0600)]
KVM: ppc: refactor instruction emulation into generic and core-specific pieces

Cores provide 3 emulation hooks, implemented for example in the new
4xx_emulate.c:
kvmppc_core_emulate_op
kvmppc_core_emulate_mtspr
kvmppc_core_emulate_mfspr

Strictly speaking the last two aren't necessary, but provide for more
informative error reporting ("unknown SPR").

Long term I'd like to have instruction decoding autogenerated from tables of
opcodes, and that way we could aggregate universal, Book E, and core-specific
instructions more easily and without redundant switch statements.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoppc: Create disassemble.h to extract instruction fields
Hollis Blanchard [Wed, 5 Nov 2008 15:36:15 +0000 (09:36 -0600)]
ppc: Create disassemble.h to extract instruction fields

This is used in a couple places in KVM, but isn't KVM-specific.

However, this patch doesn't modify other in-kernel emulation code:
- xmon uses a direct copy of ppc_opc.c from binutils
- emulate_instruction() doesn't need it because it can use a series
  of mask tests.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: Refactor powerpc.c to relocate 440-specific code
Hollis Blanchard [Wed, 5 Nov 2008 15:36:14 +0000 (09:36 -0600)]
KVM: ppc: Refactor powerpc.c to relocate 440-specific code

This introduces a set of core-provided hooks. For 440, some of these are
implemented by booke.c, with the rest in (the new) 44x.c.

Note that these hooks are link-time, not run-time. Since it is not possible to
build a single kernel for both e500 and 440 (for example), using function
pointers would only add overhead.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: combine booke_guest.c and booke_host.c
Hollis Blanchard [Wed, 5 Nov 2008 15:36:13 +0000 (09:36 -0600)]
KVM: ppc: combine booke_guest.c and booke_host.c

The division was somewhat artificial and cumbersome, and had no functional
benefit anyway: we can only run guests built for the real host processor.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: Rename "struct tlbe" to "struct kvmppc_44x_tlbe"
Hollis Blanchard [Wed, 5 Nov 2008 15:36:12 +0000 (09:36 -0600)]
KVM: ppc: Rename "struct tlbe" to "struct kvmppc_44x_tlbe"

This will ease ports to other cores.

Also remove unused "struct kvm_tlb" while we're at it.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ppc: Move 440-specific TLB code into 44x_tlb.c
Hollis Blanchard [Wed, 5 Nov 2008 15:36:11 +0000 (09:36 -0600)]
KVM: ppc: Move 440-specific TLB code into 44x_tlb.c

This will make it easier to provide implementations for other cores.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: MMU: Fix aliased gfns treated as unaliased
Izik Eidus [Fri, 3 Oct 2008 14:40:32 +0000 (17:40 +0300)]
KVM: MMU: Fix aliased gfns treated as unaliased

Some areas of kvm x86 mmu are using gfn offset inside a slot without
unaliasing the gfn first.  This patch makes sure that the gfn is
unaliased, and adds gfn_to_memslot_unaliased() to avoid recalculating
the unaliased gfn when we already have it.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Enable Function Level Reset for assigned device
Sheng Yang [Fri, 31 Oct 2008 04:37:41 +0000 (12:37 +0800)]
KVM: Enable Function Level Reset for assigned device

Ideally, every assigned device should be in a clean state before and after
assignment, so that the former state of the device won't affect later use.
Some devices provide a mechanism named Function Level Reset (FLR), which is
defined in the PCI/PCIe specification. We should execute it before and after
device assignment.

(But sadly, the feature is new, and most devices on the market today don't
support it. We are considering using D0/D3hot transitions to emulate it later,
but that is not as elegant and reliable as FLR itself.)

[Update: Reminded by Xiantao, execute FLR after we ensure that the device can
be assigned to the guest.]

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Remove lock held by halted vcpu
Xiantao Zhang [Thu, 23 Oct 2008 07:03:38 +0000 (15:03 +0800)]
KVM: ia64: Remove lock held by halted vcpu

Remove the lock protection from the kvm halt logic; otherwise,
once other vcpus want to acquire the lock, they have to
wait until all vcpus are woken up from halt.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: ia64: Re-organize data structure of guests' data area
Xiantao Zhang [Thu, 23 Oct 2008 06:56:44 +0000 (14:56 +0800)]
KVM: ia64: Re-organize data structure of guests' data area

1. Increase the size of data area to 64M
2. Support more vcpus and memory, 128 vcpus and 256G memory are supported
   for guests.
3. Add the boundary check for memory and vcpu allocation.

With this patch, the kvm guest's data area looks as follows:
  *
  *            +----------------------+  ------- KVM_VM_DATA_SIZE
  *            |     vcpu[n]'s data   |   |     ___________________KVM_STK_OFFSET
  *            |                      |   |    /                   |
  *            |        ..........    |   |   /vcpu's struct&stack |
  *            |        ..........    |   |  /---------------------|---- 0
  *            |     vcpu[5]'s data   |   | /       vpd            |
  *            |     vcpu[4]'s data   |   |/-----------------------|
  *            |     vcpu[3]'s data   |   /         vtlb           |
  *            |     vcpu[2]'s data   |  /|------------------------|
  *            |     vcpu[1]'s data   |/  |         vhpt           |
  *            |     vcpu[0]'s data   |____________________________|
  *            +----------------------+   |
  *            |    memory dirty log  |   |
  *            +----------------------+   |
  *            |    vm's data struct  |   |
  *            +----------------------+   |
  *            |                      |   |
  *            |                      |   |
  *            |                      |   |
  *            |                      |   |
  *            |                      |   |
  *            |                      |   |
  *            |                      |   |
  *            |    vm's p2m table    |   |
  *            |                      |   |
  *            |                      |   |
  *            |                      |   |
  * vm's data->|                      |   |
  *            +----------------------+ ------- 0
  * To support large memory, needs to increase the size of p2m.
  * To support more vcpus, needs to ensure it has enough space to
  * hold vcpus' data.
  */

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: Handle mmio emulation when guest state is invalid
Guillaume Thouvenin [Wed, 29 Oct 2008 08:39:42 +0000 (09:39 +0100)]
KVM: VMX: Handle mmio emulation when guest state is invalid

If emulate_invalid_guest_state is enabled, the emulator is called
when guest state is invalid.  Until now, we reported an mmio failure
when emulate_instruction() returned EMULATE_DO_MMIO.  This patch adds
the case where emulate_instruction() failed and an MMIO emulation
is needed.

Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: allow emulator to adjust rip for emulated pio instructions
Guillaume Thouvenin [Tue, 28 Oct 2008 09:51:30 +0000 (10:51 +0100)]
KVM: allow emulator to adjust rip for emulated pio instructions

If we call the emulator we shouldn't call skip_emulated_instruction()
in the first place, since the emulator already computes the next rip
for us. Thus we move ->skip_emulated_instruction() out of
kvm_emulate_pio() and into handle_io() (and the svm equivalent). We
also replaced "return 0" by "break" in the "do_io:" case because now
the shadow register state needs to be committed. Otherwise eip will never
be updated.

Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: SVM: Set the 'busy' flag of the TR selector
Amit Shah [Mon, 27 Oct 2008 09:04:18 +0000 (09:04 +0000)]
KVM: SVM: Set the 'busy' flag of the TR selector

The busy flag of the TR selector is not set by the hardware. This breaks
migration from amd hosts to intel hosts.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: SVM: Set the 'g' bit of the cs selector for cross-vendor migration
Amit Shah [Mon, 27 Oct 2008 09:04:17 +0000 (09:04 +0000)]
KVM: SVM: Set the 'g' bit of the cs selector for cross-vendor migration

The hardware does not set the 'g' bit of the cs selector and this breaks
migration from amd hosts to intel hosts. Set this bit if the segment
limit is beyond 1 MB.
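
Roughly, the fix-up amounts to something like the sketch below (the wrapper
function and its placement are illustrative):

  #include <linux/kvm_host.h>

  /* SVM does not report CS granularity; infer it: a limit above 1 MB can only
   * be expressed with 4K-page granularity, so 'g' must have been set. */
  static void fixup_cs_granularity(struct kvm_segment *var)
  {
          if (var->limit > 0xfffff)
                  var->g = 1;
  }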

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86: Fix typo in function name
Amit Shah [Wed, 22 Oct 2008 11:09:47 +0000 (16:39 +0530)]
KVM: x86: Fix typo in function name

get_segment_descritptor_dtable() contains an obvious typo.

Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: IRQ ACK notifier should be used with in-kernel irqchip
Sheng Yang [Mon, 20 Oct 2008 08:07:10 +0000 (16:07 +0800)]
KVM: IRQ ACK notifier should be used with in-kernel irqchip

Also remove unnecessary parameter of unregister irq ack notifier.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86: Optimize NMI watchdog delivery
Jan Kiszka [Mon, 20 Oct 2008 08:20:03 +0000 (10:20 +0200)]
KVM: x86: Optimize NMI watchdog delivery

As suggested by Avi, this patch introduces a counter of VCPUs that have
LVT0 set to NMI mode. Only if the counter > 0, we push the PIT ticks via
all LAPIC LVT0 lines to enable NMI watchdog support.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Acked-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86: Fix and refactor NMI watchdog emulation
Jan Kiszka [Mon, 20 Oct 2008 08:20:02 +0000 (10:20 +0200)]
KVM: x86: Fix and refactor NMI watchdog emulation

This patch refactors the NMI watchdog delivery patch, consolidating
tests and providing a proper API for delivering watchdog events.

An included micro-optimization is to check only for apic_hw_enabled in
kvm_apic_local_deliver (the test for LVT mask is covering the
soft-disabled case already).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Acked-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: x86 emulator: Add decode entries for 0x04 and 0x05 opcodes (add acc, imm)
Guillaume Thouvenin [Mon, 20 Oct 2008 11:11:58 +0000 (13:11 +0200)]
KVM: x86 emulator: Add decode entries for 0x04 and 0x05 opcodes (add acc, imm)

Add decode entries for 0x04 and 0x05 (ADD) opcodes, execution is already
implemented.

Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: Move private memory slot position
Sheng Yang [Thu, 16 Oct 2008 09:30:58 +0000 (17:30 +0800)]
KVM: VMX: Move private memory slot position

PCI device assignment maps guest MMIO spaces as separate slots, so it is
possible that a device with more than 2 MMIO spaces overwrites the current
private memslots.

The patch moves the private memory slots to the top of the userspace-visible
memory slots.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: MMU: Extend kvm_mmu_page->slot_bitmap size
Sheng Yang [Thu, 16 Oct 2008 09:30:57 +0000 (17:30 +0800)]
KVM: MMU: Extend kvm_mmu_page->slot_bitmap size

Otherwise set_bit() for a private memory slot (above KVM_MEMORY_SLOTS) would
corrupt memory on a 32-bit host.
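
The fix boils down to sizing the bitmap for every addressable slot, roughly as
sketched here (the KVM_PRIVATE_MEM_SLOTS constant is assumed from the companion
"Move private memory slot position" patch):

  #include <linux/kvm_host.h>

  struct kvm_mmu_page_sketch {
          /* ...other kvm_mmu_page fields elided in this sketch... */
          DECLARE_BITMAP(slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
  };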

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Clean up kvm_x86_emulate.h
Sheng Yang [Tue, 14 Oct 2008 07:59:10 +0000 (15:59 +0800)]
KVM: Clean up kvm_x86_emulate.h

Remove one leftover, no-longer-accurate comment about the removed CR2 field.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Enable MTRR for EPT
Sheng Yang [Thu, 9 Oct 2008 08:01:57 +0000 (16:01 +0800)]
KVM: Enable MTRR for EPT

The effective memory type under EPT is a mixture of MSR_IA32_CR_PAT and the
memory type field of the EPT entry.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Add local get_mtrr_type() to support MTRR
Sheng Yang [Thu, 9 Oct 2008 08:01:56 +0000 (16:01 +0800)]
KVM: Add local get_mtrr_type() to support MTRR

For EPT memory type support.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: Add PAT support for EPT
Sheng Yang [Thu, 9 Oct 2008 08:01:55 +0000 (16:01 +0800)]
KVM: VMX: Add PAT support for EPT

GUEST_PAT support is a new feature introduced by the Intel Core i7 architecture.
With this, the CPU saves/loads guest and host PAT automatically, since the EPT
memory type in the guest depends on MSR_IA32_CR_PAT.

Also add save/restore for MSR_IA32_CR_PAT.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: Improve MTRR structure
Sheng Yang [Thu, 9 Oct 2008 08:01:54 +0000 (16:01 +0800)]
KVM: Improve MTRR structure

Also reset the mmu context when the MTRRs are set.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: Export some definition of MTRR
Sheng Yang [Thu, 9 Oct 2008 08:01:53 +0000 (16:01 +0800)]
x86: Export some definition of MTRR

So that KVM can reuse the type definitions; it needs them to support shadow MTRRs.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agox86: Rename mtrr_state struct and macro names
Sheng Yang [Thu, 9 Oct 2008 08:01:52 +0000 (16:01 +0800)]
x86: Rename mtrr_state struct and macro names

Prepare for exporting them.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: call kvm_arch_vcpu_reset() instead of the kvm_x86_ops callback
Gleb Natapov [Tue, 7 Oct 2008 13:42:33 +0000 (15:42 +0200)]
KVM: call kvm_arch_vcpu_reset() instead of the kvm_x86_ops callback

Call kvm_arch_vcpu_reset() instead of directly using the arch callback.
The function does additional things.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
15 years agoKVM: VMX: work around lacking VNMI support
Jan Kiszka [Fri, 26 Sep 2008 07:30:57 +0000 (09:30 +0200)]
KVM: VMX: work around lacking VNMI support

Older VMX supporting CPUs do not provide the "Virtual NMI" feature for
tracking the NMI-blocked state after injecting such events. For now
KVM is unable to inject NMIs on those CPUs.

Derived from Sheng Yang's suggestion to use the IRQ window notification
for detecting the end of NMI handlers, this patch implements virtual
NMI support without impact on the host's ability to receive real NMIs.
The downside is that the given approach requires some heuristics that
can cause NMI nesting in very rare corner cases.

The approach works as follows (a rough code sketch follows the list):
 - inject NMI and set a software-based NMI-blocked flag
 - arm the IRQ window start notification whenever an NMI window is
   requested
 - if the guest exits due to an opening IRQ window, clear the emulated
   NMI-blocked flag
 - if the guest net execution time with NMI-blocked but without an IRQ
   window exceeds 1 second, force NMI-blocked reset and inject anyway
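
A rough sketch of that state machine with illustrative names (not the actual
vmx.c fields or helpers):

  #include <linux/types.h>

  struct vnmi_sketch {
          bool soft_nmi_blocked;  /* emulated "blocked by NMI" state */
          s64  blocked_since;     /* guest time when blocking began */
  };

  /* step 1: inject the NMI and start the software-blocked period */
  static void nmi_injected(struct vnmi_sketch *s, s64 now)
  {
          s->soft_nmi_blocked = true;
          s->blocked_since = now;
  }

  /* step 3: an open IRQ window implies the guest left its NMI handler */
  static void irq_window_opened(struct vnmi_sketch *s)
  {
          s->soft_nmi_blocked = false;
  }

  /* step 4: after ~1s of guest time with no IRQ window, give up and allow
   * another injection even though nesting is then theoretically possible */
  static bool can_inject_nmi(struct vnmi_sketch *s, s64 now, s64 limit)
  {
          if (s->soft_nmi_blocked && now - s->blocked_since > limit)
                  s->soft_nmi_blocked = false;
          return !s->soft_nmi_blocked;
  }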

This approach covers most practical scenarios:
 - succeeding NMIs are separated by at least one open IRQ window
 - the guest may spin with IRQs disabled (e.g. due to a bug), but
   leaving the NMI handler takes much less time than one second
 - the guest does not rely on strict ordering or timing of NMIs
   (would be problematic in virtualized environments anyway)

Successfully tested with the 'nmi n' monitor command, the kgdbts
testsuite on smp guests (additional patches required to add debug
register support to kvm) + the kernel's nmi_watchdog=1, and a Siemens-
specific board emulation (+ guest) that comes with its own NMI
watchdog mechanism.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>