KVM is configured as the default hypervisor for Compute.
Note: This document contains several sections about hypervisor selection. If you are reading this document linearly, you do not want to load the KVM module before you install nova-compute.
To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
QED (QEMU Enhanced Disk)
VMware virtual machine disk format (vmdk)
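As an illustration, the qemu-img tool that ships with QEMU can create and convert images in several of these formats; the file names here are placeholders:

$ qemu-img create -f qcow2 example.qcow2 10G
$ qemu-img convert -f raw -O qcow2 example.raw example.qcow2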
This section describes how to enable KVM on your system. For more information, see the following distribution-specific documentation:
Fedora: Getting started with virtualization from the Fedora project wiki.
Ubuntu: KVM/Installation from the Community Ubuntu documentation.
Debian: Virtualization with KVM from the Debian handbook.
Red Hat Enterprise Linux: Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide.
openSUSE: Installing KVM from the openSUSE Virtualization with KVM manual.
SLES: Installing KVM from the SUSE Linux Enterprise Server Virtualization with KVM manual.
To perform these steps, you must be logged in as the root user.

To determine whether the svm or vmx CPU extensions are present, run this command:

# grep -E 'svm|vmx' /proc/cpuinfo
This command generates output if the CPU is hardware-virtualization capable. Even if output is shown, you might still need to enable virtualization in the system BIOS for full support.
If no output appears, consult your system documentation to ensure that your CPU and motherboard support hardware virtualization. Verify that any relevant hardware virtualization options are enabled in the system BIOS.
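If you prefer a single-number check, grep can count the matching lines instead; any value greater than zero indicates that the extensions are present:

# grep -c -E 'svm|vmx' /proc/cpuinfo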
The BIOS for each manufacturer is different. If you must enable virtualization in the BIOS, look for an option containing the words virtualization, VT, VMX, or SVM.

To list the loaded kernel modules and verify that the kvm modules are loaded, run this command:

# lsmod | grep kvm
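For example, on an Intel host with the modules loaded, the output looks similar to the following (module sizes and use counts vary by kernel version):

kvm_intel             200704  0
kvm                   598016  1 kvm_intel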
If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.

If the output does not show that the kvm module is loaded, run this command to load it:

# modprobe -a kvm
Run the command for your CPU. For Intel, run this command:
# modprobe -a kvm-intel
For AMD, run this command:
# modprobe -a kvm-amd
Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.
If the kernel modules do not load automatically, use the procedures listed in these subsections.
If the checks indicate that required hardware virtualization support or kernel modules are disabled or unavailable, you must either enable this support on the system or find a system with this support.
Note: Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command did not produce output, reboot your machine, enter the system BIOS, and enable the VT option.
If KVM acceleration is not supported, configure Compute to use a different hypervisor, such as QEMU or Xen.
These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation.
If your compute host is Intel-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
Add these lines to the /etc/modules file so that these modules load on reboot:

kvm
kvm-intel
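For an AMD-based compute host, the analogous /etc/modules entries would be:

kvm
kvm-amd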
The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:
To maximize performance of virtual machines by exposing new host CPU features to the guest
To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults
In libvirt, the CPU is specified by providing a base CPU model name (which is a
shorthand for a set of feature flags), a set of additional feature flags, and the
topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard
CPU model names. These models are defined in the
/usr/share/libvirt/cpu_map.xml
file. Check this file to
determine which models are supported by your local installation.
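On reasonably recent libvirt versions, you can also list the model names known for a given architecture with virsh:

# virsh cpu-models x86_64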
Two Compute configuration options in the [libvirt] group of nova.conf define which type of CPU model is exposed to the hypervisor when using KVM: cpu_mode and cpu_model.

The cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.
If your nova.conf file contains cpu_mode=host-model, libvirt identifies the CPU model in the /usr/share/libvirt/cpu_map.xml file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance, and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.
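For example:

[libvirt]
cpu_mode = host-model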
If your nova.conf file contains cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. Unlike host-model, which matches only feature flags, host-passthrough matches every last detail of the host CPU. This gives the best possible performance, and can be important to some applications that check low-level CPU details, but it comes at a cost with respect to migration: the guest can be migrated only to an exactly matching host CPU.
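For example:

[libvirt]
cpu_mode = host-passthrough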
If your nova.conf file contains cpu_mode=custom, you can explicitly specify one of the supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:

[libvirt]
cpu_mode = custom
cpu_model = Nehalem
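To verify the effect, you can boot an instance with this configuration and inspect the CPU model reported inside the guest; the exact model string depends on your QEMU version:

$ grep 'model name' /proc/cpuinfo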
Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol.
To enable this feature, you must set hw_qemu_guest_agent=yes as a metadata parameter on the image you wish to use to create guest-agent-capable instances from. You can explicitly disable the feature by setting hw_qemu_guest_agent=no in the image metadata.
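For example, assuming the OpenStack client is installed and IMAGE_ID is a placeholder for the ID of your image, one way to set the property is:

$ openstack image set --property hw_qemu_guest_agent=yes IMAGE_ID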
The VHostNet kernel module improves network performance. To load the kernel module, run the following command as root:
# modprobe vhost_net
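To load the module on reboot as well, you can add vhost_net to the /etc/modules file, as with the kvm modules above:

vhost_net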
Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:

libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions might not be correct. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:

# ls -l /dev/kvm

If it is not set to kvm, run:

# udevadm trigger
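On a correctly configured host, the ls -l output shows the kvm group on the device node, similar to the following illustrative line (the timestamp varies by system):

crw-rw---- 1 root kvm 10, 232 Apr  1 12:00 /dev/kvm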