Compute
The OpenStack Compute service allows you to control an Infrastructure-as-a-Service (IaaS) cloud computing platform. It gives you control over instances and networks, and allows you to manage access to the cloud through users and projects.
Compute does not include virtualization software. Instead, it defines drivers that interact with underlying virtualization mechanisms that run on your host operating system, and exposes functionality over a web-based API.
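That web-based API can be driven with any HTTP client or SDK. The following is a minimal sketch, assuming the openstacksdk Python library and a clouds.yaml entry named "mycloud" (both assumptions, not part of this guide), that lists the servers visible to the current project:

```python
# Minimal sketch, assuming openstacksdk is installed and a clouds.yaml entry
# named "mycloud" exists (placeholder name). Lists servers via the Compute API.
import openstack

conn = openstack.connect(cloud="mycloud")

for server in conn.compute.servers():
    print(server.name, server.status)
```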
Overview
To effectively administer compute, you must understand how the different installed nodes interact with each other. Compute can be installed in many different ways using multiple servers, but generally multiple compute nodes control the virtual servers and a cloud controller node contains the remaining Compute services.
The Compute cloud works using a series of daemon processes named nova-* that run persistently on the host machine. These binaries can all run on the same machine or be spread across multiple hosts in a large deployment. The responsibilities of the services and drivers are listed below; a short example of checking which services are running follows the list.
Services
nova-api
Receives HTTP API requests and sends them to the rest of the system. A WSGI app routes and authenticates requests. Supports the OpenStack Compute APIs. A nova.conf configuration file is created when Compute is installed.
Todo
Describe nova-api-metadata, nova-api-os-compute, nova-serialproxy and nova-spicehtml5proxy. nova-console, nova-dhcpbridge and nova-xvpvncproxy are all deprecated for removal, so they can be ignored.
nova-compute
Manages virtual machines. Loads a Service object, and exposes the public methods on ComputeManager through a Remote Procedure Call (RPC).
nova-conductor
Provides database-access support for compute nodes (thereby reducing security risks).
nova-scheduler
Dispatches requests for new virtual machines to the correct node.
nova-novncproxy
Provides a VNC proxy for browsers, allowing VNC consoles to access virtual machines.
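As a quick way to see these daemons in a running deployment, the sketch below (again assuming openstacksdk and the placeholder cloud name "mycloud") lists each registered nova-* service, the host it runs on, and whether it is up:

```python
# Minimal sketch, assuming openstacksdk and a "mycloud" clouds.yaml entry:
# show each nova-* service binary, its host, and its current state.
import openstack

conn = openstack.connect(cloud="mycloud")

for service in conn.compute.services():
    # e.g. nova-compute  compute01  up  enabled
    print(service.binary, service.host, service.state, service.status)
```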
Note
Some services have drivers that change how the service implements its core functionality. For example, the nova-compute service supports drivers that let you choose which hypervisor type it can use.
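To illustrate that driver selection, here is a small sketch, assuming a libvirt/KVM compute node and the default /etc/nova/nova.conf path, that reads the relevant options with Python's standard-library configparser; the values shown are examples only:

```python
# Minimal sketch, assuming a libvirt-based compute node and the default
# nova.conf location. compute_driver and virt_type are standard nova options,
# but the example values are only illustrative.
import configparser

cfg = configparser.ConfigParser()
cfg.read("/etc/nova/nova.conf")

# e.g. compute_driver = libvirt.LibvirtDriver
print(cfg.get("DEFAULT", "compute_driver", fallback="libvirt.LibvirtDriver"))
# e.g. virt_type = kvm (qemu is another common libvirt choice)
print(cfg.get("libvirt", "virt_type", fallback="kvm"))
```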
- Manage volumes
- Manage Flavors
- Compute service node firewall requirements
- Injecting the administrator password
- Manage the cloud
- Logging
- Secure with rootwrap
- Configure live migrations
- Live-migrate instances
- Configure remote console access
- Configure Compute service groups
- Recover from a failed compute node
Advanced configuration
OpenStack clouds run on platforms that differ greatly in the capabilities that they provide. By default, the Compute service seeks to abstract the underlying hardware that it runs on, rather than exposing specifics about the underlying host platforms. This abstraction manifests itself in many ways. For example, rather than exposing the types and topologies of CPUs running on hosts, the service exposes a number of generic CPUs (virtual CPUs, or vCPUs) and allows for overcommitting of these. In a similar manner, rather than exposing the individual types of network devices available on hosts, generic software-powered network ports are provided. These features are designed to allow high resource utilization and allow the service to provide a generic, cost-effective and highly scalable cloud upon which to build applications.
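To make the overcommit point concrete, the short sketch below shows the arithmetic involved; the core count and ratio are invented, and cpu_allocation_ratio is the usual nova option controlling this behaviour:

```python
# Minimal sketch of vCPU overcommit arithmetic. The numbers are examples;
# the ratio corresponds to the cpu_allocation_ratio option in nova.conf.
physical_cores = 32         # cores reported by the hypervisor on one host
cpu_allocation_ratio = 4.0  # example overcommit ratio

# vCPUs the scheduler can hand out on this host before it is considered full.
schedulable_vcpus = int(physical_cores * cpu_allocation_ratio)
print(schedulable_vcpus)    # 128
```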
This abstraction is beneficial for most workloads. However, there are some workloads where determinism and per-instance performance are important, if not vital. In these cases, instances can be expected to deliver near-native performance. The Compute service provides features to improve individual instance performance for these kinds of workloads.
Important
In deployments older than Train, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration is not possible for instances with a NUMA topology when using the libvirt driver unless it is specifically enabled. A NUMA topology may be specified explicitly or can be added implicitly due to the use of CPU pinning or huge pages. Refer to bug #1289064 for more information. As of Train, live migration of instances with a NUMA topology when using the libvirt driver is fully supported.
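For reference, these are the kinds of flavor extra specs that implicitly (or explicitly) give an instance a NUMA topology. The sketch below only prints example key/value pairs; in practice they would be applied to a flavor, for instance with openstack flavor set --property:

```python
# Minimal sketch: flavor extra specs that implicitly (or explicitly) create a
# NUMA topology, as described in the note above. Values are examples only.
numa_related_extra_specs = {
    "hw:cpu_policy": "dedicated",  # CPU pinning implies a NUMA topology
    "hw:mem_page_size": "large",   # huge pages imply a NUMA topology
    "hw:numa_nodes": "1",          # an explicit single-node NUMA topology
}

for key, value in numa_related_extra_specs.items():
    print(f"{key}={value}")
```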
- Attaching physical PCI devices to guests
- CPU topologies
- Real Time
- Huge pages
- Attaching virtual GPU devices to guests
- File-backed memory
- Using ports with resource request
- Attaching virtual persistent memory to guests
- Emulated Trusted Platform Module (vTPM)
- UEFI
- Secure Boot
- AMD SEV (Secure Encrypted Virtualization)
- Managing Resource Providers Using Config Files
- Resource Limits
Additional guides
- Host aggregates
- System architecture
- Availability Zones
- CellsV2 Management
- Config drives
- Configuration
- Evacuate instances
- Image Caching
- Metadata service
- Migrate instances
- Use snapshots to migrate instances
- Networking with neutron
- Manage quotas
- Manage project security
- Security hardening
- Manage Compute services
- Configure SSH between compute nodes
- Troubleshoot Compute
- Orphaned resource allocations
- Rebuild placement DB
- Affinity policy violated with parallel requests
- Compute service logging
- Guru Meditation reports
- Common errors and fixes for Compute
- Credential errors, 401, and 403 forbidden errors
- Live migration permission issues
- Instance errors
- Empty log output for Linux instances
- Reset the state of an instance
- Injection problems
- Cannot find suitable emulator for x86_64
- Failed to attach volume after detaching
- Failed to attach volume, systool is not installed
- Failed to connect volume in FC SAN
- Multipath call failed exit
- Failed to Attach Volume, Missing sg_scan
- Requested microversions are ignored
- Secure live migration with QEMU-native TLS
- Mitigation for MDS (“Microarchitectural Data Sampling”) Security Flaws
- Vendordata
- hw_machine_type - Configuring and updating QEMU instance machine types