Configuring the Container (zun) service (optional)
The Container service (zun) handles the creation of containers within an
OpenStack environment. Many of the default options used by OpenStack-Ansible
are found in defaults/main.yml within the zun role.
Availability zones
Deployers with multiple availability zones can set the
zun_default_schedule_zone Ansible variable to specify an availability zone
for new requests. This is useful in environments with different types
of hypervisors, where builds are sent to certain hardware types based on
their resource requirements.
For example, suppose you have servers running in two racks that do not share a PDU. These two racks can be grouped into two availability zones. When one rack loses power, the other continues to work. By spreading your containers across the two racks (availability zones), you improve the availability of your service.
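As a minimal sketch, a deployer could set the default zone in /etc/openstack_deploy/user_variables.yml. The zone name rack1 here is a hypothetical example:
zun_default_schedule_zone: rack1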
Block device tuning for Ceph (RBD)
Enabling Ceph and defining zun_libvirt_images_rbd_pool changes two
libvirt configurations by default:
hw_disk_discard: unmap
disk_cachemodes: network=writeback
Setting hw_disk_discard to unmap in libvirt enables
discard (sometimes called TRIM) support for the underlying block device. This
allows reclaiming of unused blocks on the underlying disks.
Setting disk_cachemodes to network=writeback allows data to be written
into a cache on each change, but those changes are flushed to disk at a regular
interval. This can increase write performance on Ceph block devices.
You have the option to customize these settings using two Ansible variables (defaults shown here):
zun_libvirt_hw_disk_discard: 'unmap'
zun_libvirt_disk_cachemodes: 'network=writeback'
You can disable discard by setting zun_libvirt_hw_disk_discard to
ignore. Set zun_libvirt_disk_cachemodes to an empty string to disable
network=writeback.
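For example, the following sketch turns both behaviors off:
zun_libvirt_hw_disk_discard: 'ignore'
zun_libvirt_disk_cachemodes: ''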
The following minimal example configures zun to use the
ephemeral-vms Ceph pool. It uses cephx authentication and requires
an existing cinder account for the ephemeral-vms pool:
zun_libvirt_images_rbd_pool: ephemeral-vms
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
If you have a different Ceph username for the pool, specify it with:
cinder_ceph_client: <ceph-username>
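Putting the pieces together, a complete override in /etc/openstack_deploy/user_variables.yml might look like the following sketch, where the client name ephemeral-user is a hypothetical example:
zun_libvirt_images_rbd_pool: ephemeral-vms
ceph_mons:
  - 172.29.244.151
  - 172.29.244.152
  - 172.29.244.153
cinder_ceph_client: ephemeral-user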
The Ceph documentation for OpenStack has additional information about these settings.
Config drive
By default, OpenStack-Ansible does not configure zun to force config drives
to be provisioned with every instance that zun builds. The metadata service
provides configuration information that is used by cloud-init inside the
instance. Config drives are only necessary when an instance does not have
cloud-init installed or does not have support for handling metadata.
A deployer can set an Ansible variable to force config drives to be deployed with every instance:
zun_force_config_drive: True
Certain formats of config drives can prevent instances from migrating properly
between hypervisors. If you need forced config drives and the ability
to migrate instances, set the config drive format to vfat using
the zun_zun_conf_overrides variable:
zun_zun_conf_overrides:
  DEFAULT:
    config_drive_format: vfat
    force_config_drive: True
Libvirtd connectivity and authentication
By default, OpenStack-Ansible configures the libvirt daemon in the following way:
TLS connections are enabled
TCP plaintext connections are disabled
Authentication over TCP connections uses SASL
You can customize these settings using the following Ansible variables:
# Enable libvirtd's TLS listener
zun_libvirtd_listen_tls: 1
# Disable libvirtd's plaintext TCP listener
zun_libvirtd_listen_tcp: 0
# Use SASL for authentication
zun_libvirtd_auth_tcp: sasl
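As a sketch, a deployer who needs to allow plaintext TCP connections on an isolated management network could override the defaults in /etc/openstack_deploy/user_variables.yml while keeping SASL authentication. The values here are illustrative, not a recommendation:
# Enable libvirtd's plaintext TCP listener
zun_libvirtd_listen_tcp: 1
# Keep SASL for authentication on TCP connections
zun_libvirtd_auth_tcp: sasl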
Multipath
Zun supports multipath for iSCSI-based storage. Enable multipath support in zun through a configuration override:
zun_zun_conf_overrides:
  libvirt:
    iscsi_use_multipath: true