Configuration Options¶
The following is an overview of all available configuration options in Nova.
For a sample configuration file, refer to Sample Configuration File.
DEFAULT¶
- rpc_conn_pool_size¶
- Type:
integer
- Default:
30
- Minimum Value:
1
Size of RPC connection pool.
- Deprecated Variations:
Group: DEFAULT, Name: rpc_conn_pool_size
- conn_pool_min_size¶
- Type:
integer
- Default:
2
The pool size limit for the connection expiration policy.
- conn_pool_ttl¶
- Type:
integer
- Default:
1200
The time-to-live, in seconds, of idle connections in the pool.
- executor_thread_pool_size¶
- Type:
integer
- Default:
64
Size of executor thread pool when executor is threading or eventlet.
- Deprecated Variations:
Group: DEFAULT, Name: rpc_thread_pool_size
- rpc_response_timeout¶
- Type:
integer
- Default:
60
Seconds to wait for a response from a call.
- transport_url¶
- Type:
string
- Default:
rabbit://
The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is:
driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
Example: rabbit://rabbitmq:password@127.0.0.1:5672//
For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
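For instance, a clustered RabbitMQ deployment could be expressed as follows (the hostnames, user, and password here are made-up placeholders):

```ini
[DEFAULT]
# Two RabbitMQ nodes with the same credentials and an explicit virtual host.
transport_url = rabbit://nova:secret@rabbit1:5672,nova:secret@rabbit2:5672/nova_vhost
```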
- control_exchange¶
- Type:
string
- Default:
nova
The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
- rpc_ping_enabled¶
- Type:
boolean
- Default:
False
Add an endpoint to answer ping calls. The endpoint is named oslo_rpc_server_ping.
- debug¶
- Type:
boolean
- Default:
False
- Mutable:
This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
- log_config_append¶
- Type:
string
- Default:
<None>
- Mutable:
This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format).
- Deprecated Variations:
Group: DEFAULT, Name: log-config
Group: DEFAULT, Name: log_config
- log_date_format¶
- Type:
string
- Default:
%Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. This option is ignored if log_config_append is set.
- log_file¶
- Type:
string
- Default:
<None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.
- Deprecated Variations:
Group: DEFAULT, Name: logfile
- log_dir¶
- Type:
string
- Default:
<None>
(Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.
- Deprecated Variations:
Group: DEFAULT, Name: logdir
- watch_log_file¶
- Type:
boolean
- Default:
False
Uses a logging handler designed to watch the file system. When the log file is moved or removed, this handler will instantaneously open a new log file at the specified path. This makes sense only if the log_file option is specified and the platform is Linux. This option is ignored if log_config_append is set.
- use_syslog¶
- Type:
boolean
- Default:
False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
- use_journal¶
- Type:
boolean
- Default:
False
Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol, which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set.
- syslog_log_facility¶
- Type:
string
- Default:
LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
- use_json¶
- Type:
boolean
- Default:
False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
- use_stderr¶
- Type:
boolean
- Default:
False
Log output to standard error. This option is ignored if log_config_append is set.
- use_eventlog¶
- Type:
boolean
- Default:
False
Log output to Windows Event Log.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
- Reason:
Windows support is no longer maintained.
- log_rotate_interval¶
- Type:
integer
- Default:
1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to “interval”.
- log_rotate_interval_type¶
- Type:
string
- Default:
days
- Valid Values:
Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation.
- max_logfile_count¶
- Type:
integer
- Default:
30
Maximum number of rotated log files.
- max_logfile_size_mb¶
- Type:
integer
- Default:
200
Log file maximum size in MB. This option is ignored if “log_rotation_type” is not set to “size”.
- log_rotation_type¶
- Type:
string
- Default:
none
- Valid Values:
interval, size, none
Log rotation type.
Possible values
- interval
Rotate logs at predefined time intervals.
- size
Rotate logs once they reach a predefined size.
- none
Do not rotate log files.
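The rotation options above work together; a nova.conf fragment enabling interval-based rotation might look like this (the values are illustrative):

```ini
[DEFAULT]
# Rotate the log every 12 hours, keeping at most 10 rotated files.
log_rotation_type = interval
log_rotate_interval = 12
log_rotate_interval_type = Hours
max_logfile_count = 10
```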
- logging_context_format_string¶
- Type:
string
- Default:
%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
- logging_default_format_string¶
- Type:
string
- Default:
%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter
- logging_debug_format_suffix¶
- Type:
string
- Default:
%(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter
- logging_exception_prefix¶
- Type:
string
- Default:
%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter
- logging_user_identity_format¶
- Type:
string
- Default:
%(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter
- default_log_levels¶
- Type:
list
- Default:
['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
- publish_errors¶
- Type:
boolean
- Default:
False
Enables or disables publication of error events.
- instance_format¶
- Type:
string
- Default:
"[instance: %(uuid)s] "
The format for an instance that is passed with the log message.
- instance_uuid_format¶
- Type:
string
- Default:
"[instance: %(uuid)s] "
The format for an instance UUID that is passed with the log message.
- rate_limit_interval¶
- Type:
integer
- Default:
0
Interval, number of seconds, of log rate limiting.
- rate_limit_burst¶
- Type:
integer
- Default:
0
Maximum number of logged messages per rate_limit_interval.
- rate_limit_except_level¶
- Type:
string
- Default:
CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered.
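The three rate-limiting options above combine as in this illustrative fragment:

```ini
[DEFAULT]
# At most 100 log messages per 30-second window; ERROR and CRITICAL
# records are exempt from the limit.
rate_limit_interval = 30
rate_limit_burst = 100
rate_limit_except_level = ERROR
```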
- fatal_deprecations¶
- Type:
boolean
- Default:
False
Enables or disables fatal status of deprecations.
- run_external_periodic_tasks¶
- Type:
boolean
- Default:
True
Some periodic tasks can be run in a separate process. Should we run them here?
- backdoor_port¶
- Type:
string
- Default:
<None>
Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service’s log file.
- backdoor_socket¶
- Type:
string
- Default:
<None>
Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with ‘backdoor_port’ in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process.
- log_options¶
- Type:
boolean
- Default:
True
Enables or disables logging values of all registered options when starting a service (at DEBUG level).
- graceful_shutdown_timeout¶
- Type:
integer
- Default:
60
Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait.
- internal_service_availability_zone¶
- Type:
string
- Default:
internal
Availability zone for internal services.
This option determines the availability zone for the various internal nova services, such as ‘nova-scheduler’, ‘nova-conductor’, etc.
Possible values:
Any string representing an existing availability zone name.
- default_availability_zone¶
- Type:
string
- Default:
nova
Default availability zone for compute services.
This option determines the default availability zone for ‘nova-compute’ services, which will be used if the service(s) do not belong to aggregates with availability zone metadata.
Possible values:
Any string representing an existing availability zone name.
- default_schedule_zone¶
- Type:
string
- Default:
<None>
Default availability zone for instances.
This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime.
Possible values:
Any string representing an existing availability zone name.
None, which means that the instance can move from one availability zone to another during its lifetime if it is moved from one compute node to another.
Related options:
[cinder]/cross_az_attach
- password_length¶
- Type:
integer
- Default:
12
- Minimum Value:
0
Length of generated instance admin passwords.
- instance_usage_audit_period¶
- Type:
string
- Default:
month
Time period to generate instance usages for. An optional offset can be defined for a given period by appending an @ character followed by a number defining the offset.
Possible values:
A period, for example: hour, day, month or year
A period with an offset, for example: month@15 will result in monthly audits starting on the 15th day of the month.
- use_rootwrap_daemon¶
- Type:
boolean
- Default:
False
Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes.
- rootwrap_config¶
- Type:
string
- Default:
/etc/nova/rootwrap.conf
Path to the rootwrap configuration file.
Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry.
- tempdir¶
- Type:
string
- Default:
<None>
Explicitly specify the temporary working directory.
- default_green_pool_size¶
- Type:
integer
- Default:
1000
- Minimum Value:
100
The total number of coroutines that can be run concurrently via nova’s default greenthread pool.
- compute_driver¶
- Type:
string
- Default:
<None>
Defines which driver to use for controlling virtualization.
Possible values:
libvirt.LibvirtDriver
fake.FakeDriver
ironic.IronicDriver
vmwareapi.VMwareVCDriver
zvm.ZVMDriver
- allow_resize_to_same_host¶
- Type:
boolean
- Default:
False
Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize. For changes to this option to take effect, the nova-api service needs to be restarted.
- non_inheritable_image_properties¶
- Type:
list
- Default:
['cache_in_nova', 'bittorrent']
Image properties that should not be inherited from the instance when taking a snapshot.
This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots.
Note
The following image properties are never inherited regardless of whether they are listed in this configuration option or not:
cinder_encryption_key_id
cinder_encryption_key_deletion_policy
img_signature
img_signature_hash_method
img_signature_key_type
img_signature_certificate_uuid
Possible values:
A comma-separated list of image properties. Usually only image properties that are needed only by base images should be included here, since the snapshots created from those base images don’t need them.
Default list: cache_in_nova, bittorrent
- max_local_block_devices¶
- Type:
integer
- Default:
3
Maximum number of devices that will result in a local image being created on the hypervisor node.
A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local disks (i.e. the root local disk that results from imageRef being used when creating a server, and any other ephemeral and swap disks). Setting it to 0 does not mean that images will be automatically converted to volumes and instances booted from volumes - it simply means that all requests that attempt to create a local disk will fail.
Possible values:
0: Creating a local disk is not allowed.
Negative number: Allows an unlimited number of local disks.
Positive number: Allows only that many local disks.
- compute_monitors¶
- Type:
list
- Default:
[]
A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the “cpu.” namespace is assumed for backwards-compatibility.
NOTE: Only one monitor per namespace (For example: cpu) can be loaded at a time.
Possible values:
An empty list will disable the feature (Default).
An example value that would enable the CPU bandwidth monitor that uses the virt driver variant:
compute_monitors = cpu.virt_driver
- default_ephemeral_format¶
- Type:
string
- Default:
<None>
The default format an ephemeral_volume will be formatted with on creation.
Possible values:
ext2
ext3
ext4
xfs
ntfs (only for Windows guests)
- vif_plugging_is_fatal¶
- Type:
boolean
- Default:
True
Determine if instance should boot or fail on VIF plugging timeout.
Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval.
This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready.
Possible values:
True: Instances should fail after VIF plugging timeout
False: Instances should continue booting after VIF plugging timeout
- vif_plugging_timeout¶
- Type:
integer
- Default:
300
- Minimum Value:
0
Timeout for Neutron VIF plugging event message arrival.
Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see ‘vif_plugging_is_fatal’).
If you are hitting timeout failures at scale, consider running rootwrap in “daemon mode” in the neutron agent via the [agent]/root_helper_daemon neutron configuration option.
Related options:
vif_plugging_is_fatal - If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.
- arq_binding_timeout¶
- Type:
integer
- Default:
300
- Minimum Value:
1
Timeout for Accelerator Request (ARQ) bind event message arrival.
Number of seconds to wait for ARQ bind resolution event to arrive. The event indicates that every ARQ for an instance has either bound successfully or failed to bind. If it does not arrive, instance bringup is aborted with an exception.
- injected_network_template¶
- Type:
string
- Default:
$pybasedir/nova/virt/interfaces.template
Path to ‘/etc/network/interfaces’ template.
The path to a template file for the ‘/etc/network/interfaces’-style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server.
The template will be rendered using the Jinja2 template engine, and will receive a top-level key called interfaces. This key will contain a list of dictionaries, one for each interface.
Refer to the cloudinit documentation for more information.
Possible values:
A path to a Jinja2-formatted template for a Debian ‘/etc/network/interfaces’ file. This applies even if using a non-Debian-derived guest.
Related options:
flat_inject: This must be set to True to ensure nova embeds network configuration information in the metadata provided through the config drive.
- preallocate_images¶
- Type:
string
- Default:
none
- Valid Values:
none, space
The image preallocation mode to use.
Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn’t available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation.
Possible values
- none
No storage provisioning is done up front
- space
Storage is fully allocated at instance start
- use_cow_images¶
- Type:
boolean
- Default:
True
Enable use of copy-on-write (cow) images.
QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used.
- force_raw_images¶
- Type:
boolean
- Default:
True
Force conversion of backing images to raw format.
Possible values:
True: Backing image files will be converted to raw image format
False: Backing image files will not be converted
Related options:
compute_driver: Only the libvirt driver uses this option.
[libvirt]/images_type: If images_type is rbd, setting this option to False is not allowed. See the bug https://bugs.launchpad.net/nova/+bug/1816686 for more details.
- virt_mkfs¶
- Type:
multi-valued
- Default:
''
Name of the mkfs commands for ephemeral device.
The format is <os_type>=<mkfs command>
- resize_fs_using_block_device¶
- Type:
boolean
- Default:
False
Enable resizing of filesystems via a block device.
If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).
- timeout_nbd¶
- Type:
integer
- Default:
10
- Minimum Value:
0
Amount of time, in seconds, to wait for NBD device start up.
- pointer_model¶
- Type:
string
- Default:
usbtablet
- Valid Values:
ps2mouse, usbtablet, <None>
Generic property to specify the pointer type.
Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement.
If set, either the hw_input_bus or hw_pointer_model image metadata properties will take precedence over this configuration option.
Related options:
usbtablet must be configured with VNC enabled or SPICE enabled and SPICE agent disabled. When used with libvirt the instance mode should be configured as HVM.
Possible values
- ps2mouse
Uses relative movement. Mouse connected by PS2
- usbtablet
Uses absolute movement. Tablet connected by USB
- <None>
Uses default behavior provided by drivers (mouse on PS2 for libvirt x86)
- reimage_timeout_per_gb¶
- Type:
integer
- Default:
20
- Minimum Value:
1
Timeout for reimaging a volume.
Number of seconds to wait for volume-reimaged events to arrive before continuing or failing.
This is a per-gigabyte time which has a default value of 20 seconds and will be multiplied by the GB size of the image. E.g. an image of 6 GB will have a timeout of 20 * 6 = 120 seconds. Try increasing the timeout if the image copy per GB takes more time and you are hitting timeout failures.
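As a sanity check, the effective timeout scales linearly with image size; a minimal sketch of the multiplication described above (the function name is illustrative, not part of Nova):

```python
def reimage_timeout(image_size_gb: int, timeout_per_gb: int = 20) -> int:
    """Seconds to wait for volume-reimaged events for an image of this size."""
    return timeout_per_gb * image_size_gb

# A 6 GB image with the default of 20 seconds per GB:
print(reimage_timeout(6))  # 120
```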
- vcpu_pin_set¶
- Type:
string
- Default:
<None>
Mask of host CPUs that can be used for VCPU resources.
The behavior of this option depends on the definition of the [compute] cpu_dedicated_set option and affects the behavior of the [compute] cpu_shared_set option.
If [compute] cpu_dedicated_set is defined, defining this option will result in an error.
If [compute] cpu_dedicated_set is not defined, this option will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to, overriding the [compute] cpu_shared_set option.
Possible values:
A comma-separated list of physical CPU numbers that virtual CPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:
vcpu_pin_set = "4-12,^8,15"
Related options:
[compute] cpu_dedicated_set
[compute] cpu_shared_set
Warning
This option is deprecated for removal since 20.0.0. Its value may be silently ignored in the future.
- Reason:
This option has been superseded by the [compute] cpu_dedicated_set and [compute] cpu_shared_set options, which allow things like the co-existence of pinned and unpinned instances on the same host (for the libvirt driver).
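The mask syntax above (comma-separated CPU numbers and ranges, with caret-prefixed exclusions) can be sketched in a few lines. This is an illustrative parser, not Nova's actual implementation:

```python
def parse_cpu_mask(spec: str) -> set:
    """Expand a vcpu_pin_set-style mask such as "4-12,^8,15" into a set of CPUs."""
    cpus = set()
    for part in spec.strip('"').split(","):
        part = part.strip()
        exclude = part.startswith("^")
        if exclude:
            part = part[1:]  # drop the caret; this element removes CPUs
        if "-" in part:
            lo, hi = part.split("-")
            members = set(range(int(lo), int(hi) + 1))
        else:
            members = {int(part)}
        cpus = cpus - members if exclude else cpus | members
    return cpus

print(sorted(parse_cpu_mask("4-12,^8,15")))  # [4, 5, 6, 7, 9, 10, 11, 12, 15]
```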
- reserved_huge_pages¶
- Type:
unknown type
- Default:
<None>
Number of huge/large memory pages to reserve per NUMA host cell.
Possible values:
A list of valid key=value pairs which reflect the NUMA node ID, page size (default unit is KiB) and number of pages to be reserved. For example:
reserved_huge_pages = node:0,size:2048,count:64
reserved_huge_pages = node:1,size:1GB,count:1
In this example we are reserving 64 pages of 2 MiB on NUMA node 0 and one page of 1 GiB on NUMA node 1.
- reserved_host_disk_mb¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Amount of disk resources in MB to always keep available to the host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host.
Possible values:
Any positive integer representing amount of disk in MB to reserve for the host.
- reserved_host_memory_mb¶
- Type:
integer
- Default:
512
- Minimum Value:
0
Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host.
Possible values:
Any positive integer representing amount of memory in MB to reserve for the host.
- reserved_host_cpus¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Number of host CPUs to reserve for host processes.
The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. This value is used to determine the reserved value reported to placement.
This option cannot be set if the [compute] cpu_shared_set or [compute] cpu_dedicated_set config options have been defined. When these options are defined, any host CPUs not included in these values are considered reserved for the host.
Possible values:
Any positive integer representing number of physical CPUs to reserve for the host.
Related options:
[compute] cpu_shared_set
[compute] cpu_dedicated_set
- cpu_allocation_ratio¶
- Type:
floating point
- Default:
<None>
- Minimum Value:
0.0
Virtual CPU to physical CPU allocation ratio.
This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for VCPU inventory.
Note
This option does not affect PCPU inventory, which cannot be overcommitted.
Note
If this option is set to something other than None or 0.0, the allocation ratio will be overwritten by the value of this option; otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to “unset” the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it as the value of initial_cpu_allocation_ratio.
Possible values:
Any valid positive integer or float value
Related options:
initial_cpu_allocation_ratio
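To see the effect of the ratio, here is a rough sketch of how schedulable VCPU capacity could be derived from the physical CPU count, the reserved host CPUs, and the allocation ratio (the function name and this simplified formula are illustrative, not Nova's code):

```python
def vcpu_capacity(host_pcpus: int, reserved_host_cpus: int, allocation_ratio: float) -> int:
    """Schedulable VCPU inventory: (total - reserved) * allocation_ratio."""
    return int((host_pcpus - reserved_host_cpus) * allocation_ratio)

# A 16-core host with nothing reserved and a ratio of 4.0:
print(vcpu_capacity(16, 0, 4.0))  # 64
```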
- ram_allocation_ratio¶
- Type:
floating point
- Default:
<None>
- Minimum Value:
0.0
Virtual RAM to physical RAM allocation ratio.
This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for MEMORY_MB inventory.
Note
If this option is set to something other than None or 0.0, the allocation ratio will be overwritten by the value of this option; otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to “unset” the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it as the value of initial_ram_allocation_ratio.
Possible values:
Any valid positive integer or float value
Related options:
initial_ram_allocation_ratio
- disk_allocation_ratio¶
- Type:
floating point
- Default:
<None>
- Minimum Value:
0.0
Virtual disk to physical disk allocation ratio.
This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for DISK_GB inventory.
When configured, a ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.
Note
If the value is set to >1, we recommend keeping track of the free disk space, as a value approaching 0 may result in the incorrect functioning of instances using it at the moment.
Note
If this option is set to something other than None or 0.0, the allocation ratio will be overwritten by the value of this option; otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to “unset” the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it as the value of initial_disk_allocation_ratio.
Possible values:
Any valid positive integer or float value
Related options:
initial_disk_allocation_ratio
- initial_cpu_allocation_ratio¶
- Type:
floating point
- Default:
4.0
- Minimum Value:
0.0
Initial virtual CPU to physical CPU allocation ratio.
This is only used when initially creating the compute_nodes table record for a given nova-compute service.
See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios.
Related options:
cpu_allocation_ratio
- initial_ram_allocation_ratio¶
- Type:
floating point
- Default:
1.0
- Minimum Value:
0.0
Initial virtual RAM to physical RAM allocation ratio.
This is only used when initially creating the compute_nodes table record for a given nova-compute service.
See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios.
Related options:
ram_allocation_ratio
- initial_disk_allocation_ratio¶
- Type:
floating point
- Default:
1.0
- Minimum Value:
0.0
Initial virtual disk to physical disk allocation ratio.
This is only used when initially creating the compute_nodes table record for a given nova-compute service.
See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios.
Related options:
disk_allocation_ratio
- console_host¶
- Type:
string
- Default:
<current_hostname>
This option has a sample default set, which means that its actual default value may vary from the one documented above.
Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host.
Possible values:
Current hostname (default) or any string representing hostname.
- default_access_ip_network_name¶
- Type:
string
- Default:
<None>
Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen.
Possible values:
None (default)
Any string representing network name.
- instances_path¶
- Type:
string
- Default:
$state_path/instances
This option has a sample default set, which means that its actual default value may vary from the one documented above.
Specifies where instances are stored on the hypervisor’s disk. It can point to locally attached storage or a directory on NFS.
Possible values:
$state_path/instances, where state_path is a config option that specifies the top-level directory for maintaining nova’s state (default), or any string representing a directory path.
Related options:
[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup
- instance_usage_audit¶
- Type:
boolean
- Default:
False
This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by the OpenStack Telemetry service.
- live_migration_retry_count¶
- Type:
integer
- Default:
30
- Minimum Value:
0
Maximum number of 1-second retries in live_migration. It specifies the number of retries to iptables when it complains. This happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables.
Possible values:
Any positive integer representing retry count.
- resume_guests_state_on_host_boot¶
- Type:
boolean
- Default:
False
This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts.
- network_allocate_retries¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails.
Possible values:
Any positive integer representing retry count.
- max_concurrent_builds¶
- Type:
integer
- Default:
10
- Minimum Value:
0
Limits the maximum number of instance builds to run concurrently by nova-compute. The compute service can attempt to build an infinite number of instances if asked to do so. This limit is enforced to avoid building an unlimited number of instances concurrently on a compute node. This value can be set per compute node.
Possible Values:
0 : treated as unlimited.
Any positive integer representing maximum concurrent builds.
- max_concurrent_snapshots¶
- Type:
integer
- Default:
5
- Minimum Value:
0
Maximum number of instance snapshot operations to run concurrently. This limit is enforced to prevent snapshots overwhelming the host/network/storage and causing failure. This value can be set per compute node.
Possible Values:
0 : treated as unlimited.
Any positive integer representing maximum concurrent snapshots.
- max_concurrent_live_migrations¶
- Type:
integer
- Default:
1
- Minimum Value:
0
Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment.
Possible values:
0 : treated as unlimited.
Any positive integer representing maximum number of live migrations to run concurrently.
- block_device_allocate_retries¶
- Type:
integer
- Default:
60
- Minimum Value:
0
The number of times to check for a volume to be “available” before attaching it during server create.
When creating a server with block device mappings where source_type is one of blank, image or snapshot and the destination_type is volume, the nova-compute service will create a volume and then attach it to the server. Before the volume can be attached, it must be in status “available”. This option controls how many times to check for the created volume to be “available” before it is attached.
If the operation times out, the volume will be deleted if the block device mapping delete_on_termination value is True.
It is recommended to configure the image cache in the block storage service to speed up this operation. See https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html for details.
Possible values:
60 (default)
If value is 0, then one attempt is made.
For any value > 0, total attempts are (value + 1)
Related options:
block_device_allocate_retries_interval
- controls the interval between checks
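For example, the retry count and the related block_device_allocate_retries_interval option together bound the total time spent waiting for a volume to become available. A nova.conf sketch (the values are illustrative, not recommendations):

```ini
[DEFAULT]
# 60 retries after the first check = up to 61 attempts, 3 seconds apart:
# roughly 3 minutes before the create fails (and the volume is deleted
# if delete_on_termination is True).
block_device_allocate_retries = 60
block_device_allocate_retries_interval = 3
```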
- sync_power_state_pool_size¶
- Type:
integer
- Default:
1000
Number of greenthreads available for use to sync power states.
This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic.
Possible values:
Any positive integer representing greenthreads count.
- sync_power_state_interval¶
- Type:
integer
- Default:
600
Interval to sync power states between the database and the hypervisor.
The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state.
Possible values:
0: Will run at the default periodic interval.
Any value < 0: Disables the option.
Any positive integer in seconds.
Related options:
If handle_virt_lifecycle_events in the workarounds group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
- heal_instance_info_cache_interval¶
- Type:
integer
- Default:
60
Interval between instance network information cache updates.
Number of seconds after which each compute node runs the task of querying Neutron for all of its instances’ networking information, then updates the Nova db with that information. Nova will never update its cache if this option is set to 0. If the cache is not updated, the metadata service and nova-api endpoints will proxy incorrect network data about the instance. So, it is not recommended to set this option to 0.
Possible values:
Any positive integer in seconds.
Any value <=0 will disable the sync. This is not recommended.
- reclaim_instance_interval¶
- Type:
integer
- Default:
0
Interval for reclaiming deleted instances.
A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it’s too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically.
Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node.
Note
When using this option, you should also configure the [cinder] auth options, e.g. auth_type, auth_url, username, etc. Since the reclaim happens in a periodic task, there is no user token to clean up volumes attached to any SOFT_DELETED servers, so nova must be configured with administrator role access to clean up those resources in cinder.
Possible values:
Any positive integer(in seconds) greater than 0 will enable this option.
Any value <=0 will disable the option.
Related options:
[cinder] auth options for cleaning up volumes attached to servers during the reclaim process
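As a sketch, a deployment enabling soft delete with a one-hour reclaim window might look like the following; the [cinder] credential values are placeholders, not recommendations:

```ini
[DEFAULT]
# Soft-deleted servers can be restored for up to 1 hour before a
# periodic task reclaims (really deletes) them. Must be set on both
# API and compute nodes.
reclaim_instance_interval = 3600

[cinder]
# Admin credentials so the periodic task can clean up attached volumes.
auth_type = password
auth_url = http://controller:5000/v3
username = nova
password = PLACEHOLDER_SECRET
project_name = service
user_domain_name = Default
project_domain_name = Default
```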
- volume_usage_poll_interval¶
- Type:
integer
- Default:
0
Interval for gathering volume usages.
The volume usage cache is updated every volume_usage_poll_interval seconds.
Possible values:
Any positive integer(in seconds) greater than 0 will enable this option.
Any value <=0 will disable the option.
- shelved_poll_interval¶
- Type:
integer
- Default:
3600
Interval for polling shelved instances to offload.
The periodic task runs every shelved_poll_interval seconds and checks if there are any shelved instances. If it finds a shelved instance, it offloads it based on the ‘shelved_offload_time’ config value. Check the ‘shelved_offload_time’ config option description for details.
Possible values:
Any value <= 0: Disables the option.
Any positive integer in seconds.
Related options:
shelved_offload_time
- shelved_offload_time¶
- Type:
integer
- Default:
0
Time before a shelved instance is eligible for removal from a host.
By default this option is set to 0 and the shelved instance is removed from the hypervisor immediately after the shelve operation. Otherwise, the instance is kept for shelved_offload_time seconds, so that during that period the unshelve action is faster; the periodic task then removes the instance from the hypervisor after shelved_offload_time passes.
Possible values:
0: The instance will be offloaded immediately after being shelved.
Any value < 0: An instance will never offload.
Any positive integer in seconds: The instance will exist for the specified number of seconds before being offloaded.
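Since this option works together with shelved_poll_interval, a combined sketch (illustrative values):

```ini
[DEFAULT]
# Look for shelved instances every 10 minutes and offload any instance
# that has been shelved for more than 2 hours.
shelved_poll_interval = 600
shelved_offload_time = 7200
```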
- instance_delete_interval¶
- Type:
integer
- Default:
300
Interval for retrying failed instance file deletes.
This option depends on ‘maximum_instance_delete_attempts’. This option specifies how often to retry deletes whereas ‘maximum_instance_delete_attempts’ specifies the maximum number of retry attempts that can be made.
Possible values:
0: Will run at the default periodic interval.
Any value < 0: Disables the option.
Any positive integer in seconds.
Related options:
maximum_instance_delete_attempts
from instance_cleaning_opts group.
- block_device_allocate_retries_interval¶
- Type:
integer
- Default:
3
- Minimum Value:
0
Interval (in seconds) between block device allocation retries on failures.
This option allows the user to specify the time interval between consecutive retries. The block_device_allocate_retries option specifies the maximum number of retries.
Possible values:
0: Disables the option.
Any positive integer in seconds enables the option.
Related options:
block_device_allocate_retries
- controls the number of retries
- scheduler_instance_sync_interval¶
- Type:
integer
- Default:
120
Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova.
If the CONF option ‘scheduler_tracks_instance_changes’ is False, the sync calls will not be made. So, changing this option will have no effect.
If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently.
Possible values:
0: Will run at the default periodic interval.
Any value < 0: Disables the option.
Any positive integer in seconds.
Related options:
This option has no impact if scheduler_tracks_instance_changes is set to False.
- update_resources_interval¶
- Type:
integer
- Default:
0
Interval for updating compute resources.
This option specifies how often the update_available_resource periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.
Possible values:
0: Will run at the default periodic interval.
Any value < 0: Disables the option.
Any positive integer in seconds.
- reboot_timeout¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Time interval after which an instance is hard rebooted automatically.
When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds.
Possible values:
0: Disables the option (default).
Any positive integer in seconds: Enables the option.
- instance_build_timeout¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Maximum time in seconds that an instance can take to build.
If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period.
Possible values:
0: Disables the option (default)
Any positive integer in seconds: Enables the option.
- rescue_timeout¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Interval to wait before un-rescuing an instance stuck in RESCUE.
Possible values:
0: Disables the option (default)
Any positive integer in seconds: Enables the option.
- resize_confirm_window¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Automatically confirm resizes after N seconds.
Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time.
Possible values:
0: Disables the option (default)
Any positive integer in seconds: Enables the option.
- shutdown_timeout¶
- Type:
integer
- Default:
60
- Minimum Value:
0
Total time to wait in seconds for an instance to perform a clean shutdown.
It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up.
The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly.
Possible values:
A positive integer or 0 (default value is 60).
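A sketch combining the shutdown and reboot timeouts described above (the values are illustrative):

```ini
[DEFAULT]
# Allow guests up to 2 minutes for a clean shutdown during stop, rescue,
# shelve and rebuild operations...
shutdown_timeout = 120
# ...and hard-reboot any instance stuck in a rebooting state for more
# than 10 minutes.
reboot_timeout = 600
```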
- running_deleted_instance_action¶
- Type:
string
- Default:
reap
- Valid Values:
reap, log, shutdown, noop
The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. This option determines the action taken when such instances are identified.
Related options:
running_deleted_instance_poll_interval
running_deleted_instance_timeout
Possible values
- reap
Powers down the instances and deletes them
- log
Logs warning message about deletion of the resource
- shutdown
Powers down instances and marks them as non-bootable which can be later used for debugging/analysis
- noop
Takes no action
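This action works together with the poll interval and timeout options documented next; an illustrative sketch:

```ini
[DEFAULT]
# Power down and delete instances that are still running but already
# deleted in the database, checking every 30 minutes; only instances
# deleted more than 10 minutes ago are eligible.
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 1800
running_deleted_instance_timeout = 600
```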
- running_deleted_instance_poll_interval¶
- Type:
integer
- Default:
1800
Time interval in seconds to wait between runs for the clean up action. If set to 0, the above check will be disabled. If “running_deleted_instance_action” is set to “log” or “reap”, a value greater than 0 must be set.
Possible values:
Any positive integer in seconds enables the option.
0: Disables the option.
1800: Default value.
Related options:
running_deleted_instance_action
- running_deleted_instance_timeout¶
- Type:
integer
- Default:
0
Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup.
Possible values:
Any positive integer in seconds(default is 0).
Related options:
“running_deleted_instance_action”
- maximum_instance_delete_attempts¶
- Type:
integer
- Default:
5
- Minimum Value:
1
The number of times to attempt to reap an instance’s files.
This option specifies the maximum number of retry attempts that can be made.
Possible values:
Any positive integer defines how many attempts are made.
Related options:
[DEFAULT] instance_delete_interval can be used to disable this option.
- osapi_compute_unique_server_name_scope¶
- Type:
string
- Default:
''
- Valid Values:
‘’, project, global
Sets the scope of the check for unique instance names.
The default doesn’t check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an ‘InstanceExists’ error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don’t have to distinguish among instances with the same name by their IDs.
Possible values
- ‘’
An empty value means that no uniqueness check is done and duplicate names are possible
- project
The instance name check is done only for instances within the same project
- global
The instance name check is done for all instances regardless of the project
- enable_new_services¶
- Type:
boolean
- Default:
True
Enable new nova-compute services on this host automatically.
When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in a disabled state and then enable them at a later point in time. This option only sets this behavior for nova-compute services; it does not auto-disable other services like nova-conductor, nova-scheduler, or nova-osapi_compute.
Possible values:
True: Each new compute service is enabled as soon as it registers itself.
False: Compute services must be enabled via an os-services REST API call or with the CLI with nova service-enable <hostname> <binary>, otherwise they are not ready to use.
- instance_name_template¶
- Type:
string
- Default:
instance-%08x
Template string to be used to generate instance names.
This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s. If you already have instances in your deployment when you change this, your deployment will break.
Possible values:
A string which either uses the instance database ID (like the default)
A string with a list of named database columns, for example %(id)d or %(uuid)s or %(hostname)s.
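For instance, a fresh deployment that wants UUID-based database names could set (sketch):

```ini
[DEFAULT]
# Use the instance UUID rather than the zero-padded database ID.
# Changing this on a deployment with existing instances will break it.
instance_name_template = instance-%(uuid)s
```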
- migrate_max_retries¶
- Type:
integer
- Default:
-1
- Minimum Value:
-1
Number of times to retry live-migration before failing.
Possible values:
If == -1, try until out of hosts (default)
If == 0, only try once, no retries
Integer greater than 0
- config_drive_format¶
- Type:
string
- Default:
iso9660
- Valid Values:
iso9660, vfat
Config drive format.
Config drive format that will contain metadata attached to the instance when it boots.
Related options:
This option is meaningful when one of the following alternatives occurs:
the force_config_drive option is set to true
the REST API call to create the instance contains an enable flag for the config drive option
the image used to create the instance requires a config drive, as defined by the img_config_drive property for that image.
Possible values
- iso9660
A file system image standard that is widely supported across operating systems.
- vfat
Provided for legacy reasons and to enable live migration with the libvirt driver and non-shared storage
Warning
This option is deprecated for removal since 19.0.0. Its value may be silently ignored in the future.
- Reason:
This option was originally added as a workaround for bug in libvirt, #1246201, that was resolved in libvirt v1.2.17. As a result, this option is no longer necessary or useful.
- force_config_drive¶
- Type:
boolean
- Default:
False
Force injection to take place on a config drive
When this option is set to true, config drive functionality will be forcibly enabled by default; otherwise users can still enable config drives via the REST API or image metadata properties. Instances that are already launched are not affected by this option.
Possible values:
True: Force the use of config drive regardless of the user’s input in the REST API call.
False: Do not force use of config drive. Config drives can still be enabled via the REST API or image metadata properties.
Related options:
Use the ‘mkisofs_cmd’ flag to set the path where you install the genisoimage program. If genisoimage is in the same path as the nova-compute service, you do not need to set this flag.
- mkisofs_cmd¶
- Type:
string
- Default:
genisoimage
Name or path of the tool used for ISO image creation.
Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value.
Possible values:
Name of the ISO image creator program, in case it is in the same directory as the nova-compute service
Path to ISO image creator program
Related options:
This option is meaningful when config drives are enabled.
- my_ip¶
- Type:
string
- Default:
<host_ipv4>
This option has a sample default set, which means that its actual default value may vary from the one documented above.
The IP address which the host is using to connect to the management network.
Possible values:
String with valid IP address. Default is IPv4 address of this host.
Related options:
my_block_storage_ip
- my_block_storage_ip¶
- Type:
string
- Default:
$my_ip
The IP address which is used to connect to the block storage network.
Possible values:
String with valid IP address. Default is IP address of this host.
Related options:
my_ip - if my_block_storage_ip is not set, then my_ip value is used.
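A sketch for a host with separate management and storage networks (the addresses are documentation-range placeholders):

```ini
[DEFAULT]
# Management network address of this host.
my_ip = 192.0.2.10
# Block storage traffic uses a different interface.
my_block_storage_ip = 198.51.100.10
```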
- host¶
- Type:
unknown type
- Default:
<current_hostname>
This option has a sample default set, which means that its actual default value may vary from the one documented above.
Hostname, FQDN or IP address of this host.
Used as:
the oslo.messaging queue name for nova-compute worker
we use this value for the binding_host sent to neutron. This means if you use a neutron agent, it should have the same value for host.
cinder host attachment information
Must be valid within AMQP key.
Possible values:
String with hostname, FQDN or IP address. Default is hostname of this host.
- flat_injected¶
- Type:
boolean
- Default:
False
This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware virt driver to control whether network information is injected into a VM. The libvirt virt driver also uses it, when config_drive is used to configure the network, to control whether network information is injected into the VM.
- record¶
- Type:
string
- Default:
<None>
Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done.
- daemon¶
- Type:
boolean
- Default:
False
Run as a background process.
- ssl_only¶
- Type:
boolean
- Default:
False
Disallow non-encrypted connections.
Related options:
cert
key
- source_is_ipv6¶
- Type:
boolean
- Default:
False
Set to True if source host is addressed with IPv6.
- cert¶
- Type:
string
- Default:
self.pem
Path to SSL certificate file.
Related options:
key
ssl_only
[console] ssl_ciphers
[console] ssl_minimum_version
- key¶
- Type:
string
- Default:
<None>
SSL key file (if separate from cert).
Related options:
cert
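A sketch tying ssl_only, cert and key together for the console proxy services (the file paths are placeholders):

```ini
[DEFAULT]
# Refuse non-TLS websocket connections to the console proxy.
ssl_only = True
cert = /etc/nova/ssl/console.pem
key = /etc/nova/ssl/console.key
```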
- web¶
- Type:
string
- Default:
/usr/share/spice-html5
Path to directory with content which will be served by a web server.
- pybasedir¶
- Type:
string
- Default:
<Path>
This option has a sample default set, which means that its actual default value may vary from the one documented above.
The directory where the Nova python modules are installed.
This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value.
Possible values:
The full path to a directory.
Related options:
state_path
- state_path¶
- Type:
string
- Default:
$pybasedir
The top-level directory for maintaining Nova’s state.
This directory is used to store Nova’s internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option instances_path gets overwritten, this directory can grow very large.
Possible values:
The full path to a directory. Defaults to the value provided in pybasedir.
- long_rpc_timeout¶
- Type:
integer
- Default:
1800
This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value.
Operations with RPC calls that utilize this value:
live migration
scheduling
enabling/disabling a compute service
image pre-caching
snapshot-based / cross-cell resize
resize / cold migration
volume attach
Related options:
rpc_response_timeout
- report_interval¶
- Type:
integer
- Default:
10
Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment.
Related Options:
service_down_time
report_interval should be less than service_down_time. If service_down_time is less than report_interval, services will routinely be considered down, because they report in too rarely.
- service_down_time¶
- Type:
integer
- Default:
60
Maximum time in seconds since last check-in for up service
Each compute node periodically updates its database status based on the specified report interval. If the compute node hasn’t updated its status for more than service_down_time, then the compute node is considered down.
Related Options:
report_interval (service_down_time should not be less than report_interval)
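A sketch showing the defaults and the relationship between the two options:

```ini
[DEFAULT]
# Services check in every 10 seconds; a service silent for 60 seconds
# is marked down. Keep report_interval well below service_down_time.
report_interval = 10
service_down_time = 60
```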
- periodic_enable¶
- Type:
boolean
- Default:
True
Enable periodic tasks.
If set to true, this option allows services to periodically run tasks on the manager.
In case of running multiple schedulers or conductors you may want to run periodic tasks on only one host - in this case disable this option for all hosts but one.
- periodic_fuzzy_delay¶
- Type:
integer
- Default:
60
- Minimum Value:
0
Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding.
When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler.
Possible Values:
Any positive integer (in seconds)
0 : disable the random delay
- enabled_apis¶
- Type:
list
- Default:
['osapi_compute', 'metadata']
List of APIs to be enabled by default.
- enabled_ssl_apis¶
- Type:
list
- Default:
[]
List of APIs with enabled SSL.
Nova provides SSL support for the API servers. The enabled_ssl_apis option allows configuring SSL support.
- osapi_compute_listen¶
- Type:
string
- Default:
0.0.0.0
IP address on which the OpenStack API will listen.
The OpenStack API service listens on this IP address for incoming requests.
- osapi_compute_listen_port¶
- Type:
port number
- Default:
8774
- Minimum Value:
0
- Maximum Value:
65535
Port on which the OpenStack API will listen.
The OpenStack API service listens on this port number for incoming requests.
- osapi_compute_workers¶
- Type:
integer
- Default:
<None>
- Minimum Value:
1
Number of workers for OpenStack API service. The default will be the number of CPUs available.
OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes.
Possible Values:
Any positive integer
None (default value)
- metadata_listen¶
- Type:
string
- Default:
0.0.0.0
IP address on which the metadata API will listen.
The metadata API service listens on this IP address for incoming requests.
- metadata_listen_port¶
- Type:
port number
- Default:
8775
- Minimum Value:
0
- Maximum Value:
65535
Port on which the metadata API will listen.
The metadata API service listens on this port number for incoming requests.
- metadata_workers¶
- Type:
integer
- Default:
<None>
- Minimum Value:
1
Number of workers for metadata service. If not specified the number of available CPUs will be used.
The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes.
Possible Values:
Any positive integer
None (default value)
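A sketch sizing the API and metadata services explicitly instead of relying on the CPU-count default (the worker counts are illustrative):

```ini
[DEFAULT]
osapi_compute_listen = 0.0.0.0
osapi_compute_listen_port = 8774
# Fixed worker counts instead of the CPU-count default.
osapi_compute_workers = 4
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
metadata_workers = 4
```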
- servicegroup_driver¶
- Type:
string
- Default:
db
- Valid Values:
db, mc
This option specifies the driver to be used for the servicegroup service.
ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver.
Related Options:
service_down_time
(maximum time since last check-in for up service)
Possible values
- db
Database ServiceGroup driver
- mc
Memcache ServiceGroup driver
api¶
Options under this group are used to define Nova API.
- auth_strategy¶
- Type:
string
- Default:
keystone
- Valid Values:
keystone, noauth2
Determine the strategy to use for authentication.
Possible values
- keystone
Use keystone for authentication.
- noauth2
Designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username.
Warning
This option is deprecated for removal since 21.0.0. Its value may be silently ignored in the future.
- Reason:
The only non-default choice, noauth2, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release.
- config_drive_skip_versions¶
- Type:
string
- Default:
1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are:
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
The option is in the format of a single string, with each version separated by a space.
Possible values:
Any string that represents zero or more versions, separated by spaces.
¶ Group
Name
DEFAULT
config_drive_skip_versions
- vendordata_providers¶
- Type:
list
- Default:
['StaticJSON']
A list of vendordata providers.
vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment.
For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference.
Related options:
vendordata_dynamic_targets
vendordata_dynamic_ssl_certfile
vendordata_dynamic_connect_timeout
vendordata_dynamic_read_timeout
vendordata_dynamic_failure_fatal
¶ Group
Name
DEFAULT
vendordata_providers
- vendordata_dynamic_targets¶
- Type:
list
- Default:
[]
A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>.
The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference.
¶ Group
Name
DEFAULT
vendordata_dynamic_targets
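A sketch combining the static and dynamic vendordata providers (the target name, URL and file path are placeholders):

```ini
[api]
vendordata_providers = StaticJSON,DynamicJSON
vendordata_jsonfile_path = /etc/nova/vendor_data.json
# Dynamic targets take the form <name>@<url>.
vendordata_dynamic_targets = testing@http://127.0.0.1:9090/
```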
- vendordata_dynamic_ssl_certfile¶
- Type:
string
- Default:
''
Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against.
Possible values:
An empty string, or a path to a valid certificate file
Related options:
vendordata_providers
vendordata_dynamic_targets
vendordata_dynamic_connect_timeout
vendordata_dynamic_read_timeout
vendordata_dynamic_failure_fatal
¶ Group
Name
DEFAULT
vendordata_dynamic_ssl_certfile
- vendordata_dynamic_connect_timeout¶
- Type:
integer
- Default:
5
- Minimum Value:
3
Maximum wait time for an external REST service to connect.
Possible values:
Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small.
Related options:
vendordata_providers
vendordata_dynamic_targets
vendordata_dynamic_ssl_certfile
vendordata_dynamic_read_timeout
vendordata_dynamic_failure_fatal
¶ Group
Name
DEFAULT
vendordata_dynamic_connect_timeout
- vendordata_dynamic_read_timeout¶
- Type:
integer
- Default:
5
- Minimum Value:
0
Maximum wait time for an external REST service to return data once connected.
Possible values:
Any integer. Note that instance start is blocked during this wait time, so this value should be kept small.
Related options:
vendordata_providers
vendordata_dynamic_targets
vendordata_dynamic_ssl_certfile
vendordata_dynamic_connect_timeout
vendordata_dynamic_failure_fatal
¶ Group
Name
DEFAULT
vendordata_dynamic_read_timeout
- vendordata_dynamic_failure_fatal¶
- Type:
boolean
- Default:
False
Should failures to fetch dynamic vendordata be fatal to instance boot?
Related options:
vendordata_providers
vendordata_dynamic_targets
vendordata_dynamic_ssl_certfile
vendordata_dynamic_connect_timeout
vendordata_dynamic_read_timeout
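Taken together, the dynamic vendordata options above might be wired up in nova.conf as follows. This is a sketch only: the target name, endpoint URL and certificate path are illustrative, not defaults.

```ini
[DEFAULT]
# Hypothetical example: query one external REST service for vendordata.
vendordata_providers = StaticJSON,DynamicJSON
vendordata_dynamic_targets = testing@http://127.0.0.1:9312/vendordata
vendordata_dynamic_ssl_certfile = /etc/nova/vendordata-ca.pem
vendordata_dynamic_connect_timeout = 5
vendordata_dynamic_read_timeout = 5
# leave boot able to proceed even if the service is unreachable
vendordata_dynamic_failure_fatal = False
```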
- metadata_cache_expiration¶
- Type:
integer
- Default:
15
- Minimum Value:
0
This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect.
¶ Group
Name
DEFAULT
metadata_cache_expiration
- local_metadata_per_cell¶
- Type:
boolean
- Default:
False
Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. Users should consider the use of this configuration depending on how neutron is setup. If you have networks that span cells, you might need to run nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run nova-metadata API service per cell. When running nova-metadata API service per cell, you should also configure each Neutron metadata-agent to point to the corresponding nova-metadata API service.
- dhcp_domain¶
- Type:
string
- Default:
novalocal
Domain name used to configure FQDN for instances.
Configure a fully-qualified domain name for instance hostnames. The value is suffixed to the instance hostname from the database to construct the hostname that appears in the metadata API. To disable this behavior (for example, to correctly support the FQDN hostnames introduced in microversion 2.94), set this to the empty string.
Possible values:
Any string that is a valid domain name.
¶ Group
Name
DEFAULT
dhcp_domain
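As a sketch of the suffixing behavior described above, with the default value an instance whose database hostname is vm1 appears in the metadata API as vm1.novalocal:

```ini
[DEFAULT]
# "vm1" becomes "vm1.novalocal" in the metadata API
dhcp_domain = novalocal

# disable the suffix entirely (e.g. when relying on microversion 2.94 FQDNs):
# dhcp_domain =
```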
- vendordata_jsonfile_path¶
- Type:
string
- Default:
<None>
Cloud providers may store custom data in a vendor data file that will then be available to the instances via the metadata service and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file whose path is configured by this option. If no path is set by this option, the class returns an empty dictionary.
Note that when using this to provide static vendor data to a configuration drive, the nova-compute service must be configured with this option and the file must be accessible from the nova-compute host.
Possible values:
Any string representing the path to the data file, or an empty string (default).
¶ Group
Name
DEFAULT
vendordata_jsonfile_path
- max_limit¶
- Type:
integer
- Default:
1000
- Minimum Value:
0
As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option.
¶ Group
Name
DEFAULT
osapi_max_limit
- compute_link_prefix¶
- Type:
string
- Default:
<None>
This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged.
Possible values:
Any string, including an empty string (the default).
¶ Group
Name
DEFAULT
osapi_compute_link_prefix
- glance_link_prefix¶
- Type:
string
- Default:
<None>
This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged.
Possible values:
Any string, including an empty string (the default).
¶ Group
Name
DEFAULT
osapi_glance_link_prefix
- instance_list_per_project_cells¶
- Type:
boolean
- Default:
False
When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True.
- instance_list_cells_batch_strategy¶
- Type:
string
- Default:
distributed
- Valid Values:
distributed, fixed
This controls the method by which the API queries cell databases in smaller batches during large instance list operations. If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request.
Related options:
instance_list_cells_batch_fixed_size
max_limit
Possible values:
- distributed
Divide the limit requested by the user by the number of cells in the system. This requires counting the cells in the system initially, which will not be refreshed until service restart or SIGHUP. The actual batch size will be increased by 10% over the result of ($limit / $num_cells).
- fixed
Request fixed-size batches from each cell, as defined by instance_list_cells_batch_fixed_size. If the limit is smaller than the batch size, the limit will be used instead. If you do not wish batching to be used at all, setting the fixed size equal to the max_limit value will cause only one request per cell database to be issued.
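The batch-size arithmetic for the distributed strategy can be illustrated with a hypothetical deployment; the cell count below is an example, and only max_limit matches a real default:

```ini
[DEFAULT]
instance_list_cells_batch_strategy = distributed
max_limit = 1000
# with 5 cells: initial batch per cell = (1000 / 5) * 1.10 = 220 records,
# never dropping below the 100-record floor
```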
- instance_list_cells_batch_fixed_size¶
- Type:
integer
- Default:
100
- Minimum Value:
100
This controls the batch size of instances requested from each cell database if instance_list_cells_batch_strategy is set to fixed. This integral value will define the limit issued to each cell every time a batch of instances is requested, regardless of the number of cells in the system or any other factors. Per the general logic called out in the documentation for instance_list_cells_batch_strategy, the minimum value for this is 100 records per batch.
Related options:
instance_list_cells_batch_strategy
max_limit
- list_records_by_skipping_down_cells¶
- Type:
boolean
- Default:
True
When set to False, this will cause the API to return a 500 error if there is an infrastructure failure such as non-responsive cells. If you want the API to skip the down cells and return the results from the up cells, set this option to True.
Note that from API microversion 2.69 there could be transient conditions in the deployment where certain records are not available and the results could be partial for certain requests containing those records. In those cases this option will be ignored. See “Handling Down Cells” section of the Compute API guide (https://docs.openstack.org/api-guide/compute/down_cells.html) for more information.
- use_neutron_default_nets¶
- Type:
boolean
- Default:
False
When True, the TenantNetworkController will query the Neutron API to get the default networks to use.
Related options:
neutron_default_tenant_id
¶ Group
Name
DEFAULT
use_neutron_default_nets
- neutron_default_tenant_id¶
- Type:
string
- Default:
default
Tenant ID (also referred to in some places as the ‘project ID’) to use when querying the Neutron API for the default network.
Related options:
use_neutron_default_nets
¶ Group
Name
DEFAULT
neutron_default_tenant_id
- enable_instance_password¶
- Type:
boolean
- Default:
True
Enables returning the instance password in the response of relevant server API calls such as create, rebuild, evacuate, and rescue. If the hypervisor does not support password injection, the password returned will not be correct; in that case, set this option to False.
¶ Group
Name
DEFAULT
enable_instance_password
api_database¶
The Nova API Database is a separate database used for information shared across cells. This database is mandatory since the Mitaka release (13.0.0).
This group should not be configured for the nova-compute
service.
- sqlite_synchronous¶
- Type:
boolean
- Default:
True
If True, SQLite uses synchronous mode.
- backend¶
- Type:
string
- Default:
sqlalchemy
The back end to use for the database.
- connection¶
- Type:
string
- Default:
<None>
The SQLAlchemy connection string to use to connect to the database.
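For example, a MySQL-backed API database might use a connection string like the following; the host name, user, and password are placeholders:

```ini
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
```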
- slave_connection¶
- Type:
string
- Default:
<None>
The SQLAlchemy connection string to use to connect to the slave database.
- mysql_sql_mode¶
- Type:
string
- Default:
TRADITIONAL
The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
- mysql_wsrep_sync_wait¶
- Type:
integer
- Default:
<None>
For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don’t configure any setting.
- connection_recycle_time¶
- Type:
integer
- Default:
3600
Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.
- max_pool_size¶
- Type:
integer
- Default:
5
Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.
- max_retries¶
- Type:
integer
- Default:
10
Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
- retry_interval¶
- Type:
integer
- Default:
10
Interval between retries of opening a SQL connection.
- max_overflow¶
- Type:
integer
- Default:
50
If set, use this value for max_overflow with SQLAlchemy.
- connection_debug¶
- Type:
integer
- Default:
0
- Minimum Value:
0
- Maximum Value:
100
Verbosity of SQL debugging information: 0=None, 100=Everything.
- connection_trace¶
- Type:
boolean
- Default:
False
Add Python stack traces to SQL as comment strings.
- pool_timeout¶
- Type:
integer
- Default:
<None>
If set, use this value for pool_timeout with SQLAlchemy.
- db_retry_interval¶
- Type:
integer
- Default:
1
Seconds between retries of a database transaction.
- db_inc_retry_interval¶
- Type:
boolean
- Default:
True
If True, increases the interval between retries of a database operation up to db_max_retry_interval.
- db_max_retry_interval¶
- Type:
integer
- Default:
10
If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
- db_max_retries¶
- Type:
integer
- Default:
20
Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
- connection_parameters¶
- Type:
string
- Default:
''
Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&…
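As an illustration, extra driver parameters could be appended like this; the parameter names are examples and depend on the database driver in use:

```ini
[api_database]
# appended to the connection URL as ?charset=utf8mb4&connect_timeout=10
connection_parameters = charset=utf8mb4&connect_timeout=10
```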
barbican¶
- barbican_endpoint¶
- Type:
string
- Default:
<None>
Use this endpoint to connect to Barbican, for example: “http://localhost:9311/”
- barbican_api_version¶
- Type:
string
- Default:
<None>
Version of the Barbican API, for example: “v1”
- auth_endpoint¶
- Type:
string
- Default:
http://localhost/identity/v3
Use this endpoint to connect to Keystone
¶ Group
Name
key_manager
auth_url
- retry_delay¶
- Type:
integer
- Default:
1
Number of seconds to wait before retrying poll for key creation completion
- number_of_retries¶
- Type:
integer
- Default:
60
Number of times to retry poll for key creation completion
- verify_ssl¶
- Type:
boolean
- Default:
True
Specifies whether to verify TLS (https) requests. If False, the server’s certificate will not be validated; if True, the certificate can additionally be checked against the CA bundle given by the verify_ssl_path option.
- verify_ssl_path¶
- Type:
string
- Default:
<None>
A path to a bundle of CA certs to check against, or None to let the underlying requests library attempt to locate and use certificates on its own when verify_ssl is True. If verify_ssl is False, this is ignored.
- barbican_endpoint_type¶
- Type:
string
- Default:
public
- Valid Values:
public, internal, admin
Specifies the type of endpoint. Allowed values are: public, internal, and admin.
- barbican_region_name¶
- Type:
string
- Default:
<None>
Specifies the region of the chosen endpoint.
- send_service_user_token¶
- Type:
boolean
- Default:
False
When True, if sending a user token to a REST API, also send a service token.
Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user’s behalf, we include a service token along with the user token. Should the user’s token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware.
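A minimal sketch of enabling service tokens for Barbican calls, assuming a dedicated service user already exists in Keystone; all credential values below are placeholders:

```ini
[barbican]
send_service_user_token = True

[barbican_service_user]
auth_type = password
auth_url = http://controller:5000/v3
username = nova
password = SERVICE_PASSWORD
user_domain_name = Default
```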
barbican_service_user¶
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
Disable verification of the server certificate for HTTPS connections.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
barbican_service_user
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
cache¶
- config_prefix¶
- Type:
string
- Default:
cache.oslo
Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.
- expiration_time¶
- Type:
integer
- Default:
600
Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn’t have an explicit cache expiration time defined for it.
- backend¶
- Type:
string
- Default:
dogpile.cache.null
- Valid Values:
oslo_cache.memcache_pool, oslo_cache.dict, oslo_cache.mongo, oslo_cache.etcd3gw, dogpile.cache.pymemcache, dogpile.cache.memcached, dogpile.cache.pylibmc, dogpile.cache.bmemcached, dogpile.cache.dbm, dogpile.cache.redis, dogpile.cache.redis_sentinel, dogpile.cache.memory, dogpile.cache.memory_pickle, dogpile.cache.null
Cache backend module. For eventlet-based services or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend.
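A typical caching setup following the guidance above, assuming a memcached instance on localhost:

```ini
[cache]
enabled = True
backend = oslo_cache.memcache_pool
memcache_servers = localhost:11211
expiration_time = 600
```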
- backend_argument¶
- Type:
multi-valued
- Default:
''
Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: “<argname>:<value>”.
- proxies¶
- Type:
list
- Default:
[]
Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.
- enabled¶
- Type:
boolean
- Default:
False
Global toggle for caching.
- debug_cache_backend¶
- Type:
boolean
- Default:
False
Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.
- memcache_servers¶
- Type:
list
- Default:
['localhost:11211']
Memcache servers in the format of “host:port”. This is used by backends dependent on Memcached.
If dogpile.cache.memcached or oslo_cache.memcache_pool is used and a given host or domain refers to an IPv6 address, then you should prefix the given address with the address family (inet6), e.g. inet6[::1]:11211, inet6:[fd12:3456:789a:1::1]:11211, inet6:[controller-0.internalapi]:11211. If the address family is not given then these backends will use the default inet address family, which corresponds to IPv4.
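Mixing IPv4 and IPv6 servers might then look like the following; the addresses are illustrative:

```ini
[cache]
backend = dogpile.cache.memcached
memcache_servers = 192.0.2.10:11211,inet6:[fd12:3456:789a:1::1]:11211
```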
- memcache_dead_retry¶
- Type:
integer
- Default:
300
Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
- memcache_socket_timeout¶
- Type:
floating point
- Default:
1.0
Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
- memcache_pool_maxsize¶
- Type:
integer
- Default:
10
Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).
- memcache_pool_unused_timeout¶
- Type:
integer
- Default:
60
Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).
- memcache_pool_connection_get_timeout¶
- Type:
integer
- Default:
10
Number of seconds that an operation will wait to get a memcache client connection.
- memcache_pool_flush_on_reconnect¶
- Type:
boolean
- Default:
False
Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only).
- memcache_sasl_enabled¶
- Type:
boolean
- Default:
False
Enable SASL (Simple Authentication and Security Layer) authentication for memcached.
- memcache_username¶
- Type:
string
- Default:
<None>
The user name to use when SASL is enabled for memcached.
- memcache_password¶
- Type:
string
- Default:
<None>
The password to use when SASL is enabled for memcached.
- redis_server¶
- Type:
string
- Default:
localhost:6379
Redis server in the format of “host:port”
- redis_username¶
- Type:
string
- Default:
<None>
The user name for Redis.
- redis_password¶
- Type:
string
- Default:
<None>
The password for Redis.
- redis_sentinels¶
- Type:
list
- Default:
['localhost:26379']
Redis sentinel servers in the format of “host:port”
- redis_socket_timeout¶
- Type:
floating point
- Default:
1.0
Timeout in seconds for every call to a server. (dogpile.cache.redis and dogpile.cache.redis_sentinel backends only).
- redis_sentinel_service_name¶
- Type:
string
- Default:
mymaster
Service name of the redis sentinel cluster.
- tls_enabled¶
- Type:
boolean
- Default:
False
Global toggle for TLS usage when communicating with the caching servers. Currently supported by dogpile.cache.bmemcached, dogpile.cache.pymemcache, oslo_cache.memcache_pool, dogpile.cache.redis and dogpile.cache.redis_sentinel.
- tls_cafile¶
- Type:
string
- Default:
<None>
Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers’ authenticity. If tls_enabled is False, this option is ignored.
- tls_certfile¶
- Type:
string
- Default:
<None>
Path to a single file in PEM format containing the client’s certificate as well as any number of CA certificates needed to establish the certificate’s authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored.
- tls_keyfile¶
- Type:
string
- Default:
<None>
Path to a single file containing the client’s private key in PEM format. If not specified, the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored.
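Client-side TLS for the cache connection could be sketched as follows; the file paths are placeholders:

```ini
[cache]
tls_enabled = True
tls_cafile = /etc/pki/tls/cache-ca.pem
# only needed when the caching servers require client authentication:
tls_certfile = /etc/pki/tls/cache-client.pem
tls_keyfile = /etc/pki/tls/cache-client.key
```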
- tls_allowed_ciphers¶
- Type:
string
- Default:
<None>
Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. Currently supported by dogpile.cache.bmemcached, dogpile.cache.pymemcache and oslo_cache.memcache_pool.
- enable_socket_keepalive¶
- Type:
boolean
- Default:
False
Global toggle for the socket keepalive of dogpile’s pymemcache backend
- socket_keepalive_idle¶
- Type:
integer
- Default:
1
- Minimum Value:
0
The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero.
- socket_keepalive_interval¶
- Type:
integer
- Default:
1
- Minimum Value:
0
The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero.
- socket_keepalive_count¶
- Type:
integer
- Default:
1
- Minimum Value:
0
The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero.
- enable_retry_client¶
- Type:
boolean
- Default:
False
Enable retry client mechanisms to handle failure. These mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts.
- retry_attempts¶
- Type:
integer
- Default:
2
- Minimum Value:
1
Number of times to attempt an action before failing.
- retry_delay¶
- Type:
floating point
- Default:
0
Number of seconds to sleep between each attempt.
- hashclient_retry_attempts¶
- Type:
integer
- Default:
2
- Minimum Value:
1
Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient’s internal mechanisms.
- hashclient_retry_delay¶
- Type:
floating point
- Default:
1
Time in seconds that should pass between retry attempts in the HashClient’s internal mechanisms.
- dead_timeout¶
- Type:
floating point
- Default:
60
Time in seconds before attempting to add a node back in the pool in the HashClient’s internal mechanisms.
- enforce_fips_mode¶
- Type:
boolean
- Default:
False
Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised. Currently supported by dogpile.cache.bmemcached, dogpile.cache.pymemcache and oslo_cache.memcache_pool.
cinder¶
- catalog_info¶
- Type:
string
- Default:
volumev3::publicURL
Info to match when looking for cinder in the service catalog.
The <service_name> is optional and omitted by default since it should not be necessary in most deployments.
Possible values:
Format is separated values of the form: <service_type>:<service_name>:<endpoint_type>
Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.
Related options:
endpoint_template - Setting this option will override catalog_info
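For example, to match a Cinder entry registered in the service catalog under a specific service name — the service name and region below are assumptions, and the name is usually unnecessary:

```ini
[cinder]
# <service_type>:<service_name>:<endpoint_type>
catalog_info = volumev3:cinderv3:publicURL
os_region_name = RegionOne
```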
- endpoint_template¶
- Type:
string
- Default:
<None>
If this option is set then it will override service catalog lookup with this template for cinder endpoint
Possible values:
URL for cinder endpoint API e.g. http://localhost:8776/v3/%(project_id)s
Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.
Related options:
catalog_info - If endpoint_template is not set, catalog_info will be used.
- os_region_name¶
- Type:
string
- Default:
<None>
Region name of this node. This is used when picking the URL in the service catalog.
Possible values:
Any string representing region name
- http_retries¶
- Type:
integer
- Default:
3
- Minimum Value:
0
Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4.
Possible values:
Any integer value. 0 means connection is attempted only once
- cross_az_attach¶
- Type:
boolean
- Default:
True
Allow attach between instance and volume in different availability zones.
If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova.
This also means care should be taken when booting an instance from a volume where source is not “volume” because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance.
If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request.
By default there is no availability zone restriction on volume attach.
Related options:
[DEFAULT]/default_schedule_zone
- debug¶
- Type:
boolean
- Default:
False
Enable DEBUG logging with cinderclient and os_brick independently of the rest of Nova.
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
Disable verification of the server certificate for HTTPS connections.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
cinder
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee.
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
cinder
user-name
cinder
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
compute¶
- consecutive_build_service_disable_threshold¶
- Type:
integer
- Default:
10
Enables reporting of build failures to the scheduler.
Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher.
Possible values:
Any positive integer enables reporting build failures.
Zero to disable reporting build failures.
Related options:
[filter_scheduler]/build_failure_weight_multiplier
- shutdown_retry_interval¶
- Type:
integer
- Default:
10
- Minimum Value:
1
Time to wait in seconds before resending an ACPI shutdown signal to instances.
The overall time to wait is set by shutdown_timeout.
Possible values:
Any integer greater than 0 in seconds
Related options:
shutdown_timeout
- resource_provider_association_refresh¶
- Type:
integer
- Default:
300
- Minimum Value:
0
- Mutable:
This option can be changed without restarting.
Interval for updating nova-compute-side cache of the compute node resource provider’s inventories, aggregates, and traits.
This option specifies the number of seconds between attempts to update a provider’s inventories, aggregates and traits in the local cache of the compute node.
A value of zero disables cache refresh completely.
The cache can be cleared manually at any time by sending SIGHUP to the compute process, causing it to be repopulated the next time the data is accessed.
Possible values:
Any positive integer in seconds, or zero to disable refresh.
- cpu_shared_set¶
- Type:
string
- Default:
<None>
Mask of host CPUs that can be used for VCPU resources and offloaded emulator threads.
The behavior of this option depends on the definition of the deprecated vcpu_pin_set option.
If vcpu_pin_set is not defined, [compute] cpu_shared_set will be used to provide VCPU inventory and to determine the host CPUs that unpinned instances can be scheduled to. It will also be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share).
If vcpu_pin_set is defined, [compute] cpu_shared_set will only be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share). vcpu_pin_set will be used to provide VCPU inventory and to determine the host CPUs that both pinned and unpinned instances can be scheduled to.
This behavior will be simplified in a future release when vcpu_pin_set is removed.
Possible values:
A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:
cpu_shared_set = "4-12,^8,15"
Related options:
[compute] cpu_dedicated_set: This is the counterpart option for defining where PCPU resources should be allocated from.
vcpu_pin_set: A legacy option whose definition may change the behavior of this option.
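To make the mask syntax concrete, the example above expands as follows:

```ini
[compute]
# "4-12,^8,15" = CPUs 4,5,6,7,9,10,11,12 (8 excluded by the caret) plus 15
cpu_shared_set = 4-12,^8,15
```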
- cpu_dedicated_set¶
- Type:
string
- Default:
<None>
Mask of host CPUs that can be used for PCPU resources.
The behavior of this option affects the behavior of the deprecated vcpu_pin_set option.
If this option is defined, defining vcpu_pin_set will result in an error.
If this option is not defined, vcpu_pin_set will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to.
This behavior will be simplified in a future release when vcpu_pin_set is removed.
Possible values:
A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:
cpu_dedicated_set = "4-12,^8,15"
Related options:
[compute] cpu_shared_set: This is the counterpart option for defining where VCPU resources should be allocated from.
vcpu_pin_set: A legacy option that this option partially replaces.
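A host split between pinned and unpinned workloads might pair the two options like this; the CPU ranges are illustrative, with CPUs 0-1 hypothetically left to the host OS:

```ini
[compute]
cpu_shared_set = 2-7        # VCPU inventory for unpinned instances
cpu_dedicated_set = 8-15    # PCPU inventory for pinned instances
```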
- live_migration_wait_for_vif_plug¶
- Type:
boolean
- Default:
True
Determine if the source compute host should wait for a network-vif-plugged event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host.
Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this.
Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host, a network-vif-plugged event may be triggered and then received on the source compute host, and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor.
Note
The compute service cannot reliably determine which types of virtual interfaces (port.binding:vif_type) will send network-vif-plugged events without an accompanying port binding:host_id change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case; see bug https://launchpad.net/bugs/1755890 for more details.
Possible values:
True: wait for network-vif-plugged events before starting guest transfer
False: do not wait for network-vif-plugged events before starting guest transfer (this is the legacy behavior)
Related options:
[DEFAULT]/vif_plugging_is_fatal: if live_migration_wait_for_vif_plug is True and vif_plugging_timeout is greater than 0, and a timeout is reached, the live migration process will fail with an error, but the guest transfer will not have started to the destination host
[DEFAULT]/vif_plugging_timeout: if live_migration_wait_for_vif_plug is True, this controls the amount of time to wait before timing out and either failing if vif_plugging_is_fatal is True, or simply continuing with the live migration
- max_concurrent_disk_ops¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Number of concurrent disk-IO-intensive operations (glance image downloads, image format conversions, etc.) that we will do in parallel. If this is set too high then response time suffers. The default value of 0 means no limit.
- max_disk_devices_to_attach¶
- Type:
integer
- Default:
-1
- Minimum Value:
-1
Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by a server depends on the bus used. For example, the ide disk bus is limited to 4 attached devices. The configured maximum is enforced during server create, rebuild, evacuate, unshelve, live migrate, and attach volume.
Usually, the disk bus is determined automatically from the device type or disk device, and the virtualization type. However, the disk bus can also be specified via a block device mapping or an image property. See the disk_bus field in Block Device Mapping in Nova for more information about specifying disk bus in a block device mapping, and see https://docs.openstack.org/glance/latest/admin/useful-image-properties.html for more information about the hw_disk_bus image property.
Operators changing [compute]/max_disk_devices_to_attach on a compute service that is hosting servers should be aware that it could cause rebuilds to fail if the maximum is decreased below the number of devices already attached to servers. For example, if server A has 26 devices attached and an operator changes [compute]/max_disk_devices_to_attach to 20, a request to rebuild server A will fail and go into ERROR state because 26 devices are already attached and exceed the new configured maximum of 20.
Operators setting [compute]/max_disk_devices_to_attach should also be aware that during a cold migration, the configured maximum is only enforced in-place and the destination is not checked before the move. This means that if an operator has set a maximum of 26 on compute host A and a maximum of 20 on compute host B, a cold migration of a server with 26 attached devices from compute host A to compute host B will succeed. Then, once the server is on compute host B, a subsequent request to rebuild the server will fail and go into ERROR state because 26 devices are already attached and exceed the configured maximum of 20 on compute host B.
The configured maximum is not enforced on shelved offloaded servers, as they have no compute host.
Warning
If this option is set to 0, the nova-compute service will fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot.
Possible values:
-1 means unlimited
Any integer >= 1 represents the maximum allowed. A value of 0 will cause the nova-compute service to fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot.
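As a sketch, a hypothetical fragment capping each server at 20 attached disk devices (recall that -1, the default, means unlimited and 0 is invalid):

```ini
[compute]
# Enforced during create, rebuild, evacuate, unshelve, live migrate
# and attach volume. -1 = unlimited; 0 prevents nova-compute from starting.
max_disk_devices_to_attach = 20
```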
- provider_config_location¶
- Type:
string
- Default:
/etc/nova/provider_config/
Location of YAML files containing resource provider configuration data.
These files allow the operator to specify additional custom inventory and traits to assign to one or more resource providers.
Additional documentation is available here:
- image_type_exclude_list¶
- Type:
list
- Default:
[]
A list of image formats that should not be advertised as supported by this compute node.
In some situations, it may be desirable to have a compute node refuse to support an expensive or complex image format. This factors into the decisions made by the scheduler about which compute node to select when booted with a given image.
Possible values:
Any glance image disk_format name (e.g. raw, qcow2, etc.)
Related options:
[scheduler] query_placement_for_image_type_support: enables filtering computes based on supported image types; it must be enabled for this option to take effect.
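For illustration, a hypothetical fragment excluding one image format on a compute node (the [compute] group is assumed here), together with the scheduler option that makes the exclusion effective; both values are examples, not recommendations:

```ini
[compute]
# This compute node will not advertise qcow2 support.
image_type_exclude_list = qcow2

[scheduler]
# Required so the scheduler filters computes by supported image types.
query_placement_for_image_type_support = True
```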
- vmdk_allowed_types¶
- Type:
list
- Default:
['streamOptimized', 'monolithicSparse']
A list of VMDK “create-type” subformats that will be allowed. It is recommended to only include single-file-with-sparse-header variants, to avoid potential host file exposure due to the processing of named extents. If this list is empty, no form of VMDK image will be allowed.
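A minimal sketch restricting VMDK images to the single-file streamOptimized subformat only (the [compute] group is assumed here):

```ini
[compute]
# Only the streamOptimized create-type is accepted; an empty list
# would reject all VMDK images.
vmdk_allowed_types = streamOptimized
```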
- packing_host_numa_cells_allocation_strategy¶
- Type:
boolean
- Default:
False
This option controls the allocation strategy used to choose host NUMA cells for placing a VM’s NUMA cells (for VMs with a defined NUMA topology). By default, the host NUMA cell with the most resources consumed will be chosen last for a placement attempt. When the packing_host_numa_cells_allocation_strategy variable is set to False, the host NUMA cell with the most resources available will be used. When set to True, cells with some usage will be packed with VM cells until they are completely exhausted, before a new free host cell is used.
Possible values:
True: Pack VM NUMA cells onto the most-used host NUMA cells.
False: Spread VM NUMA cells across the host NUMA cells with the most resources available.
conductor¶
Options under this group are used to define the Conductor’s communication, which manager should act as a proxy between computes and the database, and finally, how many worker processes will be used.
- workers¶
- Type:
integer
- Default:
<None>
Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.
console¶
Options under this group allow tuning the configuration of the console proxy service.
Note: the configuration of every compute includes a console_host option, which selects the console proxy service to connect to.
- allowed_origins¶
- Type:
list
- Default:
[]
Adds a list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies any values, other than the host, that are allowed in the origin header.
Possible values:
A list where each element is an allowed origin hostname, or an empty list
¶ Group
Name
DEFAULT
console_allowed_origins
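For example, a hypothetical fragment allowing connections whose origin header names extra hostnames (the hostnames are invented):

```ini
[console]
# Additional origin hostnames accepted by the websocket proxy.
allowed_origins = novnc.example.com,console.example.com
```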
- ssl_ciphers¶
- Type:
string
- Default:
<None>
OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. For example:
ssl_ciphers = "kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES"
See the man page for the OpenSSL ciphers command for details of the cipher preference string format and allowed values:
https://www.openssl.org/docs/man1.1.0/man1/ciphers.html
Related options:
[DEFAULT] cert
[DEFAULT] key
- ssl_minimum_version¶
- Type:
string
- Default:
default
- Valid Values:
default, tlsv1_1, tlsv1_2, tlsv1_3
Minimum allowed SSL/TLS protocol version.
Related options:
[DEFAULT] cert
[DEFAULT] key
Possible values
- default
Use the underlying system OpenSSL defaults
- tlsv1_1
Require TLS v1.1 or greater for TLS connections
- tlsv1_2
Require TLS v1.2 or greater for TLS connections
- tlsv1_3
Require TLS v1.3 or greater for TLS connections
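Tying the two TLS options together, a hypothetical fragment requiring TLS v1.2 or greater with a restricted cipher preference string (the cipher string mirrors the ssl_ciphers example above):

```ini
[console]
ssl_minimum_version = tlsv1_2
ssl_ciphers = kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES
```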
consoleauth¶
- token_ttl¶
- Type:
integer
- Default:
600
- Minimum Value:
0
The lifetime of a console auth token (in seconds).
A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted.
¶ Group
Name
DEFAULT
console_token_ttl
- enforce_session_timeout¶
- Type:
boolean
- Default:
False
Enable or disable enforcing a session timeout for the VM console.
This allows operators to enforce a console session timeout. When set to True, Nova will automatically close the console session at the server end once token_ttl expires, providing enhanced control over console session duration.
cors¶
- allowed_origin¶
- Type:
list
- Default:
<None>
Indicate whether this resource may be shared with the domain received in the request’s “origin” header. Format: “<protocol>://<host>[:<port>]”, no trailing slash. Example: https://horizon.example.com
- allow_credentials¶
- Type:
boolean
- Default:
True
Indicate that the actual request can include user credentials
- expose_headers¶
- Type:
list
- Default:
['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Nova-API-Version', 'OpenStack-API-Version']
Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
- max_age¶
- Type:
integer
- Default:
3600
Maximum cache age of CORS preflight requests.
- allow_methods¶
- Type:
list
- Default:
['GET', 'PUT', 'POST', 'DELETE', 'PATCH']
Indicate which methods can be used during the actual request.
- allow_headers¶
- Type:
list
- Default:
['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Nova-API-Version', 'OpenStack-API-Version']
Indicate which header field names may be used during the actual request.
cyborg¶
Configuration options for Cyborg (accelerator as a service).
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPS connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
If True, HTTPS connections are not verified.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- service_type¶
- Type:
string
- Default:
accelerator
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
['internal', 'public']
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of retriable HTTP status codes that should be retried. If not set, defaults to [503].
database¶
The Nova Database is the primary database which is used for information local to a cell.
This group should not be configured for the nova-compute
service.
- sqlite_synchronous¶
- Type:
boolean
- Default:
True
If True, SQLite uses synchronous mode.
- backend¶
- Type:
string
- Default:
sqlalchemy
The back end to use for the database.
- connection¶
- Type:
string
- Default:
<None>
The SQLAlchemy connection string to use to connect to the database.
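For example, a typical MySQL connection string (credentials and host are placeholders, not defaults):

```ini
[database]
# Placeholder credentials and host; adjust for your deployment.
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```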
- slave_connection¶
- Type:
string
- Default:
<None>
The SQLAlchemy connection string to use to connect to the slave database.
- mysql_sql_mode¶
- Type:
string
- Default:
TRADITIONAL
The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
- mysql_wsrep_sync_wait¶
- Type:
integer
- Default:
<None>
For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don’t configure any setting.
- connection_recycle_time¶
- Type:
integer
- Default:
3600
Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.
- max_pool_size¶
- Type:
integer
- Default:
5
Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.
- max_retries¶
- Type:
integer
- Default:
10
Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
- retry_interval¶
- Type:
integer
- Default:
10
Interval between retries of opening a SQL connection.
- max_overflow¶
- Type:
integer
- Default:
50
If set, use this value for max_overflow with SQLAlchemy.
- connection_debug¶
- Type:
integer
- Default:
0
- Minimum Value:
0
- Maximum Value:
100
Verbosity of SQL debugging information: 0=None, 100=Everything.
- connection_trace¶
- Type:
boolean
- Default:
False
Add Python stack traces to SQL as comment strings.
- pool_timeout¶
- Type:
integer
- Default:
<None>
If set, use this value for pool_timeout with SQLAlchemy.
- db_retry_interval¶
- Type:
integer
- Default:
1
Seconds between retries of a database transaction.
- db_inc_retry_interval¶
- Type:
boolean
- Default:
True
If True, increases the interval between retries of a database operation up to db_max_retry_interval.
- db_max_retry_interval¶
- Type:
integer
- Default:
10
If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
- db_max_retries¶
- Type:
integer
- Default:
20
Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
- connection_parameters¶
- Type:
string
- Default:
''
Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&…
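A hypothetical fragment appending two parameters to the connection URL (the parameter names are illustrative, not a recommendation):

```ini
[database]
# Appended verbatim to the connection URL at connect time.
connection_parameters = charset=utf8mb4&read_timeout=60
```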
devices¶
- enabled_mdev_types¶
- Type:
list
- Default:
[]
The mdev types enabled in the compute node.
Some hardware (e.g. NVIDIA GRID K1) supports different mdev types. Users can use this option to specify a list of enabled mdev types that may be assigned to a guest instance.
If more than one mdev type is provided, then for each mdev type an additional section, [mdev_$(MDEV_TYPE)], must be added to the configuration file. Each section can then be configured with a single configuration option, device_addresses, which should be a list of PCI addresses corresponding to the physical GPU(s) or mdev-capable hardware to assign to this type. If device_addresses is not provided, then the related GPU type will be the default for all discovered GPUs that aren’t used by other types.
If one or more sections are missing (meaning that a specific type is not wanted for at least one physical device), then Nova will only use the first type that was provided by [devices]/enabled_mdev_types.
If two or more sections are not set with device_addresses values, then only the first one will be used to default all the non-defined GPUs to that type.
If the same PCI address is provided for two different types, nova-compute will raise an InvalidLibvirtMdevConfig exception at restart.
As an interim measure, old configuration groups named [vgpu_$(MDEV_TYPE)] will be accepted. A valid configuration could then be:
[devices]
enabled_mdev_types = nvidia-35, nvidia-36

[mdev_nvidia-35]
device_addresses = 0000:84:00.0,0000:85:00.0

[vgpu_nvidia-36]
device_addresses = 0000:86:00.0
Another valid configuration could be:
[devices]
enabled_mdev_types = nvidia-35, nvidia-36

[mdev_nvidia-35]

[mdev_nvidia-36]
device_addresses = 0000:86:00.0
¶ Group
Name
devices
enabled_vgpu_types
ephemeral_storage_encryption¶
- enabled¶
- Type:
boolean
- Default:
False
Enables/disables LVM ephemeral storage encryption.
- cipher¶
- Type:
string
- Default:
aes-xts-plain64
Cipher-mode string to be used.
The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: “<cipher>-<chainmode>-<ivmode>”.
Possible values:
Any crypto option listed in
/proc/crypto
.
- key_size¶
- Type:
integer
- Default:
512
- Minimum Value:
1
Encryption key length in bits.
The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for encryption key.
- default_format¶
- Type:
string
- Default:
luks
- Valid Values:
luks
Default ephemeral encryption format.
Only ‘luks’ is supported at this time.
Note that this does not apply to LVM ephemeral storage encryption.
filter_scheduler¶
- host_subset_size¶
- Type:
integer
- Default:
1
- Minimum Value:
1
Size of subset of best hosts selected by scheduler.
New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option.
Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request.
Possible values:
An integer, where the integer corresponds to the size of a host subset.
- max_io_ops_per_host¶
- Type:
integer
- Default:
8
- Minimum Value:
0
The number of instances that can be actively performing IO on a host.
Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve.
Note that this setting only affects scheduling if the IoOpsFilter filter is enabled.
Possible values:
An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host.
Related options:
[filter_scheduler] enabled_filters
- max_instances_per_host¶
- Type:
integer
- Default:
50
- Minimum Value:
1
Maximum number of instances that can exist on a host.
If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option’s value.
Note that this setting only affects scheduling if the NumInstancesFilter or AggregateNumInstancesFilter filter is enabled.
Possible values:
An integer, where the integer corresponds to the max instances that can be scheduled on a host.
Related options:
[filter_scheduler] enabled_filters
- track_instance_changes¶
- Type:
boolean
- Default:
True
Enable querying of individual hosts for instance information.
The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host.
If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead.
Note
In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. See also the [workarounds] disable_group_policy_check_upcall option.
Related options:
[filter_scheduler] enabled_filters
[workarounds] disable_group_policy_check_upcall
- available_filters¶
- Type:
multi-valued
- Default:
nova.scheduler.filters.all_filters
Filters that the scheduler can use.
An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the [filter_scheduler] enabled_filters option will be used, but any filter appearing in that option must also be included in this list.
By default, this is set to all filters that are included with nova.
Possible values:
A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host
Related options:
[filter_scheduler] enabled_filters
- enabled_filters¶
- Type:
list
- Default:
['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter']
Filters that the scheduler will use.
An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient.
All of the filters in this option must be present in the [filter_scheduler] available_filters option, or a SchedulerHostFilterNotFound exception will be raised.
Possible values:
A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host
Related options:
[filter_scheduler] available_filters
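As a sketch, a hypothetical fragment that adds NUMATopologyFilter to the default filter set; every name listed must also appear in available_filters:

```ini
[filter_scheduler]
# Order matters: place the most restrictive filters first.
enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter
```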
- weight_classes¶
- Type:
list
- Default:
['nova.scheduler.weights.all_weighers']
Weighers that the scheduler will use.
Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is [filter_scheduler] host_subset_size.
By default, this is set to all weighers that are included with Nova.
Possible values:
A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host
- ram_weight_multiplier¶
- Type:
floating point
- Default:
1.0
RAM weight multiplier ratio.
This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.
Note that this setting only affects scheduling if the RAMWeigher weigher is enabled.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[filter_scheduler] weight_classes
- cpu_weight_multiplier¶
- Type:
floating point
- Default:
1.0
CPU weight multiplier ratio.
Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading.
Note that this setting only affects scheduling if the CPUWeigher weigher is enabled.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[filter_scheduler] weight_classes
- disk_weight_multiplier¶
- Type:
floating point
- Default:
1.0
Disk weight multiplier ratio.
Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread.
Note that this setting only affects scheduling if the DiskWeigher weigher is enabled.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
- hypervisor_version_weight_multiplier¶
- Type:
floating point
- Default:
1.0
Hypervisor Version weight multiplier ratio.
The multiplier is used for weighting hosts based on the reported hypervisor version. Negative numbers indicate preferring older hosts, the default is to prefer newer hosts to aid with upgrades.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Example:
Strongly prefer older hosts
[filter_scheduler] hypervisor_version_weight_multiplier=-1000
Moderately prefer new hosts
[filter_scheduler] hypervisor_version_weight_multiplier=2.5
Disable weigher influence
[filter_scheduler] hypervisor_version_weight_multiplier=0
Related options:
[filter_scheduler] weight_classes
- num_instances_weight_multiplier¶
- Type:
floating point
- Default:
0.0
Number of instances weight multiplier ratio.
The multiplier is used for weighting hosts based on the reported number of instances they have. Negative numbers indicate preferring hosts with fewer instances (i.e. choosing to spread instances), while positive numbers mean preferring hosts with more instances (i.e. choosing to pack). The default is 0.0, which means that you have to choose a strategy if you want to use this weigher.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Example:
Strongly prefer to pack instances to hosts.
[filter_scheduler] num_instances_weight_multiplier=1000
Softly prefer to spread instances between hosts.
[filter_scheduler] num_instances_weight_multiplier=1.0
Disable weigher influence
[filter_scheduler] num_instances_weight_multiplier=0
Related options:
[filter_scheduler] weight_classes
- io_ops_weight_multiplier¶
- Type:
floating point
- Default:
-1.0
IO operations weight multiplier ratio.
This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers.
Note that this setting only affects scheduling if the IoOpsWeigher weigher is enabled.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[filter_scheduler] weight_classes
- pci_weight_multiplier¶
- Type:
floating point
- Default:
1.0
- Minimum Value:
0.0
PCI device affinity weight multiplier.
The PCI device affinity weighter computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance.
Note that this setting only affects scheduling if the PCIWeigher weigher and NUMATopologyFilter filter are enabled.
Possible values:
A positive integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[filter_scheduler] weight_classes
- soft_affinity_weight_multiplier¶
- Type:
floating point
- Default:
1.0
- Minimum Value:
0.0
Multiplier used for weighing hosts for group soft-affinity.
Note that this setting only affects scheduling if the ServerGroupSoftAffinityWeigher weigher is enabled.
Possible values:
A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft affinity.
Related options:
[filter_scheduler] weight_classes
- soft_anti_affinity_weight_multiplier¶
- Type:
floating point
- Default:
1.0
- Minimum Value:
0.0
Multiplier used for weighing hosts for group soft-anti-affinity.
Note that this setting only affects scheduling if the ServerGroupSoftAntiAffinityWeigher weigher is enabled.
Possible values:
A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft anti-affinity.
Related options:
[filter_scheduler] weight_classes
- build_failure_weight_multiplier¶
- Type:
floating point
- Default:
1000000.0
Multiplier used for weighing hosts that have had recent build failures.
This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero.
Note that this setting only affects scheduling if the BuildFailureWeigher weigher is enabled.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[compute] consecutive_build_service_disable_threshold: must be nonzero for a compute to report data considered by this weigher.
[filter_scheduler] weight_classes
- cross_cell_move_weight_multiplier¶
- Type:
floating point
- Default:
1000000.0
Multiplier used for weighing hosts during a cross-cell move.
This option determines how much weight is placed on a host which is within the same source cell when moving a server, for example during cross-cell resize. By default, when moving an instance, the scheduler will prefer hosts within the same cell since cross-cell move operations can be slower and riskier due to the complicated nature of cross-cell migrations.
Note that this setting only affects scheduling if the CrossCellWeigher weigher is enabled. If your cloud is not configured to support cross-cell migrations, then this option has no effect.
The value of this configuration option can be overridden per host aggregate by setting the aggregate metadata key with the same name (cross_cell_move_weight_multiplier).
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Positive values mean the weigher will prefer hosts within the same cell in which the instance is currently running. Negative values mean the weigher will prefer hosts in cells other than the one in which the instance is currently running.
Related options:
[filter_scheduler] weight_classes
- shuffle_best_same_weighed_hosts¶
- Type:
boolean
- Default:
False
Enable spreading the instances between hosts with the same best weight.
Enabling it is beneficial for cases when [filter_scheduler] host_subset_size is 1 (the default), but there is a large number of hosts with the same maximal weight. This scenario is common in Ironic deployments, where there are typically many baremetal nodes with identical weights returned to the scheduler. In such cases, enabling this option will reduce contention and the chance of rescheduling events. At the same time, it will make instance packing (even in the unweighed case) less dense.
- image_properties_default_architecture¶
- Type:
string
- Default:
<None>
- Valid Values:
alpha, armv6, armv7l, armv7b, aarch64, cris, i686, ia64, lm32, m68k, microblaze, microblazeel, mips, mipsel, mips64, mips64el, openrisc, parisc, parisc64, ppc, ppcle, ppc64, ppc64le, ppcemb, s390, s390x, sh4, sh4eb, sparc, sparc64, unicore32, x86_64, xtensa, xtensaeb
The default architecture to be used when using the image properties filter.
When using the
ImagePropertiesFilter
, it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on AARCH64 compute nodes because the user did not specify the hw_architecture property in Glance.
Possible values:
CPU Architectures such as x86_64, aarch64, s390x.
- isolated_images¶
- Type:
list
- Default:
[]
List of UUIDs for images that can only be run on certain hosts.
If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here.
Note that this setting only affects scheduling if the
IsolatedHostsFilter
filter is enabled.
Possible values:
A list of UUID strings, where each string corresponds to the UUID of an image
Related options:
[filter_scheduler] isolated_hosts
[filter_scheduler] restrict_isolated_hosts_to_isolated_images
- isolated_hosts¶
- Type:
list
- Default:
[]
List of hosts that can only run certain images.
If there is a need to restrict some images to only run on certain designated hosts, list those host names here.
Note that this setting only affects scheduling if the
IsolatedHostsFilter
filter is enabled.
Possible values:
A list of strings, where each string corresponds to the name of a host
Related options:
[filter_scheduler] isolated_images
[filter_scheduler] restrict_isolated_hosts_to_isolated_images
- restrict_isolated_hosts_to_isolated_images¶
- Type:
boolean
- Default:
True
Prevent non-isolated images from being built on isolated hosts.
Note that this setting only affects scheduling if the
IsolatedHostsFilter
filter is enabled. Even then, this option doesn’t affect the behavior of requests for isolated images, which will always be restricted to isolated hosts.
Related options:
[filter_scheduler] isolated_images
[filter_scheduler] isolated_hosts
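A combined sketch of the three isolation options (the image UUID and host names are placeholders), assuming the IsolatedHostsFilter is included in [filter_scheduler] enabled_filters:

```ini
[filter_scheduler]
# Placeholder image UUID and host names
isolated_images = 1bea47ed-f6a9-463b-b423-14b9cca9ad27
isolated_hosts = compute-iso-1,compute-iso-2
# Keep isolated hosts exclusively for isolated images
restrict_isolated_hosts_to_isolated_images = true
```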
- aggregate_image_properties_isolation_namespace¶
- Type:
string
- Default:
<None>
Image property namespace for use in the host aggregate.
Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable.
Note that this setting only affects scheduling if the
AggregateImagePropertiesIsolation
filter is enabled.
Possible values:
A string, where the string corresponds to an image property namespace
Related options:
[filter_scheduler] aggregate_image_properties_isolation_separator
- aggregate_image_properties_isolation_separator¶
- Type:
string
- Default:
.
Separator character(s) for image property namespace and name.
When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used.
Note that this setting only affects scheduling if the
AggregateImagePropertiesIsolation
filter is enabled.
Possible values:
A string, where the string corresponds to an image property namespace separator character
Related options:
[filter_scheduler] aggregate_image_properties_isolation_namespace
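As a hypothetical example using a namespace of `isolate` and the default separator, an aggregate metadata key `isolate.os_distro=ubuntu` would require images scheduled to that aggregate's hosts to carry the image property `os_distro=ubuntu`:

```ini
[filter_scheduler]
aggregate_image_properties_isolation_namespace = isolate
aggregate_image_properties_isolation_separator = .
```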
- pci_in_placement¶
- Type:
boolean
- Default:
False
Enable scheduling and claiming PCI devices in Placement.
This can be enabled after
[pci]report_in_placement
is enabled on all compute hosts.
When enabled, the scheduler queries Placement about PCI device availability to select a destination for a server with a PCI request. The scheduler also allocates the selected PCI devices in Placement. Note that this logic does not replace the PCIPassthroughFilter but extends it.
Related options:
[pci] report_in_placement
[pci] alias
[pci] device_spec
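A sketch of the rollout order described above; report_in_placement must be enabled on all computes before the scheduler option is turned on:

```ini
# Step 1: on every compute host
[pci]
report_in_placement = true

# Step 2: once all computes report, on scheduler hosts
[filter_scheduler]
pci_in_placement = true
```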
glance¶
Configuration options for the Image service
- api_servers¶
- Type:
list
- Default:
<None>
List of glance api servers endpoints available to nova.
https is used for SSL-based glance API servers.
NOTE: The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason.
Possible values:
A list of any fully qualified url of the form “scheme://hostname:port[/path]” (i.e. “http://10.0.1.0:9292” or “https://my.glance.server/image”).
Warning
This option is deprecated for removal since 21.0.0. Its value may be silently ignored in the future.
- Reason:
Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution.
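Given the deprecation, a sketch of the preferred keystoneauth-style configuration for the image service endpoint (the hostname is illustrative):

```ini
[glance]
service_type = image
valid_interfaces = internal,public
# Or pin a single endpoint explicitly instead of relying on catalog discovery:
# endpoint_override = http://glance.example.com:9292
```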
- num_retries¶
- Type:
integer
- Default:
3
- Minimum Value:
0
Enable glance operation retries.
Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries.
- verify_glance_signatures¶
- Type:
boolean
- Default:
False
Enable image signature verification.
nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers.
Related options:
The options in the key_manager group, as the key_manager is used for the signature validation.
Both enable_certificate_validation and default_trusted_certificate_ids below depend on this option being enabled.
- enable_certificate_validation¶
- Type:
boolean
- Default:
False
Enable certificate validation for image signature verification.
During image signature verification nova will first verify the validity of the image’s signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy.
Related options:
This option only takes effect if verify_glance_signatures is enabled.
The value of default_trusted_certificate_ids may be used when this option is enabled.
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
- Reason:
This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together.
- default_trusted_certificate_ids¶
- Type:
list
- Default:
[]
List of certificate IDs for certificates that should be trusted.
May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled, the user must provide a list of trusted certificate IDs, otherwise certificate validation will fail.
Related options:
The value of this option may be used if both verify_glance_signatures and enable_certificate_validation are enabled.
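Putting the signature-related options together, a sketch (the certificate ID is a placeholder):

```ini
[glance]
verify_glance_signatures = true
enable_certificate_validation = true
# Placeholder certificate ID from the key manager
default_trusted_certificate_ids = 674736e3-f25c-405c-8362-bbf991e0ce0a
```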
- enable_rbd_download¶
- Type:
boolean
- Default:
False
Enable Glance image downloads directly via RBD.
Allow non-rbd computes using local storage to download and cache images from Ceph via rbd rather than the Glance API via http.
Note
This option should only be enabled when the compute itself is not also using Ceph as a backing store. For example with the libvirt driver it should only be enabled when
libvirt.images_type
is not set to rbd.
Related options:
- rbd_user¶
- Type:
string
- Default:
''
The RADOS client name for accessing Glance images stored as rbd volumes.
Related options:
This option is only used if
glance.enable_rbd_download
is set to True.
- rbd_connect_timeout¶
- Type:
integer
- Default:
5
The RADOS client timeout in seconds when initially connecting to the cluster.
Related options:
This option is only used if
glance.enable_rbd_download
is set to True.
- rbd_pool¶
- Type:
string
- Default:
''
The RADOS pool in which the Glance images are stored as rbd volumes.
Related options:
This option is only used if
glance.enable_rbd_download
is set to True.
- rbd_ceph_conf¶
- Type:
string
- Default:
''
Path to the ceph configuration file to use.
Related options:
This option is only used if
glance.enable_rbd_download
is set to True.
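A sketch tying the direct-RBD download options together (the client name, pool, and configuration path are illustrative and must match your Ceph deployment); this assumes the compute itself does not use rbd as its image backend:

```ini
[glance]
enable_rbd_download = true
rbd_user = glance
rbd_pool = images
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_connect_timeout = 5
```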
- debug¶
- Type:
boolean
- Default:
False
Enable or disable debug logging with glanceclient.
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
Verify HTTPS connections.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- service_type¶
- Type:
string
- Default:
image
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
['internal', 'public']
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of retriable HTTP status codes that should be retried. If not set, defaults to [503].
guestfs¶
libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used/free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks and resizing disks.
- debug¶
- Type:
boolean
- Default:
False
Enable/disables guestfs logging.
This configures guestfs to output debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. To use this feature, the “libguestfs” package must be installed.
Related options:
Since libguestfs accesses and modifies VMs managed by libvirt, the following options should be set to grant access to those VMs.
libvirt.inject_key
libvirt.inject_partition
libvirt.inject_password
healthcheck¶
- path¶
- Type:
string
- Default:
/healthcheck
The path to respond to healthcheck requests on.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
- detailed¶
- Type:
boolean
- Default:
False
Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies.
- backends¶
- Type:
list
- Default:
[]
Additional backends that can perform health checks and report that information back as part of a request.
- allowed_source_ranges¶
- Type:
list
- Default:
[]
A list of network addresses used to limit the source IPs allowed to access healthcheck information. Any request from an IP outside these network addresses is ignored.
- ignore_proxied_requests¶
- Type:
boolean
- Default:
False
Ignore requests with proxy headers.
- disable_by_file_path¶
- Type:
string
- Default:
<None>
Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin.
- disable_by_file_paths¶
- Type:
list
- Default:
[]
Check the presence of a file based on a port to determine if an application is running on a port. Expects a “port:path” list of strings. Used by DisableByFilesPortsHealthcheck plugin.
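A sketch combining the healthcheck options above (the file path and network range are illustrative):

```ini
[healthcheck]
path = /healthcheck
detailed = false
# Taking the node out of rotation is done by creating this file
disable_by_file_path = /etc/nova/healthcheck_disable
# Only allow monitoring networks to query the endpoint
allowed_source_ranges = 10.0.0.0/8
```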
image_cache¶
A collection of options specific to image caching.
- manager_interval¶
- Type:
integer
- Default:
2400
- Minimum Value:
-1
Number of seconds to wait between runs of the image cache manager.
Note that when using shared storage for the
[DEFAULT]/instances_path
configuration option across multiple nova-compute services, this periodic task could process a large number of instances. Similarly, using a compute driver that manages a cluster (like vmwareapi.VMwareVCDriver) could result in processing a large number of instances. Therefore you may need to adjust the time interval for the anticipated load, or only run on one nova-compute service within a shared storage aggregate. Note also that every time the image_cache_manager runs, the timestamps of images in [DEFAULT]/instances_path are updated.
Possible values:
0: run at the default interval of 60 seconds (not recommended)
-1: disable
Any other value
Related options:
[DEFAULT]/compute_driver
[DEFAULT]/instances_path
¶ Group
Name
DEFAULT
image_cache_manager_interval
- subdirectory_name¶
- Type:
string
- Default:
_base
Location of cached images.
This is NOT the full path - just a folder name relative to ‘$instances_path’. For per-compute-host cached images, set to ‘_base_$my_ip’
¶ Group
Name
DEFAULT
image_cache_subdirectory_name
- remove_unused_base_images¶
- Type:
boolean
- Default:
True
Should unused base images be removed?
When there are no remaining instances on the hypervisor created from this base image or linked to it, the base image is considered unused.
¶ Group
Name
DEFAULT
remove_unused_base_images
- remove_unused_original_minimum_age_seconds¶
- Type:
integer
- Default:
86400
Unused unresized base images younger than this will not be removed.
¶ Group
Name
DEFAULT
remove_unused_original_minimum_age_seconds
- remove_unused_resized_minimum_age_seconds¶
- Type:
integer
- Default:
3600
Unused resized base images younger than this will not be removed.
¶ Group
Name
libvirt
remove_unused_resized_minimum_age_seconds
- precache_concurrency¶
- Type:
integer
- Default:
1
- Minimum Value:
1
Maximum number of compute hosts to trigger image precaching in parallel.
When an image precache request is made, compute nodes will be contacted to initiate the download. This number constrains the number of those that will happen in parallel. Higher numbers will cause more computes to work in parallel and may result in reduced time to complete the operation, but may also DDoS the image service. Lower numbers will result in more sequential operation, lower image service load, but likely longer runtime to completion.
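For reference, a sketch of an [image_cache] section using mostly default values (the precache concurrency of 4 is illustrative):

```ini
[image_cache]
manager_interval = 2400
subdirectory_name = _base
remove_unused_base_images = true
remove_unused_original_minimum_age_seconds = 86400
precache_concurrency = 4
```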
ironic¶
Configuration options for the Ironic driver (Bare Metal). If using the Ironic driver, the following options must be set:
auth_type
auth_url
project_name
username
password
project_domain_id or project_domain_name
user_domain_id or user_domain_name
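A sketch of the required [ironic] authentication options listed above (the URL, names, and password are placeholders):

```ini
[ironic]
auth_type = password
auth_url = http://keystone.example.com:5000/v3
project_name = service
username = ironic
password = SECRET_PLACEHOLDER
project_domain_name = Default
user_domain_name = Default
```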
- api_max_retries¶
- Type:
integer
- Default:
60
- Minimum Value:
0
The number of times to retry when a request conflicts. If set to 0, only try once, with no retries.
Related options:
api_retry_interval
- api_retry_interval¶
- Type:
integer
- Default:
2
- Minimum Value:
0
The number of seconds to wait before retrying the request.
Related options:
api_max_retries
- serial_console_state_timeout¶
- Type:
integer
- Default:
10
- Minimum Value:
0
Timeout (seconds) to wait for a node serial console state change. Set to 0 to disable the timeout.
- conductor_group¶
- Type:
string
- Default:
<None>
- Mutable:
This option can be changed without restarting.
Case-insensitive key to limit the set of nodes that may be managed by this service to the set of nodes in Ironic which have a matching conductor_group property. If unset, all available nodes will be eligible to be managed by this service. Note that setting this to the empty string (
""
) will match the default conductor group, and is different from leaving the option unset.
¶ Group
Name
ironic
partition_key
- shard¶
- Type:
string
- Default:
<None>
Specify which ironic shard this nova-compute will manage. This allows you to shard Ironic nodes between compute services across conductors and conductor groups. When a shard is set, the peer_list configuration is ignored. We require that there is at most one nova-compute service for each shard.
- peer_list¶
- Type:
list
- Default:
[]
List of hostnames for all nova-compute services (including this host) with this conductor_group config value. Nodes matching the conductor_group value will be distributed between all services specified here. If conductor_group is unset, this option is ignored.
Warning
This option is deprecated for removal since 28.0.0. Its value may be silently ignored in the future.
- Reason:
We do not recommend using nova-compute HA, please use passive failover of a single nova-compute service instead.
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
Verify HTTPS connections.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
ironic
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
ironic
user-name
ironic
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- service_type¶
- Type:
string
- Default:
baremetal
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
['internal', 'public']
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of retriable HTTP status codes that should be retried. If not set, defaults to [503].
key_manager¶
- fixed_key¶
- Type:
string
- Default:
<None>
Fixed key returned by key manager, specified in hex.
Possible values:
Empty string or a key in hex value
¶ Group
Name
keymgr
fixed_key
- backend¶
- Type:
string
- Default:
barbican
Specify the key manager implementation. Options are “barbican” and “vault”. Default is “barbican”. Values previously set using [key_manager]/api_class will continue to be supported for some time.
¶ Group
Name
key_manager
api_class
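A minimal sketch selecting the default backend; note that fixed_key places key material directly in the configuration file, so it is generally only suitable for testing:

```ini
[key_manager]
backend = barbican
# For testing only: a fixed hex key instead of a real key manager
# fixed_key = 0000000000000000000000000000000000000000000000000000000000000000
```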
- auth_type¶
- Type:
string
- Default:
<None>
The type of authentication credential to create. Possible values are ‘token’, ‘password’, ‘keystone_token’, and ‘keystone_password’. Required if no context is passed to the credential factory.
- token¶
- Type:
string
- Default:
<None>
Token for authentication. Required for ‘token’ and ‘keystone_token’ auth_type if no context is passed to the credential factory.
- username¶
- Type:
string
- Default:
<None>
Username for authentication. Required for ‘password’ auth_type. Optional for the ‘keystone_password’ auth_type.
- password¶
- Type:
string
- Default:
<None>
Password for authentication. Required for ‘password’ and ‘keystone_password’ auth_type.
- auth_url¶
- Type:
string
- Default:
<None>
Use this endpoint to connect to Keystone.
- user_id¶
- Type:
string
- Default:
<None>
User ID for authentication. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- user_domain_id¶
- Type:
string
- Default:
<None>
User’s domain ID for authentication. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- user_domain_name¶
- Type:
string
- Default:
<None>
User’s domain name for authentication. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- trust_id¶
- Type:
string
- Default:
<None>
Trust ID for trust scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- domain_id¶
- Type:
string
- Default:
<None>
Domain ID for domain scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- domain_name¶
- Type:
string
- Default:
<None>
Domain name for domain scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- project_id¶
- Type:
string
- Default:
<None>
Project ID for project scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- project_name¶
- Type:
string
- Default:
<None>
Project name for project scoping. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- project_domain_id¶
- Type:
string
- Default:
<None>
Project’s domain ID for project. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- project_domain_name¶
- Type:
string
- Default:
<None>
Project’s domain name for project. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
- reauthenticate¶
- Type:
boolean
- Default:
True
Allow fetching a new token if the current one is going to expire. Optional for ‘keystone_token’ and ‘keystone_password’ auth_type.
keystone¶
Configuration options for the identity service
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
Verify HTTPS connections.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
keystone
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
keystone
user-name
keystone
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
- service_type¶
- Type:
string
- Default:
identity
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
['internal', 'public']
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of retriable HTTP status codes that should be retried. If not set, defaults to [503].
keystone_authtoken¶
- www_authenticate_uri¶
- Type:
string
- Default:
<None>
Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint.
¶ Group
Name
keystone_authtoken
auth_uri
- auth_uri¶
- Type:
string
- Default:
<None>
Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release.
Warning
This option is deprecated for removal since Queens. Its value may be silently ignored in the future.
- Reason:
The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release.
- auth_version¶
- Type:
string
- Default:
<None>
API version of the Identity API endpoint.
- interface¶
- Type:
string
- Default:
internal
Interface to use for the Identity API endpoint. Valid values are “public”, “internal” (default) or “admin”.
- delay_auth_decision¶
- Type:
boolean
- Default:
False
Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
- http_connect_timeout¶
- Type:
integer
- Default:
<None>
Request timeout value for communicating with Identity API server.
- http_request_max_retries¶
- Type:
integer
- Default:
3
How many times to attempt reconnecting when communicating with the Identity API server.
- cache¶
- Type:
string
- Default:
<None>
Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the
memcached_servers
option instead.
- certfile¶
- Type:
string
- Default:
<None>
Required if identity server requires client certificate
- keyfile¶
- Type:
string
- Default:
<None>
Required if identity server requires client certificate
- cafile¶
- Type:
string
- Default:
<None>
A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs.
- insecure¶
- Type:
boolean
- Default:
False
Verify HTTPS connections.
- region_name¶
- Type:
string
- Default:
<None>
The region in which the identity server can be found.
- memcached_servers¶
- Type:
list
- Default:
<None>
Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
¶ Group
Name
keystone_authtoken
memcache_servers
- token_cache_time¶
- Type:
integer
- Default:
300
In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
- memcache_security_strategy¶
- Type:
string
- Default:
None
- Valid Values:
None, MAC, ENCRYPT
(Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
- memcache_secret_key¶
- Type:
string
- Default:
<None>
(Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
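As a sketch of how these two options work together, enabling authenticated and encrypted token caching requires a strategy plus a key; the server address and key below are placeholders:

```ini
[keystone_authtoken]
memcached_servers = 192.0.2.10:11211
# ENCRYPT authenticates (HMAC) and encrypts cached token data.
memcache_security_strategy = ENCRYPT
# Mandatory once a security strategy is defined; used for key derivation.
memcache_secret_key = change-me-placeholder
```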
- memcache_pool_dead_retry¶
- Type:
integer
- Default:
300
(Optional) Number of seconds memcached server is considered dead before it is tried again.
- memcache_pool_maxsize¶
- Type:
integer
- Default:
10
(Optional) Maximum total number of open connections to every memcached server.
- memcache_pool_socket_timeout¶
- Type:
integer
- Default:
3
(Optional) Socket timeout in seconds for communicating with a memcached server.
- memcache_pool_unused_timeout¶
- Type:
integer
- Default:
60
(Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
- memcache_pool_conn_get_timeout¶
- Type:
integer
- Default:
10
(Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
- memcache_use_advanced_pool¶
- Type:
boolean
- Default:
True
(Optional) Use the advanced (eventlet safe) memcached client pool.
- include_service_catalog¶
- Type:
boolean
- Default:
True
(Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
- enforce_token_bind¶
- Type:
string
- Default:
permissive
Used to control the use and type of token binding. Can be set to: “disabled” to not check token binding; “permissive” (default) to validate binding information if the bind type is of a form known to the server, and ignore it if not; “strict”, like “permissive” but reject the token if the bind type is unknown; “required”, meaning some form of token binding must be present; or, finally, the name of a specific binding method that must be present in tokens.
- service_token_roles¶
- Type:
list
- Default:
['service']
A list of roles that must be present in a service token. Service tokens are allowed to request that an expired token be used, so this check should ensure that only actual services send such tokens. Roles here are applied as an ANY check, so at least one role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check.
- service_token_roles_required¶
- Type:
boolean
- Default:
False
For backwards compatibility, valid service tokens that fail the service_token_roles check are still accepted. Setting this to true rejects such tokens; it will become the default in a future release and should be enabled if possible.
- service_type¶
- Type:
string
- Default:
<None>
The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
keystone_authtoken
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
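Tying the group together, a minimal [keystone_authtoken] section might look like the sketch below. All host names and credentials are placeholders, and the auth_url/username/password/project fields belong to the 'password' auth plugin selected via auth_type, not to the options documented above:

```ini
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
memcached_servers = 192.0.2.10:11211
service_token_roles_required = true
auth_type = password
# The following options are consumed by the loaded 'password' plugin.
auth_url = http://controller:5000/
username = nova
password = change-me-placeholder
project_name = service
user_domain_name = Default
project_domain_name = Default
```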
libvirt¶
Libvirt options allow the cloud administrator to configure the libvirt hypervisor driver to be used within an OpenStack deployment.
Almost all of the libvirt config options are influenced by the virt_type config option, which describes the virtualization type (or so-called domain type) libvirt should use for specific features such as live migration and snapshots.
- rescue_image_id¶
- Type:
string
- Default:
<None>
The ID of the image to boot from to rescue data from a corrupted instance.
If the rescue REST API operation doesn’t provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used.
Possible values:
An ID of an image, or nothing. If it points to an Amazon Machine Image (AMI), consider setting the config options rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used.
Related options:
rescue_kernel_id: If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.
rescue_ramdisk_id: If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.
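If the rescue image uses Amazon's AMI/AKI/ARI format, the three options would typically be set together; the UUIDs below are placeholders:

```ini
[libvirt]
# AMI-style rescue image plus its separate kernel (AKI) and RAM disk (ARI).
rescue_image_id = 11111111-1111-1111-1111-111111111111
rescue_kernel_id = 22222222-2222-2222-2222-222222222222
rescue_ramdisk_id = 33333333-3333-3333-3333-333333333333
```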
- rescue_kernel_id¶
- Type:
string
- Default:
<None>
The ID of the kernel (AKI) image to use with the rescue image.
If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.
Possible values:
An ID of a kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one.
Related options:
rescue_image_id: If that option points to an image in Amazon’s AMI/AKI/ARI image format, it’s useful to use rescue_kernel_id too.
- rescue_ramdisk_id¶
- Type:
string
- Default:
<None>
The ID of the RAM disk (ARI) image to use with the rescue image.
If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image.
Possible values:
An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one.
Related options:
rescue_image_id: If that option points to an image in Amazon’s AMI/AKI/ARI image format, it’s useful to use rescue_ramdisk_id too.
- virt_type¶
- Type:
string
- Default:
kvm
- Valid Values:
kvm, lxc, qemu, parallels
Describes the virtualization type (or so called domain type) libvirt should use.
The choice of this type must match the underlying virtualization strategy you have chosen for this host.
Related options:
connection_uri: depends on this
disk_prefix: depends on this
cpu_mode: depends on this
cpu_models: depends on this
tb_cache_size: depends on this
- connection_uri¶
- Type:
string
- Default:
''
Overrides the default libvirt URI of the chosen virtualization type.
If set, Nova will use this URI to connect to libvirt.
Possible values:
A URI like qemu:///system. This is only necessary if the URI differs from the commonly known URIs for the chosen virtualization type.
Related options:
virt_type: Influences what is used as the default value here.
- inject_password¶
- Type:
boolean
- Default:
False
Allow the injection of an admin password for an instance, only during the create and rebuild processes.
There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call, will be injected as the password for the root user. If no root user is available, the instance won’t be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume.
Linux distribution guest only.
Possible values:
True: Allows the injection.
False: Disallows the injection. Any admin password provided via the REST API will be silently ignored.
Related options:
inject_partition: That option decides how the file system is discovered and used. It can also disable injection entirely.
- inject_key¶
- Type:
boolean
- Default:
False
Allow the injection of an SSH key at boot time.
There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as the SSH key for the root user and appended to the authorized_keys of that user. The SELinux context will be set if necessary. Be aware that the injection is not possible when the instance gets launched from a volume.
This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from the config_drive option or the metadata service.
Linux distribution guest only.
Related options:
inject_partition: That option decides how the file system is discovered and used. It can also disable injection entirely.
- inject_partition¶
- Type:
integer
- Default:
-2
- Minimum Value:
-2
Determines which file system partition data is injected into.
libguestfs is used to inject data. If libguestfs is not able to determine the root partition (because there is more than one, or no, root partition) or cannot mount the file system, it will result in an error and the instance won’t boot.
Possible values:
-2 => disable the injection of data.
-1 => find the root partition with the file system to mount with libguestfs
0 => The image is not partitioned
>0 => The number of the partition to use for the injection
Linux distribution guest only.
Related options:
inject_key: Injection of an SSH key only works if inject_partition has a value greater than or equal to -1.
inject_password: Injection of an admin password only works if inject_partition has a value greater than or equal to -1.
[guestfs]/debug: You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues.
virt_type: If you use lxc as virt_type it will be treated as a single partition image.
- live_migration_scheme¶
- Type:
string
- Default:
<None>
URI scheme for live migration used by the source of live migration traffic.
Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that the hypervisor supports a particular scheme.
Related options:
virt_type: This option is meaningful only when virt_type is set to kvm or qemu.
live_migration_uri: If the live_migration_uri value is not None, the scheme used for live migration is taken from live_migration_uri instead.
- live_migration_inbound_addr¶
- Type:
unknown type
- Default:
<None>
IP address used as the live migration address for this host.
This option indicates the IP address which should be used as the target for live migration traffic when migrating to this hypervisor. This metadata is then used by the source of the live migration traffic to construct a migration URI.
If this option is set to None, the hostname of the migration target compute node will be used.
This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoids the impact on the management network.
- live_migration_uri¶
- Type:
string
- Default:
<None>
Live migration target URI used by the source of live migration traffic.
Override the default libvirt live migration target URI (which is dependent on virt_type). Any included “%s” is replaced with the migration target hostname, or live_migration_inbound_addr if set.
If this option is set to None (which is the default), Nova will automatically generate the live_migration_uri value based on only 4 supported virt_type in following list:
‘kvm’: ‘qemu+tcp://%s/system’
‘qemu’: ‘qemu+tcp://%s/system’
‘parallels’: ‘parallels+tcp://%s/system’
Related options:
live_migration_inbound_addr: If the live_migration_inbound_addr value is not None and live_migration_tunnelled is False, the ip/hostname address of the target compute node is used instead of live_migration_uri as the uri for live migration.
live_migration_scheme: If live_migration_uri is not set, the scheme used for live migration is taken from live_migration_scheme instead.
Warning
This option is deprecated for removal since 15.0.0. Its value may be silently ignored in the future.
- Reason:
live_migration_uri is deprecated for removal in favor of two other options that allow changing the live migration scheme and target URI: live_migration_scheme and live_migration_inbound_addr respectively.
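Given the deprecation above, a configuration that previously set live_migration_uri can usually be expressed with the two replacement options; the address below is illustrative:

```ini
[libvirt]
# Roughly equivalent to live_migration_uri = qemu+tcp://%s/system with a
# dedicated migration address substituted for %s.
live_migration_scheme = tcp
live_migration_inbound_addr = 192.0.2.21
```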
- live_migration_tunnelled¶
- Type:
boolean
- Default:
False
Enable tunnelled migration.
This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor. Enabling this option will definitely impact performance massively.
Note that this option is NOT compatible with use of block migration.
Warning
This option is deprecated for removal since 23.0.0. Its value may be silently ignored in the future.
- Reason:
The “tunnelled live migration” has two inherent limitations: it cannot handle live migration of disks in a non-shared storage setup; and it has a huge performance cost. Both these problems are solved by
live_migration_with_native_tls
(requires a pre-configured TLS environment), which is the recommended approach for securing all live migration streams.
- live_migration_bandwidth¶
- Type:
integer
- Default:
0
Maximum bandwidth (in MiB/s) to be used during migration.
If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details.
- live_migration_downtime¶
- Type:
integer
- Default:
500
- Minimum Value:
100
Target maximum period of time Nova will try to keep the instance paused during the last part of the memory copy, in milliseconds.
Minimum downtime is 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over. This value may be exceeded if there is any reduction on the transfer rate after the VM is paused.
Related options:
live_migration_completion_timeout
- live_migration_downtime_steps¶
- Type:
integer
- Default:
10
- Minimum Value:
3
Number of incremental steps to reach max downtime value.
Minimum number of steps is 3.
- live_migration_downtime_delay¶
- Type:
integer
- Default:
75
- Minimum Value:
3
Time to wait, in seconds, between each step increase of the migration downtime.
Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device.
- live_migration_completion_timeout¶
- Type:
integer
- Default:
800
- Minimum Value:
0
- Mutable:
This option can be changed without restarting.
Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation.
Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts.
Related options:
live_migration_downtime
live_migration_downtime_steps
live_migration_downtime_delay
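As a worked check of the sizing rule above, using the defaults: downtime delay × downtime steps = 75 × 10 = 750 seconds, which is below the per-GiB completion timeout of 800 seconds, so the defaults are internally consistent:

```ini
[libvirt]
# Per GiB of RAM + disk: 800 > 75 * 10 = 750, as recommended.
# Set live_migration_completion_timeout = 0 to disable timeouts.
live_migration_completion_timeout = 800
live_migration_downtime_steps = 10
live_migration_downtime_delay = 75
```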
- live_migration_timeout_action¶
- Type:
string
- Default:
abort
- Valid Values:
abort, force_complete
- Mutable:
This option can be changed without restarting.
This option determines what action will be taken against a VM after live_migration_completion_timeout expires. By default, the live migrate operation will be aborted after the completion timeout. If it is set to force_complete, the compute service will either pause the VM or trigger post-copy, depending on whether post-copy is enabled and available (live_migration_permit_post_copy is set to True).
Related options:
live_migration_completion_timeout
live_migration_permit_post_copy
- live_migration_permit_post_copy¶
- Type:
boolean
- Default:
False
This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.
When permitted, post-copy mode will be automatically activated if we reach the timeout defined by live_migration_completion_timeout and live_migration_timeout_action is set to ‘force_complete’. Note that if you disable the timeout (live_migration_completion_timeout = 0) or choose to use ‘abort’, there will be no automatic switch to post-copy.
The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete.
When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide.
Related options:
live_migration_permit_auto_converge
live_migration_timeout_action
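For example, to have a stalled migration forced to completion via post-copy rather than aborted, the two options are combined as in this sketch:

```ini
[libvirt]
# On completion timeout, trigger post-copy (or pause) instead of aborting.
live_migration_timeout_action = force_complete
live_migration_permit_post_copy = true
```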
- live_migration_permit_auto_converge¶
- Type:
boolean
- Default:
False
This option allows nova to start live migration with auto converge on.
Auto converge throttles down CPU if the progress of an on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use.
Related options:
live_migration_permit_post_copy
- snapshot_image_format¶
- Type:
string
- Default:
<None>
- Valid Values:
raw, qcow2, vmdk, vdi
Determine the snapshot image format when sending to the image service.
If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image.
Possible values
- raw
RAW disk format
- qcow2
KVM default disk format
- vmdk
VMWare default disk format
- vdi
VirtualBox default disk format
- live_migration_with_native_tls¶
- Type:
boolean
- Default:
False
Use QEMU-native TLS encryption when live migrating.
This option will allow both migration stream (guest RAM plus device state) and disk stream to be transported over native TLS, i.e. TLS support built into QEMU.
Prerequisite: TLS environment is configured correctly on all relevant Compute nodes. This means, Certificate Authority (CA), server, client certificates, their corresponding keys, and their file permissions are in place, and are validated.
Notes:
To have encryption for the migration stream and disk stream (also called “block migration”), live_migration_with_native_tls is the preferred config attribute instead of live_migration_tunnelled.
The live_migration_tunnelled option will be deprecated in the long-term for two main reasons: (a) it incurs a huge performance penalty; and (b) it is not compatible with block migration. Therefore, if your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is strongly recommended to use live_migration_with_native_tls.
The live_migration_tunnelled and live_migration_with_native_tls options should not be used at the same time.
Unlike live_migration_tunnelled, live_migration_with_native_tls is compatible with block migration. That is, with this option, the NBD stream, over which disks are migrated to a target host, will be encrypted.
Related options:
live_migration_tunnelled: This transports the migration stream (but not the disk stream) over libvirtd.
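A minimal sketch of the recommended setup, assuming the TLS certificates, keys, and permissions are already in place on all relevant compute nodes:

```ini
[libvirt]
# Encrypt both the migration stream and the NBD disk stream via QEMU TLS.
live_migration_with_native_tls = true
# Must not be combined with native TLS.
live_migration_tunnelled = false
```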
- disk_prefix¶
- Type:
string
- Default:
<None>
Override the default disk prefix for the devices attached to an instance.
If set, this is used to identify a free disk device name for a bus.
Possible values:
Any prefix which will result in a valid disk device name like ‘sda’ or ‘hda’, for example. This is only necessary if the device names differ from the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd.
Related options:
virt_type: Influences which device type is used, which determines the default disk prefix.
- wait_soft_reboot_seconds¶
- Type:
integer
- Default:
120
Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window.
- cpu_mode¶
- Type:
string
- Default:
<None>
- Valid Values:
host-model, host-passthrough, custom, none
Is used to set the CPU mode an instance should have.
If virt_type="kvm|qemu", it will default to host-model, otherwise it will default to none.
Related options:
cpu_models: This should be set ONLY when cpu_mode is set to custom. Otherwise, it would result in an error and the instance launch will fail.
Possible values
- host-model
Clone the host CPU feature flags
- host-passthrough
Use the host CPU model exactly
- custom
Use the CPU model in
[libvirt] cpu_models
- none
Don’t set a specific CPU model. For instances with
[libvirt] virt_type
as KVM/QEMU, the default CPU model from QEMU will be used, which provides a basic set of CPU features that are compatible with most hosts
- cpu_models¶
- Type:
list
- Default:
[]
An ordered list of CPU models the host supports.
It is expected that the list is ordered so that the more common and less advanced CPU models are listed earlier. Here is an example:
SandyBridge,IvyBridge,Haswell,Broadwell, where each later CPU model’s feature set is richer than the previous one’s.
Possible values:
The named CPU models can be found via
virsh cpu-models ARCH
, where ARCH is your host architecture.
Related options:
cpu_mode: This should be set to custom ONLY when you want to configure (via cpu_models) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail.
virt_type: Only the virtualization types kvm and qemu use this.
Note
Be careful to only specify models which can be fully supported in hardware.
¶ Group
Name
libvirt
cpu_model
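For example, a fleet of mixed Intel hosts could advertise an ordered list, least advanced model first; the model names must appear in the virsh cpu-models output for your architecture:

```ini
[libvirt]
# cpu_models is only honoured when cpu_mode is 'custom'.
cpu_mode = custom
cpu_models = SandyBridge,IvyBridge,Haswell,Broadwell
```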
- cpu_model_extra_flags¶
- Type:
list
- Default:
[]
Enable or disable guest CPU flags.
To explicitly enable or disable CPU flags, use the +flag or -flag notation: the + sign will enable the CPU flag for the guest, while a - sign will disable it. If neither + nor - is specified, the flag will be enabled, which is the default behaviour. For example, if you specify the following (assuming the said CPU model and features are supported by the host hardware and software):
[libvirt]
cpu_mode = custom
cpu_models = Cascadelake-Server
cpu_model_extra_flags = -hle, -rtm, +ssbd, mtrr
Nova will disable the hle and rtm flags for the guest, and it will enable ssbd and mtrr (because it was specified with neither + nor - prefix).
The CPU flags are case-insensitive. In the following example, the pdpe1gb flag will be disabled for the guest; the vmx and pcid flags will be enabled:
[libvirt]
cpu_mode = custom
cpu_models = Haswell-noTSX-IBRS
cpu_model_extra_flags = -PDPE1GB, +VMX, pcid
Specifying extra CPU flags is valid in combination with all three possible values of the cpu_mode config attribute: custom (this also requires an explicit CPU model to be specified via the cpu_models config attribute), host-model, or host-passthrough.
There can be scenarios where you may need to configure extra CPU flags even for host-passthrough CPU mode, because sometimes QEMU may disable certain CPU features. An example of this is Intel’s “invtsc” (Invariable Time Stamp Counter) CPU flag: if you need to expose this flag to a Nova instance, you need to explicitly enable it.
The possible values for cpu_model_extra_flags depend on the CPU model in use. Refer to /usr/share/libvirt/cpu_map/*.xml for possible CPU feature flags for a given CPU model.
A special note on a particular CPU flag:
pcid (an Intel processor feature that alleviates guest performance degradation as a result of applying the ‘Meltdown’ CVE fixes). When configuring this flag with the custom CPU mode, not all CPU models (as defined by QEMU and libvirt) need it:
The only virtual CPU models that include the pcid capability are Intel “Haswell”, “Broadwell”, and “Skylake” variants.
The libvirt / QEMU CPU models “Nehalem”, “Westmere”, “SandyBridge”, and “IvyBridge” will _not_ expose the pcid capability by default, even if the host CPUs by the same name include it. I.e. ‘PCID’ needs to be explicitly specified when using the said virtual CPU models.
The libvirt driver’s default CPU mode, host-model, will do the right thing with respect to handling the ‘PCID’ CPU flag for the guest, assuming you are running updated processor microcode, host and guest kernel, libvirt, and QEMU. The other mode, host-passthrough, checks if ‘PCID’ is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in the context of ‘PCID’, with either of these CPU modes (host-model or host-passthrough), there is no need to use cpu_model_extra_flags.
Related options:
cpu_mode
cpu_models
- snapshots_directory¶
- Type:
string
- Default:
$instances_path/snapshots
Location where the libvirt driver will store snapshots before uploading them to the image service
- disk_cachemodes¶
- Type:
list
- Default:
[]
Specific cache modes to use for different disk types.
For example: file=directsync,block=none,network=writeback
For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment.
Possible cache modes:
default: “It Depends”. For Nova-managed disks, none, if the host file system is capable of Linux’s ‘O_DIRECT’ semantics; otherwise writeback. For volume drivers, the default is driver-dependent: none for everything except for SMBFS and Virtuozzo (which use writeback).
none: With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured. However, because the host page cache is disabled, the read performance in the guest would not be as good as in the modes where the host page cache is enabled, such as writethrough mode. Shareable disk devices, like for a multi-attachable block storage volume, will have their cache mode set to ‘none’ regardless of configuration.
writethrough: With caching set to writethrough mode, the host page cache is enabled, but the disk write cache is disabled for the guest. Consequently, this caching mode ensures data integrity even if the applications and storage stack in the guest do not transfer data to permanent storage properly (either through fsync operations or file system barriers). Because the host page cache is enabled in this mode, the read performance for applications running in the guest is generally better. However, the write performance might be reduced because the disk write cache is disabled.
writeback: With caching set to writeback mode, both the host page cache and the disk write cache are enabled for the guest. Because of this, the I/O performance for applications running in the guest is good, but the data is not protected in a power failure. As a result, this caching mode is recommended only for temporary data where potential data loss is not a concern. NOTE: Certain backend disk mechanisms may provide safe writeback cache semantics. Specifically those that bypass the host page cache, such as QEMU’s integrated RBD driver. Ceph documentation recommends setting this to writeback for maximum performance while maintaining data safety.
directsync: Like “writethrough”, but it bypasses the host page cache.
unsafe: Caching mode of unsafe ignores cache transfer operations completely. As its name implies, this caching mode should be used only for temporary data where data loss is not a concern. This mode can be useful for speeding up guest installations, but you should switch to another caching mode in production environments.
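Following the guidance above, a host that serves guests from local image files, passthrough block devices, and network-backed disks might use the example mapping from this section:

```ini
[libvirt]
# directsync for local files, none for block devices; writeback for network
# disks is safe only with backends such as QEMU's integrated RBD driver.
disk_cachemodes = file=directsync,block=none,network=writeback
```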
- rng_dev_path¶
- Type:
string
- Default:
/dev/urandom
The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is /dev/urandom: it is non-blocking, therefore relatively fast, and avoids the limitations of /dev/random, which is a legacy interface. For more details (and a comparison between different RNG sources), refer to the “Usage” section in the Linux kernel API documentation for [u]random: http://man7.org/linux/man-pages/man4/urandom.4.html and http://man7.org/linux/man-pages/man7/random.7.html.
- hw_machine_type¶
- Type:
list
- Default:
<None>
For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the virsh capabilities command. The format of the value for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2.
- sysinfo_serial¶
- Type:
string
- Default:
unique
- Valid Values:
none, os, hardware, auto, unique
The data source used to populate the host “serial” UUID exposed to the guest in the virtual BIOS. All choices except unique will change the serial when migrating the instance to another host. Changing the choice of this option will also affect existing instances on this host once they are stopped and started again. It is recommended to use the default choice (unique) since that will not change when an instance is migrated. However, if you have a need for per-host serials in addition to per-instance serial numbers, then consider restricting flavors via host aggregates.
Possible values
- none
A serial number entry is not added to the guest domain xml.
- os
A UUID serial number is generated from the host /etc/machine-id file.
- hardware
A UUID for the host hardware as reported by libvirt. This is typically from the host SMBIOS data, unless it has been overridden in libvirtd.conf.
- auto
Uses the “os” source if possible, else “hardware”.
- unique
Uses instance UUID as the serial number.
- mem_stats_period_seconds¶
- Type:
integer
- Default:
10
The period, in seconds, for collecting memory usage statistics. A zero or negative value disables memory usage statistics.
- uid_maps¶
- Type:
list
- Default:
[]
List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed.
- gid_maps¶
- Type:
list
- Default:
[]
List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed.
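For LXC guests, both maps use the same guest-id:host-id:count syntax; the ranges below are illustrative:

```ini
[libvirt]
# Map guest uid/gid 0 onto host id 10000, covering 65536 ids.
uid_maps = 0:10000:65536
gid_maps = 0:10000:65536
```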
- realtime_scheduler_priority¶
- Type:
integer
- Default:
1
In a realtime host context, vCPUs for the guest will run at this scheduling priority. The valid priority range depends on the host kernel (usually 1-99).
- enabled_perf_events¶
- Type:
list
- Default:
[]
Performance events to monitor and collect statistics for.
This will allow you to specify a list of events to monitor low-level performance of guests, and collect related statistics via the libvirt driver, which in turn uses the Linux kernel’s
perf
infrastructure. With this config attribute set, Nova will generate libvirt guest XML to monitor the specified events.
For example, to monitor the count of CPU cycles (total/elapsed) and the count of cache misses, enable them as follows:
[libvirt]
enabled_perf_events = cpu_clock, cache_misses
Possible values: A string list. The list of supported events can be found in the libvirt documentation. Note that the Intel CMT events - cmt, mbmt and mbml - are unsupported by recent Linux kernel versions (4.14+) and will be ignored by nova.
- num_pcie_ports¶
- Type:
integer
- Default:
0
- Minimum Value:
0
- Maximum Value:
28
The number of PCIe ports an instance will get.
Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) to be assigned to a target instance. Some will be used by default; the rest will be available for hotplug use.
By default there are only 1-2 free ports, which limits hotplug.
More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt
Due to QEMU limitations, the maximum value for aarch64/virt is set to 28.
The default value of 0 delegates the calculation of the number of ports to libvirt.
- file_backed_memory¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Available capacity in MiB for file-backed memory.
Set to 0 to disable file-backed memory.
When enabled, instances will create memory files in the directory specified by the memory_backing_dir option in /etc/libvirt/qemu.conf. The default location is /var/lib/libvirt/qemu/ram.
When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel's pagecache mechanism.
Note
This feature is not compatible with hugepages.
Note
This feature is not compatible with memory overcommit.
Related options:
virt_type must be set to kvm or qemu.
ram_allocation_ratio must be set to 1.0.
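A minimal sketch of enabling the feature; the 1 TiB capacity value is illustrative, and note that ram_allocation_ratio lives in the [DEFAULT] group:

```ini
[libvirt]
virt_type = kvm
# capacity in MiB reported as node memory; 0 disables file-backed memory
file_backed_memory = 1048576

[DEFAULT]
ram_allocation_ratio = 1.0
```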
- num_memory_encrypted_guests¶
- Type:
integer
- Default:
<None>
- Minimum Value:
0
Maximum number of guests with encrypted memory which can run concurrently on this compute host.
For now this is only relevant for AMD machines which support SEV (Secure Encrypted Virtualization). Such machines have a limited number of slots in their memory controller for storing encryption keys. Each running guest with encrypted memory will consume one of these slots.
The option may be reused for other equivalent technologies in the future. If the machine does not support memory encryption, the option will be ignored and inventory will be set to 0.
If the machine does support memory encryption and this option is not set, the driver detects maximum number of SEV guests from the libvirt API which is available since v8.0.0. Setting this option overrides the detected limit, unless the given value is not larger than the detected limit.
On the other hand, if an older version of libvirt is used, None means an effectively unlimited inventory, i.e. no limit will be imposed by Nova on the number of SEV guests which can be launched, even though the underlying hardware will enforce its own limit.
Note
It is recommended to read the deployment documentation’s section on this option before deciding whether to configure this setting or leave it at the default.
Related options:
libvirt.virt_type must be set to kvm.
It's recommended to consider including x86_64=q35 in libvirt.hw_machine_type; see Enabling SEV for more on this.
- device_detach_attempts¶
- Type:
integer
- Default:
8
- Minimum Value:
1
Maximum number of attempts the driver tries to detach a device in libvirt.
Related options:
- device_detach_timeout¶
- Type:
integer
- Default:
20
- Minimum Value:
1
Maximum number of seconds the driver waits for the success or failure event from libvirt for a given device detach attempt before it re-triggers the detach.
Related options:
- tb_cache_size¶
- Type:
integer
- Default:
<None>
- Minimum Value:
0
QEMU >= 5.0.0 bumped the default tb-cache size to 1 GiB (from 32 MiB), which made it difficult to run multiple guest VMs on systems with lower memory. With libvirt >= 8.0.0 this config option can be used to configure a lower tb-cache size.
Set it to > 0 to configure the tb-cache for guest VMs.
Related options:
compute_driver (libvirt)
virt_type (qemu)
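A sketch of capping the translation-block cache on a memory-constrained QEMU host; the 128 value is illustrative, and the MiB unit is an assumption based on the sizes quoted above:

```ini
[libvirt]
virt_type = qemu
# tb-cache size per guest (assumed MiB); unset keeps QEMU's default (1 GiB on QEMU >= 5.0)
tb_cache_size = 128
```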
- migration_inbound_addr¶
- Type:
string
- Default:
$my_ip
The address used as the migration address for this host.
This option indicates the IP address, hostname, or FQDN which should be used as the target for cold migration, resize, and evacuate traffic when moving to this hypervisor. This metadata is then used by the source of the migration traffic to construct the commands used to copy data (e.g. disk image) to the destination.
An included '%s' is replaced with the hostname of the migration target hypervisor.
Related options:
my_ip
live_migration_inbound_addr
- images_type¶
- Type:
string
- Default:
default
- Valid Values:
raw, flat, qcow2, lvm, rbd, ploop, default
VM image format.
If default is specified, then the use_cow_images flag is used instead of this one.
Related options:
compute.use_cow_images
images_volume_group
[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup
compute.force_raw_images
- images_volume_group¶
- Type:
string
- Default:
<None>
LVM Volume Group that is used for VM images, when you specify images_type=lvm
Related options:
images_type
- sparse_logical_volumes¶
- Type:
boolean
- Default:
False
Create sparse logical volumes (with virtualsize) if this flag is set to True.
Warning
This option is deprecated for removal since 18.0.0. Its value may be silently ignored in the future.
- Reason:
Sparse logical volumes is a feature that is not tested hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes.
- images_rbd_pool¶
- Type:
string
- Default:
rbd
The RADOS pool in which rbd volumes are stored
- images_rbd_ceph_conf¶
- Type:
string
- Default:
''
Path to the ceph configuration file to use
- images_rbd_glance_store_name¶
- Type:
string
- Default:
''
The name of the Glance store that represents the rbd cluster in use by this node. If set, this will allow Nova to request that Glance copy an image from an existing non-local store into the one named by this option before booting so that proper Copy-on-Write behavior is maintained.
Related options:
images_type - must be set to
rbd
images_rbd_glance_copy_poll_interval - controls the status poll frequency
images_rbd_glance_copy_timeout - controls the overall copy timeout
- images_rbd_glance_copy_poll_interval¶
- Type:
integer
- Default:
15
The interval in seconds with which to poll Glance after asking for it to copy an image to the local rbd store. This affects how often we ask Glance to report on copy completion, and thus should be short enough that we notice quickly, but not so aggressive that we generate undue load on the Glance server.
Related options:
images_type - must be set to
rbd
images_rbd_glance_store_name - must be set to a store name
- images_rbd_glance_copy_timeout¶
- Type:
integer
- Default:
600
The overall maximum time we will wait for Glance to complete an image copy to our local rbd store. This should be long enough to allow large images to be copied over the network link between our local store and the one where images typically reside. The only downside of setting this too high is that it takes longer to notice a copy that has stalled or is proceeding too slowly to be useful. Actual errors will be reported by Glance and noticed according to the poll interval.
Related options:
images_type - must be set to
rbd
images_rbd_glance_store_name - must be set to a store name
images_rbd_glance_copy_poll_interval - controls the failure time-to-notice
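Putting these options together, a Ceph-backed compute node might look like the sketch below; the pool and store names are illustrative placeholders for your deployment's values:

```ini
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# ask Glance to pre-copy images into this store before boot (name is illustrative)
images_rbd_glance_store_name = ceph
images_rbd_glance_copy_poll_interval = 15
images_rbd_glance_copy_timeout = 600
```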
- hw_disk_discard¶
- Type:
string
- Default:
<None>
- Valid Values:
ignore, unmap
Discard option for nova managed disks.
Requires:
Libvirt >= 1.0.6
Qemu >= 1.5 (raw format)
Qemu >= 1.6 (qcow2 format)
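For example, to pass guest discard (TRIM) requests through to the backing storage, assuming the libvirt/QEMU versions listed above:

```ini
[libvirt]
# 'unmap' passes guest discard requests through; 'ignore' drops them
hw_disk_discard = unmap
```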
- volume_clear¶
- Type:
string
- Default:
zero
- Valid Values:
zero, shred, none
Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage.
Related options:
images_type - must be set to
lvm
volume_clear_size
Possible values
- zero
Overwrite volumes with zeroes
- shred
Overwrite volumes repeatedly
- none
Do not wipe deleted volumes
- volume_clear_size¶
- Type:
integer
- Default:
0
- Minimum Value:
0
Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using the method set in the volume_clear option.
Possible values:
0 - clear whole volume
>0 - clear specified amount of MiB
Related options:
images_type - must be set to lvm
volume_clear - must be set to a value other than none for this option to have any impact
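A sketch of an LVM-backed host that zeroes only the first GiB of each deleted ephemeral disk; the volume group name and the 1024 value are illustrative:

```ini
[libvirt]
images_type = lvm
# hypothetical volume group name
images_volume_group = nova-volumes
volume_clear = zero
# clear only the first 1024 MiB instead of the whole volume
volume_clear_size = 1024
```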
- snapshot_compression¶
- Type:
boolean
- Default:
False
Enable snapshot compression for qcow2 images.
Note: you can set snapshot_image_format to qcow2 to force all snapshots to be in qcow2 format, independently from their original image type.
Related options:
snapshot_image_format
- use_virtio_for_bridges¶
- Type:
boolean
- Default:
True
Use virtio for bridge interfaces with KVM/QEMU
- volume_use_multipath¶
- Type:
boolean
- Default:
False
Use multipath connection of the iSCSI or FC volume
Volumes can be connected as multipath devices in libvirt. This will provide high availability and fault tolerance.
¶ Group
Name
libvirt
iscsi_use_multipath
- num_volume_scan_tries¶
- Type:
integer
- Default:
5
Number of times to scan given storage protocol to find volume.
¶ Group
Name
libvirt
num_iscsi_scan_tries
- num_aoe_discover_tries¶
- Type:
integer
- Default:
3
Number of times to rediscover AoE target to find volume.
Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device.
- iscsi_iface¶
- Type:
string
- Default:
<None>
The iSCSI transport iface to use to connect to target in case offload support is desired.
Default format is of the form <transport_name>.<hwaddress>, where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs, tcp) and <hwaddress> is the MAC address of the interface, which can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name.
¶ Group
Name
libvirt
iscsi_transport
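A sketch of selecting an offload transport; the MAC address below is a placeholder you would replace with the value reported by iscsiadm -m iface:

```ini
[libvirt]
# <transport_name>.<hwaddress> for an offloaded bnx2i interface (illustrative)
iscsi_iface = bnx2i.00:05:b5:d2:a0:c2
```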
- num_iser_scan_tries¶
- Type:
integer
- Default:
5
Number of times to scan iSER target to find volume.
iSER is a storage network protocol that extends the iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find an iSER volume.
- iser_use_multipath¶
- Type:
boolean
- Default:
False
Use multipath connection of the iSER volume.
iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance.
- rbd_user¶
- Type:
string
- Default:
<None>
The RADOS client name for accessing rbd (RADOS Block Device) volumes.
Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server.
- rbd_secret_uuid¶
- Type:
string
- Default:
<None>
The libvirt UUID of the secret for the rbd_user volumes.
- rbd_connect_timeout¶
- Type:
integer
- Default:
5
The RADOS client timeout in seconds when initially connecting to the cluster.
- rbd_destroy_volume_retry_interval¶
- Type:
integer
- Default:
5
- Minimum Value:
0
Number of seconds to wait between consecutive retries to destroy an RBD volume.
Related options:
[libvirt]/images_type = ‘rbd’
- rbd_destroy_volume_retries¶
- Type:
integer
- Default:
12
- Minimum Value:
0
Number of retries to destroy an RBD volume.
Related options:
[libvirt]/images_type = ‘rbd’
- nfs_mount_point_base¶
- Type:
string
- Default:
$state_path/mnt
Directory where the NFS volume is mounted on the compute node. The default is ‘mnt’ directory of the location where nova’s Python module is installed.
NFS provides shared storage for the OpenStack Block Storage service.
Possible values:
A string representing absolute path of mount point.
- nfs_mount_options¶
- Type:
string
- Default:
<None>
Mount options passed to the NFS client. See the nfs man page for details.
Mount options controls the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point.
Possible values:
Any string representing mount options separated by commas.
Example string: vers=3,lookupcache=pos
- quobyte_mount_point_base¶
- Type:
string
- Default:
$state_path/mnt
Directory where the Quobyte volume is mounted on the compute node.
Nova supports the Quobyte volume driver, which enables storing Block Storage service volumes on a Quobyte storage back end. This option specifies the path of the directory where the Quobyte volume is mounted.
Possible values:
A string representing absolute path of mount point.
- quobyte_client_cfg¶
- Type:
string
- Default:
<None>
Path to a Quobyte Client configuration file.
- smbfs_mount_point_base¶
- Type:
string
- Default:
$state_path/mnt
Directory where the SMBFS shares are mounted on the compute node.
- smbfs_mount_options¶
- Type:
string
- Default:
''
Mount options passed to the SMBFS client.
Provide SMBFS options as a single string containing all parameters. See the mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified.
- remote_filesystem_transport¶
- Type:
string
- Default:
ssh
- Valid Values:
ssh, rsync
libvirt’s transport method for remote file operations.
Because libvirt cannot use RPC to copy files over the network to/from other compute nodes, another method must be used for:
creating directory on remote host
creating file on remote host
removing file from remote host
copying file to remote host
- vzstorage_mount_point_base¶
- Type:
string
- Default:
$state_path/mnt
Directory where the Virtuozzo Storage clusters are mounted on the compute node.
This option defines non-standard mountpoint for Vzstorage cluster.
Related options:
vzstorage_mount_* group of parameters
- vzstorage_mount_user¶
- Type:
string
- Default:
stack
Mount owner user name.
This option defines the owner user of Vzstorage cluster mountpoint.
Related options:
vzstorage_mount_* group of parameters
- vzstorage_mount_group¶
- Type:
string
- Default:
qemu
Mount owner group name.
This option defines the owner group of Vzstorage cluster mountpoint.
Related options:
vzstorage_mount_* group of parameters
- vzstorage_mount_perms¶
- Type:
string
- Default:
0770
Mount access mode.
This option defines the access bits of the Vzstorage cluster mountpoint, in a format similar to that of the chmod(1) utility, for example 0770. It consists of one to four digits ranging from 0 to 7, with missing leading digits assumed to be 0s.
Related options:
vzstorage_mount_* group of parameters
- vzstorage_log_path¶
- Type:
string
- Default:
/var/log/vstorage/%(cluster_name)s/nova.log.gz
Path to vzstorage client log.
This option defines the log of cluster operations; it should include the “%(cluster_name)s” template to separate logs from multiple shares.
Related options:
vzstorage_mount_opts may include more detailed logging options.
- vzstorage_cache_path¶
- Type:
string
- Default:
<None>
Path to the SSD cache file.
You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client's SSD drive, you can increase the overall cluster performance by up to 10 or more times. WARNING! There are many SSD models which are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous, as they may lead to data corruption and inconsistencies. Please consult the manual on which SSD models are known to be safe, or verify it using the vstorage-hwflush-check(1) utility.
This option defines the path which should include “%(cluster_name)s” template to separate caches from multiple shares.
Related options:
vzstorage_mount_opts may include more detailed cache options.
- vzstorage_mount_opts¶
- Type:
list
- Default:
[]
Extra mount options for pstorage-mount
For a full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html The format is a Python string representation of an arguments list, like: “[‘-v’, ‘-R’, ‘500’]”. It should not include -c, -l, -C, -u, -g and -m, as those have explicit vzstorage_* options.
Related options:
All other vzstorage_* options
- rx_queue_size¶
- Type:
unknown type
- Default:
<None>
- Valid Values:
256, 512, 1024
Configure virtio rx queue size.
This option is only usable for virtio-net devices with the vhost and vhost-user backends. Available only with QEMU/KVM. Requires libvirt v2.3 and QEMU v2.7.
- tx_queue_size¶
- Type:
unknown type
- Default:
<None>
- Valid Values:
256, 512, 1024
Configure virtio tx queue size.
This option is only usable for virtio-net devices with the vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 and QEMU v2.10.
- max_queues¶
- Type:
integer
- Default:
<None>
- Minimum Value:
1
The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. By default, this value is set to <None>, meaning the legacy limits based on the reported kernel major version will be used.
- num_nvme_discover_tries¶
- Type:
integer
- Default:
5
Number of times to rediscover NVMe target to find volume
Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device.
- pmem_namespaces¶
- Type:
list
- Default:
[]
Configure persistent memory(pmem) namespaces.
These namespaces must have been already created on the host. This config option is in the following format:
"$LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]]"
$NSNAME is the name of the pmem namespace. $LABEL represents one resource class; it is used to generate the resource class name as CUSTOM_PMEM_NAMESPACE_$LABEL.
For example:
[libvirt]
pmem_namespaces = 128G:ns0|ns1|ns2|ns3,262144MB:ns4|ns5,MEDIUM:ns6|ns7
- swtpm_enabled¶
- Type:
boolean
- Default:
False
Enable emulated TPM (Trusted Platform Module) in guests.
- swtpm_user¶
- Type:
string
- Default:
tss
User that swtpm binary runs as.
When using emulated TPM, the swtpm binary will run to emulate a TPM device. The user this binary runs as depends on libvirt configuration, with tss being the default.
In order to support cold migration and resize, nova needs to know what user the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes.
Related options:
swtpm_group must also be set.
- swtpm_group¶
- Type:
string
- Default:
tss
Group that swtpm binary runs as.
When using emulated TPM, the swtpm binary will run to emulate a TPM device. The group this binary runs as depends on libvirt configuration, with tss being the default.
In order to support cold migration and resize, nova needs to know what group the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes.
Related options:
swtpm_user must also be set.
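A sketch of enabling vTPM with the tss user and group mentioned above; verify that these match what libvirt on your host actually runs swtpm as:

```ini
[libvirt]
swtpm_enabled = True
# must match the user/group libvirt runs swtpm as on this host
swtpm_user = tss
swtpm_group = tss
```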
- cpu_power_management¶
- Type:
boolean
- Default:
False
Use libvirt to manage CPU core performance.
- cpu_power_management_strategy¶
- Type:
string
- Default:
cpu_state
- Valid Values:
cpu_state, governor
Tuning strategy to reduce CPU power consumption when cores are unused.
- cpu_power_governor_low¶
- Type:
string
- Default:
powersave
Governor to use in order to reduce CPU power consumption
- cpu_power_governor_high¶
- Type:
string
- Default:
performance
Governor to use in order to have best CPU performance
metrics¶
Configuration options for metrics
Options under this group allow you to adjust how values assigned to metrics are calculated.
- weight_multiplier¶
- Type:
floating point
- Default:
1.0
Multiplier used for weighing hosts based on reported metrics.
When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:
> 1.0: increases the effect of the metric on overall weight
1.0: no change to the calculated weight
> 0.0, < 1.0: reduces the effect of the metric on overall weight
0.0: the metric value is ignored, and the value of the [metrics] weight_of_unavailable option is returned instead
> -1.0, < 0.0: the effect is reduced and reversed
-1.0: the effect is reversed
< -1.0: the effect is increased proportionally and reversed
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[filter_scheduler] weight_classes
[metrics] weight_of_unavailable
- weight_setting¶
- Type:
list
- Default:
[]
Mapping of metric to weight modifier.
This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more ‘name=ratio’ pairs, separated by commas, where name is the name of the metric to be weighed, and ratio is the relative weight for that metric.
Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the [metrics] weight_of_unavailable option.
As an example, let’s consider the case where this option is set to:
name1=1.0, name2=-1.3
The final weight will be:
(name1.value * 1.0) + (name2.value * -1.3)
Possible values:
A list of zero or more key/value pairs separated by commas, where the key is a string representing the name of a metric and the value is a numeric weight for that metric. If any value is set to 0, the value is ignored and the weight will be set to the value of the [metrics] weight_of_unavailable option.
Related options:
[metrics] weight_of_unavailable
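Tying the [metrics] options together, a sketch using the placeholder metric names from the example above (name1, name2 are illustrative, not real metric names):

```ini
[metrics]
weight_multiplier = 1.0
# final weight = (name1.value * 1.0) + (name2.value * -1.3)
weight_setting = name1=1.0, name2=-1.3
# tolerate hosts missing a metric instead of raising
required = False
weight_of_unavailable = -10000.0
```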
- required¶
- Type:
boolean
- Default:
True
Whether metrics are required.
This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing.
Possible values:
A boolean value, where False ensures any metric being unavailable for a host will set the host weight to [metrics] weight_of_unavailable.
Related options:
[metrics] weight_of_unavailable
- weight_of_unavailable¶
- Type:
floating point
- Default:
-10000.0
Default weight for unavailable metrics.
When any of the following conditions are met, this value will be used in place of any actual metric value:
One of the metrics named in [metrics] weight_setting is not available for a host, and the value of required is False.
The ratio specified for a metric in [metrics] weight_setting is 0.
The [metrics] weight_multiplier option is set to 0.
Possible values:
An integer or float value, where the value corresponds to the multiplier ratio for this weigher.
Related options:
[metrics] weight_setting
[metrics] required
[metrics] weight_multiplier
mks¶
The Nova compute node uses WebMKS, a desktop sharing protocol, to provide instance console access to VMs created by VMware hypervisors.
Related options:
Following options must be set to provide console access.
mksproxy_base_url
enabled
- mksproxy_base_url¶
- Type:
URI
- Default:
http://127.0.0.1:6090/
Location of MKS web console proxy
The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use web-based console access, the WebMKS proxy should be installed and configured.
Possible values:
Must be a valid URL of the form:
http://host:port/ or https://host:port/
- enabled¶
- Type:
boolean
- Default:
False
Enables graphical console access for virtual machines.
neutron¶
Configuration options for neutron (network connectivity as a service).
- ovs_bridge¶
- Type:
string
- Default:
br-int
Default name for the Open vSwitch integration bridge.
Specifies the name of the integration bridge interface used by Open vSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses.
- default_floating_pool¶
- Type:
string
- Default:
nova
Default name for the floating IP pool.
Specifies the name of floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses.
- extension_sync_interval¶
- Type:
integer
- Default:
600
- Minimum Value:
0
Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait.
- physnets¶
- Type:
list
- Default:
[]
List of physnets present on this host.
For each physnet listed, an additional section, [neutron_physnet_$PHYSNET], will be added to the configuration file. Each section must be configured with a single configuration option, numa_nodes, which should be a list of node IDs for all NUMA nodes this physnet is associated with. For example:
[neutron]
physnets = foo, bar
[neutron_physnet_foo]
numa_nodes = 0
[neutron_physnet_bar]
numa_nodes = 0,1
Any physnet that is not listed using this option will be treated as having no particular NUMA node affinity.
Tunnelled networks (VXLAN, GRE, …) cannot be accounted for in this way and are instead configured using the [neutron_tunnel] group. For example:
[neutron_tunnel]
numa_nodes = 1
Related options:
[neutron_tunnel] numa_nodes can be used to configure NUMA affinity for all tunneled networks
[neutron_physnet_$PHYSNET] numa_nodes must be configured for each value of $PHYSNET specified by this option
- http_retries¶
- Type:
integer
- Default:
3
- Minimum Value:
0
Number of times neutronclient should retry on any failed http call.
0 means the connection is attempted only once. Setting it to any positive integer means that on failure the connection is retried that many times, e.g. setting it to 3 means the total attempts to connect will be 4.
Possible values:
Any integer value. 0 means connection is attempted only once
- service_metadata_proxy¶
- Type:
boolean
- Default:
False
When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the ‘X-Instance-ID’ header.
Related options:
metadata_proxy_shared_secret
- metadata_proxy_shared_secret¶
- Type:
string
- Default:
''
This option holds the shared secret string used to validate proxied Neutron metadata requests. In order to be used, the ‘X-Metadata-Provider-Signature’ header must be supplied in the request.
Related options:
service_metadata_proxy
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
If set to true, SSL/TLS certificate verification for HTTPS connections is disabled.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
neutron
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee.
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
neutron
user-name
neutron
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
- service_type¶
- Type:
string
- Default:
network
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
['internal', 'public']
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of retriable HTTP status codes that should be retried. If not set, defaults to [503].
notifications¶
Most of the actions in Nova which manipulate the system state generate notifications, which are posted to the messaging component (e.g. RabbitMQ) and can be consumed by any service outside of OpenStack. More technical details at https://docs.openstack.org/nova/latest/reference/notifications.html
- notify_on_state_change¶
- Type:
string
- Default:
<None>
- Valid Values:
<None>, vm_state, vm_and_task_state
If set, send compute.instance.update notifications on instance state changes.
Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications.
Possible values
- <None>
no notifications
- vm_state
Notifications are sent with VM state transition information in the old_state and state fields. The old_task_state and new_task_state fields will be set to the current task_state of the instance.
- vm_and_task_state
Notifications are sent with VM and task state transition information
¶ Group
Name
DEFAULT
notify_on_state_change
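For illustration, this option can be set in nova.conf as follows; the value chosen here is just one of the valid values listed above:

```ini
[notifications]
# Send compute.instance.update notifications on both VM and task
# state transitions (valid: <None>, vm_state, vm_and_task_state).
notify_on_state_change = vm_and_task_state
```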
- default_level¶
- Type:
string
- Default:
INFO
- Valid Values:
DEBUG, INFO, WARN, ERROR, CRITICAL
Default notification level for outgoing notifications.
¶ Group
Name
DEFAULT
default_notification_level
- notification_format¶
- Type:
string
- Default:
unversioned
- Valid Values:
both, versioned, unversioned
Specifies which notification format shall be emitted by nova.
The versioned notification interface is in feature parity with the legacy interface, and the versioned interface is actively developed, so new consumers should use the versioned interface.
However, the legacy interface is heavily used by ceilometer and other mature OpenStack components so it remains the default.
Note that notifications can be completely disabled by setting driver=noop in the [oslo_messaging_notifications] group.
The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html
Possible values
- both
Both the legacy unversioned and the new versioned notifications are emitted
- versioned
Only the new versioned notifications are emitted
- unversioned
Only the legacy unversioned notifications are emitted
¶ Group
Name
DEFAULT
notification_format
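As a sketch, switching to the versioned format (assuming no consumers such as ceilometer still rely on the legacy interface) looks like this:

```ini
[notifications]
# Emit only the new versioned notifications
# (valid values: both, versioned, unversioned; default is unversioned).
notification_format = versioned
```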
- versioned_notifications_topics¶
- Type:
list
- Default:
['versioned_notifications']
Specifies the topics for the versioned notifications issued by nova.
The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth adding a dedicated topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list.
The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html
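For example, to fan versioned notifications out to an additional consumer, another topic can be appended to the default list (the second topic name below is a hypothetical placeholder):

```ini
[notifications]
# Nova sends a copy of each versioned notification to every topic listed;
# 'my_billing_service' is an illustrative, made-up topic name.
versioned_notifications_topics = versioned_notifications,my_billing_service
```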
- bdms_in_notifications¶
- Type:
boolean
- Default:
False
If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database.
os_vif_linux_bridge¶
- iptables_top_regex¶
- Type:
string
- Default:
''
Regular expression to match the iptables rule that should always be on the top.
¶ Group
Name
DEFAULT
iptables_top_regex
- iptables_bottom_regex¶
- Type:
string
- Default:
''
Regular expression to match the iptables rule that should always be on the bottom.
¶ Group
Name
DEFAULT
iptables_bottom_regex
- iptables_drop_action¶
- Type:
string
- Default:
DROP
The iptables target to jump to when a packet is to be dropped.
¶ Group
Name
DEFAULT
iptables_drop_action
- forward_bridge_interface¶
- Type:
multi-valued
- Default:
all
An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times.
¶ Group
Name
DEFAULT
forward_bridge_interface
- vlan_interface¶
- Type:
string
- Default:
<None>
VLANs will bridge into this interface if set
¶ Group
Name
DEFAULT
vlan_interface
- flat_interface¶
- Type:
string
- Default:
<None>
FlatDhcp will bridge into this interface if set
¶ Group
Name
DEFAULT
flat_interface
- network_device_mtu¶
- Type:
integer
- Default:
1500
MTU setting for network interface.
¶ Group
Name
DEFAULT
network_device_mtu
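A sketch of an os_vif_linux_bridge section for a flat-network deployment; the interface name and MTU below are illustrative assumptions, not defaults:

```ini
[os_vif_linux_bridge]
# FlatDhcp will bridge into this interface (hypothetical interface name).
flat_interface = eth1
# Jumbo frames; the option's default is 1500.
network_device_mtu = 9000
```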
os_vif_noop¶
os_vif_ovs¶
- network_device_mtu¶
- Type:
integer
- Default:
1500
MTU setting for network interface.
¶ Group
Name
DEFAULT
network_device_mtu
- ovs_vsctl_timeout¶
- Type:
integer
- Default:
120
Amount of time, in seconds, that ovs_vsctl should wait for a response from the database. 0 is to wait forever.
¶ Group
Name
DEFAULT
ovs_vsctl_timeout
- ovsdb_connection¶
- Type:
string
- Default:
tcp:127.0.0.1:6640
The connection string for the OVSDB backend. When executing commands using the native or vsctl ovsdb interface drivers this config option defines the ovsdb endpoint used.
- ovsdb_interface¶
- Type:
string
- Default:
native
- Valid Values:
vsctl, native
The interface for interacting with the OVSDB
Warning
This option is deprecated for removal since 2.2.0. Its value may be silently ignored in the future.
- Reason:
os-vif has supported ovsdb access via python bindings since Stein (1.15.0). Starting in Victoria (2.2.0) the ovs-vsctl driver is deprecated for removal and in future releases it will be removed.
- isolate_vif¶
- Type:
boolean
- Default:
False
Controls if VIF should be isolated when plugged to the ovs bridge. This should only be set to True when using the neutron ovs ml2 agent.
- per_port_bridge¶
- Type:
boolean
- Default:
False
Controls if VIF should be plugged into a per-port bridge. This is experimental and controls the plugging behavior when not using hybrid-plug. This is only used on Linux and should be set to false in all other cases, such as Ironic smartnic ports.
- default_qos_type¶
- Type:
string
- Default:
linux-noop
- Valid Values:
linux-htb, linux-hfsc, linux-sfq, linux-codel, linux-fq_codel, linux-noop
The default QoS type to apply to OVS ports.
linux-noop is the default. ovs will not modify the qdisc on the port if linux-noop is specified. This allows operators to manage QOS out of band of OVS. For more information see the ovs man pages https://manpages.debian.org/testing/openvswitch-common/ovs-vswitchd.conf.db.5.en.html#type~4
Note: This will only be set when a port is first created on the ovs bridge to ensure that the qos type can be managed via neutron if required for bandwidth limiting and other use-cases.
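A hedged sketch of an os_vif_ovs section for a deployment using the neutron OVS ml2 agent, built only from the options documented above:

```ini
[os_vif_ovs]
# Only safe to enable when using the neutron ovs ml2 agent.
isolate_vif = True
# The default OVSDB endpoint, shown explicitly for clarity.
ovsdb_connection = tcp:127.0.0.1:6640
```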
oslo_concurrency¶
- disable_process_locking¶
- Type:
boolean
- Default:
False
Enables or disables inter-process locks.
- lock_path¶
- Type:
string
- Default:
<None>
Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
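Since external locks require a lock path, a typical sketch is the following; the directory shown is an assumption, and any directory writable only by the service user works:

```ini
[oslo_concurrency]
# Must be writable only by the user running the nova processes.
lock_path = /var/lib/nova/tmp
```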
oslo_limit¶
- endpoint_id¶
- Type:
string
- Default:
<None>
The service’s endpoint id which is registered in Keystone.
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPS connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
If set to true, verification of HTTPS connections is disabled.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee user
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
oslo_limit
user-name
oslo_limit
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
- service_type¶
- Type:
string
- Default:
<None>
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
<None>
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- version¶
- Type:
string
- Default:
<None>
Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version
- min_version¶
- Type:
string
- Default:
<None>
The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is “latest”.
- max_version¶
- Type:
string
- Default:
<None>
The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of HTTP status codes that should be retried. If not set, defaults to [503].
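Pulling the oslo_limit options together, a sketch of a section using password authentication might look like this; every value below is a placeholder assumption, and auth_type is a standard keystoneauth loading option assumed to apply here:

```ini
[oslo_limit]
# Endpoint id registered in Keystone for this service (placeholder).
endpoint_id = 3ec6c4a6bdbf47cdb5e57b56f0d6be8c
# auth_type selects the keystoneauth plugin (assumed; not listed above).
auth_type = password
auth_url = http://keystone.example.com/identity
username = nova
user_domain_name = Default
password = secret
system_scope = all
```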
oslo_messaging_amqp¶
- container_name¶
- Type:
string
- Default:
<None>
Name for the AMQP container. Must be globally unique. Defaults to a generated UUID.
¶ Group
Name
amqp1
container_name
- idle_timeout¶
- Type:
integer
- Default:
0
Timeout for inactive connections (in seconds)
¶ Group
Name
amqp1
idle_timeout
- ssl¶
- Type:
boolean
- Default:
False
Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system’s CA-bundle to verify the server’s certificate.
- ssl_ca_file¶
- Type:
string
- Default:
''
CA certificate PEM file used to verify the server’s certificate
¶ Group
Name
amqp1
ssl_ca_file
- ssl_cert_file¶
- Type:
string
- Default:
''
Self-identifying certificate PEM file for client authentication
¶ Group
Name
amqp1
ssl_cert_file
- ssl_key_file¶
- Type:
string
- Default:
''
Private key PEM file used to sign ssl_cert_file certificate (optional)
¶ Group
Name
amqp1
ssl_key_file
- ssl_key_password¶
- Type:
string
- Default:
<None>
Password for decrypting ssl_key_file (if encrypted)
¶ Group
Name
amqp1
ssl_key_password
- ssl_verify_vhost¶
- Type:
boolean
- Default:
False
By default SSL checks that the name in the server’s certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server’s SSL certificate uses the virtual host name instead of the DNS name.
- sasl_mechanisms¶
- Type:
string
- Default:
''
Space separated list of acceptable SASL mechanisms
¶ Group
Name
amqp1
sasl_mechanisms
- sasl_config_dir¶
- Type:
string
- Default:
''
Path to directory that contains the SASL configuration
¶ Group
Name
amqp1
sasl_config_dir
- sasl_config_name¶
- Type:
string
- Default:
''
Name of configuration file (without .conf suffix)
¶ Group
Name
amqp1
sasl_config_name
- sasl_default_realm¶
- Type:
string
- Default:
''
SASL realm to use if no realm present in username
- connection_retry_interval¶
- Type:
integer
- Default:
1
- Minimum Value:
1
Seconds to pause before attempting to re-connect.
- connection_retry_backoff¶
- Type:
integer
- Default:
2
- Minimum Value:
0
Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt.
- connection_retry_interval_max¶
- Type:
integer
- Default:
30
- Minimum Value:
1
Maximum limit for connection_retry_interval + connection_retry_backoff
- link_retry_delay¶
- Type:
integer
- Default:
10
- Minimum Value:
1
Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error.
- default_reply_retry¶
- Type:
integer
- Default:
0
- Minimum Value:
-1
The maximum number of attempts to re-send a reply message which failed due to a recoverable error.
- default_reply_timeout¶
- Type:
integer
- Default:
30
- Minimum Value:
5
The deadline for an rpc reply message delivery.
- default_send_timeout¶
- Type:
integer
- Default:
30
- Minimum Value:
5
The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry.
- default_notify_timeout¶
- Type:
integer
- Default:
30
- Minimum Value:
5
The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry.
- default_sender_link_timeout¶
- Type:
integer
- Default:
600
- Minimum Value:
1
The duration to schedule a purge of idle sender links. Detach link after expiry.
- addressing_mode¶
- Type:
string
- Default:
dynamic
Indicates the addressing mode used by the driver. Permitted values: 'legacy' - use legacy non-routable addressing; 'routable' - use routable addresses; 'dynamic' - use legacy addresses if the message bus does not support routing, otherwise use routable addressing.
- pseudo_vhost¶
- Type:
boolean
- Default:
True
Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private ‘subnet’ per virtual host. Set to False if the message bus supports virtual hosting using the ‘hostname’ field in the AMQP 1.0 Open performative as the name of the virtual host.
- server_request_prefix¶
- Type:
string
- Default:
exclusive
address prefix used when sending to a specific server
¶ Group
Name
amqp1
server_request_prefix
- broadcast_prefix¶
- Type:
string
- Default:
broadcast
address prefix used when broadcasting to all servers
¶ Group
Name
amqp1
broadcast_prefix
- group_request_prefix¶
- Type:
string
- Default:
unicast
address prefix when sending to any server in group
¶ Group
Name
amqp1
group_request_prefix
- rpc_address_prefix¶
- Type:
string
- Default:
openstack.org/om/rpc
Address prefix for all generated RPC addresses
- notify_address_prefix¶
- Type:
string
- Default:
openstack.org/om/notify
Address prefix for all generated Notification addresses
- multicast_address¶
- Type:
string
- Default:
multicast
Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages.
- unicast_address¶
- Type:
string
- Default:
unicast
Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination.
- anycast_address¶
- Type:
string
- Default:
anycast
Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers.
- default_notification_exchange¶
- Type:
string
- Default:
<None>
Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else ‘notify’
- default_rpc_exchange¶
- Type:
string
- Default:
<None>
Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else ‘rpc’
- reply_link_credit¶
- Type:
integer
- Default:
200
- Minimum Value:
1
Window size for incoming RPC Reply messages.
- rpc_server_credit¶
- Type:
integer
- Default:
100
- Minimum Value:
1
Window size for incoming RPC Request messages
- notify_server_credit¶
- Type:
integer
- Default:
100
- Minimum Value:
1
Window size for incoming Notification messages
- pre_settled¶
- Type:
multi-valued
- Default:
rpc-cast
- Default:
rpc-reply
Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: 'rpc-call' - send RPC Calls pre-settled; 'rpc-reply' - send RPC Replies pre-settled; 'rpc-cast' - send RPC Casts pre-settled; 'notify' - send Notifications pre-settled.
oslo_messaging_kafka¶
- kafka_max_fetch_bytes¶
- Type:
integer
- Default:
1048576
Max fetch bytes of Kafka consumer
- kafka_consumer_timeout¶
- Type:
floating point
- Default:
1.0
Default timeout(s) for Kafka consumers
- pool_size¶
- Type:
integer
- Default:
10
Pool Size for Kafka Consumers
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
- Reason:
Driver no longer uses connection pool.
- conn_pool_min_size¶
- Type:
integer
- Default:
2
The pool size limit for connections expiration policy
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
- Reason:
Driver no longer uses connection pool.
- conn_pool_ttl¶
- Type:
integer
- Default:
1200
The time-to-live in sec of idle connections in the pool
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
- Reason:
Driver no longer uses connection pool.
- consumer_group¶
- Type:
string
- Default:
oslo_messaging_consumer
Group id for Kafka consumer. Consumers in one group will coordinate message consumption
- producer_batch_timeout¶
- Type:
floating point
- Default:
0.0
Upper bound on the delay for KafkaProducer batching in seconds
- producer_batch_size¶
- Type:
integer
- Default:
16384
Size of batch for the producer async send
- compression_codec¶
- Type:
string
- Default:
none
- Valid Values:
none, gzip, snappy, lz4, zstd
The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this option depend on the Kafka version.
- enable_auto_commit¶
- Type:
boolean
- Default:
False
Enable asynchronous consumer commits
- max_poll_records¶
- Type:
integer
- Default:
500
The maximum number of records returned in a poll call
- security_protocol¶
- Type:
string
- Default:
PLAINTEXT
- Valid Values:
PLAINTEXT, SASL_PLAINTEXT, SSL, SASL_SSL
Protocol used to communicate with brokers
- sasl_mechanism¶
- Type:
string
- Default:
PLAIN
Mechanism when security protocol is SASL
- ssl_cafile¶
- Type:
string
- Default:
''
CA certificate PEM file used to verify the server certificate
- ssl_client_cert_file¶
- Type:
string
- Default:
''
Client certificate PEM file used for authentication.
- ssl_client_key_file¶
- Type:
string
- Default:
''
Client key PEM file used for authentication.
- ssl_client_key_password¶
- Type:
string
- Default:
''
Client key password file used for authentication.
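The Kafka security options above combine along these lines for a SASL-over-SSL deployment; the file path and mechanism choice are illustrative assumptions:

```ini
[oslo_messaging_kafka]
# One of: PLAINTEXT, SASL_PLAINTEXT, SSL, SASL_SSL.
security_protocol = SASL_SSL
sasl_mechanism = PLAIN
# CA used to verify the broker certificate (placeholder path).
ssl_cafile = /etc/nova/kafka-ca.pem
```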
oslo_messaging_notifications¶
- driver¶
- Type:
multi-valued
- Default:
''
The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop
¶ Group
Name
DEFAULT
notification_driver
- transport_url¶
- Type:
string
- Default:
<None>
A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
¶ Group
Name
DEFAULT
notification_transport_url
- topics¶
- Type:
list
- Default:
['notifications']
AMQP topic used for OpenStack notifications.
¶ Group
Name
rpc_notifier2
topics
DEFAULT
notification_topics
- retry¶
- Type:
integer
- Default:
-1
The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
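For instance, to emit notifications through the messaging backend rather than leaving the driver unset:

```ini
[oslo_messaging_notifications]
# Possible values: messaging, messagingv2, routing, log, test, noop.
driver = messagingv2
# Default topic list; shown explicitly for clarity.
topics = notifications
```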
oslo_messaging_rabbit¶
- amqp_durable_queues¶
- Type:
boolean
- Default:
False
Use durable queues in AMQP. If rabbit_quorum_queue is enabled, queues will be durable and this value will be ignored.
- amqp_auto_delete¶
- Type:
boolean
- Default:
False
Auto-delete queues in AMQP.
¶ Group
Name
DEFAULT
amqp_auto_delete
- ssl¶
- Type:
boolean
- Default:
False
Connect over SSL.
¶ Group
Name
oslo_messaging_rabbit
rabbit_use_ssl
- ssl_version¶
- Type:
string
- Default:
''
SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
¶ Group
Name
oslo_messaging_rabbit
kombu_ssl_version
- ssl_key_file¶
- Type:
string
- Default:
''
SSL key file (valid only if SSL enabled).
¶ Group
Name
oslo_messaging_rabbit
kombu_ssl_keyfile
- ssl_cert_file¶
- Type:
string
- Default:
''
SSL cert file (valid only if SSL enabled).
¶ Group
Name
oslo_messaging_rabbit
kombu_ssl_certfile
- ssl_ca_file¶
- Type:
string
- Default:
''
SSL certification authority file (valid only if SSL enabled).
¶ Group
Name
oslo_messaging_rabbit
kombu_ssl_ca_certs
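The SSL options above combine into a TLS-enabled RabbitMQ section along these lines; the file paths are placeholder assumptions:

```ini
[oslo_messaging_rabbit]
ssl = True
ssl_ca_file = /etc/nova/rabbit-ca.pem
ssl_cert_file = /etc/nova/rabbit-client.pem
ssl_key_file = /etc/nova/rabbit-client.key
```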
- ssl_enforce_fips_mode¶
- Type:
boolean
- Default:
False
Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised.
- heartbeat_in_pthread¶
- Type:
boolean
- Default:
False
Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services.
- kombu_reconnect_delay¶
- Type:
floating point
- Default:
1.0
- Minimum Value:
0.0
- Maximum Value:
4.5
How long to wait (in seconds) before reconnecting in response to an AMQP consumer cancel notification.
¶ Group
Name
DEFAULT
kombu_reconnect_delay
- kombu_compression¶
- Type:
string
- Default:
<None>
EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions.
- kombu_missing_consumer_retry_timeout¶
- Type:
integer
- Default:
60
How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout.
¶ Group
Name
oslo_messaging_rabbit
kombu_reconnect_timeout
- kombu_failover_strategy¶
- Type:
string
- Default:
round-robin
- Valid Values:
round-robin, shuffle
Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
- rabbit_login_method¶
- Type:
string
- Default:
AMQPLAIN
- Valid Values:
PLAIN, AMQPLAIN, EXTERNAL, RABBIT-CR-DEMO
The RabbitMQ login method.
¶ Group
Name
DEFAULT
rabbit_login_method
- rabbit_retry_interval¶
- Type:
integer
- Default:
1
How frequently to retry connecting with RabbitMQ.
- rabbit_retry_backoff¶
- Type:
integer
- Default:
2
How long to backoff for between retries when connecting to RabbitMQ.
¶ Group
Name
DEFAULT
rabbit_retry_backoff
- rabbit_interval_max¶
- Type:
integer
- Default:
30
Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
- rabbit_ha_queues¶
- Type:
boolean
- Default:
False
Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq.).*' '{"ha-mode": "all"}'
¶ Group
Name
DEFAULT
rabbit_ha_queues
- rabbit_quorum_queue¶
- Type:
boolean
- Default:
False
Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0. If set, this option will conflict with the HA queues (rabbit_ha_queues), aka mirrored queues; in other words, the HA queues should be disabled. Quorum queues are also durable by default, so the amqp_durable_queues option is ignored when this option is enabled.
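Given the interactions described above, a quorum-queue deployment disables mirrored queues and can leave durability at its default, e.g.:

```ini
[oslo_messaging_rabbit]
rabbit_quorum_queue = True
# Mirrored (HA) queues conflict with quorum queues; keep disabled.
rabbit_ha_queues = False
# Ignored when rabbit_quorum_queue is enabled; queues are durable anyway.
amqp_durable_queues = False
```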
- rabbit_transient_quorum_queue¶
- Type:
boolean
- Default:
False
Use quorum queues for transient queues in RabbitMQ. Enabling this option will make sure those queues also use the quorum type of RabbitMQ queues, which are HA by default.
- rabbit_quorum_delivery_limit¶
- Type:
integer
- Default:
0
Each time a message is redelivered to a consumer, a counter is incremented. Once the redelivery count exceeds the delivery limit the message gets dropped or dead-lettered (if a DLX exchange has been configured). Used only when rabbit_quorum_queue is enabled. Default 0, which means do not set a limit.
- rabbit_quorum_max_memory_length¶
- Type:
integer
- Default:
0
By default all messages are maintained in memory; if a quorum queue grows in length it can put memory pressure on a cluster. This option can limit the number of messages in the quorum queue. Used only when rabbit_quorum_queue is enabled. Default 0, which means do not set a limit.
¶ Group
Name
oslo_messaging_rabbit
rabbit_quroum_max_memory_length
- rabbit_quorum_max_memory_bytes¶
- Type:
integer
- Default:
0
By default all messages are maintained in memory; if a quorum queue grows in length it can put memory pressure on a cluster. This option can limit the number of memory bytes used by the quorum queue. Used only when rabbit_quorum_queue is enabled. Default 0, which means do not set a limit.
¶ Group
Name
oslo_messaging_rabbit
rabbit_quroum_max_memory_bytes
- rabbit_transient_queues_ttl¶
- Type:
integer
- Default:
1800
- Minimum Value:
0
Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. Setting the value to 0 will disable the x-expires. If doing so, make sure you have a RabbitMQ policy to delete the queues, or your deployment will create an infinite number of queues over time.
- rabbit_qos_prefetch_count¶
- Type:
integer
- Default:
0
Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
- heartbeat_timeout_threshold¶
- Type:
integer
- Default:
60
Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disables heartbeat).
- heartbeat_rate¶
- Type:
integer
- Default:
3
How many times during the heartbeat_timeout_threshold the heartbeat is checked.
- direct_mandatory_flag¶
- Type:
boolean
- Default:
True
(DEPRECATED) Enable/disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will no longer be possible to deactivate this functionality.
Warning
This option is deprecated for removal. Its value may be silently ignored in the future.
- Reason:
Mandatory flag no longer deactivable.
- enable_cancel_on_failover¶
- Type:
boolean
- Default:
False
Enable the x-cancel-on-ha-failover flag so that the RabbitMQ server will cancel and notify consumers when the queue is down.
- use_queue_manager¶
- Type:
boolean
- Default:
False
Should we use consistent queue names or random ones.
- hostname¶
- Type:
string
- Default:
np0038741231
Hostname used by queue manager
- processname¶
- Type:
string
- Default:
sphinx-build
Process name used by queue manager
- rabbit_stream_fanout¶
- Type:
boolean
- Default:
False
Use stream queues in RabbitMQ (x-queue-type: stream). Streams are a new persistent and replicated data structure (“queue type”) in RabbitMQ which models an append-only log with non-destructive consumer semantics. It is available as of RabbitMQ 3.9.0. If set this option will replace all fanout queues with only one stream queue.
oslo_middleware¶
- max_request_body_size¶
- Type:
integer
- Default:
114688
The maximum body size for each request, in bytes.
¶ Group
Name
DEFAULT
osapi_max_request_body_size
DEFAULT
max_request_body_size
- enable_proxy_headers_parsing¶
- Type:
boolean
- Default:
False
Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
- http_basic_auth_user_file¶
- Type:
string
- Default:
/etc/htpasswd
HTTP basic auth password file.
oslo_policy¶
- enforce_scope¶
- Type:
boolean
- Default:
True
This option controls whether or not to enforce scope when evaluating policies. If True, the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False, a message will be logged informing operators that policies are being invoked with mismatching scope.
- enforce_new_defaults¶
- Type:
boolean
- Default:
True
This option controls whether or not to use old deprecated defaults when evaluating policies. If True, the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False, the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior.
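The two flags are commonly toggled together. A sketch of temporarily reverting to the pre-upgrade behavior during a transition (both default to True):

```ini
[oslo_policy]
# Skip scope checking and keep evaluating deprecated policy defaults.
enforce_scope = False
enforce_new_defaults = False
```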
- policy_file¶
- Type:
string
- Default:
policy.yaml
The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option.
¶ Group
Name
DEFAULT
policy_file
- policy_default_rule¶
- Type:
string
- Default:
default
Default rule. Enforced when a requested rule is not found.
¶ Group
Name
DEFAULT
policy_default_rule
- policy_dirs¶
- Type:
multi-valued
- Default:
policy.d
Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
¶ Group
Name
DEFAULT
policy_dirs
- remote_content_type¶
- Type:
string
- Default:
application/x-www-form-urlencoded
- Valid Values:
application/x-www-form-urlencoded, application/json
Content Type to send and receive data for REST based policy check
- remote_ssl_verify_server_crt¶
- Type:
boolean
- Default:
False
Server identity verification for REST based policy check.
- remote_ssl_ca_crt_file¶
- Type:
string
- Default:
<None>
Absolute path to ca cert file for REST based policy check
- remote_ssl_client_crt_file¶
- Type:
string
- Default:
<None>
Absolute path to client cert for REST based policy check
- remote_ssl_client_key_file¶
- Type:
string
- Default:
<None>
Absolute path to client key file for REST based policy check
oslo_reports¶
- log_dir¶
- Type:
string
- Default:
<None>
Path to a log directory where the report file will be created
- file_event_handler¶
- Type:
string
- Default:
<None>
The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If the application is running as a WSGI application, it is recommended to use this instead of signals.
- file_event_handler_interval¶
- Type:
integer
- Default:
1
How many seconds to wait between polls when file_event_handler is set
oslo_versionedobjects¶
- fatal_exception_format_errors¶
- Type:
boolean
- Default:
False
Make exception message format errors fatal
pci¶
- alias¶
- Type:
multi-valued
- Default:
''
An alias for a PCI passthrough device requirement.
This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements.
This should be configured for the nova-api service and, assuming you wish to use move operations, for each nova-compute service.
Possible Values:
A dictionary of JSON values which describe the aliases. For example:
alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" }
This defines an alias for the Intel QuickAssist card (multi-valued). Valid key values are:
name
Name of the PCI alias.
product_id
Product ID of the device in hexadecimal.
vendor_id
Vendor ID of the device in hexadecimal.
device_type
Type of PCI device. Valid values are: type-PCI, type-PF and type-VF. Note that "device_type": "type-PF" must be specified if you wish to passthrough a device that supports SR-IOV in its entirety.
numa_policy
Required NUMA affinity of device. Valid values are: legacy, preferred and required.
resource_class
The optional Placement resource class name that is used to track the requested PCI devices in Placement. It can be a standard resource class from the os-resource-classes lib, or it can be an arbitrary string. If it is a non-standard resource class then Nova will normalize it to a proper Placement resource class by making it upper case, replacing any consecutive character outside of [A-Z0-9_] with a single '_', and prefixing the name with CUSTOM_ if not yet prefixed. The maximum allowed length is 255 characters including the prefix. If resource_class is not provided, Nova will generate it from the vendor_id and product_id values of the alias in the form of CUSTOM_PCI_{vendor_id}_{product_id}. The resource_class requested in the alias is matched against the resource_class defined in the [pci]device_spec. This field can only be used if [filter_scheduler]pci_in_placement is enabled.
traits
An optional comma separated list of Placement trait names requested to be present on the resource provider that fulfills this alias. Each trait can be a standard trait from the os-traits lib or it can be an arbitrary string. If it is a non-standard trait then Nova will normalize the trait name by making it upper case, replacing any consecutive character outside of [A-Z0-9_] with a single '_', and prefixing the name with CUSTOM_ if not yet prefixed. The maximum allowed length of a trait name is 255 characters including the prefix. Every trait in traits requested in the alias is ensured to be in the list of traits provided in the traits field of the [pci]device_spec when scheduling the request. This field can only be used if [filter_scheduler]pci_in_placement is enabled.
Supports multiple aliases by repeating the option (not by specifying a list value):
alias = { "name": "QuickAssist-1", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" }
alias = { "name": "QuickAssist-2", "product_id": "0444", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" }
¶ Group
Name
DEFAULT
pci_alias
- device_spec¶
- Type:
multi-valued
- Default:
''
Specify the PCI devices available to VMs.
Possible values:
A JSON dictionary which describe a PCI device. It should take the following format:
["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" | "devname": "<name>",] {"<tag>": "<tag_value>",}
Where [ indicates zero or one occurrences, { indicates zero or multiple occurrences, and | indicates mutually exclusive options. Note that any missing fields are automatically wildcarded.
Valid key values are:
vendor_id
Vendor ID of the device in hexadecimal.
product_id
Product ID of the device in hexadecimal.
address
PCI address of the device. Both traditional glob style and regular expression syntax is supported. Please note that the address fields are restricted to the following maximum values:
domain - 0xFFFF
bus - 0xFF
slot - 0x1F
function - 0x7
devname
Device name of the device (e.g. an interface name). Not all PCI devices have a name.
<tag>
Additional <tag> and <tag_value> pairs used for specifying PCI devices. Supported <tag> values are:
physical_network
trusted
remote_managed - a VF is managed remotely by an off-path networking backend. May have case-insensitive boolean-like string values: "true" or "false". By default, "false" is assumed for all devices. Using this option requires a networking service backend capable of handling those devices. PCI devices are also required to have a PCI VPD capability with a card serial number (either on a VF itself or on its corresponding PF), otherwise they will be ignored and not available for allocation.
resource_class
- optional Placement resource class name to be used to track the matching PCI devices in Placement when [pci]report_in_placement is True. It can be a standard resource class from the os-resource-classes lib, or it can be any string. In that case Nova will normalize it to a proper Placement resource class by making it upper case, replacing any consecutive character outside of [A-Z0-9_] with a single '_', and prefixing the name with CUSTOM_ if not yet prefixed. The maximum allowed length is 255 characters including the prefix. If resource_class is not provided, Nova will generate it from the PCI device's vendor_id and product_id in the form of CUSTOM_PCI_{vendor_id}_{product_id}. The resource_class can be requested from a [pci]alias.
traits
traits
- optional comma separated list of Placement trait names to report on the resource provider that will represent the matching PCI device. Each trait can be a standard trait from the os-traits lib or can be any string. If it is not a standard trait then Nova will normalize the trait name by making it upper case, replacing any consecutive character outside of [A-Z0-9_] with a single '_', and prefixing the name with CUSTOM_ if not yet prefixed. The maximum allowed length of a trait name is 255 characters including the prefix. Any trait from traits can be requested from a [pci]alias.
Valid examples are:
device_spec = {"devname":"eth0", "physical_network":"physnet"}
device_spec = {"address":"*:0a:00.*"}
device_spec = {"address":":0a:00.", "physical_network":"physnet1"}
device_spec = {"vendor_id":"1137", "product_id":"0071"}
device_spec = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}
device_spec = {"address":{"domain": ".*", "bus": "02", "slot": "01", "function": "[2-7]"}, "physical_network":"physnet1"}
device_spec = {"address":{"domain": ".*", "bus": "02", "slot": "0[1-2]", "function": ".*"}, "physical_network":"physnet1"}
device_spec = {"devname": "eth0", "physical_network":"physnet1", "trusted": "true"}
device_spec = {"vendor_id":"a2d6", "product_id":"15b3", "remote_managed": "true"}
device_spec = {"vendor_id":"a2d6", "product_id":"15b3", "address": "0000:82:00.0", "physical_network":"physnet1", "remote_managed": "true"}
device_spec = {"vendor_id":"1002", "product_id":"6929", "address": "0000:82:00.0", "resource_class": "PGPU", "traits": "HW_GPU_API_VULKAN,my-awesome-gpu"}
The following are invalid, as they specify mutually exclusive options:
device_spec = {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"}
The following example is invalid because it specifies the remote_managed tag for a PF - it will result in an error during config validation at the Nova Compute service startup:
device_spec = {"address": "0000:82:00.0", "product_id": "a2d6", "vendor_id": "15b3", "physical_network": null, "remote_managed": "true"}
A JSON list of JSON dictionaries corresponding to the above format. For example:
device_spec = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}]
¶ Group
Name
pci
passthrough_whitelist
DEFAULT
pci_passthrough_whitelist
- report_in_placement¶
- Type:
boolean
- Default:
False
Enable PCI resource inventory reporting to Placement. If it is enabled then the nova-compute service will report PCI resource inventories to Placement according to the [pci]device_spec configuration and the PCI devices reported by the hypervisor. Once it is enabled it cannot be disabled any more. In a future release the default of this config will be changed to True.
Related options:
[pci]device_spec: to define which PCI devices Nova is allowed to track and assign to guests.
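The three PCI-in-Placement pieces described above fit together across two config sections. A minimal sketch, where the vendor/product IDs, the custom resource class name, and the alias name are all illustrative placeholders:

```ini
# Sketch only: IDs and names below are example values, not real devices.
[pci]
report_in_placement = True
device_spec = {"vendor_id": "8086", "product_id": "0443", "resource_class": "CUSTOM_QAT"}
alias = {"name": "QuickAssist", "resource_class": "CUSTOM_QAT", "device_type": "type-PCI"}

[filter_scheduler]
pci_in_placement = True
```

The alias's resource_class must match the one declared in device_spec for the scheduler to find the devices in Placement.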
placement¶
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
Disable verification of the server certificate for HTTPS connections. If set to True, verification is skipped.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
placement
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee.
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
placement
user-name
placement
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
- service_type¶
- Type:
string
- Default:
placement
The default service_type for endpoint URL discovery.
- service_name¶
- Type:
string
- Default:
<None>
The default service_name for endpoint URL discovery.
- valid_interfaces¶
- Type:
list
- Default:
['internal', 'public']
List of interfaces, in order of preference, for endpoint URL.
- region_name¶
- Type:
string
- Default:
<None>
The default region_name for endpoint URL discovery.
- endpoint_override¶
- Type:
string
- Default:
<None>
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
- connect_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for connection errors.
- connect_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- status_code_retries¶
- Type:
integer
- Default:
<None>
The maximum number of retries that should be attempted for retriable HTTP status codes.
- status_code_retry_delay¶
- Type:
floating point
- Default:
<None>
Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
- retriable_status_codes¶
- Type:
list
- Default:
<None>
List of retriable HTTP status codes that should be retried. If not set, defaults to [503].
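The [placement] options above mix keystoneauth session, adapter, and auth-plugin settings. A minimal working sketch, assuming the common Keystone password auth plugin (all host names, credentials, and region values below are placeholders):

```ini
# Sketch: placeholder values; adjust to your Keystone deployment.
[placement]
auth_type = password
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
valid_interfaces = internal
```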
privsep¶
Configuration options for the oslo.privsep daemon. Note that this group name can be changed by the consuming service. Check the service’s docs to see if this is the case.
- user¶
- Type:
string
- Default:
<None>
User that the privsep daemon should run as.
- group¶
- Type:
string
- Default:
<None>
Group that the privsep daemon should run as.
- capabilities¶
- Type:
unknown type
- Default:
[]
List of Linux capabilities retained by the privsep daemon.
- thread_pool_size¶
- Type:
integer
- Default:
multiprocessing.cpu_count()
- Minimum Value:
1
This option has a sample default set, which means that its actual default value may vary from the one documented above.
The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system.
- helper_command¶
- Type:
string
- Default:
<None>
Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments.
- logger_name¶
- Type:
string
- Default:
oslo_privsep.daemon
Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon.
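As a hedged example, the privsep daemon can be confined to a dedicated service account (the user/group names and pool size here are illustrative, not defaults):

```ini
# Sketch: run the privsep daemon as a dedicated account with a bounded pool.
[privsep]
user = nova
group = nova
thread_pool_size = 8
```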
profiler¶
- enabled¶
- Type:
boolean
- Default:
False
Enable the profiling for all services on this node.
Default value is False (fully disable the profiling feature).
Possible values:
True: Enables the feature
False: Disables the feature. Profiling cannot be started via this project's operations. If profiling is triggered by another project, this project's part will be empty.
¶ Group
Name
profiler
profiler_enabled
- trace_sqlalchemy¶
- Type:
boolean
- Default:
False
Enable SQL requests profiling in services.
Default value is False (SQL requests won’t be traced).
Possible values:
True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed for how much time was spent on it.
False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
- trace_requests¶
- Type:
boolean
- Default:
False
Enable python requests package profiling.
Supported drivers: jaeger+otlp
Default value is False.
Possible values:
True: Enables requests profiling.
False: Disables requests profiling.
- hmac_keys¶
- Type:
string
- Default:
SECRET_KEY
Secret key(s) to use for encrypting context data for performance profiling.
This string value should have the following format: <key1>[,<key2>,…<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.
Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources.
- connection_string¶
- Type:
string
- Default:
messaging://
Connection string for a notifier backend.
Default value is messaging://, which sets the notifier to oslo_messaging.
Examples of possible values:
messaging:// - use oslo_messaging driver for sending spans.
redis://127.0.0.1:6379 - use redis driver for sending spans.
mongodb://127.0.0.1:27017 - use mongodb driver for sending spans.
elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans.
jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans.
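As the hmac_keys description below notes, both the enabled flag and a shared key must be set for profiling to work. A sketch tying the pieces together (the key and backend URL are placeholders):

```ini
# Sketch: enable profiling with a shared key and a redis notifier backend.
# The same hmac key must be configured across services to get a full trace.
[profiler]
enabled = True
hmac_keys = SECRET_KEY
connection_string = redis://127.0.0.1:6379
trace_sqlalchemy = True
```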
- es_doc_type¶
- Type:
string
- Default:
notification
Document type for notification indexing in elasticsearch.
- es_scroll_time¶
- Type:
string
- Default:
2m
This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it.
- es_scroll_size¶
- Type:
integer
- Default:
10000
Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000).
- socket_timeout¶
- Type:
floating point
- Default:
0.1
Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1).
- sentinel_service_name¶
- Type:
string
- Default:
mymaster
Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster).
- filter_error_trace¶
- Type:
boolean
- Default:
False
Enable filtering of traces that contain an error/exception, storing them in a separate place.
Default value is set to False.
Possible values:
True: Enable filtering of traces that contain an error/exception.
False: Disable the filter.
profiler_jaeger¶
- service_name_prefix¶
- Type:
string
- Default:
<None>
Set service name prefix to Jaeger service name.
- process_tags¶
- Type:
dict
- Default:
{}
Set process tracer tags.
profiler_otlp¶
- service_name_prefix¶
- Type:
string
- Default:
<None>
Set service name prefix to OTLP exporters.
quota¶
Quota options allow managing quotas in an OpenStack deployment.
- instances¶
- Type:
integer
- Default:
10
- Minimum Value:
-1
The number of instances allowed per project.
Possible Values
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_instances
- cores¶
- Type:
integer
- Default:
20
- Minimum Value:
-1
The number of instance cores or vCPUs allowed per project.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_cores
- ram¶
- Type:
integer
- Default:
51200
- Minimum Value:
-1
The number of megabytes of instance RAM allowed per project.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_ram
- metadata_items¶
- Type:
integer
- Default:
128
- Minimum Value:
-1
The number of metadata items allowed per instance.
Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_metadata_items
- injected_files¶
- Type:
integer
- Default:
5
- Minimum Value:
-1
The number of injected files allowed.
File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_injected_files
- injected_file_content_bytes¶
- Type:
integer
- Default:
10240
- Minimum Value:
-1
The number of bytes allowed per injected file.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_injected_file_content_bytes
- injected_file_path_length¶
- Type:
integer
- Default:
255
- Minimum Value:
-1
The maximum allowed injected file path length.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_injected_file_path_length
- key_pairs¶
- Type:
integer
- Default:
100
- Minimum Value:
-1
The maximum number of key pairs allowed per user.
Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_key_pairs
- server_groups¶
- Type:
integer
- Default:
10
- Minimum Value:
-1
The maximum number of server groups per project.
Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_server_groups
- server_group_members¶
- Type:
integer
- Default:
10
- Minimum Value:
-1
The maximum number of servers per server group.
Possible values:
A positive integer or 0.
-1 to disable the quota.
¶ Group
Name
DEFAULT
quota_server_group_members
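The per-resource options above map directly onto [quota] keys in nova.conf. A sketch with illustrative limits (any value can be set to -1 to disable that quota):

```ini
# Example per-project limits; -1 disables a given quota entirely.
[quota]
instances = 20
cores = 40
ram = 102400
key_pairs = 100
server_groups = 10
server_group_members = 10
```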
- driver¶
- Type:
string
- Default:
nova.quota.DbQuotaDriver
- Valid Values:
nova.quota.DbQuotaDriver, nova.quota.NoopQuotaDriver, nova.quota.UnifiedLimitsDriver
Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks.
Possible values
- nova.quota.DbQuotaDriver
(deprecated) Stores quota limit information in the database and relies on the quota_* configuration options for default quota limit values. Counts quota usage on-demand.
- nova.quota.NoopQuotaDriver
Ignores quota and treats all resources as unlimited.
- nova.quota.UnifiedLimitsDriver
Uses Keystone unified limits to store quota limit information and relies on resource usage counting from Placement. Counts quota usage on-demand. Resources missing unified limits in Keystone will be treated as a quota limit of 0, so it is important to ensure all resources have registered limits in Keystone. The nova-manage limits migrate_to_unified_limits command can be used to copy existing quota limits from the Nova database to Keystone unified limits via the Keystone API. Alternatively, unified limits can be created manually using the OpenStackClient or by calling the Keystone API directly.
- recheck_quota¶
- Type:
boolean
- Default:
True
Recheck quota after resource creation to prevent allowing quota to be exceeded.
This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them.
The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request.
- count_usage_from_placement¶
- Type:
boolean
- Default:
False
Enable the counting of quota usage from the placement service.
Starting in Train, it is possible to count quota usage for cores and ram from the placement service and instances from the API database instead of counting from cell databases.
This works well if there is only one Nova deployment running per placement deployment. However, if an operator is running more than one Nova deployment sharing a placement deployment, they should not set this option to True because currently the placement service has no way to partition resource providers per Nova deployment. When this option is left as the default or set to False, Nova will use the legacy counting method to count quota usage for instances, cores, and ram from its cell databases.
Note that quota usage behavior related to resizes will be affected if this option is set to True. Placement resource allocations are claimed on the destination while holding allocations on the source during a resize, until the resize is confirmed or reverted. During this time, when the server is in VERIFY_RESIZE state, quota usage will reflect resource consumption on both the source and the destination. This can be beneficial as it reserves space for a revert of a downsize, but it also means quota usage will be inflated until a resize is confirmed or reverted.
Behavior will also be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram.
Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved.
The populate_queued_for_delete and populate_user_id online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if this configuration option is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set this configuration option to True until after they have completed their online data migrations via nova-manage db online_data_migrations.
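The driver, recheck, and counting options above interact; a sketch of one plausible combination (appropriate only when a single Nova deployment shares the Placement deployment, per the caveats above):

```ini
# Sketch: count cores/ram usage from Placement, with post-create rechecks.
# Safe only with one Nova deployment per Placement deployment.
[quota]
driver = nova.quota.DbQuotaDriver
recheck_quota = True
count_usage_from_placement = True
```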
remote_debug¶
- host¶
- Type:
host address
- Default:
<None>
Debug host (IP or name) to connect to.
This command line parameter is used when you want to connect to a nova service via a debugger running on a different host.
Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.
Possible Values:
IP address of a remote host as a command line parameter to a nova service. For example:
nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address of the debugger>
- port¶
- Type:
port number
- Default:
<None>
- Minimum Value:
0
- Maximum Value:
65535
Debug port to connect to.
This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host.
Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.
Possible Values:
Port number you want to use as a command line parameter to a nova service. For example:
nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address of the debugger> --remote_debug-port <port debugger is listening on>.
scheduler¶
- max_attempts¶
- Type:
integer
- Default:
3
- Minimum Value:
1
The maximum number of schedule attempts.
This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state.
Possible values:
A positive integer, where the integer corresponds to the max number of attempts that can be made when building or moving an instance.
- discover_hosts_in_cells_interval¶
- Type:
integer
- Default:
-1
- Minimum Value:
-1
Periodic task interval.
This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur.
Deployments where compute nodes come and go frequently may want this enabled, where others may prefer to manually discover hosts when one is added to avoid any overhead from constantly checking. If enabled, every time this runs, we will select any unmapped hosts out of each cell database on every run.
Possible values:
An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks.
- max_placement_results¶
- Type:
integer
- Default:
1000
- Minimum Value:
1
The maximum number of placement results to request.
This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates.
A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on “will it fit” grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler.
Possible values:
An integer, where the integer corresponds to the number of placement results to return.
- workers¶
- Type:
integer
- Default:
<None>
- Minimum Value:
0
Number of workers for the nova-scheduler service.
Defaults to the number of CPUs available.
Possible values:
An integer, where the integer corresponds to the number of worker processes.
- query_placement_for_routed_network_aggregates¶
- Type:
boolean
- Default:
False
Enable the scheduler to filter compute hosts affined to routed network segment aggregates.
See https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html for details.
- limit_tenants_to_placement_aggregate¶
- Type:
boolean
- Default:
False
Restrict tenants to specific placement aggregates.
This setting causes the scheduler to look up a host aggregate with the metadata key of
filter_tenant_id
set to the project of an incoming request, and request results from placement be limited to that aggregate. Multiple tenants may be added to a single aggregate by appending a serial number to the key, such asfilter_tenant_id:123
.The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request.
Possible values:
A boolean value.
Related options:
[scheduler] placement_aggregate_required_for_tenants
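A sketch combining this option with the related one below; the aggregate metadata itself is set through the compute API rather than in nova.conf:

```ini
# Sketch: confine tenants to their affined aggregates. The host aggregate
# must carry metadata like filter_tenant_id=<project_id> (set via the
# compute API) and be mirrored to a Placement aggregate with the same UUID.
[scheduler]
limit_tenants_to_placement_aggregate = True
placement_aggregate_required_for_tenants = True
```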
- placement_aggregate_required_for_tenants¶
- Type:
boolean
- Default:
False
Require a placement aggregate association for all tenants.
This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node.
Possible values:
A boolean value.
Related options:
[scheduler] limit_tenants_to_placement_aggregate
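Taken together, the two tenant-isolation options above might be combined in nova.conf as sketched below; the host aggregate carrying the filter_tenant_id metadata key (mirrored in placement) must be created separately:

```ini
[scheduler]
# Confine tenants with matching aggregate metadata to those aggregates.
limit_tenants_to_placement_aggregate = True
# Refuse to schedule tenants that have no aggregate affinity at all.
placement_aggregate_required_for_tenants = True
```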
- query_placement_for_image_type_support¶
- Type:
boolean
- Default:
False
Use placement to determine host support for the instance’s image type.
This setting causes the scheduler to ask placement only for compute hosts that support the disk_format of the image used in the request.
Possible values:
A boolean value.
- enable_isolated_aggregate_filtering¶
- Type:
boolean
- Default:
False
Restrict use of aggregates to instances with matching metadata.
This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key trait:$TRAIT_NAME and value required, the instance flavor extra_specs and/or image metadata must also contain trait:$TRAIT_NAME=required to be eligible to be scheduled to hosts in that aggregate. More technical details at https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html
Possible values:
A boolean value.
- image_metadata_prefilter¶
- Type:
boolean
- Default:
False
Use placement to filter hosts based on image metadata.
This setting causes the scheduler to transform well known image metadata properties into placement required traits to filter host based on image metadata. This feature requires host support and is currently supported by the following compute drivers:
libvirt.LibvirtDriver
(since Ussuri (21.0.0))
Possible values:
A boolean value.
Related options:
[compute] compute_driver
serial_console¶
The serial console feature allows you to connect to a guest in case a graphical console like VNC, RDP or SPICE is not available. This is currently supported only for the libvirt, Ironic and Hyper-V drivers.
- enabled¶
- Type:
boolean
- Default:
False
Enable the serial console feature.
In order to use this feature, the nova-serialproxy service needs to run. This service is typically executed on the controller node.
- port_range¶
- Type:
string
- Default:
10000:20000
A range of TCP ports a guest can use for its backend.
Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, that instance won't get launched.
Possible values:
Each string which passes the regex ^\d+:\d+$, for example 10000:20000. Be sure that the first port number is lower than the second and that both are in the range 0 to 65535.
- base_url¶
- Type:
URI
- Default:
ws://127.0.0.1:6083/
The URL an end user would use to connect to the nova-serialproxy service.
The nova-serialproxy service is called with this token-enriched URL and establishes the connection to the proper instance.
Related options:
The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section).
The port must be the same as in the option serialproxy_port of this section.
If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws://. The options cert and key in the [DEFAULT] section have to be set for that.
- proxyclient_address¶
- Type:
string
- Default:
127.0.0.1
The IP address to which proxy clients (like nova-serialproxy) should connect to get the serial console of an instance.
This is typically the IP address of the host of a nova-compute service.
- serialproxy_host¶
- Type:
string
- Default:
0.0.0.0
The IP address which is used by the nova-serialproxy service to listen for incoming requests.
The nova-serialproxy service listens on this IP address for incoming connection requests to instances which expose a serial console.
Related options:
Ensure that this is the same IP address which is defined in the option base_url of this section, or use 0.0.0.0 to listen on all addresses.
- serialproxy_port¶
- Type:
port number
- Default:
6083
- Minimum Value:
0
- Maximum Value:
65535
The port number which is used by the nova-serialproxy service to listen for incoming requests.
The nova-serialproxy service listens on this port number for incoming connection requests to instances which expose a serial console.
Related options:
Ensure that this is the same port number which is defined in the option base_url of this section.
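Pulling the serial console options above together, a minimal sketch of a [serial_console] section might look like this; the addresses are illustrative placeholders:

```ini
[serial_console]
# Enable the feature; the nova-serialproxy service must be running.
enabled = True
# Public URL used by end users; host and port must match
# serialproxy_host / serialproxy_port below.
base_url = ws://192.0.2.10:6083/
# Address on the nova-compute host that proxy clients connect to.
proxyclient_address = 192.0.2.20
serialproxy_host = 0.0.0.0
serialproxy_port = 6083
```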
service_user¶
Configuration options for service to service authentication using a service token. These options allow sending a service token along with the user’s token when contacting external REST APIs.
- send_service_user_token¶
- Type:
boolean
- Default:
False
When True, if sending a user token to a REST API, also send a service token.
Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user’s behalf, we include a service token along with the user token. Should the user’s token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware.
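A [service_user] section enabling service tokens typically also carries keystoneauth credentials for the service user; a sketch with an illustrative Keystone endpoint and credentials might look like:

```ini
[service_user]
send_service_user_token = True
auth_type = password
# Endpoint and credentials below are placeholders for your deployment.
auth_url = https://keystone.example.com/v3
username = nova
user_domain_name = Default
password = secret
project_name = service
project_domain_name = Default
```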
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
If true, HTTPS connections are not verified.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
service_user
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee.
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
service_user
user-name
service_user
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
spice¶
The SPICE console feature allows you to connect to a guest virtual machine. SPICE is a replacement for the fairly limited VNC protocol.
The following requirements must be met in order to use SPICE:
Virtualization driver must be libvirt
spice.enabled set to True
vnc.enabled set to False
update html5proxy_base_url
update server_proxyclient_address
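The requirement list above might translate into nova.conf roughly as follows; the controller and compute addresses are illustrative:

```ini
[spice]
enabled = True
# Public URL of the HTML5 proxy on the controller; 6082 is the default port.
html5proxy_base_url = http://192.0.2.10:6082/spice_auto.html
# Address on the compute node that the proxy connects to.
server_proxyclient_address = 192.0.2.20

[vnc]
# VNC must be explicitly disabled for the SPICE console to be offered.
enabled = False
```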
- enabled¶
- Type:
boolean
- Default:
False
Enable SPICE related features.
Related options:
VNC must be explicitly disabled to get access to the SPICE console. Set the enabled option to False in the [vnc] section to disable the VNC console.
- agent_enabled¶
- Type:
boolean
- Default:
True
Enable the SPICE guest agent support on the instances.
The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled:
Copy & Paste of text and images between the guest and client machine
Automatic adjustment of resolution when the client screen changes - e.g. if you make the Spice console full screen the guest resolution will adjust to match it rather than letterboxing.
Better mouse integration - The mouse can be captured and released without needing to click inside the console or press keys to release it. The performance of mouse movement is also improved.
- image_compression¶
- Type:
string
- Default:
<None>
- Valid Values:
auto_glz, auto_lz, quic, glz, lz, off
- Advanced Option:
Intended for advanced users and not used by the majority of users, and might have a significant effect on stability and/or performance.
Configure the SPICE image compression (lossless).
Possible values
- auto_glz
enable image compression mode to choose between glz and quic algorithm, based on image properties
- auto_lz
enable image compression mode to choose between lz and quic algorithm, based on image properties
- quic
enable image compression based on the SFALIC algorithm
- glz
enable image compression using lz with history based global dictionary
- lz
enable image compression with the Lempel-Ziv algorithm
- off
disable image compression
- jpeg_compression¶
- Type:
string
- Default:
<None>
- Valid Values:
auto, never, always
- Advanced Option:
Intended for advanced users and not used by the majority of users, and might have a significant effect on stability and/or performance.
Configure the SPICE wan image compression (lossy for slow links).
Possible values
- auto
enable JPEG image compression automatically
- never
disable JPEG image compression
- always
enable JPEG image compression
- zlib_compression¶
- Type:
string
- Default:
<None>
- Valid Values:
auto, never, always
- Advanced Option:
Intended for advanced users and not used by the majority of users, and might have a significant effect on stability and/or performance.
Configure the SPICE wan image compression (lossless for slow links).
Possible values
- auto
enable zlib image compression automatically
- never
disable zlib image compression
- always
enable zlib image compression
- playback_compression¶
- Type:
boolean
- Default:
<None>
- Advanced Option:
Intended for advanced users and not used by the majority of users, and might have a significant effect on stability and/or performance.
Enable the SPICE audio stream compression (using celt).
- streaming_mode¶
- Type:
string
- Default:
<None>
- Valid Values:
filter, all, off
- Advanced Option:
Intended for advanced users and not used by the majority of users, and might have a significant effect on stability and/or performance.
Configure the SPICE video stream detection and (lossy) compression.
Possible values
- filter
SPICE server adds additional filters to decide if video streaming should be activated
- all
any fast-refreshing window can be encoded into a video stream
- off
no video detection and (lossy) compression is performed
- html5proxy_base_url¶
- Type:
URI
- Default:
http://127.0.0.1:6082/spice_auto.html
Location of the SPICE HTML5 console proxy.
An end user would use this URL to connect to the nova-spicehtml5proxy service. This service forwards requests to the console of an instance.
In order to use the SPICE console, the nova-spicehtml5proxy service should be running. This service is typically launched on the controller node.
Possible values:
Must be a valid URL of the form http://host:port/spice_auto.html, where host is the node running nova-spicehtml5proxy and the port is typically 6082. Consider not using the default value, as it is not well defined for any real deployment.
Related options:
This option depends on the html5proxy_host and html5proxy_port options. The access URL returned by the compute node must have the host and port where the nova-spicehtml5proxy service is listening.
- server_listen¶
- Type:
string
- Default:
127.0.0.1
The address where the SPICE server running on the instances should listen.
Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s).
Possible values:
IP address to listen on.
- server_proxyclient_address¶
- Type:
string
- Default:
127.0.0.1
The address used by the nova-spicehtml5proxy client to connect to an instance console.
Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s).
Possible values:
Any valid IP address on the compute node.
Related options:
This option depends on the server_listen option. The proxy client must be able to access the address specified in server_listen using the value of this option.
- html5proxy_host¶
- Type:
host address
- Default:
0.0.0.0
IP address or a hostname on which the nova-spicehtml5proxy service listens for incoming requests.
Related options:
This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a host that is accessible from the HTML5 client.
- html5proxy_port¶
- Type:
port number
- Default:
6082
- Minimum Value:
0
- Maximum Value:
65535
Port on which the nova-spicehtml5proxy service listens for incoming requests.
Related options:
This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a port that is accessible from the HTML5 client.
upgrade_levels¶
upgrade_levels options are used to set a version cap for RPC messages sent between different nova services.
By default all services send messages using the latest version they know about.
The compute upgrade level is an important part of rolling upgrades where old and new nova-compute services run side by side.
The other options can largely be ignored, and are only kept to help with a possible future backport issue.
- compute¶
- Type:
string
- Default:
<None>
Compute RPC API version cap.
By default, we always send messages using the most recent version the client knows about.
Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can’t understand. Note that we only support upgrading from release N to release N+1.
Set this option to “auto” if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment.
Possible values:
By default send the latest version the client knows about
‘auto’: Automatically determines what version to use based on the service versions in the deployment.
A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
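During a rolling upgrade, the cap is typically set in the [upgrade_levels] section; a minimal sketch using automatic version detection might be:

```ini
[upgrade_levels]
# Let the compute RPC module pin itself to the lowest service version
# found in the deployment, so old and new nova-compute can coexist.
compute = auto
```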
- scheduler¶
- Type:
string
- Default:
<None>
Scheduler RPC API version cap.
Possible values:
By default send the latest version the client knows about
A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
- conductor¶
- Type:
string
- Default:
<None>
Conductor RPC API version cap.
Possible values:
By default send the latest version the client knows about
A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
- baseapi¶
- Type:
string
- Default:
<None>
Base API RPC API version cap.
Possible values:
By default send the latest version the client knows about
A string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.
An OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’.
vault¶
- root_token_id¶
- Type:
string
- Default:
<None>
root token for vault
- approle_role_id¶
- Type:
string
- Default:
<None>
AppRole role_id for authentication with vault
- approle_secret_id¶
- Type:
string
- Default:
<None>
AppRole secret_id for authentication with vault
- kv_mountpoint¶
- Type:
string
- Default:
secret
Mountpoint of KV store in Vault to use, for example: secret
- kv_path¶
- Type:
string
- Default:
<None>
Path relative to root of KV store in Vault to use.
- kv_version¶
- Type:
integer
- Default:
2
Version of KV store in Vault to use, for example: 2
- vault_url¶
- Type:
string
- Default:
http://127.0.0.1:8200
Use this endpoint to connect to Vault, for example: “http://127.0.0.1:8200”
- ssl_ca_crt_file¶
- Type:
string
- Default:
<None>
Absolute path to ca cert file
- use_ssl¶
- Type:
boolean
- Default:
False
SSL Enabled/Disabled
- namespace¶
- Type:
string
- Default:
<None>
Vault Namespace to use for all requests to Vault. Vault Namespaces feature is available only in Vault Enterprise
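A [vault] section using AppRole authentication might be sketched as follows; the endpoint and IDs are illustrative placeholders:

```ini
[vault]
# Vault endpoint; placeholder for your deployment.
vault_url = https://vault.example.com:8200
use_ssl = True
# AppRole credentials; both IDs below are illustrative.
approle_role_id = 11111111-2222-3333-4444-555555555555
approle_secret_id = aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
kv_mountpoint = secret
kv_version = 2
```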
vendordata_dynamic_auth¶
Options within this group control the authentication of the vendordata subsystem of the metadata API server (and config drive) with external systems.
- cafile¶
- Type:
string
- Default:
<None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
- certfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate cert file
- keyfile¶
- Type:
string
- Default:
<None>
PEM encoded client certificate key file
- insecure¶
- Type:
boolean
- Default:
False
If true, HTTPS connections are not verified.
- timeout¶
- Type:
integer
- Default:
<None>
Timeout value for http requests
- collect_timing¶
- Type:
boolean
- Default:
False
Collect per-API call timing information.
- split_loggers¶
- Type:
boolean
- Default:
False
Log requests to multiple loggers.
- auth_type¶
- Type:
unknown type
- Default:
<None>
Authentication type to load
¶ Group
Name
vendordata_dynamic_auth
auth_plugin
- auth_section¶
- Type:
unknown type
- Default:
<None>
Config Section from which to load plugin specific options
- auth_url¶
- Type:
unknown type
- Default:
<None>
Authentication URL
- system_scope¶
- Type:
unknown type
- Default:
<None>
Scope for system operations
- domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID to scope to
- domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name to scope to
- project_id¶
- Type:
unknown type
- Default:
<None>
Project ID to scope to
- project_name¶
- Type:
unknown type
- Default:
<None>
Project name to scope to
- project_domain_id¶
- Type:
unknown type
- Default:
<None>
Domain ID containing project
- project_domain_name¶
- Type:
unknown type
- Default:
<None>
Domain name containing project
- trust_id¶
- Type:
unknown type
- Default:
<None>
ID of the trust to use as a trustee.
- default_domain_id¶
- Type:
unknown type
- Default:
<None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- default_domain_name¶
- Type:
unknown type
- Default:
<None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
- user_id¶
- Type:
unknown type
- Default:
<None>
User ID
- username¶
- Type:
unknown type
- Default:
<None>
Username
¶ Group
Name
vendordata_dynamic_auth
user-name
vendordata_dynamic_auth
user_name
- user_domain_id¶
- Type:
unknown type
- Default:
<None>
User’s domain id
- user_domain_name¶
- Type:
unknown type
- Default:
<None>
User’s domain name
- password¶
- Type:
unknown type
- Default:
<None>
User’s password
- tenant_id¶
- Type:
unknown type
- Default:
<None>
Tenant ID
- tenant_name¶
- Type:
unknown type
- Default:
<None>
Tenant Name
vmware¶
Related options: The following options must be set in order to launch VMware-based virtual machines.
compute_driver: Must use vmwareapi.VMwareVCDriver.
vmware.host_username
vmware.host_password
vmware.cluster_name
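The required options listed above might be sketched in nova.conf like this; the vCenter endpoint, credentials and cluster name are illustrative:

```ini
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# vCenter connection details; all values below are placeholders.
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = secret
cluster_name = NovaCluster
# Optional: restrict usable datastores to names starting with "nas".
datastore_regex = nas.*
```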
- integration_bridge¶
- Type:
string
- Default:
<None>
This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set.
Possible values:
Any valid string representing the name of the integration bridge
- console_delay_seconds¶
- Type:
integer
- Default:
<None>
- Minimum Value:
0
Set this value if affected by an increased network latency causing repeated characters when typing in a remote console.
- serial_port_service_uri¶
- Type:
string
- Default:
<None>
Identifies the remote system where the serial port traffic will be sent.
This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be a virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs.
Possible values:
Any valid URI
- serial_port_proxy_uri¶
- Type:
URI
- Default:
<None>
Identifies a proxy service that provides network access to the serial_port_service_uri.
Possible values:
Any valid URI (The scheme is ‘telnet’ or ‘telnets’.)
Related options:
This option is ignored if serial_port_service_uri is not specified.
serial_port_service_uri
- serial_log_dir¶
- Type:
string
- Default:
/opt/vmware/vspc
Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the ‘serial_log_dir’ config value of VSPC.
- host_ip¶
- Type:
host address
- Default:
<None>
Hostname or IP address for connection to VMware vCenter host.
- host_port¶
- Type:
port number
- Default:
443
- Minimum Value:
0
- Maximum Value:
65535
Port for connection to VMware vCenter host.
- host_username¶
- Type:
string
- Default:
<None>
Username for connection to VMware vCenter host.
- host_password¶
- Type:
string
- Default:
<None>
Password for connection to VMware vCenter host.
- ca_file¶
- Type:
string
- Default:
<None>
Specifies the CA bundle file to be used in verifying the vCenter server certificate.
- insecure¶
- Type:
boolean
- Default:
False
If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification.
Related options:
ca_file: This option is ignored if “ca_file” is set.
- cluster_name¶
- Type:
string
- Default:
<None>
Name of a VMware Cluster ComputeResource.
- datastore_regex¶
- Type:
string
- Default:
<None>
Regular expression pattern to match the name of datastore.
The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex=”nas.*” selects all the data stores that have a name starting with “nas”.
NOTE: If no regex is given, the datastore with the most free space is picked.
Possible values:
Any matching regular expression to a datastore must be given
- task_poll_interval¶
- Type:
floating point
- Default:
0.5
Time interval in seconds to poll remote tasks invoked on VMware VC server.
- api_retry_count¶
- Type:
integer
- Default:
10
- Minimum Value:
0
Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc.
- vnc_port¶
- Type:
port number
- Default:
5900
- Minimum Value:
0
- Maximum Value:
65535
This option specifies VNC starting port.
Every VM created on an ESX host can enable VNC client access for remote connections. The 'vnc_port' option sets the default starting port for the VNC client.
Possible values:
Any valid port number within the range 5900 to (5900 + vnc_port_total)
Related options:
The following options should be set to enable the VNC client:
vnc.enabled = True
vnc_port_total
- vnc_port_total¶
- Type:
integer
- Default:
10000
- Minimum Value:
0
Total number of VNC ports.
- vnc_keymap¶
- Type:
string
- Default:
en-us
Keymap for VNC.
The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.
Possible values:
A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an ‘IETF language tag’ (for example ‘en-us’).
- use_linked_clone¶
- Type:
boolean
- Default:
True
This option enables/disables the use of linked clone.
The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the OpenStack Image service.
If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM.
- connection_pool_size¶
- Type:
integer
- Default:
10
- Minimum Value:
10
This option sets the HTTP connection pool size.
The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice.
- pbm_enabled¶
- Type:
boolean
- Default:
False
This option enables or disables storage policy based placement of instances.
Related options:
pbm_default_policy
- pbm_wsdl_location¶
- Type:
string
- Default:
<None>
This option specifies the PBM service WSDL file location URL.
Setting this will disable storage policy based placement of instances.
Possible values:
Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl
- pbm_default_policy¶
- Type:
string
- Default:
<None>
This option specifies the default policy to be used.
If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used.
Possible values:
Any valid storage policy such as VSAN default storage policy
Related options:
pbm_enabled
- maximum_objects¶
- Type:
integer
- Default:
100
- Minimum Value:
0
This option specifies the limit on the maximum number of objects to return in a single result.
A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.
- cache_prefix¶
- Type:
string
- Default:
<None>
This option adds a prefix to the folder where cached images are stored
This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes.
Note: This should only be used when the compute nodes are running on same host or they have a shared file system.
Possible values:
Any string representing the cache prefix to the folder
vnc¶
Virtual Network Computer (VNC) can be used to provide remote desktop console access to instances for tenants and/or administrators.
- enabled¶
- Type:
boolean
- Default:
True
Enable VNC related features.
Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest.
¶ Group
Name
DEFAULT
vnc_enabled
- server_listen¶
- Type:
host address
- Default:
127.0.0.1
The IP address or hostname on which an instance should listen for incoming VNC connection requests on this node.
- server_proxyclient_address¶
- Type:
host address
- Default:
127.0.0.1
Private, internal IP address or hostname of VNC console proxy.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients.
This option sets the private address to which proxy clients, such as nova-novncproxy, should connect.
- novncproxy_base_url¶
- Type:
URI
- Default:
http://127.0.0.1:6080/vnc_auto.html
Public address of noVNC VNC console proxy.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.
This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions.
If using noVNC >= 1.0.0, you should use vnc_lite.html instead of vnc_auto.html.
Related options:
novncproxy_host
novncproxy_port
¶ Group
Name
DEFAULT
novncproxy_base_url
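A typical [vnc] section ties the compute-side and proxy-side options above together; a sketch with illustrative addresses (assuming noVNC >= 1.0.0) might be:

```ini
[vnc]
enabled = True
# Compute node: the proxy connects to server_proxyclient_address.
server_listen = 0.0.0.0
server_proxyclient_address = 192.0.2.20
# Controller node: the public URL must reach novncproxy_host:novncproxy_port.
novncproxy_base_url = http://203.0.113.10:6080/vnc_lite.html
novncproxy_host = 0.0.0.0
novncproxy_port = 6080
```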
- novncproxy_host¶
- Type:
string
- Default:
0.0.0.0
IP address that the noVNC console proxy should bind to.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.
This option sets the private address to which the noVNC console proxy service should bind.
Related options:
novncproxy_port
novncproxy_base_url
¶ Group
Name
DEFAULT
novncproxy_host
- novncproxy_port¶
- Type:
port number
- Default:
6080
- Minimum Value:
0
- Maximum Value:
65535
Port that the noVNC console proxy should bind to.
The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.
This option sets the private port to which the noVNC console proxy service should bind.
Related options:
novncproxy_host
novncproxy_base_url
¶ Group
Name
DEFAULT
novncproxy_port
- auth_schemes¶
- Type:
list
- Default:
['none']
The authentication schemes to use with the compute node.
Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first.
Related options:
[vnc] vencrypt_client_key, [vnc] vencrypt_client_cert: must also be set
- vencrypt_client_key¶
- Type:
string
- Default:
<None>
The path to the client key PEM file (for x509)
The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication.
Related options:
vnc.auth_schemes
: must includevencrypt
vnc.vencrypt_client_cert
: must also be set
- vencrypt_client_cert¶
- Type:
string
- Default:
<None>
The path to the client certificate PEM file (for x509)
The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication.
Related options:
vnc.auth_schemes: must include vencrypt
vnc.vencrypt_client_key: must also be set
- vencrypt_ca_certs¶
- Type:
string
- Default:
<None>
The path to the CA certificate PEM file
The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server.
Related options:
vnc.auth_schemes: must include vencrypt
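The vencrypt options above might be combined in nova.conf as sketched below; the PEM file paths are illustrative placeholders:

```ini
[vnc]
# Prefer vencrypt; fall back to unauthenticated RFB if the compute node
# does not support it. Strongest schemes are listed first.
auth_schemes = vencrypt,none
# Paths below are placeholders for your PKI layout.
vencrypt_client_key = /etc/pki/nova-novncproxy/client-key.pem
vencrypt_client_cert = /etc/pki/nova-novncproxy/client-cert.pem
vencrypt_ca_certs = /etc/pki/nova-novncproxy/ca-cert.pem
```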
workarounds¶
A collection of workarounds used to mitigate bugs or issues found in system tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These should only be enabled in exceptional circumstances. All options are linked against bug IDs, where more information on the issue can be found.
- disable_rootwrap¶
- Type:
boolean
- Default:
False
Use sudo instead of rootwrap.
Allow fallback to sudo for performance reasons.
For more information, refer to the bug report:
Possible values:
True: Use sudo instead of rootwrap
False: Use rootwrap as usual
Interdependencies to other options:
Any options that affect ‘rootwrap’ will be ignored.
- disable_libvirt_livesnapshot¶
- Type:
boolean
- Default:
False
Disable live snapshots when using the libvirt driver.
Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem.
When using libvirt 1.2.2 live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process.
For more information, refer to the bug report:
Possible values:
True: Live snapshot is disabled when using libvirt
False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it)
Warning
This option is deprecated for removal since 19.0.0. Its value may be silently ignored in the future.
- Reason:
This option was added to work around issues with libvirt 1.2.2. We no longer support this version of libvirt, which means this workaround is no longer necessary. It will be removed in a future release.
- handle_virt_lifecycle_events¶
- Type:
boolean
- Default:
True
Enable handling of events emitted from compute drivers.
Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored.
This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shut down automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature.
Care should be taken when this feature is disabled and ‘sync_power_state_interval’ is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
For more information, refer to the bug report: https://bugs.launchpad.net/bugs/1444630
Interdependencies to other options:
If sync_power_state_interval is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
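If these races are frequent in a deployment, the feature can be disabled while keeping the power-state sync periodic task active, so the database is still reconciled automatically (the interval value below is only illustrative):

```ini
[workarounds]
handle_virt_lifecycle_events = False

[DEFAULT]
# Keep a positive interval so out-of-sync instances are still reconciled
# periodically; 600 seconds is an illustrative value.
sync_power_state_interval = 600
```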
- disable_group_policy_check_upcall¶
- Type:
boolean
- Default:
False
Disable the server group policy check upcall in compute.
In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group for the one that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to enforce the affinity policy.
Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute.
Related options:
[filter_scheduler]/track_instance_changes also relies on upcalls from the compute service to the scheduler service.
- enable_numa_live_migration¶
- Type:
boolean
- Default:
False
Enable live migration of instances with NUMA topologies.
Live migration of instances with NUMA topologies when using the libvirt driver is only supported in deployments that have been fully upgraded to Train. In previous versions, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in bug #1289064. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternately, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes.
Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances.
Related options:
compute_driver
: Only the libvirt driver is affected.
Warning
This option is deprecated for removal since 20.0.0. Its value may be silently ignored in the future.
- Reason:
This option was added to mitigate known issues when live migrating instances with a NUMA topology with the libvirt driver. Those issues are resolved in Train. Clouds using the libvirt driver and fully upgraded to Train support NUMA-aware live migration. This option will be removed in a future release.
- ensure_libvirt_rbd_instance_dir_cleanup¶
- Type:
boolean
- Default:
False
Ensure the instance directory is removed during clean up when using rbd.
When enabled, this workaround will ensure that the instance directory is always removed during cleanup on hosts using [libvirt]/images_type=rbd. This avoids the following bugs with evacuation and revert resize clean up that leave the instance directory on the host:
https://bugs.launchpad.net/nova/+bug/1414895
https://bugs.launchpad.net/nova/+bug/1761062
Both of these bugs can then result in DestinationDiskExists errors being raised if the instances ever attempt to return to the host.
Warning
Before enabling this workaround, operators will need to ensure that the instance directory itself, specified by [DEFAULT]/instances_path, is not shared between computes; otherwise the console.log, kernels, ramdisks and any additional files being used by the running instance will be lost.
Related options:
compute_driver (libvirt)
[libvirt]/images_type (rbd)
instances_path
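As a sketch, a ceph-backed compute host opting into this cleanup behavior would set:

```ini
[libvirt]
images_type = rbd

[workarounds]
# Always remove the instance directory on cleanup (evacuate/revert resize).
ensure_libvirt_rbd_instance_dir_cleanup = True
```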
- disable_fallback_pcpu_query¶
- Type:
boolean
- Default:
False
Disable fallback request for VCPU allocations when using pinned instances.
Starting in Train, compute nodes using the libvirt virt driver can report PCPU inventory and will use this for pinned instances. The scheduler will automatically translate requests using the legacy CPU pinning-related flavor extra specs, hw:cpu_policy and hw:cpu_thread_policy, their image metadata property equivalents, and the emulator threads pinning flavor extra spec, hw:emulator_threads_policy, to new placement requests. However, compute nodes require additional configuration in order to report PCPU inventory and this configuration may not be present immediately after an upgrade. To ensure pinned instances can be created without this additional configuration, the scheduler will make a second request to placement for old-style VCPU-based allocations and fall back to these allocation candidates if necessary. This has a slight performance impact and is not necessary on new or upgraded deployments where the new configuration has been set on all hosts. By setting this option, the second lookup is disabled and the scheduler will only request PCPU-based allocations.
Warning
This option is deprecated for removal since 20.0.0. Its value may be silently ignored in the future.
- never_download_image_if_on_rbd¶
- Type:
boolean
- Default:
False
When booting from an image on a ceph-backed compute node, if the image does not already reside on the ceph cluster (as would be the case if glance is also using the same cluster), nova will download the image from glance and upload it to ceph itself. If using multiple ceph clusters, this may cause nova to unintentionally duplicate the image in a non-COW-able way in the local ceph deployment, wasting space.
For more information, refer to the bug report:
https://bugs.launchpad.net/nova/+bug/1858877
Enabling this option will cause nova to refuse to boot an instance if it would require downloading the image from glance and uploading it to ceph itself.
Related options:
compute_driver (libvirt)
[libvirt]/images_type (rbd)
- reserve_disk_resource_for_image_cache¶
- Type:
boolean
- Default:
False
If it is set to True then the libvirt driver will reserve DISK_GB resource for the images stored in the image cache. If the DEFAULT.instances_path is on a different disk partition than the image cache directory, then the driver will not reserve resource for the cache.
Such disk reservation is done by a periodic task in the resource tracker that runs every update_resources_interval seconds, so the reservation is not updated immediately when an image is cached.
Related options:
- libvirt_disable_apic¶
- Type:
boolean
- Default:
False
With some kernels, initializing the guest APIC can result in a kernel hang that renders the guest unusable. This happens as a result of a kernel bug. In most cases the correct fix is to update the guest image kernel to one that is patched; however, in some cases this is not possible. This workaround allows the emulation of an APIC to be disabled per host, but it is not recommended for use outside of a CI or developer cloud.
- wait_for_vif_plugged_event_during_hard_reboot¶
- Type:
list
- Default:
[]
The libvirt virt driver implements power on and hard reboot by tearing down every vif of the instance being rebooted and then plugging them again. By default nova does not wait for the network-vif-plugged event from neutron before it lets the instance run. This can cause the instance to request its IP via DHCP before the neutron backend has had a chance to set up the networking backend after the vif plug.
This flag defines which vifs nova expects network-vif-plugged events from during hard reboot. The possible values are neutron port vnic types:
normal
direct
macvtap
baremetal
direct-physical
virtio-forwarder
smart-nic
vdpa
accelerator-direct
accelerator-direct-physical
remote-managed
Adding a vnic_type to this configuration makes Nova wait for a network-vif-plugged event for each of the instance's vifs having the specific vnic_type before unpausing the instance, similarly to how new instance creation works.
Please note that not all neutron networking backends send plug time events for certain vnic_type values; therefore this config is empty by default.
The ml2/ovs and the networking-odl backends are known to send plug time events for ports with normal vnic_type, so it is safe to add normal to this config if you are using only those backends in the compute host.
The neutron in-tree SRIOV backend does not reliably send the network-vif-plugged event during plug time for ports with direct vnic_type and never sends that event for ports with direct-physical vnic_type during plug time. For other vnic_type and backend pairs, please consult the developers of the backend.
Related options:
- enable_qemu_monitor_announce_self¶
- Type:
boolean
- Default:
False
If it is set to True the libvirt driver will try as a best effort to send the announce-self command to the QEMU monitor so that it generates RARP frames to update network switches in the post live migration phase on the destination.
Please note that this causes the domain to be considered tainted by libvirt.
Related options:
DEFAULT.compute_driver (libvirt)
- qemu_monitor_announce_self_count¶
- Type:
integer
- Default:
3
- Minimum Value:
1
The total number of times to send the announce_self command to the QEMU monitor when enable_qemu_monitor_announce_self is enabled.
Related options:
- qemu_monitor_announce_self_interval¶
- Type:
integer
- Default:
1
- Minimum Value:
1
The number of seconds to wait before re-sending the announce_self command to the QEMU monitor.
Related options:
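The three announce_self options work together; a sketch sending the RARP announcements five times, two seconds apart (the count and interval values are illustrative):

```ini
[workarounds]
enable_qemu_monitor_announce_self = True
# Illustrative values: send announce_self 5 times, 2 seconds apart.
qemu_monitor_announce_self_count = 5
qemu_monitor_announce_self_interval = 2
```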
- disable_compute_service_check_for_ffu¶
- Type:
boolean
- Default:
False
If this is set, the normal safety check for old compute services will be treated as a warning instead of an error. This is only to be enabled to facilitate a Fast-Forward upgrade where new control services are being started before compute nodes have been able to update their service record. In an FFU, the service records in the database will be more than one version old until the compute nodes start up, but control services need to be online first.
- unified_limits_count_pcpu_as_vcpu¶
- Type:
boolean
- Default:
False
When using unified limits, use VCPU + PCPU for VCPU quota usage.
If the deployment is configured to use unified limits via [quota]driver=nova.quota.UnifiedLimitsDriver, by default VCPU resources are counted independently from PCPU resources, consistent with how they are represented in the placement service.
Legacy quota behavior counts PCPU as VCPU and returns the sum of VCPU + PCPU usage as the usage count for VCPU. Operators relying on the aggregation of VCPU and PCPU resource usage counts should set this option to True.
Related options:
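A deployment that wants to keep the legacy combined counting while moving to unified limits would therefore set:

```ini
[quota]
driver = nova.quota.UnifiedLimitsDriver

[workarounds]
# Count PCPU usage toward the VCPU limit, matching legacy quota behavior.
unified_limits_count_pcpu_as_vcpu = True
```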
- skip_cpu_compare_on_dest¶
- Type:
boolean
- Default:
False
With the libvirt driver, during live migration, skip comparing guest CPU with the destination host. When using QEMU >= 2.9 and libvirt >= 4.4.0, libvirt will do the correct thing with respect to checking CPU compatibility on the destination host during live migration.
- skip_cpu_compare_at_startup¶
- Type:
boolean
- Default:
False
This will skip the CPU comparison call at the startup of Compute service and lets libvirt handle it.
- skip_hypervisor_version_check_on_lm¶
- Type:
boolean
- Default:
False
When this is enabled, it will skip version-checking of hypervisors during live migration.
- skip_reserve_in_use_ironic_nodes¶
- Type:
boolean
- Default:
False
This may be useful if you use the Ironic driver, but don’t have automatic cleaning enabled in Ironic. Nova, by default, will mark Ironic nodes as reserved as soon as they are in use. When you free the Ironic node (by deleting the nova instance) it takes a while for Nova to un-reserve that Ironic node in placement. Usually this is a good idea, because it avoids placement providing an Ironic node as a valid candidate when it is still being cleaned. However, if you don’t use automatic cleaning, it can cause an extra delay before an Ironic node is available for building a new Nova instance.
- disable_deep_image_inspection¶
- Type:
boolean
- Default:
False
This disables the additional deep image inspection that the compute node does when downloading from glance. This includes backing-file, data-file, and known-features detection before passing the image to qemu-img. Generally, this inspection should be enabled for maximum safety, but this workaround option allows disabling it if there is a compatibility concern.
wsgi¶
Options under this group are used to configure WSGI (Web Server Gateway Interface). WSGI is used to serve API requests.
- api_paste_config¶
- Type:
string
- Default:
api-paste.ini
This option represents a file name for the paste.deploy config for nova-api.
Possible values:
A string representing file name for the paste.deploy config.
¶ Group
Name
DEFAULT
api_paste_config
- wsgi_log_format¶
- Type:
string
- Default:
%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
This option is used for building custom request log lines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect.
Possible values:
%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f (default)
Any formatted string formed by specific values.
¶ Group
Name
DEFAULT
wsgi_log_format
Warning
This option is deprecated for removal since 16.0.0. Its value may be silently ignored in the future.
- Reason:
This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi.
- secure_proxy_ssl_header¶
- Type:
string
- Default:
<None>
This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy.
Possible values:
None (default) - the request scheme is not influenced by any HTTP headers
Valid HTTP header, like
HTTP_X_FORWARDED_PROTO
WARNING: Do not set this unless you know what you are doing.
Make sure ALL of the following are true before setting this (assuming the values from the example above):
Your API is behind a proxy.
Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it.
Your proxy sets the X-Forwarded-Proto header and sends it to the API, but only for requests that originally come in via HTTPS.
If any of those are not true, you should keep this setting set to None.
¶ Group
Name
DEFAULT
secure_proxy_ssl_header
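Assuming all three conditions above hold, the typical setting is:

```ini
[wsgi]
# Only safe behind a proxy that strips and re-sets this header.
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
```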
- ssl_ca_file¶
- Type:
string
- Default:
<None>
This option allows setting path to the CA certificate file that should be used to verify connecting clients.
Possible values:
String representing path to the CA certificate file.
Related options:
enabled_ssl_apis
¶ Group
Name
DEFAULT
ssl_ca_file
- ssl_cert_file¶
- Type:
string
- Default:
<None>
This option allows setting path to the SSL certificate of API server.
Possible values:
String representing path to the SSL certificate.
Related options:
enabled_ssl_apis
¶ Group
Name
DEFAULT
ssl_cert_file
- ssl_key_file¶
- Type:
string
- Default:
<None>
This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect.
Possible values:
String representing path to the SSL private key.
Related options:
enabled_ssl_apis
¶ Group
Name
DEFAULT
ssl_key_file
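The three SSL file options are typically set together with enabled_ssl_apis; a sketch, with placeholder paths:

```ini
[DEFAULT]
enabled_ssl_apis = osapi_compute

[wsgi]
# Placeholder paths to the server certificate/key and the CA bundle
# used to verify connecting clients.
ssl_cert_file = /etc/nova/ssl/cert.pem
ssl_key_file = /etc/nova/ssl/key.pem
ssl_ca_file = /etc/nova/ssl/ca.pem
```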
- tcp_keepidle¶
- Type:
integer
- Default:
600
- Minimum Value:
0
This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep the connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep a connection active. Not supported on OS X.
Related options:
keep_alive
¶ Group
Name
DEFAULT
tcp_keepidle
- default_pool_size¶
- Type:
integer
- Default:
1000
- Minimum Value:
0
This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option.
¶ Group
Name
DEFAULT
wsgi_default_pool_size
- max_header_line¶
- Type:
integer
- Default:
16384
- Minimum Value:
0
This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
Since TCP is a stream-based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length.
¶ Group
Name
DEFAULT
max_header_line
- keep_alive¶
- Type:
boolean
- Default:
True
This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse.
Possible values:
True : reuse HTTP connection.
False : closes the client socket connection explicitly.
Related options:
tcp_keepidle
¶ Group
Name
DEFAULT
wsgi_keep_alive
- client_socket_timeout¶
- Type:
integer
- Default:
900
- Minimum Value:
0
This option specifies the timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0.
¶ Group
Name
DEFAULT
client_socket_timeout
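As a sketch, the connection-handling options above for the eventlet-based server might be combined as follows (values are illustrative; these have no effect under uwsgi or mod_wsgi):

```ini
[wsgi]
# Illustrative tuning for the eventlet-based server only.
keep_alive = True
tcp_keepidle = 600
client_socket_timeout = 900
default_pool_size = 1000
```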
zvm¶
The zvm options allow the cloud administrator to configure the z/VM hypervisor driver to be used within an OpenStack deployment.
zVM options are used when the compute_driver is set to use zVM (compute_driver=zvm.ZVMDriver).
- cloud_connector_url¶
- Type:
URI
- Default:
http://zvm.example.org:8080/
This option has a sample default set, which means that its actual default value may vary from the one documented above.
URL to be used to communicate with z/VM Cloud Connector.
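For instance, a compute node using the z/VM driver might set (the endpoint URL is illustrative):

```ini
[DEFAULT]
compute_driver = zvm.ZVMDriver

[zvm]
# Illustrative endpoint for the z/VM Cloud Connector.
cloud_connector_url = http://zvm.example.org:8080/
```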
- ca_file¶
- Type:
string
- Default:
<None>
CA certificate file to be verified in httpd server with TLS enabled
A string, it must be a path to a CA bundle to use.
- image_tmp_path¶
- Type:
string
- Default:
$state_path/images
This option has a sample default set, which means that its actual default value may vary from the one documented above.
The path at which images will be stored (snapshot, deploy, etc).
Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location.
- Possible values:
A file system path on the host running the compute service.
- reachable_timeout¶
- Type:
integer
- Default:
300
Timeout (seconds) to wait for an instance to start.
The z/VM driver relies on communication between the instance and the cloud connector. After an instance is created, it must have enough time to wait for all the network info to be written into the user directory. The driver will keep rechecking the network status of the instance until the timeout value is reached. If setting up the network fails, it will notify the user that starting the instance failed and put the instance in the ERROR state. The underlying z/VM guest will then be deleted.
- Possible Values:
Any positive integer. Recommended to be at least 300 seconds (5 minutes), but it will vary depending on instance and system load. A value of 0 is used for debug. In this case the underlying z/VM guest will not be deleted when the instance is marked in ERROR state.