Ocata Series Release Notes
15.1.5-28
Security Issues
OSSA-2019-003: Nova Server Resource Faults Leak External Exception Details (CVE-2019-14433)
This release contains a security fix for bug 1837877 where users without the admin role can be exposed to sensitive error details in the server resource fault message.
There is a behavior change where non-nova exceptions will only record the exception class name in the fault message field which is exposed to all users, regardless of the admin role.
The fault details, which are only exposed to users with the admin role, will continue to include the traceback and also include the exception value, which for non-nova exceptions is what used to be exposed in the fault message field. Meaning, the information that admins could see for server faults is still available, but the exception value may be in details rather than message now.
Bug Fixes
When testing whether direct IO is possible on the backing storage for an instance, Nova now uses a block size of 4096 bytes instead of 512 bytes, avoiding issues when the underlying block device has sectors larger than 512 bytes. See bug https://launchpad.net/bugs/1801702 for details.
15.1.5
Security Issues
The ‘AMD-SSBD’ and ‘AMD-NO-SSB’ flags have been added to the list of available choices for the [libvirt]/cpu_model_extra_flags config option. These are important for proper mitigation of security issues in AMD CPUs. For more information see https://www.redhat.com/archives/libvir-list/2018-June/msg01111.html
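A minimal sketch of enabling one of these flags in nova.conf, assuming a custom CPU model is in use (the model name here is a placeholder; use one that your libvirt/QEMU versions and hardware actually support):
[libvirt]
cpu_mode = custom
cpu_model = EPYC
cpu_model_extra_flags = amd-ssbd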
Other Notes
The instance.shutdown.end versioned notification will have an empty ip_addresses field since the network resources associated with the instance are deallocated before this notification is sent, which is actually more accurate. Consumers should rely on the instance.shutdown.start notification if they need the network information for the instance when it is being deleted.
15.1.4
Security Issues
A new policy rule, os_compute_api:servers:create:zero_disk_flavor, has been introduced which defaults to rule:admin_or_owner for backward compatibility, but can be configured to make the compute API enforce that server create requests using a flavor with zero root disk must be volume-backed or fail with a 403 HTTPForbidden error.
Allowing image-backed servers with a zero root disk flavor can be potentially hazardous if users are allowed to upload their own images, since an instance created with a zero root disk flavor gets its size from the image, which can be unexpectedly large and exhaust local disk on the compute host. See https://bugs.launchpad.net/nova/+bug/1739646 for more details.
While this is introduced in a backward-compatible way, the default will be changed to rule:admin_api in a subsequent release. It is advised that you communicate this change to your users before turning on enforcement since it will result in a compute API behavior change.
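To opt into enforcement ahead of the default change, a policy override along these lines (shown in policy.json syntax) should work:
"os_compute_api:servers:create:zero_disk_flavor": "rule:admin_api"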
15.1.3
Security Issues
The ‘SSBD’ and ‘VIRT-SSBD’ cpu flags have been added to the list of available choices for the [libvirt]/cpu_model_extra_flags config option. These are important for proper mitigation of the Spectre 3a and 4 CVEs. Note that the use of either of these flags requires updated packages below nova, including libvirt, qemu (specifically >= 2.9.0 for virt-ssbd), linux, and system firmware. For more information see https://www.us-cert.gov/ncas/alerts/TA18-141A
15.1.1
Prelude
This release includes fixes for security vulnerabilities.
Upgrade Notes
Starting in Ocata, there is a behavior change where aggregate-based overcommit ratios will no longer be honored during scheduling for the FilterScheduler. Instead, overcommit values must be set on a per-compute-node basis in the Nova configuration files.
If you have been relying on per-aggregate overcommit, during your upgrade, you must change to using per-compute-node overcommit ratios in order for your scheduling behavior to stay consistent. Otherwise, you may notice increased NoValidHost scheduling failures as the aggregate-based overcommit is no longer being considered.
You can safely remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter from your [filter_scheduler] enabled_filters and you do not need to replace them with any other core/ram/disk filters. The placement query in the FilterScheduler takes care of the core/ram/disk filtering, so CoreFilter, RamFilter, and DiskFilter are redundant. Please see the mailing list thread for more information: http://lists.openstack.org/pipermail/openstack-operators/2018-January/014748.html
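As an illustration only (the ratio values are placeholders for your deployment), per-compute-node overcommit is set in each compute node's nova.conf:
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0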
Security Issues
[CVE-2017-18191] Swapping encrypted volumes can lead to data loss and a possible compute host DoS attack.
Bug Fixes
The delete_host command has been added in nova-manage cell_v2 to delete a host from a cell (host mappings). The force option has been added in nova-manage cell_v2 delete_cell. If the force option is specified, a cell can be deleted even if the cell has hosts.
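A minimal usage sketch (the cell UUID and host name are placeholders):
nova-manage cell_v2 delete_host --cell_uuid <cell_uuid> --host <hostname>
nova-manage cell_v2 delete_cell --cell_uuid <cell_uuid> --force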
If scheduling fails during rebuild, the server instance will go to ERROR state and a fault will be recorded. See bug 1744325.
The libvirt driver now allows specifying individual CPU feature flags for guests, via a new configuration attribute [libvirt]/cpu_model_extra_flags – only with custom as the [libvirt]/cpu_model. Refer to its documentation in nova.conf for usage details.
One of the motivations for this is to alleviate the performance degradation (caused as a result of applying the “Meltdown” CVE fixes) for guests running with certain Intel-based virtual CPU models. This guest performance impact is reduced by exposing the CPU feature flag ‘PCID’ (“Process-Context ID”) to the guest CPU, assuming that it is available in the physical hardware itself.
Note that besides custom, Nova’s libvirt driver has two other CPU modes: host-model (which is the default), and host-passthrough. Refer to the [libvirt]/cpu_model_extra_flags documentation for what to do when you are using either of those CPU modes in the context of ‘PCID’.
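A sketch of exposing ‘PCID’ with a custom CPU model (the IvyBridge model name is a placeholder; it must match your hardware and what your libvirt version provides):
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid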
15.1.0
Known Issues
Nova does not support running the nova-api service under mod_wsgi or uwsgi in Ocata. There are some experimental scripts that have been available for years which allow you to do this, but doing so in Ocata results in possible failures to list and show instance details in a cells v2 setup. See bug 1661360 for details.
Upgrade Notes
This release contains a schema migration for the nova_api database in order to address bug 1738094: https://bugs.launchpad.net/nova/+bug/1738094
The migration is optional and can be postponed if you have not been affected by the bug. The bug manifests itself through “Data too long for column ‘spec’” database errors.
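If you are affected, the migration would be applied with the usual API database schema sync command:
nova-manage api_db sync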
Bug Fixes
The fix for OSSA-2017-005 (CVE-2017-16239) was too far-reaching in that rebuilds can now fail based on scheduling filters that should not apply to rebuild. For example, a rebuild of an instance on a disabled compute host could fail whereas it would not before the fix for CVE-2017-16239. Similarly, rebuilding an instance on a host that is at capacity for vcpu, memory or disk could fail since the scheduler filters would treat it as a new build request even though the rebuild is not claiming new resources.
Therefore this release contains a fix for those regressions in scheduling behavior on rebuild while maintaining the original fix for CVE-2017-16239.
Note
The fix relies on a RUN_ON_REBUILD variable which is checked for all scheduler filters during a rebuild. The reasoning behind the value for that variable depends on each filter. If you have out-of-tree scheduler filters, you will likely need to assess whether or not they need to override the default value (False) for the new variable.
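A minimal sketch of what an out-of-tree filter might look like (the filter itself is hypothetical; only BaseHostFilter, host_passes and the RUN_ON_REBUILD attribute come from Nova):
from nova.scheduler import filters

class MyCustomFilter(filters.BaseHostFilter):
    # Hypothetical out-of-tree filter. Set RUN_ON_REBUILD = True only if
    # this filter's constraint must also hold when rebuilding an instance
    # in place on its current host.
    RUN_ON_REBUILD = False

    def host_passes(self, host_state, spec_obj):
        # Apply the custom constraint here.
        return True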
Fixes bug 1695861 in which the aggregate API accepted requests that have availability zone names including ‘:’. With this fix, a creation of an availability zone whose name includes ‘:’ results in a 400 BadRequest error response.
This release includes a fix for bug 1733886 which was a regression introduced in the 2.36 API microversion where the force parameter was missing from the PUT /os-quota-sets/{tenant_id} API request schema so users could not force quota updates with microversion 2.36 or later. The bug is now fixed so that the force parameter can once again be specified during quota updates. There is no new microversion for this change since it is an admin-only API.
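For illustration (the tenant id and quota value are placeholders), a forced update at this microversion is a request like:
PUT /v2.1/os-quota-sets/{tenant_id}
X-OpenStack-Nova-API-Version: 2.36
{"quota_set": {"cores": 20, "force": true}}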
15.0.8
Security Issues
OSSA-2017-005: Nova Filter Scheduler bypass through rebuild action
By rebuilding an instance, an authenticated user may be able to circumvent the FilterScheduler, bypassing imposed filters (for example, the ImagePropertiesFilter or the IsolatedHostsFilter). All setups using the FilterScheduler (or CachingScheduler) are affected.
The fix is in the nova-api and nova-conductor services.
15.0.7
Bug Fixes
Correctly allow the use of a custom scheduler driver by using the name of the custom driver entry point in the [scheduler]/driver config option. You must also update the entry point in setup.cfg.
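A hypothetical sketch (mypackage, my_scheduler and MyScheduler are placeholders) of the matching entry point and config option:
# in setup.cfg of your package
[entry_points]
nova.scheduler.driver =
    my_scheduler = mypackage.scheduler:MyScheduler

# in nova.conf
[scheduler]
driver = my_scheduler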
Physical network name will be retrieved from a multi-segment network. The current implementation will retrieve the physical network name for the first segment that provides it. This is mostly intended to support a combination of vxlan and vlan segments. Additional work will be required to support a case of multiple vlan segments associated with different physical networks.
15.0.5
Bug Fixes
Includes the fix for bug 1673613 which could cause issues when upgrading and running nova-manage cell_v2 simple_cell_setup or nova-manage cell_v2 map_cell0 where the database connection is read from config and has special characters in the URL.
Fixes bug 1691545 in which there was a significant increase in database connections because of the way connections to cell databases were being established. With this fix, objects related to database connections are cached in the API service and reused to prevent new connections being established for every communication with cell databases.
15.0.2
Prelude
This release includes fixes for security vulnerabilities.
Security Issues
[CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets
15.0.1
Prelude
The 15.0.1 Ocata release contains fixes for several high severity, high impact bugs. If you have not yet upgraded to 15.0.0, it is recommended to upgrade directly to 15.0.1.
Known Issues
There is a known regression in Ocata reported in bug 1671648 where server build failures on a compute node are not retried on another compute node. The fix for this bug is being worked on and will be provided shortly in a 15.0.2 release.
Critical Issues
Bug 1670627 is fixed. This bug led to potential over-quota errors after several failed server build attempts, resulting in quota usage to reach the limit even though the servers were deleted.
Unfortunately the nova-manage project quota_usage_refresh command will not reset the usages to fix this problem once encountered. If the project should not have any outstanding resource usage, then one possible workaround is to delete the existing quota usage for the project:
``nova quota-delete --tenant <tenant_id>``
That will clean up the project_user_quotas, quota_usages and reservations tables for the given project in the nova database and reset the quota limits for the project back to the defaults defined in nova.conf.
Bug Fixes
Fixes bug 1670522 which was a regression in the 15.0.0 Ocata release. For compute nodes running the libvirt driver with virt_type not set to “kvm” or “qemu”, i.e. “xen”, creating servers will fail by default if libvirt >= 1.3.3 and QEMU >= 2.7.0 without this fix.
Bug 1665263 is fixed. This was a regression where instance.delete.start and instance.delete.end notifications were not emitted when deleting an instance in ERROR state due to a failed build.
15.0.0
Prelude
Neutron is now the default configuration for new deployments.
The 15.0.0 release includes many new features and bug fixes. It is difficult to cover all the changes that have been introduced. Please at least read the upgrade section which describes the required actions to upgrade your cloud from 14.0.0 (Newton) to 15.0.0 (Ocata).
That said, a few major changes are worth mentioning. This is not an exhaustive list:
The latest API microversion supported for Ocata is v2.42. Details on REST API microversions added since the 14.0.0 Newton release can be found in the REST API Version History page.
The Nova FilterScheduler driver is now able to make scheduling decisions based on the new Placement RESTful API endpoint that becomes mandatory in Ocata. Accordingly, the compute nodes will refuse to start if you do not amend the configuration to add the [placement] section so they can provide their resource usage. For the moment, only CPU, RAM and disk resource usage are verified by the Placement API, but we plan to add more resource classes in the next release. You will find further details in the features and upgrade sections below, and the Placement API page.
Ocata contains a lot of new CellsV2 functions, but not all of it is fully ready for production. All deployments must set up their existing nodes as a cell, with database connection and MQ transport_url config items matching that cell. In a subsequent release, additional cells will be fully supported, as will a migration path for CellsV1 users. By default, an Ocata deployment now needs to configure at least one new “Cell V2” (not to be confused with the first version of cells). In Newton, it was possible to deploy a single cell V2 and schedule on it but this was optional. Now in Ocata, single CellsV2 deployments are mandatory. More details can be found in the release notes below.
There is a new nova-status command that gives operators a better view of their cloud. In particular, a new subcommand called “upgrade” allows operators to run a pre-flight check on their deployment before upgrading. This helps them to proactively identify potential upgrade issues that could occur.
New Features
Updates the network metadata that is passed to configdrive by the Ironic virt driver. The metadata now includes network information about port groups and their associated ports. It will be used to configure port groups on the baremetal instance side.
Adding aarch64 to the list of supported architectures for NUMA and hugepage features. This requires libvirt>=1.2.7 for NUMA, libvirt>=1.2.8 for hugepage and qemu v2.1.0 for both.
OSProfiler support was added. This cross-project profiling library makes it possible to trace various OpenStack requests through all OpenStack services that support it. To initiate OpenStack request tracing, the --profile <HMAC_KEY> option needs to be added to the CLI command. This key needs to be one of the secret keys defined in the nova.conf configuration file with the hmac_keys option under the [profiler] configuration section. To enable or disable Nova profiling, the enabled option under the same section needs to be set to either True or False. By default Nova will trace all API and RPC requests, but there is an opportunity to trace DB requests as well. For this purpose the trace_sqlalchemy option needs to be set to True. As a prerequisite, the OSProfiler library and its storage backend need to be installed in the environment. If so (and if profiling is enabled in nova.conf) the trace can be generated via a command such as:
$ nova --profile SECRET_KEY boot --image <image> --flavor <flavor> <name>
At the end of the output there will be a message with <trace_id>, and to plot nice HTML graphs the following command should be used:
$ osprofiler trace show <trace_id> --html --out result.html
The following versioned swap volume notifications have been added in the compute manager:
instance.volume_swap.start
instance.volume_swap.end
instance.volume_swap.error
Support for archiving all deleted rows from the database has been added to the nova-manage db archive_deleted_rows command. The --until-complete option will continuously run the process until no more rows are available for archiving.
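For example (the batch size is a placeholder):
nova-manage db archive_deleted_rows --max_rows 1000 --until-complete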
Virtuozzo hypervisor now supports ephemeral disks for containers.
Support versioned notifications for flavor operations like create, delete, update access and update extra_specs.
The Hyper-V driver now supports the following quota flavor extra specs, allowing IO limits to be specified individually for each of the instance’s local disks (see the example after this list):
quota:disk_total_bytes_sec
quota:disk_total_iops_sec - those are normalized IOPS, thus each IO request is accounted for as 1 normalized IO if the size of the request is less than or equal to a predefined base size (8KB).
Also, the following Cinder front-end QoS specs are now supported for SMB Cinder backends:
total_bytes_sec
total_iops_sec - normalized IOPS
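A sketch of setting the flavor extra specs (the flavor name and limit values are placeholders):
nova flavor-key my-flavor set quota:disk_total_bytes_sec=104857600 quota:disk_total_iops_sec=1000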
The Hyper-V driver now uses os-brick for volume related operations, introducing the following new features:
Attaching volumes over fibre channel on a passthrough basis.
Improved iSCSI MPIO support, by connecting to multiple iSCSI targets/portals when available and allowing the use of a predefined list of initiator HBAs.
Adds trigger crash dump support to ironic virt driver. This feature requires the Ironic service to support API version 1.29 or later. It also requires python-ironicclient >= 1.11.0.
Adds soft reboot support to the Ironic virt driver. If the hardware driver in Ironic doesn’t support soft reboot, a hard reboot is attempted. This feature requires the Ironic service to support API version 1.27 or later. It also requires python-ironicclient >= 1.10.0.
Adds soft power off support to Ironic virt driver. This feature requires the Ironic service to support API version 1.27 or later. It also requires python-ironicclient >= 1.10.0.
Virtuozzo hypervisor now supports libvirt callback to set admin password. Requires libvirt>=2.0.0.
The XenServer compute driver now supports hot-plugging virtual network interfaces.
The same policy rule (os_compute_api:os-server-groups) was being used for all actions (show, index, delete, create) for server_groups REST APIs. It was thus impossible to provide different RBAC for specific actions based on roles. To address this changes were made to have separate policy rules for each action. The original rule (os_compute_api:os-server-groups) is left unchanged for backward compatibility.
The libvirt driver now has a live_migration_scheme configuration option which should be used where the live_migration_uri would previously have been configured with a non-default scheme.
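For example, assuming an SSH-based transport is what you previously encoded in live_migration_uri:
[libvirt]
live_migration_scheme = ssh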
The nova Hyper-V driver can now plug OVS VIFs. This means that neutron-ovs-agent can be used as an L2 agent instead of neutron-hyperv-agent. In order to plug OVS VIFs, the configuration option “vswitch_name” from the “hyperv” section must be set to the vSwitch which has the OVS extension enabled. Hot-plugging is only supported on Windows / Hyper-V Server 2016 + Generation 2 VMs. Older Hyper-V versions only support attaching vNICs while the VM is turned off.
The nova Hyper-V driver now supports adding PCI passthrough devices to Hyper-V instances (discrete device assignment). This feature has been introduced in Windows / Hyper-V Server 2016 and offers the possibility to attach some of the host’s PCI devices (e.g.: GPU devices) directly to Hyper-V instances. In order to benefit from this feature, Hyper-V compute nodes must support SR-IOV and must have assignable PCI devices. This can easily be checked by running the following powershell commands:
Start-BitsTransfer https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-samples/benarm-powershell/DDA/survey-dda.ps1
.\survey-dda.ps1
The script above will print a list of assignable PCI devices available on the host, and if the host supports SR-IOV.
If the host supports this feature and it has at least one assignable PCI device, the host must be configured to allow those PCI devices to be assigned to VMs. For information on how to do this, follow this guide [1].
After the compute nodes have been configured, the nova-api, nova-scheduler, and the nova-compute services will have to be configured next [2].
[1] https://blogs.technet.microsoft.com/heyscriptingguy/2016/07/14/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/ [2] http://docs.openstack.org/admin-guide/compute-pci-passthrough.html
Added boot order support in the Hyper-V driver. The HyperVDriver can now set the requested boot order for instances that are Generation 2 VMs (the given image has the property “hw_machine_type=hyperv-gen2”). For Generation 1 VMs, the spawned VM’s boot order is changed only if the given image is an ISO, booting from ISO first.
The nova Hyper-V driver now supports symmetric NUMA topologies. This means that all the NUMA nodes in the NUMA topology must have the same amount of vCPUs and memory. It can easily be requested by having the flavor extra_spec “hw:numa_nodes”, or the image property “hw_numa_nodes”. An instance with NUMA topology cannot have dynamic memory enabled. Thus, if an instance requires a NUMA topology, it will be spawned without dynamic memory, regardless of the value set in the “dynamic_memory_ratio” config option in the compute node’s “nova.conf” file. In order to benefit from this feature, the host’s NUMA spanning must be disabled. Hyper-V does not guarantee CPU pinning, thus, the nova Hyper-V driver will not spawn instances with the flavor extra_spec “hw:cpu_policy” or image property “hw_cpu_policy” set to “dedicated”.
Added support for Hyper-V VMs with UEFI Secure Boot enabled. In order to create such VMs, there are a couple of things to consider:
Images should be prepared for Generation 2 VMs. The image property “hw_machine_type=hyperv-gen2” is mandatory.
The guest OS type must be specified in order to properly spawn the VMs. It can be specified through the image property “os_type”, and the acceptable values are “windows” or “linux”.
The UEFI Secure Boot feature can be requested through the image property “os_secure_boot” (acceptable values: “disabled”, “optional”, “required”) or flavor extra spec “os:secure_boot” (acceptable values: “disabled”, “required”). The flavor extra spec will take precedence. If the image property and the flavor extra spec values are conflicting, then an exception is raised.
This feature is supported on Windows / Hyper-V Server 2012 R2 for Windows guests, and Windows / Hyper-V Server 2016 for both Windows and Linux guests.
Encryption provider constants have been introduced detailing the supported encryption formats such as LUKS along with their associated in-tree provider implementations. These constants should now be used to identify an encryption provider implementation for a given encryption format.
Adds serial console support to the Ironic driver. Nova now supports serial consoles for Ironic bare metal nodes using the Ironic socat console driver. In order to use this feature, serial console must be configured in Nova and the Ironic socat console driver must be used and configured in Ironic. Ironic serial console configuration is documented at http://docs.openstack.org/developer/ironic/deploy/console.html.
Live migration is supported for both Virtuozzo containers and virtual machines when using virt_type=parallels.
The following legacy notifications have been transformed to a new versioned payload:
aggregate.create
aggregate.delete
instance.create
instance.finish_resize
instance.power_off
instance.resume
instance.shelve_offload
instance.shutdown
instance.snapshot
instance.unpause
instance.unshelve
Every versioned notification has a sample file stored under doc/notification_samples directory. Consult http://docs.openstack.org/developer/nova/notifications.html for more information.
A new nova-status upgrade check CLI is provided for checking the readiness of a deployment when preparing to upgrade to the latest release. The tool is written to handle both fresh installs and upgrades from an earlier release, for example upgrading from the 14.0.3 Newton release. There can be multiple checks performed with varying degrees of success. More details on the command and how to interpret results are in the nova-status man page.
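Usage is a single command run against your existing configuration:
$ nova-status upgrade check
Per the man page, a return code of 0 means all checks succeeded, while non-zero return codes distinguish warnings from failures.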
All deployments will function as a single-cell environment. Multiple v2 cells are technically possible, but should only be used for testing as many other things will not work across cell boundaries yet. For details on cells v2 and the setup required for Nova with cells v2, see the cells documentation. [1]
Added microversion v2.40 which introduces pagination support for usage with the help of new optional parameters ‘limit’ and ‘marker’. If ‘limit’ isn’t provided, it will default to the configurable max limit which is currently 1000.
/os-simple-tenant-usage?limit={limit}&marker={instance_uuid} /os-simple-tenant-usage/{tenant}?limit={limit}&marker={instance_uuid}
Older microversions will not accept these new paging query parameters, but they will start to silently limit by the max limit to encourage the adoption of this new microversion, and to circumvent the existing possibility of DoS-like usage requests on systems with thousands of instances.
Enhance pci.passthrough_whitelist to support regular expression syntax. The ‘address’ field can use regular expression syntax. The old glob syntax for pci.passthrough_whitelist is still valid config.
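A sketch of a regex-style whitelist entry in nova.conf (the bus/slot/function values are placeholders for your hardware):
[pci]
passthrough_whitelist = { "address": { "domain": ".*", "bus": "02", "slot": "01", "function": "[0-7]" } }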
A new Placement API microversion 1.3 is added with support for filtering the list of resource providers to include only those resource providers which are members of any of the aggregates listed by uuid in the member_of query parameter. The parameter is used when making a GET /resource_providers request. The value of the parameter uses the in: syntax to provide a list of aggregate uuids as follows:
/resource_providers?member_of=in:09c931b0-c0d7-4e80-8e01-9e6511db8259,f8ab4fa2-804f-402e-b675-7918bd04b173
If other filtering query parameters are present, the results are a boolean AND of all the filters.
A new Placement API microversion 1.4 is added. Users may now query the Placement REST API for resource providers that have the ability to meet a set of requested resource amounts. The GET /resource_providers API call can have a “resources” query string parameter supplied that indicates the requested amounts of various resources that a provider must have the capacity to serve. The “resources” query string parameter takes the form:
?resources=$RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT
For instance, if the user wishes to see resource providers that can service a request for 2 vCPUs, 1024 MB of RAM and 50 GB of disk space, the user can issue a request of:
``GET /resource_providers?resources=VCPU:2,MEMORY_MB:1024,DISK_GB:50``
The placement API is only available to admin users.
A new administrator-only resource endpoint was added to the OpenStack Placement REST API for managing custom resource classes. Custom resource classes are specific to a deployment and represent types of quantitative resources that are not interoperable between OpenStack clouds. See the Placement REST API Version History documentation for usage details.
The nova-scheduler process now calls the placement API in order to get a list of valid destinations before calling the filters. That works only if all your compute nodes are fully upgraded to Ocata. If some nodes are not upgraded, the scheduler will still look them up from the DB instead, which is less performant.
A new 2.41 microversion was added to the Compute API. Users specifying this microversion will now see the ‘uuid’ attribute of aggregates when calling the os-aggregates REST API endpoint.
As new hosts are added to Nova, the nova-manage cell_v2 discover_hosts command must be run in order to map them into their cell. For deployments with proper automation, this is a trivial extra step in that process. However, for smaller or non-automated deployments, there is a new configuration variable for the scheduler process which will perform this discovery periodically. By setting scheduler.discover_hosts_in_cells_interval to a positive value, the scheduler will handle this for you. Note that this process involves listing all hosts in all cells, and is likely to be too heavyweight for large deployments to run all the time.
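For example, to run discovery every five minutes (the interval value is a placeholder):
[scheduler]
discover_hosts_in_cells_interval = 300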
VLAN tags associated with instance network interfaces are now exposed via the metadata API and instance config drives and can be consumed by the instance. This is an extension of the device tagging mechanism added in past releases. This is useful for instances utilizing SR-IOV physical functions (PFs). The VLAN configuration for the guest’s virtual interfaces associated with these devices cannot be configured inside the guest OS from the host, but nonetheless must be configured with the VLAN tags of the device to ensure packet delivery. This feature makes this possible.
Note
VLAN tags are currently only supported via the Libvirt driver.
Added support for the Keystone middleware feature where if a service token is sent along with the user token, then it will ignore the expiration of the user token. This helps deal with issues of user tokens expiring during long running operations, such as live-migration where nova tries to access Cinder and Neutron at the end of the operation using the user token that has expired. In order to use this functionality a service user needs to be created. Add service user configurations in nova.conf under the service_user group and set the send_service_user_token flag to True. A minimum Keystone API version of 3.8 and Keystone middleware version of 4.12.0 are required to use this functionality. This currently only works with Nova - Cinder and Nova - Neutron API interactions.
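A minimal sketch of the relevant nova.conf section (the endpoint and all credential values are placeholders for your deployment):
[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://keystone.example.com:5000/v3
username = nova
password = <service user password>
project_name = service
user_domain_name = Default
project_domain_name = Default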
The vendordata metadata system now caches boot time roles. Some external vendordata services want to provide metadata based on the role of the user who started the instance. It would be confusing if the metadata returned changed later if the role of the user changed, so we cache the boot time roles and then pass those to the external vendordata service.
The vendordata metadata system now supports a hard failure mode. This can be enabled using the api.vendordata_dynamic_failure_fatal configuration option. When enabled, an instance will fail to start if the instance cannot fetch dynamic vendordata.
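That is, assuming you want boot to fail rather than proceed without vendordata:
[api]
vendordata_dynamic_failure_fatal = true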
The nova-manage online_data_migrations command now prints a tabular summary of completed and remaining records. The goal here is to get all your numbers to zero. The previous execution return code behavior is retained for scripting.
When using the libvirt driver, vrouter VIFs (OpenContrail) now support multiqueue mode, which allows network performance to scale across a number of vCPUs. To use this feature, create an instance with more than 1 vCPU from an image with the hw_vif_multiqueue_enabled property set to true.
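For example (the image name is a placeholder):
openstack image set --property hw_vif_multiqueue_enabled=true my-image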
A list of valid vif models is extended for Virtuozzo hypervisor (virt_type=parallels) with VIRTIO, RTL8139 and E1000 models.
Known Issues
Flavor.projects (access) will not be present in the instance versioned notifications since notifications currently do not lazy-load fields. This limitation is being tracked with bug 1653221.
Ironic nodes that were deleted from ironic’s database during Newton may result in orphaned resource providers causing incorrect scheduling decisions, leading to a reschedule. If this happens, the orphaned resource providers will need to be identified and removed.
The live-migration progress timeout controlled by the configuration option [libvirt]/live_migration_progress_timeout has been discovered to frequently cause live-migrations to fail with a progress timeout error, even though the live-migration is still making good progress. To minimize problems caused by these checks we have changed the default to 0, which means do not trigger a timeout. To modify when a live-migration will fail with a timeout error, please now look at [libvirt]/live_migration_completion_timeout and [libvirt]/live_migration_downtime.
When generating Libvirt XML to attach network interfaces for the tap, ivs, iovisor, midonet, and vrouter virtual interface types Nova previously generated an empty path attribute to the script element (<script path=’’/>) of the interface.
As of Libvirt 1.3.3 and later, Libvirt no longer accepts an empty path attribute to the script element of the interface. Notably this includes Libvirt 2.0.0 as provided with RHEL 7.3 and CentOS 7.3-1611. The creation of virtual machines with offending interface definitions on a host with Libvirt 1.3.3 or later will result in an error “libvirtError: Cannot find ‘’ in path: No such file or directory”.
Additionally, where virtual machines already exist that were created using earlier versions of Libvirt interactions with these virtual machines via Nova or other utilities (e.g. virsh) may result in similar errors.
To mitigate this issue Nova no longer generates an empty path attribute to the script element when defining an interface. This resolves the issue with regard to virtual machine creation. To resolve the issue with regard to existing virtual machines a change to Libvirt is required; this is being tracked in Bugzilla 1412834.
Once fully upgraded, if you create multiple real cells with hosts, the scheduler will utilize them, but those instances will likely be unusable because not all API functions are cells-aware yet.
Listing instances across multiple cells with a sort order will result in barber-pole sorting, striped across the cell boundaries.
Upgrade Notes
API configuration options have been moved to the ‘api’ group. They should no longer be included in the ‘DEFAULT’ group. Options affected by this change:
auth_strategy
use_forwarded_for
config_drive_skip_versions
vendordata_providers
vendordata_dynamic_targets
vendordata_dynamic_ssl_certfile
vendordata_dynamic_connect_timeout
vendordata_dynamic_read_timeout
metadata_cache_expiration
vendordata_jsonfile_path
max_limit (was osapi_max_limit)
compute_link_prefix (was osapi_compute_link_prefix)
glance_link_prefix (was osapi_glance_link_prefix)
allow_instance_snapshots
hide_server_address_states (was osapi_hide_server_address_states)
fping_path
use_neutron_default_nets
neutron_default_tenant_id
enable_instance_password
The console_token_ttl configuration option has been moved to the consoleauth group and renamed token_ttl. It should no longer be included in the DEFAULT group.
To allow access to the versions REST API from diverse origins, CORS support has been added to the ‘oscomputeversions’ pipeline in ‘/etc/nova/api-paste.ini’. Existing deployments that wish to enable support should add the ‘cors’ filter at the start of the ‘oscomputeversions’ pipeline.
The ivs-ctl command has been added to the rootwrap filters in compute.filters. Deployments needing support for BigSwitch no longer need to add the filters manually nor include network.filters at installation.
All pci configuration options have been added to the ‘pci’ group. They should no longer be included in the ‘DEFAULT’ group. These options are as below:
pci_alias (now pci.alias)
pci_passthrough_whitelist (now pci.passthrough_whitelist)
All general scheduler configuration options have been added to the scheduler group:
scheduler_driver (now driver)
scheduler_host_manager (now host_manager)
scheduler_driver_task_period (now periodic_task_interval)
scheduler_max_attempts (now max_attempts)
In addition, all filter scheduler configuration options have been added to the filter_scheduler group:
scheduler_host_subset_size (now host_subset_size)
scheduler_max_instances_per_host (now max_instances_per_host)
scheduler_tracks_instance_changes (now track_instance_changes)
scheduler_available_filters (now available_filters)
scheduler_default_filters (now enabled_filters)
baremetal_scheduler_default_filters (now baremetal_enabled_filters)
scheduler_use_baremetal_filters (now use_baremetal_filters)
scheduler_weight_classes (now weight_classes)
ram_weight_multiplier
disk_weight_multiplier
io_ops_weight_multiplier
soft_affinity_weight_multiplier
soft_anti_affinity_weight_multiplier
isolated_images
isolated_hosts
restrict_isolated_hosts_to_isolated_images
aggregate_image_properties_isolation_namespace
aggregate_image_properties_isolation_separator
These options should no longer be included in the DEFAULT group.
The filter and sort query parameters for server list API are now limited according to whitelists. The whitelists are different for admin and non-admin users.
Filtering
The whitelist for REST API filters for admin users:
access_ip_v4
access_ip_v6
all_tenants
auto_disk_config
availability_zone
config_drive
changes-since
created_at
deleted
description
display_description
display_name
flavor
host
hostname
image
image_ref
ip
ip6
kernel_id
key_name
launch_index
launched_at
limit
locked_by
marker
name
node
not-tags (available in 2.26+)
not-tags-any (available in 2.26+)
power_state
progress
project_id
ramdisk_id
reservation_id
root_device_name
sort_dir
sort_key
status
tags (available in 2.26+)
tags-any (available in 2.26+)
task_state
tenant_id
terminated_at
user_id
uuid
vm_state
For non-admin users, there is a whitelist for filters already. That whitelist is unchanged.
Sorting
The whitelist for sort keys for admin users:
access_ip_v4
access_ip_v6
auto_disk_config
availability_zone
config_drive
created_at
display_description
display_name
host
hostname
image_ref
instance_type_id
kernel_id
key_name
launch_index
launched_at
locked_by
node
power_state
progress
project_id
ramdisk_id
root_device_name
task_state
terminated_at
updated_at
user_id
uuid
vm_state
For non-admin users, the sort keys host and node will be excluded.
Other
An HTTP Bad Request 400 will be returned for the filters/sort keys which are on joined tables or internal data model attributes. These would previously cause an HTTP Internal Server Error 500, namely:
block_device_mapping
info_cache
metadata
pci_devices
security_groups
services
system_metadata
In order to maintain backward compatibility, filter and sort parameters which are not mapped to the REST API servers resource representation are ignored.
The three configuration options cpu_allocation_ratio, ram_allocation_ratio and disk_allocation_ratio for nova compute are now checked against negative values. If any of these three options is set to a negative value, the nova compute service will fail to start.
The default value for the [xenserver]/vif_driver configuration option has been changed to nova.virt.xenapi.vif.XenAPIOpenVswitchDriver to match the default configuration of [DEFAULT]/use_neutron=True.
Support for hw_watchdog_action as a flavor extra spec has been removed. The valid flavor extra spec is hw:watchdog_action and the image property, which takes precedence, is hw_watchdog_action.
The Ironic driver now requires python-ironicclient>=1.9.0, and requires Ironic service to support API version 1.28 or higher. As usual, Ironic should be upgraded before Nova for a smooth upgrade process.
As of Ocata, the minimum version of VMware vCenter that nova compute will interoperate with will be 5.1.0. Deployments using older versions of vCenter should upgrade. Running with vCenter version less than 5.5.0 is also now deprecated and 5.5.0 will become the minimum version in the 16.0.0 Pike release of Nova.
The console_public_hostname console option under the DEFAULT group has been moved to the xenserver group.
The following notifications-related configuration options have been moved from the DEFAULT group to the notifications group:
notify_on_state_change
notify_on_api_faults (was notify_api_faults)
default_level (was default_notification_level)
default_publisher_id
notification_format
When making connections to Ceph-backed volumes via the Libvirt driver, the auth values (rbd_user, rbd_secret_uuid) are now pulled from the backing cinder.conf rather than nova.conf. The nova.conf values are only used if set and the cinder.conf values are not set, but this fallback support is considered accidental and will be removed in the Nova 16.0.0 Pike release. See the Ceph documentation for configuring Cinder for RBD auth.
Nova no longer supports the deprecated Cinder v1 API.
Ocata requires that your deployment have created the cell and host mappings in Newton. If you have not done this, Ocata’s db sync command will fail. Small deployments will want to run nova-manage cell_v2 simple_cell_setup on Newton before upgrading. Operators must create a new database for cell0 before running cell_v2 simple_cell_setup. The simple cell setup command expects the name of the cell0 database to be <main database name>_cell0 as it will create a cell mapping for cell0 based on the main database connection, sync the cell0 database, and associate existing hosts and instances with the single cell.
The nova-network service was deprecated in the 14.0.0 Newton release. In the 15.0.0 Ocata release, nova-network will only work in a Cells v1 deployment. The Neutron networking service is now the default configuration for new deployments based on the use_neutron configuration option.
Most quota options have been moved into their own configuration group. The exception is quota_networks as it is an API flag not a quota flag. These options are as below:
quota_instances (now instances)
quota_cores (now cores)
quota_ram (now ram)
quota_floating_ips (now floating_ips)
quota_fixed_ips (now fixed_ips)
quota_metadata_items (now metadata_items)
quota_injected_files (now injected_files)
quota_injected_file_content_bytes (now injected_file_content_bytes)
quota_injected_file_path_length (now injected_file_path_length)
quota_security_groups (now security_groups)
quota_security_group_rules (now security_group_rules)
quota_key_pairs (now key_pairs)
quota_server_groups (now server_groups)
quota_server_group_members (now server_group_members)
reservation_expire
until_refresh
max_age
quota_driver (now driver)
The nova-all binary to launch all services has been removed after a deprecation period. It was only intended for testing purposes and not production use. Please use the individual Nova binaries to launch services.
The compute_stats_class configuration option was deprecated since the 13.0.0 Mitaka release and has been removed. Compute statistics are now always generated from the nova.compute.stats.Stats class within Nova.
The use_glance_v1 option was removed due to plans to remove Glance v1 support during Ocata development.
The deprecated S3 image backend has been removed.
XenServer users must now set the value of xenserver.ovs_integration_bridge before they can use the system. Previously this had a default of “xapi1”, which has now been removed, because it is dependent on the environment. The xapi<n> are internal bridges that are incrementally defined from zero and “xapi1” may not be the correct bridge. Operators should set this config value to the integration bridge used between all guests and the compute host in their environment.
The scheduler_json_config_location configuration option has not been used since the 13.0.0 Mitaka release and has been removed.
Configuration options related to Barbican were deprecated and are now completely removed from the barbican group. These options are available in the Castellan library. The following are the affected options:
barbican.catalog_info
barbican.endpoint_template
barbican.os_region_name
The driver configuration option has been removed from the cells group. There is only one possible driver for cells (CellsRPCDriver), which makes this option redundant.
The deprecated cert_topic configuration option has been removed.
The fatal_exception_format_errors configuration option has been removed, as it was only used for internal testing.
Ironic configuration options that were used for the deprecated Identity v2 API have been removed from the ironic group. Below is the detailed list of removed options:
admin_username
admin_password
admin_url
admin_tenant_name
The concept that service managers were replaceable components was deprecated in Mitaka, so the following config options are removed:
metadata_manager
compute_manager
console_manager
consoleauth_manager
cert_manager
scheduler_manager
conductor.manager
In Mitaka, an online migration was added to migrate older SR-IOV parent device information from extra_info to a new column. Since two releases have gone out with that migration, it is removed in Ocata and operators are expected to have run it as part of either of the previous two releases, if applicable.
Since the Placement service is now mandatory in Ocata, you need to deploy it and amend your compute node configuration with correct placement instructions before restarting nova-compute or the compute node will refuse to start.
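A minimal sketch of the [placement] section to add to each compute node's nova.conf (the region, endpoint and credential values are placeholders for your deployment):
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://keystone.example.com:5000/v3
username = placement
password = <placement service user password>
project_name = service
user_domain_name = Default
project_domain_name = Default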
If by Newton (14.0.0) you don’t use any of the CoreFilter, RamFilter or DiskFilter, then please modify all of your compute nodes’ configuration by amending either cpu_allocation_ratio (if you don’t use CoreFilter), ram_allocation_ratio (if you don’t use RamFilter) or disk_allocation_ratio (if you don’t use DiskFilter), putting a 9999.0 value for the ratio before upgrading the nova-scheduler to Ocata.
The use_local option, which made it possible to perform nova-conductor operations locally, has been removed. This legacy mode was introduced to bridge a gap during the transition to the conductor service. It no longer represents a reasonable alternative for deployers.
The deprecated compute config option snapshot_name_template has been removed. It is not used anywhere and has no effect on any code, so there is no impact.
The deprecated config option compute_available_monitors has been removed from the DEFAULT config section. Use setuptools entry points to list available monitor plugins.
The following deprecated nova-manage commands have been removed:
nova-manage account scrub
nova-manage fixed *
nova-manage project scrub
nova-manage vpn *
The following deprecated nova-manage commands have been removed:
nova-manage vm list
As of Ocata, the minimum version of Virtuozzo that nova compute will interoperate with will be 7.0.0. Deployments using older versions of Virtuozzo should upgrade.
XenServer plugins have been renamed to include a ‘.py’ extension. Code has been included to handle plugins with and without the extension, but this will be removed in the next release. The plugins with the extension should be deployed on all compute nodes to mitigate any upgrade issues.
Deprecation Notes
Implemented microversion v2.39 which deprecates the image-metadata proxy API and removes image metadata quota checks for the ‘createImage’ and ‘createBackup’ actions. After this version, the Glance configuration option image_property_quota should be used to control the quota of image metadata. This also removes the maxImageMeta field from the os-limits API response.
The config options multi_instance_display_name_template and null_kernel in the DEFAULT group are now deprecated and may be removed as early as the 16.0.0 release. These options are deprecated to keep API behaviour consistent across deployments.
The console_driver config opt in the DEFAULT group has been deprecated and will be removed in a future release. This option no longer does anything. Previously this option had only two valid, in-tree values: nova.console.xvp.XVPConsoleProxy and nova.console.fake.FakeConsoleProxy. The latter of these was only used in tests and has since been replaced.
[libvirt]/live_migration_progress_timeout has been deprecated as this feature has been found not to work. See bug 1644248 for more details.
The following options, found in DEFAULT, were only used for configuring nova-network and are, like nova-network itself, now deprecated:
flat_network_bridge
flat_network_dns
flat_interface
vlan_interface
vlan_start
num_networks
vpn_ip
vpn_start
network_size
fixed_range_v6
gateway
gateway_v6
cnt_vpn_clients
fixed_ip_disassociate_timeout
create_unique_mac_address_attempts
teardown_unused_network_gateway
l3_lib
network_driver
multi_host
force_dhcp_release
update_dns_entries
dns_update_periodic_interval
dhcp_domain
use_neutron
auto_assign_floating_ip
floating_ip_dns_manager
instance_dns_manager
instance_dns_domain
The following options, found in quota, are also deprecated:
floating_ips
fixed_ips
security_groups
security_group_rules
The remap_vbd_dev option is deprecated and will be removed in a future release.
The topic config options are now deprecated and will be removed in the next release. The deprecated options are as below:
cells.topic
compute_topic
conductor.topic
console_topic
consoleauth_topic
network_topic
scheduler_topic
Deprecate the VMware driver’s wsdl_location config option. This option pointed to the location of the WSDL files required when using vCenter versions earlier than 5.1. Since the minimum supported version of vCenter is 5.1, there is no longer a need for this option and its value is ignored.
The [xenserver]/vif_driver configuration option is deprecated for removal. The XenAPIOpenVswitchDriver vif driver is used for Neutron and the XenAPIBridgeDriver vif driver is used for nova-network, which itself is deprecated. In the future, the use_neutron configuration option will be used to determine which vif driver to load.
The live_migration_uri option in the [libvirt] configuration section is deprecated, and will be removed in a future release. The live_migration_scheme option should be used to change the scheme used for live migration, and live_migration_inbound_addr should be used to change the target URI.
The XenServer driver provides support for downloading images via torrents. This feature has not been tested, and it is not clear whether there is a use case for such a feature. As a result, this feature is now deprecated, as are the following config options.
torrent_base_url
torrent_seed_chance
torrent_seed_duration
torrent_max_last_accessed
torrent_listen_port_start
torrent_listen_port_end
torrent_download_stall_cutoff
torrent_max_seeder_processes_per_host
The direct use of the encryption provider classes such as nova.volume.encryptors.luks.LuksEncryptor is now deprecated and will be blocked in the Pike release of Nova. The use of out of tree encryption provider classes is also deprecated and will be blocked in the Pike release of Nova.
Nova network was deprecated in Newton and is no longer supported for regular deployments in Ocata. The network service binary will now refuse to start, except in the special case of CellsV1 where it is still required to function.
Security Issues
OSProfiler support requires passing trace information between various OpenStack services. This information is securely signed by one of the HMAC keys defined in the nova.conf configuration file. To allow cross-project tracing, users should use a key that is common among all OpenStack services they want to trace.
Bug Fixes
Prior to Newton, volumes encrypted by the CryptsetupEncryptor and LuksEncryptor encryption providers used a mangled passphrase stripped of leading zeros per hexadecimal. When opening encrypted volumes, LuksEncryptor now attempts to replace these mangled passphrases if detected while CryptsetupEncryptor simply uses the mangled passphrase.
Fixes bug 1662699 which was a regression in the v2.1 API from the block_device_mapping_v2.boot_index validation that was performed in the legacy v2 API. With this fix, requests to create a server with boot_index=None will be treated as if boot_index was not specified, which defaults to meaning a non-bootable block device.
The Hyper-V driver no longer accepts cold migrating instances to the same host. Note that this does not apply to resizes, in which case this is still allowed.
The nova-manage cell_v2 simple_cell_setup command now creates the default cell0 database connection using the [database] connection configuration option rather than the [api_database] connection. The cell0 database schema is the main database schema, i.e. the instances table, rather than the api database schema. In other words, the cell0 database would be called something like nova_cell0 rather than nova_api_cell0.
In the context of virtual device role tagging at server create time, the 2.42 microversion restores the tag attribute to networks and block_device_mapping_v2. A bug has caused the tag attribute to no longer be accepted starting with version 2.33 for block_device_mapping_v2 and starting with version 2.37 for networks. In other words, block devices could only be tagged in version 2.32 and network interfaces between versions 2.32 and 2.36 inclusively. Starting with 2.42, both network interfaces and block devices can be tagged again.
To make live-migration consistent with resize, confirm-resize and revert-resize operations, the migration status is changed to ‘error’ instead of ‘failed’ in case of live-migration failure. With this change the periodic task ‘_cleanup_incomplete_migrations’ is now able to remove orphaned instance files from compute nodes in case of live-migration failures. There is no impact since migration status ‘error’ and ‘failed’ refer to the same failed state.
The nova metadata service will now pass a nova service token to the external vendordata server. These options can be configured using various Keystone-related options available in the vendordata_dynamic_auth group. A new service token has been created for this purpose. Previously, the requesting user’s keystone token was passed through to the external vendordata server if available, otherwise no token was passed. This resolves issues with scenarios such as cloud-init’s use of the metadata server on first boot to determine configuration information. Refer to the blueprint at http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/vendordata-reboot-ocata.html for more information.
Other Notes
The Placement API can be set to connect to a specific keystone endpoint interface using the os_interface option in the [placement] section inside nova.conf. This value is not required but can be used if a non-default endpoint interface is desired for connecting to the Placement service. By default, keystoneauth will connect to the “public” endpoint.
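For example, to prefer the internal endpoint:
[placement]
os_interface = internal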