Current Series Release Notes
30.0.0-132
New Features
The libvirt driver now supports the ``hw_vif_model=igb`` image property if the hypervisor has libvirt version 9.3.0 and QEMU version 8.0.0 or higher.
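For example, a minimal sketch of tagging an image so that instances booted from it get an igb virtual NIC (the image name is illustrative)::

    $ openstack image set --property hw_vif_model=igb my-image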
The nova scheduler now supports enabling the nova cell discover hosts periodic task on multiple schedulers. In prior releases, enabling this feature required setting the ``discover_hosts_in_cells_interval`` option to a value greater than 0 on at most one scheduler. With the 2025.1 release it is possible to enable the feature on multiple schedulers via the introduction of leader election. This simplifies deployment of nova on Kubernetes by allowing the operator to deploy multiple schedulers and have them elect a single leader that runs the discover hosts periodic task.
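For example, the periodic task can now be enabled on every scheduler with a snippet like the following (the interval value is illustrative)::

    [scheduler]
    # With leader election, only the elected leader actually runs the
    # cell discover hosts periodic task.
    discover_hosts_in_cells_interval = 300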
The 2.98 microversion has been added. This microversion adds support for including image properties as a new ``properties`` subkey under the struct at the existing ``image`` key in the response for the ``GET /servers/{server_id}`` (server show) and ``GET /servers/detail`` (server list --long) APIs. The same is also included in the response for the rebuild case of the ``POST /servers/{server_id}/action`` (server rebuild) API.
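For example, a hedged sketch of requesting the new subkey with curl (the endpoint, token and server ID are placeholders)::

    $ curl -s -H "X-Auth-Token: $TOKEN" \
        -H "OpenStack-API-Version: compute 2.98" \
        http://controller:8774/v2.1/servers/$SERVER_ID

With microversion 2.98 or later, the ``image`` struct in the response additionally carries the ``properties`` subkey.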
On HCI deployments where Nova is collocated with the Cinder service, or with a Glance service using the Cinder backend, a shared os-brick lock location can be configured using the ``lock_path`` option in the ``[os_brick]`` configuration section.
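For example, a minimal sketch, assuming all collocated services can read and write the same directory (the path is illustrative)::

    [os_brick]
    lock_path = /var/lib/openstack/os-brick-locks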
The ``novncproxy_base_url`` option now respects a supplied custom query, which can be used to move NoVNC to a subdirectory or to pass an extra argument to NoVNC.
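For example, with an illustrative host, subdirectory and query string::

    [vnc]
    novncproxy_base_url = https://vnc.example.com/novnc/vnc.html?path=websockify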
The following share attach and share detach versioned notifications have been added to the nova-compute service:

instance.share_attach.start
instance.share_attach.end
instance.share_detach.start
instance.share_detach.end
Support creating servers with RBAC shared security groups by using the new ``shared`` filter for security groups. See blueprint shared-security-groups for more details.
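For example, a hedged sketch, assuming a security group named ``shared-sg`` has been shared with the current project via neutron RBAC (all names are illustrative)::

    $ openstack server create --image my-image --flavor m1.small \
        --network my-net --security-group shared-sg my-server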
The ``nova-manage limits migrate_to_unified_limits`` command will now scan the API and cell databases to detect resource classes that do not have registered limits set in Keystone and report them to the console.

The purpose of the flavor scan is to assist operators who are migrating from legacy quotas to unified limits quotas. The current behavior with unified limits is to fail quota checks if requested resources are missing registered limits in Keystone. With flavor scanning in ``migrate_to_unified_limits``, operators can easily determine which resource classes need registered limits created.
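For example, the scan runs as part of the existing command, with any resource classes missing registered limits reported to the console::

    $ nova-manage limits migrate_to_unified_limits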
New configuration options ``[quota]unified_limits_resource_strategy`` and ``[quota]unified_limits_resource_list`` have been added to enable operators to specify a list of resources for which registered limits are either required or ignored. The default strategy is ``require`` and the default resource list contains ``servers``. The configured list is only used when ``[quota]driver`` is set to the ``UnifiedLimitsDriver``.

When ``unified_limits_resource_strategy = require``, if a resource in ``unified_limits_resource_list`` is requested and has no registered limit set, the quota limit for that resource will be considered to be 0 and all requests to allocate that resource will be rejected for being over quota. Any resource not in the list will be considered to have unlimited quota.

When ``unified_limits_resource_strategy = ignore``, if a resource in ``unified_limits_resource_list`` is requested and has no registered limit set, the quota limit for that resource will be considered to be unlimited and all requests to allocate that resource will be accepted. Any resource not in the list will be considered to have 0 quota.

The options should be configured for the nova-api and nova-conductor services. The nova-conductor service performs quota enforcement when ``[quota]recheck_quota`` is ``True`` (the default).

The ``unified_limits_resource_list`` list can also be set to an empty list.
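For example, a hedged sketch requiring registered limits for a few resources (the entries beyond the default ``servers`` are illustrative of the ``class:``-prefixed resource class form)::

    [quota]
    driver = nova.quota.UnifiedLimitsDriver
    unified_limits_resource_strategy = require
    unified_limits_resource_list = servers,class:VCPU,class:MEMORY_MB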
Upgrade Notes
``[compute]heal_instance_info_cache_interval`` now defaults to -1.

In the early days of Nova, all networking was internal; then ``quantum``, now known as ``neutron``, was introduced. When the networking subsystem was being externalized and neutron was optional, Nova still needed to keep track of the ports associated with an instance. To avoid expensive calls to an optional service, the instance info cache was extended to include network information and a periodic task was introduced to update it in ``08fa534a0d28fa1be48aef927584161becb936c7`` as part of the ``Essex`` release.

As we have learned over the years, per-compute periodic tasks that call other services do not scale well as the number of compute nodes increases. In ``ce936ea5f3ae0b4d3b816a7fe42d5f0100b20fca`` the os-server-external-events API was introduced as part of the Icehouse release. The server external events API allows external systems such as Neutron to trigger cache refreshes on demand. With the introduction of this API, neutron was modified to send network-changed events on a per-port basis as API actions are performed on neutron ports. When that was introduced, the default value of ``[compute]heal_instance_info_cache_interval`` was not changed, to ensure there was no upgrade impact.

In ``ba44c155ce1dcefede9741722a0525820d6da2b8``, as part of bug #1751923, the _heal_instance_info_cache periodic task was modified to pass ``force_refresh``, forcing Nova to look up the current state of all ports for the instance from neutron and fully rebuild the info_cache. This has the side effect of making the already poor scaling of this optional periodic task even worse.

In this release, the default behaviour of Nova has been changed to disable the periodic task, optimizing for performance, scale, power consumption and typical deployment topologies, where instance network information is updated by neutron via the external event API as ports are modified. This should significantly reduce the background neutron API load in medium to large clouds. If you have a neutron backend that does not reliably send network-changed event notifications to Nova, you can re-enable this periodic task by setting ``[compute]heal_instance_info_cache_interval`` to a value greater than 0.
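For example (the interval is illustrative)::

    [compute]
    heal_instance_info_cache_interval = 60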
Support for Python 3.8 has been removed. The minimum supported Python version is now 3.9.
When the ``[quota]driver`` configuration option is set to the ``UnifiedLimitsDriver``, a limit of ``-1`` in Keystone will now be considered as unlimited, and the ``servers`` resource will be considered to be required to have a registered limit set in Keystone because of the default values of ``[quota]unified_limits_resource_strategy`` and ``[quota]unified_limits_resource_list``.
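For example, a hedged sketch of creating an explicit registered limit for ``servers`` in Keystone (the limit value is illustrative)::

    $ openstack registered limit create --service nova \
        --default-limit 10 servers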
Deprecation Notes
The ``[wsgi] secure_proxy_ssl_header`` parameter has been deprecated. Use the ``http_proxy_to_wsgi`` middleware from ``oslo.middleware`` instead.
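For example, a minimal sketch, assuming ``http_proxy_to_wsgi`` is already present in the API paste pipeline; the middleware is then controlled by its ``oslo.middleware`` option::

    [oslo_middleware]
    enable_proxy_headers_parsing = true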
The following volume drivers of the libvirt virt driver have been deprecated and will be removed in a future release. The corresponding volume drivers in Cinder have all been marked unsupported and will also be removed.
Quobyte
SMBFS
Virtuozzo Storage
Bug Fixes
Fixes an issue seen when using bare metal (Ironic) instances where an instance could fail to delete. See Bug 2019977 for more details.
Nova now allows using a hyphen in the ``[cinder]catalog_info`` service-type field, so in particular the official ``block-storage`` type is now valid. See bug 2092194.
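For example (the service name and endpoint interface are illustrative)::

    [cinder]
    catalog_info = block-storage:cinderv3:publicURL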
With this change, operators can now resize an instance's flavor swap to a smaller swap size; swap can be expanded or shrunk down to 0 using the same resize API. For more details see bug 1552777.
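For example, a hedged sketch, assuming a flavor with a smaller (or zero) swap size exists (names are illustrative)::

    $ openstack server resize --flavor m1.small-noswap my-server
    $ openstack server resize confirm my-server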
Bug #2091033: Fixed calls to libvirt ``listDevices()`` and ``listAllDevices()`` from potentially blocking all other greenthreads in ``nova-compute``. Under certain circumstances, it was possible for the ``nova-compute`` service to freeze with all other greenthreads blocked and unable to perform any other activities, including logging. This issue has been fixed by wrapping the libvirt ``listDevices()`` and ``listAllDevices()`` calls with ``eventlet.tpool.Proxy``.
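The general technique, as a minimal Python sketch (the connection URI is illustrative; nova applies the proxy to its own libvirt connection internally)::

    import libvirt
    from eventlet import tpool

    # Proxy the libvirt connection so that blocking calls into the C
    # library run in a native thread pool instead of blocking the
    # eventlet hub and, with it, every other greenthread.
    conn = tpool.Proxy(libvirt.open('qemu:///system'))

    # These calls may now block inside libvirt without freezing the
    # process's other greenthreads.
    names = conn.listDevices('pci', 0)
    devices = conn.listAllDevices(0)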