2024.1 Series Release Notes¶
14.0.0-17¶
Bug Fixes¶
Fixed an issue updating listeners when using SR-IOV VIP ports.
Fixed a bug in the VIP SR-IOV implementation that caused load balancer members using the SR-IOV VIP interface to not receive traffic.
Fixed an error when updating a UDP health monitor with an empty “delay” parameter.
Fixed an issue when failing over load balancers using SR-IOV VIP ports.
Fixed an issue with the “limit” parameter when its value in a request is less than or equal to 0. The API now returns resources according to pagination_max_limit, as expected, instead of an error.
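A minimal sketch (Python, using requests; the endpoint and token below are placeholders) of how the fixed pagination behaves; pagination_max_limit is configured in the [api_settings] section of the Octavia configuration:

    import requests

    OCTAVIA_ENDPOINT = "https://lb.example.com/load-balancer"  # placeholder
    TOKEN = "<keystone-token>"  # placeholder

    # With this fix, limit=0 (or any value <= 0) no longer returns an error;
    # the result set is capped at pagination_max_limit instead.
    resp = requests.get(
        f"{OCTAVIA_ENDPOINT}/v2/lbaas/loadbalancers",
        params={"limit": 0},
        headers={"X-Auth-Token": TOKEN},
    )
    print(resp.status_code)  # expected: 200
    print(len(resp.json()["loadbalancers"]))  # at most pagination_max_limit items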
The record in the amphora_health table is now removed on revert, because the record in the amphora table for the corresponding amphora is also deleted. This avoids false positive reactions of the failover threshold due to orphaned records in the amphora_health table.
Fixed a potential AttributeError during listener update when a security group rule had no protocol defined (i.e. it was null).
Added a validation step to the batch member API request that checks whether a member is included multiple times in the list of updated members; this additional check prevents the load balancer from getting stuck in PENDING_UPDATE. Duplicate members in the batch member flow triggered an exception in Taskflow. The API now returns 400 (ValidationException) if a member appears more than once in the body of the request.
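A minimal sketch (Python, using requests against the batch member endpoint; the endpoint, token, and pool ID are placeholders) of a request that would now be rejected with 400:

    import requests

    OCTAVIA_ENDPOINT = "https://lb.example.com/load-balancer"  # placeholder
    TOKEN = "<keystone-token>"  # placeholder
    POOL_ID = "<pool-id>"  # placeholder

    # The same member (address + protocol_port) appears twice in the batch,
    # which the API now rejects instead of leaving the LB in PENDING_UPDATE.
    body = {
        "members": [
            {"address": "192.0.2.10", "protocol_port": 80},
            {"address": "192.0.2.10", "protocol_port": 80},  # duplicate entry
        ]
    }
    resp = requests.put(
        f"{OCTAVIA_ENDPOINT}/v2/lbaas/pools/{POOL_ID}/members",
        json=body,
        headers={"X-Auth-Token": TOKEN},
    )
    print(resp.status_code)  # expected: 400 (ValidationException)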
Fixed a bug when creating a load balancer and a listener with allowed_cidrs using the fully-populated load balancer API: the call was rejected because Octavia could not validate that the IP addresses of the allowed_cidrs have the same address family as the VIP address.
Fixed load balancers being stuck in PENDING_DELETE if the TLS storage is unavailable or returns an error.
Fixed an error on revert of the PlugVIPAmphora task when db_lb is not defined and get_subnet raises a NotFound error. This could happen when amphora creation failed due to a timeout and the VIP network was removed before it; as a result, the revert failed with an exception.
14.0.0¶
New Features¶
Octavia Amphora based load balancers now support using SR-IOV virtual functions (VF) on the VIP port(s) of the load balancer. This is enabled by using an Octavia Flavor that includes the ‘sriov_vip’: True setting.
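A minimal sketch (Python, openstacksdk) of enabling this through a flavor; the cloud name, flavor names, and subnet ID are placeholders, and the exact SDK field names may vary slightly between releases:

    import json

    import openstack

    conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

    # Flavor profile carrying the provider-specific flavor data.
    profile = conn.load_balancer.create_flavor_profile(
        name="sriov-profile",
        provider_name="amphora",
        flavor_data=json.dumps({"sriov_vip": True}),
    )

    # User-visible flavor referencing the profile.
    flavor = conn.load_balancer.create_flavor(
        name="sriov-lb",
        flavor_profile_id=profile.id,
    )

    # Load balancers created with this flavor request an SR-IOV VF for the VIP port.
    lb = conn.load_balancer.create_load_balancer(
        name="lb-with-sriov-vip",
        vip_subnet_id="<subnet-id>",  # placeholder
        flavor_id=flavor.id,
    )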
Added support for Rocky Linux controllers in devstack.
Added support for Rocky Linux amphora images. To enable it, users have to build their amphora images with the OCTAVIA_AMP_BASE_OS=rocky and OCTAVIA_AMP_DISTRIBUTION_RELEASE_ID=9 parameters.
The new [task_flow] jobboard_backend_username option has been added to support the Redis ACL feature.
Previously, the redis jobboard driver used only the first host in [task_flow] jobboard_backend_hosts when connecting to Redis Sentinel. Now the driver attempts the other hosts as fallbacks.
Now the [database] connection_recycle_time option is also used by connections in the MySQL persistence driver.
Upgrade Notes¶
You must update the amphora image to support the SR-IOV VIP feature.
Octavia now uses the oslo.middleware sizelimit module, which allows limiting the size of incoming requests to the API. Admins may need to adjust the [oslo_middleware] max_request_body_size setting to their needs. The default value for max_request_body_size is 114688 bytes.
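A minimal sketch (Python, requests; the endpoint and token are placeholders, and the 413 status assumes the default oslo.middleware sizelimit behaviour) of what happens when a request exceeds the limit:

    import requests

    OCTAVIA_ENDPOINT = "https://lb.example.com/load-balancer"  # placeholder
    TOKEN = "<keystone-token>"  # placeholder

    # A body larger than max_request_body_size (114688 bytes by default)
    # is rejected by the sizelimit middleware before reaching the API code.
    oversized = {"loadbalancer": {"name": "x" * 200000,
                                  "vip_subnet_id": "<subnet-id>"}}
    resp = requests.post(
        f"{OCTAVIA_ENDPOINT}/v2/lbaas/loadbalancers",
        json=oversized,
        headers={"X-Auth-Token": TOKEN},
    )
    print(resp.status_code)  # expected: 413 Request Entity Too Large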
The diskimage-builder elements for the amphora image no longer support Ubuntu Focal.
Bug Fixes¶
Fixed an issue when using certificates with a blank subject or missing CN.
Fixed wrong endpoint information in neutron client configuration.
Fixed a bug that prevented the amphora from being updated by the Amphora Configure API call: the API call was successful but the internal flow for updating it failed.
Fixed a potential issue when deleting a load balancer with an amphora that was not fully created: the deletion may have failed when deallocating the VIP port, leaving the load balancer in ERROR state.
The response body of the LB API when creating a new load balancer now correctly includes information about the health monitor. Previously, this information was always null, even when a health monitor was configured.
Fixed a bug with HTTP/HTTPS health monitors on pools with ALPN protocols in the amphora-driver. The health checks sent by haproxy were flagged as bad requests by the backend servers. The haproxy configuration was updated to use ALPN for the health checks too.
Fixed an issue with load balancers stuck in a PENDING_* state during database outages. Now when a task fails in Octavia, it retries updating the provisioning_status of the load balancer until the database is back (or it gives up after a very long timeout, around 2h45).
Fixed an issue when using UDP listeners in dual-stack (IPv4 and IPv6) load balancers: some masquerade rules needed by UDP were not correctly set on the member interfaces.
Fixed a bug when the deprecated settings (endpoint, endpoint_type, ca_certificates_file) are used in the [neutron] section of the configuration file. The connection to the neutron service may have used some settings from the [service_auth] section or used undefined settings.
Fixed a race condition in the members batch update API call: the data passed to the Octavia worker service may have been incorrect when successive API calls were sent quickly, leaving the load balancer stuck in the PENDING_UPDATE provisioning_status.
Fixed an overly long timeout when attempting to start the VRRP service in an unreachable amphora during a failover. A shorter, failover-specific timeout is now used.
Fixed TLS-HELLO health-monitors in the amphora-driver.
Reduced the duration of failovers of ACTIVE_STANDBY load balancers. Many updates of an unreachable amphora may have been attempted during a failover; now, if an amphora is not reachable at the first update, the remaining updates are skipped.
Reduced the duration of failovers of ACTIVE_STANDBY load balancers when both amphorae are unreachable.
Other Notes¶
Amphora images will now be built with nftables by default.
Added fake amphora stats for when Octavia runs in noop mode (using noop drivers).
A noop certificate manager was added. Octavia certificate operations using noop drivers are now faster, as certificates won't be validated.