2023.1 Series Release Notes

2023.1-eom

Bug Fixes

  • Fixed load balancers stuck in PENDING_DELETE when the TLS certificate storage is unavailable or returns an error.

12.0.1

Upgrade Notes

  • A patch that fixes an issue making the VIP port unreachable because of missing IP rules requires an update of the Amphora image.

Bug Fixes

  • Fixed an error when updating a UDP Health Monitor with an empty “delay” parameter.

  • Fixed an issue with the “limit” parameter in requests when its value is less than or equal to 0. The API now returns resources according to pagination_max_limit, as expected, instead of an error.

  • Fixed an issue where deleting the last listener from a load balancer could trigger a failover.

  • Fixed an issue with certificates that have a blank subject or a missing CN.

  • The validation of the allowed_cidr parameter only took into account the IP version of the primary VIP. CIDRs that only matched the IP version of an additional VIP were rejected. This is fixed: CIDRs are now matched against the IP versions of all VIPs.

  • Fixed a bug in the amphorav1 driver: the subnet of a member that was being deleted was not immediately unplugged from the amphora, only during the next update of the members.

  • Fixed an issue where adding or deleting a member could cause Octavia to reconfigure the management port of the amphora by adding or removing additional subnets. Octavia no longer updates the management port during those tasks.

  • Fixed a potential race condition in the member batch update API call: the load balancers might not have been locked properly.

  • Fixed a bug in the amphora-agent: an exception was triggered when a load balancer with both IPv4 and IPv6 VIPs and a UDP pool had only IPv4 members or only IPv6 members.

  • Fixed a potential issue when deleting a load balancer with an amphora that was not fully created: the deletion may have failed when deallocating the VIP port, leaving the load balancer in ERROR state.

  • Added a validation step in the batch member API request that checks whether a member is included multiple times in the list of updated members; this additional check prevents the load balancer from getting stuck in PENDING_UPDATE. Duplicate members in the batch member flow triggered an exception in Taskflow. The API now returns 400 (ValidationException) if a member appears more than once in the body of the request (see the example after this list).

  • Fixed a bug in the fully-populated load balancer API when creating a load balancer and a listener with allowed_cidrs: the call was rejected because Octavia could not validate that the IP addresses of the allowed_cidrs have the same family as the VIP address.

  • Fixed the global number of concurrent connections in haproxy when disabling listeners. The connection-limit of disabled listeners was used to compute this value; disabled listeners are now skipped.

  • The response body of the load balancer API, when creating a new load balancer, now correctly includes information about the health monitor. Previously, this information was always null, even when a health monitor was configured.

  • Fixed a bug that didn’t set all of the active load balancer Health Monitors to ONLINE in fully-populated load balancer single-create calls.

  • Fixed a bug with HTTP/HTTPS health-monitors on pools with ALPN protocols in the amphora-driver. The health checks sent by haproxy were flagged as bad requests by the backend servers. The haproxy configuration was updated to use ALPN for the health checks too.

  • Fixed a bug that could have made the VIP port unreachable because of the removal of some IP rules in the Amphora. It could be triggered only by sending a request from a subnet that is not the VIP subnet but is plugged as a member subnet.

  • Fixed a bug that prevented the operating_status of a health-monitor from being set to ONLINE when IPv6 addresses were enclosed within square brackets in controller_ip_port_list.

  • Fixed the issue with session persistence based on source IP not working for IPv6 load balancers. Session persistence now functions properly for IPv4, IPv6 and dual-stack load balancers.

  • Fixed an issue with load balancers stuck in a PENDING_* state during database outages. Now, when a task fails in Octavia, it retries updating the provisioning_status of the load balancer until the database is back (or gives up after a very long timeout of around 2h45).

  • Fixed an issue when using UDP listeners in dual-stack (IPv4 and IPv6) load balancers, some masquerade rules needed by UDP were not correctly set on the member interfaces.

  • Fixed a potential error when plugging a member from a new network after deleting another member and unplugging its network. Octavia may have tried to plug the new network into a new interface but with an already existing name. This fix requires an update of the Amphora image.

  • Fixed a bug in octavia-status which reported an incorrect status for the amphorav2 driver when using the default amphora alias.

  • Fixed a bug that didn’t set the correct provisioning_status for unattached pools when creating a fully-populated load balancer.

  • Fixed a race condition in the members batch update API call: the data passed to the Octavia worker service may have been incorrect when successive API calls were sent quickly, leaving the load balancer stuck in the PENDING_UPDATE provisioning_status.

  • Fixed an SELinux issue with TCP-based health-monitors on UDP pools: some specific monitoring ports were denied by SELinux. The Amphora image now enables the keepalived_connect_any SELinux boolean, which allows connections to any port.

  • Fixed an overly long timeout when attempting to start the VRRP service in an unreachable amphora during a failover. A shorter timeout is now used during failovers.

  • Fixed TLS-HELLO health-monitors in the amphora-driver.

  • Fixed a bug with the status of the members of UDP pools in load balancers with IPv4 and IPv6 VIPs. Some members may have been incorrectly reported as DOWN by the Amphora.

  • Fixed the format of log messages related to quota decrement errors. They displayed unhelpful information; they now report the correct resource type for which the error occurred.

  • Fixed an issue where nf_conntrack* option values were lost after rebooting the Amphora VM. For more details, see Story 2010795.

  • When plugging a new member subnet, the amphora sends an IP advertisement for the newly allocated IP. This allows servers on the same L2 network to flush the ARP entries of a previously allocated IP address.

  • Reduced the duration of failovers of ACTIVE_STANDBY load balancers. Many updates of an unreachable amphora may have been attempted during a failover; now, if an amphora is not reachable at the first update, the subsequent updates are skipped.

  • Reduced the duration of failovers of ACTIVE_STANDBY load balancers when both amphorae are unreachable.
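
As an illustration of the duplicate-member validation noted above, the following minimal sketch sends a batch member update containing the same member twice using python-requests. The endpoint URL, token and pool ID are hypothetical placeholders, not values from this release.

    import requests

    OCTAVIA_URL = "http://203.0.113.10:9876"  # hypothetical endpoint
    TOKEN = "gAAAAA-example-token"            # hypothetical Keystone token
    POOL_ID = "5f0d13b8-0000-0000-0000-000000000000"  # hypothetical pool ID

    # The same address/protocol_port pair appears twice, so the API is
    # expected to reject the request with 400 (ValidationException)
    # instead of leaving the load balancer stuck in PENDING_UPDATE.
    body = {
        "members": [
            {"address": "192.0.2.16", "protocol_port": 80},
            {"address": "192.0.2.16", "protocol_port": 80},
        ]
    }

    resp = requests.put(
        f"{OCTAVIA_URL}/v2/lbaas/pools/{POOL_ID}/members",
        json=body,
        headers={"X-Auth-Token": TOKEN},
    )
    print(resp.status_code)  # expected: 400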

Other Notes

  • A noop certificate manager was added. Any Octavia certificate operations using noop drivers are now faster, as certificates are not validated.

12.0.0

New Features

  • The new “cpu-pinning” element optimizes the amphora image for better vertical scaling (see the sketch after this list). When an amphora flavor with multiple vCPUs is configured, it configures the kernel to isolate (isolcpus) all vCPUs except the first one. Furthermore, it uninstalls irqbalance and sets the IRQ affinity to the first CPU. That way the other CPUs are free to be used exclusively by HAProxy. A new customized TuneD profile applies further tweaks to improve network latency. This new feature is disabled by default, but can be enabled by running diskimage-create.sh with the -m option or by setting the AMP_ENABLE_CPUPINNING environment variable to 1 before running the script.

  • Amphora agent has been adjusted to complement the vertical scaling optimizations implemented in the new cpu-pinning element. If the flavor uses multiple vCPUs it will configure HAProxy automatically to pin each of its worker threads to an individual CPU that was isolated by the element (all vCPUs starting from the second one).

  • The cpu-pinning element for the amphora image sets the kernel bootarg nohz_full=1-N to enable full dynticks on all CPUs except the first one (on single CPU images this will have no effect). This should reduce kernel noise on those CPUs to a minimum and reduce latency.
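
The following minimal sketch shows the kernel boot arguments described above for a given vCPU count; build_bootargs is a hypothetical helper written for illustration, not part of the element itself.

    def build_bootargs(num_vcpus: int) -> str:
        """Return isolcpus/nohz_full boot arguments for an amphora flavor."""
        if num_vcpus < 2:
            # Single-vCPU images are left untouched.
            return ""
        # Isolate every vCPU except the first one and enable full dynticks
        # on them, leaving them free for HAProxy worker threads.
        cpu_range = f"1-{num_vcpus - 1}"
        return f"isolcpus={cpu_range} nohz_full={cpu_range}"

    print(build_bootargs(4))  # isolcpus=1-3 nohz_full=1-3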

Upgrade Notes

  • The Octavia API now checks that the HTTP Accept header, if present, is compatible with the application/json content type. If not, the user gets a 406 (Not Acceptable) status code response (see the example after this list).

  • Amphora vertical scaling optimizations require a new amphora image build with the optional CPU pinning feature enabled in order to become effective.

  • diskimage-create.sh has been updated to build Ubuntu Jammy (22.04) amphora images by default.

  • In order for the full dynticks optimization to become effective a new amphora image needs to be built with the new optional CPU pinning feature enabled.
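
To illustrate the Accept-header validation above, a minimal sketch using python-requests; the endpoint URL and token are hypothetical placeholders.

    import requests

    OCTAVIA_URL = "http://203.0.113.10:9876"  # hypothetical endpoint
    TOKEN = "gAAAAA-example-token"            # hypothetical Keystone token

    # A compatible Accept header: the request succeeds and the response
    # content type is application/json.
    ok = requests.get(
        f"{OCTAVIA_URL}/v2/lbaas/loadbalancers",
        headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
    )
    print(ok.status_code, ok.headers.get("Content-Type"))

    # An incompatible Accept header: the API now answers 406 Not Acceptable.
    bad = requests.get(
        f"{OCTAVIA_URL}/v2/lbaas/loadbalancers",
        headers={"X-Auth-Token": TOKEN, "Accept": "text/html"},
    )
    print(bad.status_code)  # expected: 406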

Deprecation Notes

  • The configuration option user_data_config_drive is deprecated. The nova user_data option is too small to replace the normal file-based config_drive provisioning for cloud-init. This option has never been functional in Octavia and will be removed to reduce confusion.

  • Amphora load balancers now support only single-process mode. Split listener configuration, which was used up to API version 0.5, has been removed from the codebase.

Security Issues

  • Filtered out private information from the taskflow logs when INFO level messages are enabled and jobboard is enabled. Logs might have included TLS certificates and the private_key. By default, Octavia enables only WARNING and above messages in taskflow, and jobboard is disabled.

Bug Fixes

  • The Octavia API now checks that the HTTP Accept header, if present, is compatible with the application/json content type. If not, the user gets a 406 (Not Acceptable) status code response. This change also ensures that API responses have a content type of application/json.

  • Fixed the ability to use the ‘text/plain’ mime type with the healthcheck endpoint (see the example after this list).

  • Added a filter to hide a bogus ComputeWaitTimeoutException exception when creating an Amphora while jobboard is disabled. This exception is part of the flow when creating a load balancer or an amphora and should not be shown to the user.

  • The parameters of a taskflow Flow were logged in INFO level messages by taskflow; they included parameters of TLS-enabled listeners and pools, such as certificates and the private_key.

  • Fixed amphora haproxy_count to return the number of haproxy processes that are running.

  • Fixed an authentication error with Barbican when creating a TERMINATED_HTTPS listener with application credential tokens or trust IDs.

  • Fixed a “corrupted global server state file” error on CentOS 9 Stream when reloading the state of the servers after restarting haproxy. This also fixes the recovery of the operational state of the servers in haproxy after a restart.

  • Fixed a bug where a fully-populated load balancer was created without its listeners when jobboard_enabled=False.

  • Fixed a bug that prevented Octavia from creating listeners with the fully-populated load balancer API in SINGLE topology mode.

  • Fixed a backwards compatibility issue with the feature that preserves HAProxy server states between reloads. HAProxy versions 1.5 and below do not support this feature, so Octavia does not activate it on amphorae with those versions.

  • Fixed the policy of the legacy admin role: it is still an admin with sRBAC.

  • Removed system scope policies, all the policies are now project scoped.

  • Modified the default Keepalived LVS persistence granularity configuration value so that it is IPv6 compatible.

  • Fixed an issue with PING health-monitors on CentOS 8 Stream. Changes in CentOS and systemd prevent an unprivileged user from sending ping requests from a network namespace.

  • Usage of castellan_cert_manager as a cert_manager has been significantly improved. Configuration options for Castellan can now be defined in octavia.conf and are passed properly to the Castellan backend. This allows the supported Castellan backends to be used for certificate storage.

  • Fixed SQLAlchemy warnings about the relationship between the Tags object and the other Octavia resources.
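
A minimal sketch of the restored ‘text/plain’ healthcheck behaviour, assuming the healthcheck endpoint is enabled on the API host (the URL is a hypothetical placeholder).

    import requests

    resp = requests.get(
        "http://203.0.113.10:9876/healthcheck",  # hypothetical endpoint
        headers={"Accept": "text/plain"},
    )
    # With the fix, the endpoint honours the requested mime type.
    print(resp.status_code, resp.headers.get("Content-Type"), resp.text)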

Other Notes

  • When an HTTPS termination listener is configured, Octavia tweaks the HAProxy tune.ssl.cachesize setting to use about half of the available memory (free + buffers + cached) on the amphora, minus the memory needed for network sockets based on the global max connections setting (see the sketch below). This allows better reuse of existing SSL sessions and helps lower the number of computationally expensive SSL handshakes.
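
A minimal sketch of the sizing logic described above; the per-entry and per-socket byte counts are illustrative assumptions, not the exact values used by Octavia.

    SSL_CACHE_ENTRY_BYTES = 200   # assumed size of one SSL session cache entry
    SOCKET_MEMORY_BYTES = 16384   # assumed per-connection socket memory

    def ssl_cachesize(free_kb: int, buffers_kb: int, cached_kb: int,
                      global_maxconn: int) -> int:
        """Return a tune.ssl.cachesize value (number of cache entries)."""
        available = (free_kb + buffers_kb + cached_kb) * 1024
        # Reserve memory for network sockets based on global max connections.
        usable = max(available - global_maxconn * SOCKET_MEMORY_BYTES, 0)
        # Use about half of what remains for the SSL session cache.
        return usable // 2 // SSL_CACHE_ENTRY_BYTES

    print(ssl_cachesize(free_kb=512000, buffers_kb=64000,
                        cached_kb=256000, global_maxconn=50000))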

11.0.0

Bug Fixes

  • Fixed the rescheduling of taskflow tasks that have been resumed after being interrupted.