Upgrade issues¶
This page documents upgrade issues and notes. These may apply to any of the three upgrade types (charms, OpenStack, series).
The items on this page are distinct from those found on the following pages:
the Various issues page
the Special charm procedures page
The issues are organised by upgrade type:
Charm upgrades¶
rabbitmq-server charm¶
A timing issue has been observed during the upgrade of the rabbitmq-server charm (see bug LP #1912638 for tracking). If it occurs, the resulting hook error can be resolved with:
juju resolved rabbitmq-server/N
openstack-dashboard charm: upgrading to revision 294¶
When Horizon is configured with TLS (openstack-dashboard charm option ssl-cert) revisions 294 and 295 of the charm have been reported to break the dashboard (see bug LP #1853173). The solution is to upgrade to a working revision. A temporary workaround is to disable TLS without upgrading.
Note
Most users will not be impacted by this issue as the recommended approach is to always upgrade to the latest revision.
To upgrade to revision 293:
juju upgrade-charm openstack-dashboard --revision 293
To upgrade to revision 296:
juju upgrade-charm openstack-dashboard --revision 296
To disable TLS:
juju config openstack-dashboard enforce-ssl=false
Multiple charms: option worker-multiplier¶
Starting with OpenStack Charms 21.04, any charm that supports the worker-multiplier configuration option will, upon upgrade, modify the active number of service workers as follows: if the option is not set explicitly, the number of workers will be capped at four regardless of whether the unit is containerised. Previously, the cap applied only to containerised units.
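To retain a particular worker count across the upgrade, set the option explicitly beforehand. A minimal sketch, assuming a nova-cloud-controller application and a multiplier of 0.25 (adjust both to your deployment):
juju config nova-cloud-controller worker-multiplier=0.25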
manila-ganesha charm: package updates¶
To fix long-standing issues in the manila-ganesha charm related to Manila exporting shares after restart, the nfs-ganesha Ubuntu package must be updated on all affected units prior to upgrading the manila-ganesha charm in OpenStack Charms 21.10.
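One way to update the package on every unit beforehand (a sketch, assuming apt-based updates; confirm against your own maintenance procedures):
juju run --application manila-ganesha 'sudo apt-get update && sudo apt-get install --only-upgrade -y nfs-ganesha'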
ceph-radosgw charm: upgrading to channel quincy/stable¶
Due to a ceph-radosgw charm change in the quincy/stable channel, URLs are processed differently by the RADOS Gateway. This will lead to breakage for an existing product-streams endpoint, set up by the glance-simplestreams-sync application, that includes a trailing slash in its URL.
The glance-simplestreams-sync charm has been fixed in the yoga/stable
channel, but it will not update a pre-existing endpoint. The URL must be
modified (remove the trailing slash) with native OpenStack tooling:
openstack endpoint list --service product-streams
openstack endpoint set --url <new-url> <endpoint-id>
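As a hedged worked example, if the listed endpoint URL were http://10.5.0.40:80/swift/v1/simplestreams/data/ (a hypothetical value), the new URL is the same string without the trailing slash:
openstack endpoint set --url http://10.5.0.40:80/swift/v1/simplestreams/data <endpoint-id>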
OpenStack upgrades¶
Nova RPC version mismatches: upgrading Neutron and Nova¶
If it is not possible to upgrade Neutron and Nova within the same maintenance window, be mindful that RPC communication between nova-cloud-controller, nova-compute, and nova-api-metadata is very likely to produce errors while those services are running different versions. This is because those charms do not currently support RPC version pinning or auto-negotiation.
See bug LP #1825999.
neutron-gateway charm: upgrading from Mitaka to Newton¶
Between the Mitaka and Newton OpenStack releases, the neutron-gateway charm added two options, bridge-mappings and data-port, which replaced the (now) deprecated ext-port option. This was to provide more control over how neutron-gateway can configure external networking. Unfortunately, the charm was only designed to work with either ext-port (no longer recommended) or bridge-mappings and data-port.
See bug LP #1809190.
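For reference, a hedged sketch of migrating a deployment off the deprecated option (the bridge and interface names are placeholders; choose values that match your network layout):
juju config neutron-gateway --reset ext-port
juju config neutron-gateway bridge-mappings='physnet1:br-ex' data-port='br-ex:eth1'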
cinder/ceph topology change: upgrading from Newton to Ocata¶
If cinder is directly related to ceph-mon rather than via cinder-ceph then upgrading from Newton to Ocata will result in the loss of some block storage functionality, specifically live migration and snapshotting. To remedy this situation the deployment should migrate to using the cinder-ceph charm. This can be done after the upgrade to Ocata.
Warning
Do not attempt to migrate a deployment with existing volumes to use the cinder-ceph charm prior to Ocata.
The intervention is detailed in the three steps below.
Step 0: Check existing configuration¶
Confirm existing volumes are in an RBD pool called ‘cinder’:
juju run --unit cinder/0 "rbd --name client.cinder -p cinder ls"
Sample output:
volume-b45066d3-931d-406e-a43e-ad4eca12cf34
volume-dd733b26-2c56-4355-a8fc-347a964d5d55
Step 1: Deploy new topology¶
Deploy the cinder-ceph
charm and set the ‘rbd-pool-name’ to match the pool
that any existing volumes are in (see above):
juju deploy --config rbd-pool-name=cinder cinder-ceph
juju add-relation cinder cinder-ceph
juju add-relation cinder-ceph ceph-mon
juju remove-relation cinder ceph-mon
juju add-relation cinder-ceph nova-compute
Step 2: Update volume configuration¶
The existing volumes now need to be updated to associate them with the newly defined cinder-ceph backend:
juju run-action cinder/0 rename-volume-host currenthost='cinder' \
newhost='cinder@cinder-ceph#cinder.volume.drivers.rbd.RBDDriver'
Keystone and Fernet tokens: upgrading from Queens to Rocky¶
Starting with OpenStack Rocky only the Fernet format for authentication tokens is supported. Therefore, prior to upgrading Keystone to Rocky a transition must be made from the legacy UUID format to Fernet.
Fernet support is available upstream (and in the keystone charm) starting with Ocata, so the transition can be made on Ocata, Pike, or Queens.
A keystone charm upgrade will not alter the token format. The charm’s token-provider option must be used to make the transition:
juju config keystone token-provider=fernet
This change may result in a minor control plane outage but any running instances will remain unaffected.
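To confirm that the new format is in effect, a freshly issued token can be inspected; Fernet tokens are considerably longer than the 32-character UUID format (a hedged check, assuming admin credentials are loaded):
openstack token issue -f value -c id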
The token-provider option has no effect starting with Rocky, where the charm defaults to Fernet and where upstream removes support for UUID. See Keystone Fernet Token Implementation for more information.
Neutron LBaaS: upgrading from Stein to Train¶
As of Train, support for Neutron LBaaS has been retired. The load-balancing services are now provided by Octavia LBaaS. There is no automatic migration path; please review the Octavia LBaaS page for more information.
Designate: upgrading from Stein to Train¶
When upgrading Designate to Train, there is an encoding issue between the designate-producer and memcached that causes the designate-producer to crash. See bug LP #1828534. This can be resolved by restarting the memcached service:
juju run --application=memcached 'sudo systemctl restart memcached'
Ceph BlueStore mistakenly enabled during OpenStack upgrade¶
The Ceph BlueStore storage backend is enabled by default when Ceph Luminous is detected. Therefore it is possible for a non-BlueStore cloud to acquire BlueStore by default after an OpenStack upgrade (Luminous first appeared in Queens). Problems will occur if storage is scaled out without first disabling BlueStore (set ceph-osd charm option bluestore to ‘False’). See bug LP #1885516 for details.
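To disable BlueStore ahead of a scale-out:
juju config ceph-osd bluestore=False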
Placement: endpoints not updated in Keystone service catalog¶
When the placement charm is deployed during the upgrade to OpenStack Train (as described in placement charm: OpenStack upgrade to Train) the Keystone service catalog is not updated accordingly. This issue is tracked in bug LP #1928992, which also includes an explicit workaround (comment #4).
Ceph: option require-osd-release¶
Before upgrading Ceph its require-osd-release option should be set to the current Ceph release (e.g. ‘nautilus’ if upgrading to Octopus). Failing to do so may cause the upgrade to fail, rendering the cluster inoperable.
On any ceph-mon unit, the current value of the option can be queried with:
sudo ceph osd dump | grep require_osd_release
If it needs changing, it can be done manually on any ceph-mon unit. Here the current release is Nautilus:
sudo ceph osd require-osd-release nautilus
In addition, upon completion of the upgrade, the option should be set to the new release. Here the new release is Octopus:
sudo ceph osd require-osd-release octopus
The charms should be able to respond intelligently to these two situations. Bug LP #1929254 is for tracking this effort.
FWaaS: upgrading from Ussuri to Victoria¶
The Firewall-as-a-Service (FWaaS v2) OpenStack project is retired starting with OpenStack Victoria. Consequently, the neutron-api charm will no longer make this service available starting with that OpenStack release. See the 21.10 Release Notes on this topic.
Prior to upgrading to Victoria users of FWaaS should remove any existing firewall groups to avoid the possibility of orphaning active firewalls (see the FWaaS v2 CLI documentation).
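For reference, existing firewall groups can be listed and deleted with the FWaaS v2 CLI (the group ID is a placeholder):
openstack firewall group list
openstack firewall group delete <group-id>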
Octavia¶
An Octavia upgrade may entail an update of its load balancers (amphorae) as a post-upgrade task. Reasons for doing this include:
API incompatibility between the amphora agent and the new Octavia service
the desire to use features available in the new amphora agent or haproxy
See the upstream documentation on Rotating amphora images.
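As a hedged sketch of that procedure, once a new amphora image is in place each load balancer can be failed over so that its amphorae are rebuilt from the new image (the ID is a placeholder; the upstream documentation above is authoritative):
openstack loadbalancer amphora list
openstack loadbalancer failover <load-balancer-id>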
Series upgrades¶
DNS HA: upgrade to focal¶
DNS HA has been reported to not work on the focal series. See LP #1882508 for more information.
Upgrading while Vault is sealed¶
If a series upgrade is attempted while Vault is sealed then manual intervention will be required (see bugs LP #1886083 and LP #1890106). The vault leader unit (which will be in error) will need to be unsealed and the hook error resolved. The vault charm README has unsealing instructions, and the hook error can be resolved with:
juju resolved vault/N
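For reference, a minimal unsealing sketch, run from a machine that can reach the Vault API (the address is a placeholder; the vault charm README is the authoritative source):
export VAULT_ADDR='http://<vault-unit-ip>:8200'
vault operator unseal
Repeat the unseal command once per required unseal key.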