OpenStack upgrade¶
This document outlines how to upgrade a Juju-deployed OpenStack cloud.
Warning
Upgrading an OpenStack cloud is not risk-free. The procedures outlined in this guide should first be tested in a pre-production environment.
Please read the Upgrades overview page before continuing.
Note
The charms only support single-step OpenStack upgrades (N+1). That is, to upgrade two releases forward you need to upgrade twice. You cannot skip releases when upgrading OpenStack with charms.
It may be worthwhile to read the upstream OpenStack Upgrades guide.
Release Notes¶
The OpenStack Charms Release Notes for the corresponding current and target versions of OpenStack must be consulted for any special instructions. In particular, pay attention to services and/or configuration options that may be retired, deprecated, or changed.
Manual intervention¶
The upgraded charms are designed to accommodate all software changes associated with the OpenStack services they manage. A new charm also strives to configure its service as closely as possible to the pre-upgrade configuration.
However, there are still times when intervention on the part of the operator may be needed, such as when an OpenStack service is removed/added/replaced or when a software bug (in the charms or in upstream OpenStack) affecting the upgrade is present. The below resources cover these topics:
the Special charm procedures page
the Upgrade issues page
the Various issues page
Ensure cloud node software is up to date¶
Every machine in the cloud, including containers, should have its software packages updated to ensure that the latest SRUs have been applied. This is done in the usual manner:
sudo apt-get update
sudo apt-get dist-upgrade
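The same update can be pushed to every machine in the model in one pass. A possible sketch, assuming the Juju 2.x `juju run` syntax (Juju 3.x renames this command to `juju exec`):

```shell
# Sketch: apply the package update to every machine in the model in one pass.
upgrade_all() {
  # "$@" is the runner (juju run), passed in so it can be stubbed in tests
  "$@" --all -- sudo sh -c 'apt-get update && apt-get dist-upgrade -y'
}
# Real invocation: upgrade_all juju run
```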
Verify the current deployment¶
Confirm that the output for the juju status command of the current deployment is error-free. In addition, if monitoring is in use (e.g. Nagios), ensure that all alerts have been resolved. You may also consider running a battery of operational checks on the cloud.
This step is to make certain that any issues that may appear after the upgrade are not due to pre-existing problems.
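A minimal sanity check along these lines can be scripted; the sketch below simply fails when the word 'error' appears anywhere in captured juju status output (it is a coarse filter, not a substitute for a full operational check):

```shell
# Sketch: fail when the word 'error' appears in juju status output.
check_status() {
  if printf '%s\n' "$1" | grep -qw error; then
    echo "errors found"
    return 1
  fi
  echo "status clean"
}
# Real use: check_status "$(juju status)"
```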
Disable unattended-upgrades¶
When performing a service upgrade on a cloud node that hosts multiple principal charms (e.g. nova-compute and ceph-osd), ensure that unattended-upgrades is disabled on the underlying machine for the duration of the upgrade process. This is to prevent the other services from being upgraded outside of Juju’s control. On a cloud node run:
sudo dpkg-reconfigure -plow unattended-upgrades
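Alternatively, unattended upgrades can be disabled non-interactively by writing the apt periodic configuration directly. The sketch below writes to a temporary file; on a real cloud node the target is /etc/apt/apt.conf.d/20auto-upgrades and the file must be written with sudo:

```shell
# Sketch: disable unattended upgrades non-interactively. A temp file stands
# in for /etc/apt/apt.conf.d/20auto-upgrades on a real node.
conf=$(mktemp)
printf '%s\n' 'APT::Periodic::Update-Package-Lists "1";' \
              'APT::Periodic::Unattended-Upgrade "0";' > "$conf"
cat "$conf"
```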
Perform a database backup¶
Before making any changes to cloud services perform a backup of the cloud
database by running the backup
action on any single percona-cluster unit:
juju run-action --wait percona-cluster/0 backup
Now transfer the backup directory to the Juju client so that it can subsequently be stored somewhere safe. This command will grab all existing backups:
juju scp -- -r percona-cluster/0:/opt/backups/mysql /path/to/local/directory
Permissions may first need to be altered on the remote machine.
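One possible way to do so, sketched with the Juju 2.x `juju run` syntax (the path is the one produced by the backup action above; ownership details may differ on your deployment):

```shell
# Sketch: open read permissions on the backup directory before copying it
# off the unit.
fix_backup_perms() {
  # "$@" is the runner (juju run), passed in so it can be stubbed in tests
  "$@" --unit percona-cluster/0 -- sudo chmod -R o+rX /opt/backups/mysql
}
# Real invocation: fix_backup_perms juju run
```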
Archive old database data¶
During the upgrade, database migrations will be run. This operation can be
optimised by first archiving any stale data (e.g. deleted instances). Do this
by running the archive-data
action on any single nova-cloud-controller
unit:
juju run-action --wait nova-cloud-controller/0 archive-data
This action may need to be run multiple times until the action output reports ‘Nothing was archived’.
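The repetition can be scripted; a sketch that loops until the action output contains the completion message:

```shell
# Sketch: repeat the archive-data action until its output reports that
# nothing was archived.
archive_until_done() {
  # "$@" is the command producing the action output, stubbable in tests
  while :; do
    out=$("$@")
    printf '%s\n' "$out"
    case "$out" in *"Nothing was archived"*) return 0 ;; esac
  done
}
# Real invocation:
# archive_until_done juju run-action --wait nova-cloud-controller/0 archive-data
```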
Purge old compute service entries¶
Old compute service entries for units which are no longer part of the model should be purged before the upgrade. These entries will show as ‘down’ (and be hosted on machines no longer in the model) in the current list of compute services:
openstack compute service list
To remove a compute service:
openstack compute service delete <service-id>
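When several stale entries exist, the 'down' IDs can be extracted from machine-readable output and deleted in a loop. A sketch (review the filtered list before deleting anything):

```shell
# Sketch: filter out the IDs of services reported as 'down' from the output
# of `openstack compute service list --format value -c ID -c State`.
down_ids() {
  awk '$2 == "down" {print $1}'
}
# Real use:
# openstack compute service list --format value -c ID -c State |
#   down_ids |
#   while read -r id; do openstack compute service delete "$id"; done
```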
Subordinate charm applications¶
Applications that are associated with subordinate charms are upgraded along with their parent application. Subordinate charms do not support the openstack-origin configuration option which, as will be shown, is a prerequisite for initiating an OpenStack charm payload upgrade.
Upgrade order¶
Generally speaking, the order is determined by the idea of a dependency tree. Those services that have the most potential impact on other services are upgraded first and those services that have the least potential impact on other services are upgraded last.
In the below table charms are listed in the order in which their corresponding OpenStack services should be upgraded. Each service represented by a charm will need to be upgraded individually, and only the packages associated with a charm’s OpenStack service will be updated.
The order provided below is the order used by internal testing.
[Table: charms listed in upgrade order, positions 1 through 26, one charm per position]
Important
Services whose software is not included in the Ubuntu Cloud Archive are not represented in the above list. This software is upgraded by the administrator (on the units) using traditional means (e.g. manually via package tools or as part of a series upgrade). Common charms where this applies are ntp, memcached, percona-cluster, rabbitmq-server, mysql-innodb-cluster, and mysql-router.
Note
An Octavia upgrade may entail an update of its load balancers (amphorae) as a post-upgrade task. Reasons for doing this include:
API incompatibility between the amphora agent and the new Octavia service
the desire to use features available in the new amphora agent or haproxy
See the upstream documentation on Rotating amphora images.
Software sources¶
A key part of an OpenStack upgrade is the stipulation of a unit’s software sources. For an upgrade, the latter will naturally reflect a more recent combination of Ubuntu release (series) and OpenStack release. This combination is based on the Ubuntu Cloud Archive and translates to a “cloud archive OpenStack release”. It takes on the following syntax:
<ubuntu series>-<openstack-release>
The value is passed to a charm’s openstack-origin
configuration option. For
example, to select the ‘focal-victoria’ release:
openstack-origin=cloud:focal-victoria
In this way the charm is informed on where to find updates for the packages that it is responsible for.
Note
A few charms use the option source instead of openstack-origin. See the next section.
Notes concerning the value of openstack-origin:
The default is ‘distro’. This denotes an Ubuntu release’s default archive (e.g. in the case of the focal series it corresponds to OpenStack Ussuri). The value of ‘distro’ is therefore invalid in the context of an OpenStack upgrade.
It should normally be the same across all charms.
Its series component must be that of the series currently in use (i.e. a series upgrade and an OpenStack upgrade are two completely separate procedures).
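The notes above can be sketched as a small value-assembly step; the series must be the one the units currently run, while the target is the OpenStack release being upgraded to:

```shell
# Sketch: assemble the openstack-origin value from the current series and
# the target OpenStack release.
series=focal        # on a cloud node this comes from `lsb_release -cs`
target=victoria
origin="cloud:${series}-${target}"
echo "$origin"
```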
Perform the upgrade¶
There are three methods available for performing an OpenStack service upgrade. The appropriate method is chosen based on the actions supported by the charm. Actions for a charm can be listed with the command juju actions <charm-name>.
All-in-one¶
The “all-in-one” method upgrades an application immediately. Although it is the quickest route, it can be harsh when applied in the context of multi-unit applications, because all the units are upgraded simultaneously, which is likely to cause a transient service outage. This method must be used if the application has a sole unit.
Attention
The “all-in-one” method should only be used when the charm does not support
the openstack-upgrade
action.
The syntax is:
juju config <openstack-charm> openstack-origin=cloud:<cloud-archive-release>
Charms whose services are not technically part of the OpenStack project will
use the source
charm option instead. The Ceph charms are a classic example:
juju config ceph-mon source=cloud:focal-victoria
Note
The ceph-osd and ceph-mon charms are able to maintain service availability during the upgrade.
So to upgrade Cinder across all units (currently running Focal) from Ussuri to Victoria:
juju config cinder openstack-origin=cloud:focal-victoria
Single-unit¶
The “single-unit” method builds upon the “all-in-one” method by allowing for the upgrade of individual units in a controlled manner. It requires the enablement of the charm option action-managed-upgrade and the charm action openstack-upgrade.
Attention
The “single-unit” method should only be used when the charm does not
support the pause
and resume
actions.
As a general rule, whenever there is the possibility of upgrading units individually, always upgrade the application leader first. The leader is the unit with a * next to it in the juju status output. It can also be discovered via the CLI:
juju run --application <application-name> is-leader
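The leader can also be picked out of plain `juju status` output, where it is marked with a trailing `*` on the unit name. A sketch:

```shell
# Sketch: print the unit whose name carries the trailing '*' leader marker
# in plain `juju status` output.
leader_unit() {
  awk '$1 ~ /\*$/ { sub(/\*$/, "", $1); print $1 }'
}
# Real use: juju status glance | leader_unit
```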
For example, to upgrade a three-unit glance application from Ussuri to Victoria
where glance/1
is the leader:
juju config glance action-managed-upgrade=True
juju config glance openstack-origin=cloud:focal-victoria
juju run-action --wait glance/1 openstack-upgrade
juju run-action --wait glance/0 openstack-upgrade
juju run-action --wait glance/2 openstack-upgrade
Note
The openstack-upgrade
action is only available for charms whose services
are part of the OpenStack project. For instance, you will need to use the
“all-in-one” method for the Ceph charms.
Paused-single-unit¶
The “paused-single-unit” method extends the “single-unit” method by allowing
for the upgrade of individual units while paused. Additional charm
requirements are the pause
and resume
actions. This method provides
more versatility by allowing a unit to be removed from service, upgraded, and
returned to service. Each of these is a distinct event whose timing is chosen
by the operator.
Attention
The “paused-single-unit” method is the recommended OpenStack service upgrade method.
For example, to upgrade a three-unit nova-compute application from Ussuri to
Victoria where nova-compute/0
is the leader:
juju config nova-compute action-managed-upgrade=True
juju config nova-compute openstack-origin=cloud:focal-victoria
juju run-action --wait nova-compute/0 pause
juju run-action --wait nova-compute/0 openstack-upgrade
juju run-action --wait nova-compute/0 resume
juju run-action --wait nova-compute/1 pause
juju run-action --wait nova-compute/1 openstack-upgrade
juju run-action --wait nova-compute/1 resume
juju run-action --wait nova-compute/2 pause
juju run-action --wait nova-compute/2 openstack-upgrade
juju run-action --wait nova-compute/2 resume
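The repeated pause/upgrade/resume sequence can be captured as a reusable step applied to one unit at a time, leader first (unit names are from the example above; adjust for your deployment):

```shell
# Sketch: run pause, openstack-upgrade, and resume against a single unit.
upgrade_unit() {
  unit=$1; shift
  # "$@" is the action runner (juju run-action --wait), stubbable in tests
  "$@" "$unit" pause &&
  "$@" "$unit" openstack-upgrade &&
  "$@" "$unit" resume
}
# Real invocation:
# for u in nova-compute/0 nova-compute/1 nova-compute/2; do
#   upgrade_unit "$u" juju run-action --wait
# done
```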
In addition, this method permits a possible hacluster subordinate unit, which typically manages a VIP, to be paused so that client traffic will not flow to the associated parent unit while its upgrade is underway.
Attention
When there is a hacluster subordinate unit, it is recommended to always take advantage of the “paused-single-unit” method’s ability to pause it before upgrading the parent unit.
For example, to upgrade a three-unit keystone application from Ussuri to
Victoria where keystone/2
is the leader:
juju config keystone action-managed-upgrade=True
juju config keystone openstack-origin=cloud:focal-victoria
juju run-action --wait keystone-hacluster/1 pause
juju run-action --wait keystone/2 pause
juju run-action --wait keystone/2 openstack-upgrade
juju run-action --wait keystone/2 resume
juju run-action --wait keystone-hacluster/1 resume
juju run-action --wait keystone-hacluster/2 pause
juju run-action --wait keystone/1 pause
juju run-action --wait keystone/1 openstack-upgrade
juju run-action --wait keystone/1 resume
juju run-action --wait keystone-hacluster/2 resume
juju run-action --wait keystone-hacluster/0 pause
juju run-action --wait keystone/0 pause
juju run-action --wait keystone/0 openstack-upgrade
juju run-action --wait keystone/0 resume
juju run-action --wait keystone-hacluster/0 resume
Warning
The hacluster subordinate unit number may not necessarily match its parent unit number. As in the above example, only for keystone/0 do the unit numbers correspond (i.e. keystone-hacluster/0 is the subordinate unit).
Verify the new deployment¶
Check for errors in juju status output and any monitoring service.