Major upgrades¶
This guide provides information about the upgrade process from Zed or Yoga to 2023.1 for OpenStack-Ansible.
Note
You can upgrade between sequential releases or between releases marked as SLURP.
Introduction¶
For upgrades between major versions, the OpenStack-Ansible repository provides playbooks and scripts to upgrade an environment. The run-upgrade.sh script runs each upgrade playbook in the correct order, or playbooks can be run individually if necessary. Alternatively, a deployer can upgrade manually.
For more information about the major upgrade process, see Upgrading by using a script and Upgrading manually.
Warning
The upgrade is always under active development. Test this on a development environment first.
Upgrading by using a script¶
The 2023.1 release series of OpenStack-Ansible contains the code for migrating from Zed or Yoga to 2023.1.
Running the upgrade script¶
To upgrade from Zed or Yoga to 2023.1 by using the upgrade script, perform the following steps in the openstack-ansible directory:
Change directory to the repository clone root directory:
# cd /opt/openstack-ansible
Run the following commands:
# git checkout 27.5.1
# ./scripts/run-upgrade.sh
For more information about the steps performed by the script, see Upgrading manually.
Upgrading manually¶
Manual upgrades are useful for scoping the changes in the upgrade process (for example, in very large deployments with strict SLA requirements), or performing other upgrade automation beyond that provided by OpenStack-Ansible.
The steps detailed here match those performed by the run-upgrade.sh
script. You can safely run these steps multiple times.
Preflight checks¶
Before starting with the upgrade, perform preflight health checks to ensure your environment is stable. If any of those checks fail, ensure that the issue is resolved before continuing.
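The exact checks depend on your environment. As a minimal sketch, assuming a standard OpenStack-Ansible deployment and running from the playbooks directory of the currently deployed release, you could confirm that Ansible can reach all hosts and that the Galera and RabbitMQ clusters report a healthy state:
# cd /opt/openstack-ansible/playbooks
# ansible all -m ping
# ansible galera_container -m shell -a "mysql -e \"SHOW STATUS LIKE 'wsrep_cluster_status';\""
# ansible rabbitmq_all -m shell -a "rabbitmqctl cluster_status"
A wsrep_cluster_status of Primary and a cluster_status listing all RabbitMQ nodes are good indicators that the control plane is healthy enough to proceed.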
Check out the 2023.1 release¶
Ensure that your OpenStack-Ansible code is on the latest 2023.1 tagged release.
# git checkout 27.5.1
Prepare the shell variables¶
Define these variables to reduce typing when running the remaining upgrade tasks. Because these environment variables are shortcuts, this step is optional. If you prefer, you can reference the files directly during the upgrade.
# cd /opt/openstack-ansible
# export MAIN_PATH="$(pwd)"
# export SCRIPTS_PATH="${MAIN_PATH}/scripts"
Backup the existing OpenStack-Ansible configuration¶
Make a backup of the configuration of the environment:
# source_series_backup_file="/openstack/backup-openstack-ansible-zed.tar.gz"
# tar zcf ${source_series_backup_file} /etc/openstack_deploy /etc/ansible/ /usr/local/bin/openstack-ansible.rc
Bootstrap the new Ansible and OSA roles¶
To ensure that a previously set ANSIBLE_INVENTORY does not override the default inventory location, unset the environment variable.
# unset ANSIBLE_INVENTORY
Bootstrap Ansible again to ensure that all OpenStack-Ansible role dependencies are in place before you run playbooks from the 2023.1 release.
# ${SCRIPTS_PATH}/bootstrap-ansible.sh
Change to the playbooks directory¶
Change to the playbooks directory to simplify the CLI commands for the rest of the procedure, given that most of the playbooks being run are in this directory.
# cd playbooks
Implement changes to OSA configuration¶
If there have been any OSA variable name changes or environment/inventory changes, there is a playbook to handle those changes to ensure service continuity in the environment when the new playbooks run. The playbook is tagged to ensure that any part of it can be executed on its own or skipped. Please review the contents of the playbook for more information.
# openstack-ansible "${SCRIPTS_PATH}/upgrade-utilities/deploy-config-changes.yml"
Ensure that you have defined all required variables for the current Neutron plugin:
# openstack-ansible "${SCRIPTS_PATH}/upgrade-utilities/define-neutron-plugin.yml"
Upgrade hosts¶
Before installing the infrastructure and OpenStack, update the host machines.
Warning
Using non-trusted certificates for RabbitMQ is not possible due to the requirements of newer amqp versions.
The SSH certificate authority must be updated for the upgraded release version. SSH certificates are used for nova live migration and keystone credential synchronisation in the new release. This step ensures that the required CA is generated and available for other playbooks.
# openstack-ansible certificate-ssh-authority.yml
Once CA is generated, we can proceed with standard OpenStack upgrade steps:
# openstack-ansible setup-hosts.yml --limit '!galera_all:!rabbitmq_all' -e package_state=latest
This command is the same as the one used for setting up hosts on a new installation. The galera_all and rabbitmq_all host groups are excluded to prevent reconfiguration and restarting of any of those containers, as they need to be updated, but not restarted.
Once that is complete, upgrade the final host groups with the flag to prevent container restarts.
# openstack-ansible setup-hosts.yml -e 'lxc_container_allow_restarts=false' --limit 'galera_all:rabbitmq_all'
Upgrade infrastructure¶
We can now go ahead with the upgrade of all the infrastructure components. To ensure that rabbitmq and mariadb are upgraded, we pass the appropriate flags.
# openstack-ansible setup-infrastructure.yml -e 'galera_upgrade=true' -e 'rabbitmq_upgrade=true' -e package_state=latest
With this complete, we can now restart the mariadb containers one at a time, ensuring that each is started, responding, and synchronized with the other nodes in the cluster before moving on to the next steps. This step allows the LXC container configuration that you applied earlier to take effect, ensuring that the containers are restarted in a controlled fashion.
# openstack-ansible "${SCRIPTS_PATH}/upgrade-utilities/galera-cluster-rolling-restart.yml"
Upgrade OpenStack¶
In 2023.1, policies have been adjusted to fully deprecate the _member_ role. If your environment still relies on this role, you can make the _member_ role imply the member role. This can be done with the following upgrade playbook:
# openstack-ansible "${SCRIPTS_PATH}/upgrade-utilities/implied_member_role.yml"
We can now go ahead with the upgrade of all the OpenStack components.
# openstack-ansible setup-openstack.yml -e package_state=latest
Upgrade Ceph¶
With each OpenStack-Ansible version we define the default Ceph client version that will be installed on Glance/Cinder/Nova hosts and used by these services. If you want to preserve the previous version of the Ceph client during an OpenStack-Ansible upgrade, you will need to override the ceph_stable_release variable in your user_variables.yml.
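For example, to keep the client on the release that is already deployed, you could add an override similar to the following to user_variables.yml (the release name quincy is only an illustration; use the release actually deployed in your environment):
# echo 'ceph_stable_release: quincy' >> /etc/openstack_deploy/user_variables.yml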
If Ceph has been deployed as part of an OpenStack-Ansible deployment using the roles maintained by the Ceph-Ansible project, you will also need to upgrade the Ceph version. Each OpenStack-Ansible release is tested only with a specific Ceph-Ansible release, and Ceph upgrades are not checked in any OpenStack-Ansible integration tests, so we do not test or guarantee an upgrade path for such deployments. In this case, tests should be done in a lab environment before upgrading.
Warning
Ceph-related playbooks are included as part of the setup-infrastructure.yml and setup-openstack.yml playbooks, so you should be cautious when running them during OpenStack upgrades. If you have upgrade_ceph_packages: true in your user variables, or provide -e upgrade_ceph_packages=true as an argument, and run setup-infrastructure.yml, this will result in the Ceph packages being upgraded as well.
In order to upgrade Ceph in the deployment, you will need to run:
# openstack-ansible /etc/ansible/roles/ceph-ansible/infrastructure-playbooks/rolling_update.yml