Prepare the target hosts¶
Configuring the operating system¶
This section describes the installation and configuration of operating systems for the target hosts, as well as deploying SSH keys and configuring storage.
Installing the operating system¶
Install one of the following supported operating systems on the target host:
Ubuntu server 22.04 (Jammy Jellyfish) LTS 64-bit
Ubuntu server 24.04 (Noble Numbat) LTS 64-bit
Debian 12 64-bit
CentOS 9 Stream 64-bit
Rocky Linux 9 64-bit
Configure at least one network interface to access the Internet or suitable local repositories.
Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address, such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
This step is especially important for metal deployments.
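For example, on a host named infra1 (the hostname here is illustrative), the edited file should look similar to this:
127.0.0.1 localhost
# 127.0.1.1 infra1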
We recommend adding the Secure Shell (SSH) server packages to the installation on target hosts that do not have local (console) access.
Note
We also recommend setting your locale to en_US.UTF-8. Other locales might work, but they are not tested or supported.
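For example, on Ubuntu you can generate and activate this locale as follows (a sketch; other distributions use localectl or edit /etc/locale.gen instead):
# locale-gen en_US.UTF-8
# update-locale LANG=en_US.UTF-8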
Configure Debian¶
Update package source lists:
# apt update
Upgrade the system packages and kernel:
# apt dist-upgrade
Install additional software packages:
# apt install bridge-utils debootstrap ifenslave ifenslave-2.6 \
    lsof lvm2 openssh-server sudo tcpdump vlan python3
Reboot the host to activate the changes and use the new kernel.
Configure Ubuntu¶
Update package source lists:
# apt update
Upgrade the system packages and kernel:
# apt dist-upgrade
Install additional software packages:
# apt install bridge-utils debootstrap openssh-server \
    tcpdump vlan python3
Install the kernel extra package if you have one for your kernel version:
# apt install linux-modules-extra-$(uname -r)
Reboot the host to activate the changes and use the new kernel.
Configure CentOS / Rocky¶
Upgrade the system packages and kernel:
# dnf upgrade
Disable SELinux. Edit /etc/sysconfig/selinux and make sure that SELINUX=enforcing is changed to SELINUX=disabled.
Note
SELinux enabled is not currently supported in OpenStack-Ansible for CentOS/RHEL due to a lack of maintainers for the feature.
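For example, a minimal sketch of making this change and stopping enforcement for the current boot (/etc/sysconfig/selinux is typically a symlink to /etc/selinux/config):
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
# setenforce 0
The file change takes full effect after the reboot at the end of this procedure.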
Install additional software packages:
# dnf install iputils lsof openssh-server \
    sudo tcpdump python3
(Optional) Reduce the kernel log level by changing the printk value in your sysctls:
# echo "kernel.printk='4 1 7 4'" >> /etc/sysctl.conf
Reboot the host to activate the changes and use the new kernel.
Configure SSH keys¶
Ansible uses SSH to connect the deployment host and target hosts. You can either use the root user or any other user that is allowed to escalate privileges through Ansible become (for example, by adding the user to sudoers).
For more details, refer to Running as non-root.
Copy the contents of the public key file on the deployment host to the ~/.ssh/authorized_keys file on each target host.
Test public key authentication from the deployment host by using SSH to connect to each target host. If SSH provides a shell without asking for a password, public key authentication is working.
For more information about how to generate an SSH key pair, as well as best practices, see GitHub’s documentation about generating SSH keys.
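For example, a typical workflow from the deployment host looks like this (a sketch assuming the root user and a target host named target1, both illustrative):
# ssh-keygen -t ed25519
# ssh-copy-id root@target1
# ssh root@target1
ssh-copy-id appends the public key to ~/.ssh/authorized_keys on the target host, which is equivalent to copying it there manually as described above.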
Configuring the storage¶
Logical Volume Manager (LVM) enables a single device to be split into multiple logical volumes, each of which appears as a physical storage device to the operating system. The Block Storage (cinder) service, and the LXC containers that run the OpenStack infrastructure, can optionally use LVM for their data storage.
Note
OpenStack-Ansible automatically configures LVM on the nodes, and overrides any existing LVM configuration. If you had a customized LVM configuration, edit the generated configuration file as needed.
To use the optional Block Storage (cinder) service, create an LVM volume group named cinder-volumes on the storage host. Specify a metadata size of 2048 when creating the physical volume. For example:
# pvcreate --metadatasize 2048 physical_volume_device_path
# vgcreate cinder-volumes physical_volume_device_path
Optionally, create an LVM volume group named lxc for container file systems and set lxc_container_backing_store: lvm in user_variables.yml if you want to use LXC with LVM. If the lxc volume group does not exist, containers are automatically installed on the file system under /var/lib/lxc by default.
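For example, a sketch of creating the optional lxc volume group (/dev/sdX is a placeholder for a spare block device on your host):
# pvcreate --metadatasize 2048 /dev/sdX
# vgcreate lxc /dev/sdX
# vgs
Running vgs afterwards lets you confirm that the expected volume groups, such as cinder-volumes and lxc, exist.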
Configuring the network¶
OpenStack-Ansible uses bridges to connect physical and logical network interfaces on the host to virtual network interfaces within containers. Target hosts need to be configured with the following network bridges:
| Bridge name | Best configured on | With a static IP |
|---|---|---|
| br-mgmt | On every node | Always |
| br-storage | On every storage node | When component is deployed on metal |
| br-storage | On every compute node | Always |
| br-vxlan | On every network node | When component is deployed on metal |
| br-vxlan | On every compute node | Always |
| br-vlan | On every network node | Never |
| br-vlan | On every compute node | Never |
For a detailed reference of how the host and container networking is implemented, refer to OpenStack-Ansible Reference Architecture, section Container Networking.
For use case examples, refer to User Guides.
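As an illustration of how these bridges are typically built, the following Netplan sketch creates br-mgmt on top of a bond0 VLAN subinterface on an Ubuntu host. The interface names, VLAN ID, and address are assumptions, not a definitive configuration; substitute values from your own design, and use your distribution's native network configuration tooling on Debian, CentOS, or Rocky.
# /etc/netplan/60-osa-bridges.yaml (illustrative values only)
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad
  vlans:
    bond0.10:
      id: 10
      link: bond0
  bridges:
    br-mgmt:
      interfaces: [bond0.10]
      addresses: [172.29.236.11/22]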
Host network bridges information¶
LXC internal: lxcbr0
The lxcbr0 bridge is required for LXC, but OpenStack-Ansible configures it automatically. It provides external (typically Internet) connectivity to containers with dnsmasq (DHCP/DNS) + NAT.
This bridge does not directly attach to any physical or logical interfaces on the host because iptables handles connectivity. It attaches to eth0 in each container.
The container network that the bridge attaches to is configurable in the openstack_user_config.yml file in the provider_networks dictionary.

Container management: br-mgmt
The br-mgmt bridge provides management of, and communication between, the infrastructure and OpenStack services.
The bridge attaches to a physical or logical interface, typically a bond0 VLAN subinterface. It also attaches to eth1 in each container.
The container network interface that the bridge attaches to is configurable in the openstack_user_config.yml file.

Storage: br-storage
The br-storage bridge provides OpenStack services with segregated access to Block Storage devices.
The bridge attaches to a physical or logical interface, typically a bond0 VLAN subinterface. It also attaches to eth2 in each associated container.
The container network interface that the bridge attaches to is configurable in the openstack_user_config.yml file.

OpenStack Networking tunnel: br-vxlan
The br-vxlan interface is required if the environment is configured to allow projects to create virtual networks using VXLAN. It provides the interface for encapsulated virtual (VXLAN) tunnel network traffic.
Note that br-vxlan is not required to be a bridge at all; a physical interface or a bond VLAN subinterface can be used directly and is more efficient. The name br-vxlan is kept for consistency in the documentation and example configurations.
The container network interface it attaches to is configurable in the openstack_user_config.yml file.

OpenStack Networking provider: br-vlan
The br-vlan bridge provides infrastructure for VLAN tagged or flat (no VLAN tag) networks.
The bridge attaches to a physical or logical interface, typically bond1. It is not assigned an IP address because it handles only layer 2 connectivity.
The container network interface that the bridge attaches to is configurable in the openstack_user_config.yml file.
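Where this section says the attachment is configurable in openstack_user_config.yml, the relevant structure is the provider_networks list. The following is a minimal sketch for br-mgmt, not a complete configuration; refer to the OpenStack-Ansible reference and example files for the full set of options:
provider_networks:
  - network:
      container_bridge: "br-mgmt"
      container_type: "veth"
      container_interface: "eth1"
      ip_from_q: "container"
      type: "raw"
      group_binds:
        - all_containers
        - hosts
      is_management_address: true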