Install and configure a compute node¶
This section describes how to install and configure the Compute service on a compute node.
Note
This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion. Each additional compute node requires a unique IP address.
Prerequisites¶
Before you install and configure Zun, you must have Docker and Kuryr-libnetwork installed properly on the compute node, and Etcd installed properly on the controller node. Refer to Get Docker for Docker installation, and to the Kuryr-libnetwork and Etcd installation guides for the other two. A quick way to verify these prerequisites is sketched below.
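The following is a minimal sanity check for the prerequisites, assuming the etcdctl v3 client is available on the controller node (it is not strictly required, only convenient):

On the compute node:

# docker --version
# systemctl status kuryr-libnetwork

On the controller node:

# ETCDCTL_API=3 etcdctl --endpoints http://controller:2379 endpoint health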
Install and configure components¶
Create zun user and necessary directories:
Create user:
# groupadd --system zun
# useradd --home-dir "/var/lib/zun" \
      --create-home \
      --system \
      --shell /bin/false \
      -g zun \
      zun
Create directories:
# mkdir -p /etc/zun
# chown zun:zun /etc/zun
Create CNI directories:
# mkdir -p /etc/cni/net.d
# chown zun:zun /etc/cni/net.d
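If you want to double-check the account and directories before continuing, a quick sanity check is:

# id zun
# ls -ld /etc/zun /etc/cni/net.d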
Install the following dependencies:
For Ubuntu, run:
# apt-get install python3-pip git numactl
For CentOS, run:
# yum install python3-pip git python3-devel libffi-devel gcc openssl-devel numactl
Clone and install zun:
# cd /var/lib/zun
# git clone -b stable/zed https://opendev.org/openstack/zun.git
# chown -R zun:zun zun
# git config --global --add safe.directory /var/lib/zun/zun
# cd zun
# pip3 install -r requirements.txt
# python3 setup.py install
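To confirm that the package and its console scripts were installed, a simple check is (the scripts may land in /usr/bin/ on CentOS, as noted later in this guide):

# pip3 show zun
# which zun-compute zun-cni zun-cni-daemon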
Generate a sample configuration file:
# su -s /bin/sh -c "oslo-config-generator \
    --config-file etc/zun/zun-config-generator.conf" zun
# su -s /bin/sh -c "cp etc/zun/zun.conf.sample \
    /etc/zun/zun.conf" zun
# su -s /bin/sh -c "cp etc/zun/rootwrap.conf \
    /etc/zun/rootwrap.conf" zun
# su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
# su -s /bin/sh -c "cp etc/zun/rootwrap.d/* \
    /etc/zun/rootwrap.d/" zun
# su -s /bin/sh -c "cp etc/cni/net.d/* /etc/cni/net.d/" zun
Configure sudoers for zun users:

Note

CentOS might install binary files into /usr/bin/. If it does, replace /usr/local/bin/ with the correct directory in the following command.

# echo "zun ALL=(root) NOPASSWD: /usr/local/bin/zun-rootwrap \
    /etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap
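You can validate the syntax of the new sudoers drop-in before relying on it; visudo has a check mode that accepts a single file:

# visudo -c -f /etc/sudoers.d/zun-rootwrap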
Edit the /etc/zun/zun.conf:

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] section, configure the path that is used by Zun to store the states:

[DEFAULT]
...
state_path = /var/lib/zun
In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://zun:ZUN_DBPASS@controller/zun
Replace ZUN_DBPASS with the password you chose for the zun database.

In the [keystone_auth] section, configure Identity service access:

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL
In the [keystone_authtoken] section, configure Identity service access:

[keystone_authtoken]
...
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
Replace ZUN_PASS with the password you chose for the zun user in the Identity service.
In the [oslo_concurrency] section, configure the lock_path:

[oslo_concurrency]
...
lock_path = /var/lib/zun/tmp
(Optional) If you want to run both containers and Nova instances on this compute node, configure host_shared_with_nova in the [compute] section:

[compute]
...
host_shared_with_nova = true
Note

Make sure that /etc/zun/zun.conf still has the correct permissions. You can set the permissions again with:

# chown zun:zun /etc/zun/zun.conf
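As an alternative to editing zun.conf by hand, a tool such as crudini can apply the same settings non-interactively. This is a sketch assuming crudini is installed; it is not required by this guide:

# crudini --set /etc/zun/zun.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
# crudini --set /etc/zun/zun.conf DEFAULT state_path /var/lib/zun
# crudini --set /etc/zun/zun.conf database connection mysql+pymysql://zun:ZUN_DBPASS@controller/zun
# crudini --set /etc/zun/zun.conf oslo_concurrency lock_path /var/lib/zun/tmp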
Configure Docker and Kuryr:
Create the directory /etc/systemd/system/docker.service.d:

# mkdir -p /etc/systemd/system/docker.service.d
Create the file /etc/systemd/system/docker.service.d/docker.conf. Configure Docker to listen on port 2375 as well as the default unix socket. Also, configure Docker to use etcd3 as the storage backend:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379
Restart Docker:

# systemctl daemon-reload
# systemctl restart docker
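To confirm that Docker is reachable on both endpoints, you can query each one explicitly; this assumes the compute1 hostname from the unit file above resolves on this node:

# docker -H unix:///var/run/docker.sock version
# docker -H tcp://compute1:2375 version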
Edit the Kuryr config file /etc/kuryr/kuryr.conf. Set capability_scope to global and process_external_connectivity to False:

[DEFAULT]
...
capability_scope = global
process_external_connectivity = False
Restart Kuryr-libnetwork:
# systemctl restart kuryr-libnetwork
Configure containerd:
Generate config file for containerd:
# containerd config default > /etc/containerd/config.toml
Edit the /etc/containerd/config.toml. In the [grpc] section, configure the gid as the group ID of the zun user:

[grpc]
...
gid = ZUN_GROUP_ID
Replace ZUN_GROUP_ID with the real group ID of the zun user. You can retrieve the ID with, for example:

# getent group zun | cut -d: -f3
Note
Make sure that /etc/containerd/config.toml still has the correct permissions. You can set the permissions again with:

# chown zun:zun /etc/containerd/config.toml
Restart containerd:
# systemctl restart containerd
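To confirm that the zun user can reach containerd through the socket after the gid change, one quick check, assuming the default socket path, is:

# su -s /bin/sh -c "ctr --address /run/containerd/containerd.sock version" zun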
Configure CNI:
Download and install the standard loopback plugin:

# mkdir -p /opt/cni/bin
# curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz \
      | tar -C /opt/cni/bin -xzvf - ./loopback
Install the Zun CNI plugin:
# install -o zun -m 0555 -D /usr/local/bin/zun-cni /opt/cni/bin/zun-cni
Note
CentOS might install binary files into /usr/bin/. If it does, replace /usr/local/bin/zun-cni with the correct path in the command above.
Finalize installation¶
Create a systemd service unit for zun-compute; it could be named /etc/systemd/system/zun-compute.service:

Note

CentOS might install binary files into /usr/bin/. If it does, replace /usr/local/bin/ with the correct directory in the following example file.

[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/local/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target
Create a systemd service unit for the zun-cni daemon; it could be named /etc/systemd/system/zun-cni-daemon.service:

Note

CentOS might install binary files into /usr/bin/. If it does, replace /usr/local/bin/ with the correct directory in the following example file.

[Unit]
Description = OpenStack Container Service CNI daemon

[Service]
ExecStart = /usr/local/bin/zun-cni-daemon
User = zun

[Install]
WantedBy = multi-user.target
Enable and start zun-compute:

# systemctl enable zun-compute
# systemctl start zun-compute
Enable and start zun-cni-daemon:

# systemctl enable zun-cni-daemon
# systemctl start zun-cni-daemon
Verify that the zun-compute and zun-cni-daemon services are running:

# systemctl status zun-compute
# systemctl status zun-cni-daemon
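If either service fails to start, the journal usually shows why; you can also confirm from the controller that this compute node has registered with Zun. The second command assumes admin credentials are loaded and the Zun CLI plugin is installed on the controller:

# journalctl -u zun-compute -u zun-cni-daemon -e

On the controller node:

$ openstack appcontainer service list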
Enable Kata Containers (Optional)¶
By default, runc is used as the container runtime. If you want to use Kata Containers instead, this section describes the additional configuration steps.
Note
Kata Containers requires nested virtualization or bare metal. See the official document for details.
Enable the repository for Kata Containers:
For Ubuntu, run:
# curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/xUbuntu_$(lsb_release -rs)/Release.key \
    | apt-key add -
# add-apt-repository "deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/xUbuntu_$(lsb_release -rs)/ /"
For CentOS, run:
# yum-config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/CentOS_7/home:katacontainers:releases:$(arch):master.repo"
Install Kata Containers:
For Ubuntu, run:
# apt-get update
# apt install kata-runtime kata-proxy kata-shim
For CentOS, run:
# yum install kata-runtime kata-proxy kata-shim
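Before wiring Kata into Docker and containerd, you can verify that the host actually supports it with the check tool shipped with the kata-runtime installed above:

# kata-runtime kata-check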
Configure Docker to add Kata Containers as a runtime:
Edit the file /etc/systemd/system/docker.service.d/docker.conf. Append the --add-runtime option to add kata-runtime to Docker:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379 --add-runtime kata=/usr/bin/kata-runtime
Restart Docker:

# systemctl daemon-reload
# systemctl restart docker
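To verify that Docker picked up the new runtime, list the registered runtimes and optionally start a throwaway container with it; inside a Kata container, uname -r reports the guest kernel rather than the host kernel:

# docker info | grep -i runtimes
# docker run --rm --runtime kata busybox uname -r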
Configure containerd to add Kata Containers as runtime:
Edit the /etc/containerd/config.toml. In the [plugins.cri.containerd] section, add the kata runtime configuration:

[plugins]
  ...
  [plugins.cri]
    ...
    [plugins.cri.containerd]
      ...
      [plugins.cri.containerd.runtimes.kata]
        runtime_type = "io.containerd.kata.v2"
Restart containerd:
# systemctl restart containerd
Configure Zun to use the Kata runtime:

Edit the /etc/zun/zun.conf. In the [DEFAULT] section, configure container_runtime as kata:

[DEFAULT]
...
container_runtime = kata
Restart zun-compute:
# systemctl restart zun-compute
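To confirm end to end that Zun now launches containers with Kata, you can run a test container from the controller. This sketch assumes the Zun CLI is available there and uses an arbitrary container name; depending on your setup, you may also need to pass a network with --net network=<id>:

$ openstack appcontainer run --name kata-test cirros uname -r
$ openstack appcontainer logs kata-test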