Install and configure a compute node
This section describes how to install and configure the Compute service on a compute node.
Note
This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion. Each additional compute node requires a unique IP address.
Prerequisites
Before you install and configure Zun, you must have Docker and Kuryr-libnetwork properly installed on the compute node, and Etcd properly installed on the controller node. Refer to Get Docker for Docker installation, and to the Kuryr-libnetwork and Etcd installation guides.
Install and configure components
Create zun user and necessary directories:
Create user:
# groupadd --system zun
# useradd --home-dir "/var/lib/zun" \
      --create-home \
      --system \
      --shell /bin/false \
      -g zun \
      zun
Create directories:
# mkdir -p /etc/zun
# chown zun:zun /etc/zun
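The flags above create a system account that cannot log in, with /var/lib/zun as its home directory. As an illustration only, the sketch below parses a simulated /etc/passwd entry of the kind useradd would produce (the UID/GID values 998 are assumed examples, not taken from this guide):

```shell
# Illustrative only: a simulated /etc/passwd entry matching the
# useradd flags above (UID/GID 998 are assumed example values).
entry='zun:x:998:998::/var/lib/zun:/bin/false'
login_shell=${entry##*:}                  # last field: the login shell
home_dir=$(echo "$entry" | cut -d: -f6)   # sixth field: the home directory
echo "home=$home_dir shell=$login_shell"
```

The /bin/false shell is what prevents interactive logins for the service account.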
Install the following dependencies:
For Ubuntu, run:
# apt-get install python-pip git
For CentOS, run:
# yum install python-pip git python-devel libffi-devel gcc openssl-devel
Note
The python-pip package is not in the CentOS base repositories; you may need to install the EPEL repository in order to have python-pip available.

Clone and install zun:
# cd /var/lib/zun
# git clone -b stable/train https://git.openstack.org/openstack/zun.git
# chown -R zun:zun zun
# cd zun
# pip install -r requirements.txt
# python setup.py install
Generate a sample configuration file:
# su -s /bin/sh -c "oslo-config-generator \
      --config-file etc/zun/zun-config-generator.conf" zun
# su -s /bin/sh -c "cp etc/zun/zun.conf.sample \
      /etc/zun/zun.conf" zun
# su -s /bin/sh -c "cp etc/zun/rootwrap.conf \
      /etc/zun/rootwrap.conf" zun
# su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
# su -s /bin/sh -c "cp etc/zun/rootwrap.d/* \
      /etc/zun/rootwrap.d/" zun
Configure sudoers for zun users:

Note
CentOS installs binary files into /usr/bin/; replace the /usr/local/bin/ directory with the correct one in the following command.

# echo "zun ALL=(root) NOPASSWD: /usr/local/bin/zun-rootwrap \
    /etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap
Edit the /etc/zun/zun.conf file:

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
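The transport URL set above follows the pattern driver://user:password@host. As a minimal sketch, the shell parameter expansions below break the URL into its parts (RABBIT_PASS remains the placeholder used throughout this guide):

```shell
# Illustrative parse of the RabbitMQ transport URL set above.
# RABBIT_PASS is the same placeholder used in zun.conf.
url='rabbit://openstack:RABBIT_PASS@controller'
driver=${url%%://*}     # messaging driver: rabbit
rest=${url#*://}
user=${rest%%:*}        # RabbitMQ account: openstack
host=${rest##*@}        # host running RabbitMQ: controller
echo "driver=$driver user=$user host=$host"
```

Only the password portion should differ between deployments; the openstack user and the controller host come from the earlier controller-node setup.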
In the [DEFAULT] section, configure the path that Zun uses to store state:

[DEFAULT]
...
state_path = /var/lib/zun
In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://zun:ZUN_DBPASS@controller/zun
Replace ZUN_DBPASS with the password you chose for the zun database.

In the [keystone_auth] section, configure Identity service access:

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL
In the [keystone_authtoken] section, configure Identity service access:

[keystone_authtoken]
...
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
Replace ZUN_PASS with the password you chose for the zun user in the Identity service.
In the [oslo_concurrency] section, configure the lock_path:

[oslo_concurrency]
...
lock_path = /var/lib/zun/tmp
(Optional) If you want to run both containers and nova instances on this compute node, configure host_shared_with_nova in the [compute] section:

[compute]
...
host_shared_with_nova = true
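Taken together, the edits above leave /etc/zun/zun.conf with at least the following settings. This is a summary sketch, not a complete configuration file: the passwords are the placeholders used in this guide, and the [compute] entry applies only if the node is shared with nova.

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
state_path = /var/lib/zun

[database]
connection = mysql+pymysql://zun:ZUN_DBPASS@controller/zun

[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL

[keystone_authtoken]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password

[oslo_concurrency]
lock_path = /var/lib/zun/tmp

[compute]
host_shared_with_nova = true
```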
Note
Make sure that /etc/zun/zun.conf still has the correct permissions. You can set the permissions again with:

# chown zun:zun /etc/zun/zun.conf
Configure Docker and Kuryr:
Create the directory /etc/systemd/system/docker.service.d:

# mkdir -p /etc/systemd/system/docker.service.d
Create the file /etc/systemd/system/docker.service.d/docker.conf. Configure Docker to listen on port 2375 as well as on the default unix socket, and to use etcd3 as the storage backend:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379
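One easy-to-miss detail in the drop-in above: the empty ExecStart= line is required, because systemd otherwise appends the new command to the base unit's ExecStart instead of replacing it. The self-contained sketch below checks for that two-line override pattern; the /tmp path is only for illustration, the real file lives under /etc/systemd/system/docker.service.d.

```shell
# Write the drop-in contents to a temporary copy and confirm the
# override pattern: one empty ExecStart= line, then the real command.
cat > /tmp/docker-dropin.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379
EOF
count=$(grep -c '^ExecStart' /tmp/docker-dropin.conf)
echo "ExecStart lines: $count"   # a correct override has exactly 2
```

On the real system, `systemctl cat docker` shows the base unit merged with the drop-in, which is a convenient way to confirm the override took effect.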
Restart Docker:

# systemctl daemon-reload
# systemctl restart docker
Edit the Kuryr config file /etc/kuryr/kuryr.conf. Set capability_scope to global and process_external_connectivity to False:

[DEFAULT]
...
capability_scope = global
process_external_connectivity = False
Restart Kuryr-libnetwork:
# systemctl restart kuryr-libnetwork
Finalize installation
Create a systemd service file; it could be named /etc/systemd/system/zun-compute.service:

Note
CentOS installs binary files into /usr/bin/; replace the /usr/local/bin/ directory with the correct one in the following example file.

[Unit]
Description = OpenStack Container Service Compute Agent

[Service]
ExecStart = /usr/local/bin/zun-compute
User = zun

[Install]
WantedBy = multi-user.target
Enable and start zun-compute:
# systemctl enable zun-compute
# systemctl start zun-compute
Verify that zun-compute services are running:
# systemctl status zun-compute