Multinode¶
Overview¶
To drive towards a production-ready OpenStack solution, our goal is to provide containerized, yet stable, persistent volumes that Kubernetes can use to schedule applications that require state, such as MariaDB (Galera). Although the project takes a "batteries included" approach towards persistent storage, we want to allow operators to define their own solution as well. Examples of this work will be documented in another section; however, evidence of it can be found throughout the project. If you find any issues or gaps, please create a story to track what can be done to improve our documentation.
Note
Please see the supported application versions outlined in the source variable file.
Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.
The installation procedures below will take an administrator from a new kubeadm installation to an OpenStack-Helm deployment.
Note
Many of the default container images that are referenced across OpenStack-Helm charts are not intended for production use; for example, while LOCI and Kolla can be used to produce production-grade images, their public reference images are not production-grade. In addition, some of the default images use latest or master tags, which are moving targets and can lead to unpredictable behavior. For production-like deployments, we recommend building custom images, or at minimum caching a set of known images, and incorporating them into OpenStack-Helm via values overrides.
Warning
Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts by default, the HWE kernel is required to use CephFS.
Kubernetes Preparation¶
You can use any Kubernetes deployment tool to bring up a working Kubernetes cluster for use with OpenStack-Helm. For production deployments, please choose (and tune appropriately) a highly-resilient Kubernetes distribution, e.g.:
Airship, a declarative open cloud infrastructure platform
KubeADM, the foundation of a number of Kubernetes installation solutions
For a lab or proof-of-concept environment, the OpenStack-Helm gate scripts can be used to quickly deploy a multinode Kubernetes cluster using KubeADM and Ansible. Please refer to the deployment guide here.
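For reference, one possible flow for driving the gate tooling by hand is sketched below; it assumes both repositories are checked out under /opt and that the multinode inventory file has been populated with your nodes, so treat it as an illustration rather than a canonical procedure.
#!/bin/bash
# Illustrative sketch only: paths and Makefile targets assume an unmodified
# checkout of openstack-helm-infra under /opt.
cd /opt/openstack-helm-infra
# Prepare the hosts listed in tools/gate/devel/multinode-inventory.yaml
make dev-deploy setup-host multinode
# Deploy Kubernetes across the inventory with kubeadm
make dev-deploy k8s multinode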
Managing and configuring a Kubernetes cluster is beyond the scope of OpenStack-Helm and this guide.
Deploy OpenStack-Helm¶
Note
The following commands all assume that they are run from the /opt/openstack-helm directory.
Setup Clients on the host and assemble the charts¶
The OpenStack clients and Kubernetes RBAC rules can be set up, and the charts assembled, by running the following commands:
#!/bin/bash
sudo -H -E pip3 install \
-c${UPPER_CONSTRAINTS_FILE:=https://releases.openstack.org/constraints/upper/${OPENSTACK_RELEASE:-stein}} \
cmd2 python-openstackclient python-heatclient --ignore-installed
sudo -H mkdir -p /etc/openstack
sudo -H chown -R $(id -un): /etc/openstack
FEATURE_GATE="tls"; if [[ ${FEATURE_GATES//,/ } =~ (^|[[:space:]])${FEATURE_GATE}($|[[:space:]]) ]]; then
tee /etc/openstack/clouds.yaml << EOF
clouds:
openstack_helm:
region_name: RegionOne
identity_api_version: 3
cacert: /etc/openstack-helm/certs/ca/ca.pem
auth:
username: 'admin'
password: 'password'
project_name: 'admin'
project_domain_name: 'default'
user_domain_name: 'default'
auth_url: 'https://keystone.openstack.svc.cluster.local/v3'
EOF
else
tee /etc/openstack/clouds.yaml << EOF
clouds:
openstack_helm:
region_name: RegionOne
identity_api_version: 3
auth:
username: 'admin'
password: 'password'
project_name: 'admin'
project_domain_name: 'default'
user_domain_name: 'default'
auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF
fi
#NOTE: Build helm-toolkit, most charts depend on helm-toolkit
make helm-toolkit
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/010-setup-client.sh
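As an optional sanity check (not part of the script above), you can confirm that the clients were installed and that the openstack_helm cloud entry was written; the commands below are illustrative.
openstack --version
heat --version
grep -A 2 '^clouds:' /etc/openstack/clouds.yaml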
Deploy the ingress controller¶
export OSH_DEPLOY_MULTINODE=True
#!/bin/bash
#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_INGRESS:="$(./tools/deployment/common/get-values-overrides.sh ingress)"}
#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} ingress
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
tee /tmp/ingress-kube-system.yaml << EOF
deployment:
  mode: cluster
  type: DaemonSet
network:
  host_namespace: true
EOF
touch /tmp/ingress-component.yaml
if [ -n "${OSH_DEPLOY_MULTINODE}" ]; then
tee --append /tmp/ingress-kube-system.yaml << EOF
pod:
replicas:
error_page: 2
EOF
tee /tmp/ingress-component.yaml << EOF
pod:
replicas:
ingress: 2
error_page: 2
EOF
fi
helm upgrade --install ingress-kube-system ${HELM_CHART_ROOT_PATH}/ingress \
--namespace=kube-system \
--values=/tmp/ingress-kube-system.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_INGRESS} \
${OSH_EXTRA_HELM_ARGS_INGRESS_KUBE_SYSTEM}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh kube-system
#NOTE: Display info
helm status ingress-kube-system
#NOTE: Deploy namespace ingress
helm upgrade --install ingress-openstack ${HELM_CHART_ROOT_PATH}/ingress \
--namespace=openstack \
--values=/tmp/ingress-component.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_INGRESS} \
${OSH_EXTRA_HELM_ARGS_INGRESS_OPENSTACK}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Display info
helm status ingress-openstack
helm upgrade --install ingress-ceph ${HELM_CHART_ROOT_PATH}/ingress \
--namespace=ceph \
--values=/tmp/ingress-component.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_INGRESS} \
${OSH_EXTRA_HELM_ARGS_INGRESS_CEPH}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh ceph
#NOTE: Display info
helm status ingress-ceph
Alternatively, this step can be performed by running the script directly:
OSH_DEPLOY_MULTINODE=True ./tools/deployment/component/common/ingress.sh
Create loopback devices for CEPH¶
Create two loopback devices for Ceph: one disk for OSD data and another for the block DB and block WAL. If the loop0 and loop1 devices are busy in your case, feel free to change them by using the --ceph-osd-data and --ceph-osd-dbwal options.
ansible all -i /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml -m shell -s -a "/opt/openstack-helm/tools/deployment/common/setup-ceph-loopback-device.sh --ceph-osd-data /dev/loop0 --ceph-osd-dbwal /dev/loop1"
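For reference, a minimal sketch of what such a loopback setup amounts to is shown below; the helper script in the repository is authoritative, and the backing-file paths and sizes here are illustrative assumptions only.
#!/bin/bash
# Illustrative only: create sparse backing files and attach them as the
# loopback devices used for OSD data and for the block DB/WAL.
sudo mkdir -p /var/lib/openstack-helm/ceph
sudo truncate -s 10G /var/lib/openstack-helm/ceph/ceph-osd-data.img
sudo truncate -s 8G /var/lib/openstack-helm/ceph/ceph-osd-dbwal.img
sudo losetup /dev/loop0 /var/lib/openstack-helm/ceph/ceph-osd-data.img
sudo losetup /dev/loop1 /var/lib/openstack-helm/ceph/ceph-osd-dbwal.img
losetup -a   # verify both devices are attached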
Deploy Ceph¶
The script below configures Ceph to use the loopback devices created in the previous step as the backend for the Ceph OSDs. To configure a custom block device-based backend, please refer to the ceph-osd values.yaml.
Additional information on Kubernetes Ceph-based integration can be found in the documentation for the CephFS and RBD storage provisioners, as well as for the alternative NFS provisioner.
Warning
The upstream Ceph image repository does not currently pin tags to specific Ceph point releases. This can lead to unpredictable results in long-lived deployments. In production scenarios, we strongly recommend overriding the Ceph images to use either custom-built images or controlled, cached images.
Note
The ./tools/deployment/multinode/kube-node-subnet.sh script requires Docker to run.
#!/bin/bash
#NOTE: Deploy command
[ -s /tmp/ceph-fs-uuid.txt ] || uuidgen > /tmp/ceph-fs-uuid.txt
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
CEPH_FS_ID="$(cat /tmp/ceph-fs-uuid.txt)"
#NOTE(portdirect): to use RBD devices with kernels < 4.5 this should be set to 'hammer'
LOWEST_CLUSTER_KERNEL_VERSION=$(kubectl get node -o go-template='{{range .items}}{{.status.nodeInfo.kernelVersion}}{{"\n"}}{{ end }}' | sort -V | tail -1)
if [ "$(echo ${LOWEST_CLUSTER_KERNEL_VERSION} | awk -F "." '{ print $1 }')" -lt "4" ] || [ "$(echo ${LOWEST_CLUSTER_KERNEL_VERSION} | awk -F "." '{ print $2 }')" -lt "15" ]; then
echo "Using hammer crush tunables"
CRUSH_TUNABLES=hammer
else
CRUSH_TUNABLES=null
fi
NUMBER_OF_OSDS="$(kubectl get nodes -l ceph-osd=enabled --no-headers | wc -l)"
tee /tmp/ceph.yaml << EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  storage_secrets: true
  ceph: true
  rbd_provisioner: true
  cephfs_provisioner: false
  client_secrets: false
bootstrap:
  enabled: true
conf:
  ceph:
    global:
      fsid: ${CEPH_FS_ID}
  pool:
    crush:
      tunables: ${CRUSH_TUNABLES}
    target:
      osd: ${NUMBER_OF_OSDS}
      pg_per_osd: 100
  storage:
    osd:
      - data:
          type: bluestore
          location: /dev/loop0
        block_db:
          location: /dev/loop1
          size: "5GB"
        block_wal:
          location: /dev/loop1
          size: "2GB"
storageclass:
  cephfs:
    provision_storage_class: false
manifests:
  deployment_cephfs_provisioner: false
  job_cephfs_client_key: false
EOF
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
for CHART in ceph-mon ceph-osd ceph-client ceph-provisioners; do
make -C ${OSH_INFRA_PATH} ${CHART}
helm upgrade --install ${CHART} ${OSH_INFRA_PATH}/${CHART} \
--namespace=ceph \
--values=/tmp/ceph.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_CEPH_DEPLOY}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh ceph 1200
#NOTE: Validate deploy
MON_POD=$(kubectl get pods \
--namespace=ceph \
--selector="application=ceph" \
--selector="component=mon" \
--no-headers | awk '{ print $1; exit }')
kubectl exec -n ceph ${MON_POD} -- ceph -s
done
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/030-ceph.sh
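If you want additional confirmation beyond the ceph -s output printed by the script, the optional checks below (using the same pod selectors as the script above) show the OSD topology and pool utilization.
MON_POD=$(kubectl get pods \
  --namespace=ceph \
  --selector="application=ceph" \
  --selector="component=mon" \
  --no-headers | awk '{ print $1; exit }')
kubectl exec -n ceph ${MON_POD} -- ceph osd tree
kubectl exec -n ceph ${MON_POD} -- ceph df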
Activate the openstack namespace to be able to use Ceph¶
#!/bin/bash
#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
tee /tmp/ceph-openstack-config.yaml <<EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: false
  rbd_provisioner: false
  cephfs_provisioner: false
  client_secrets: true
bootstrap:
  enabled: false
storageclass:
  cephfs:
    provision_storage_class: false
EOF
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install ceph-openstack-config ${OSH_INFRA_PATH}/ceph-provisioners \
--namespace=openstack \
--values=/tmp/ceph-openstack-config.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_CEPH_NS_ACTIVATE}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status ceph-openstack-config
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/040-ceph-ns-activate.sh
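To confirm that the openstack namespace can actually provision RBD-backed volumes, an optional test claim can be created; the StorageClass name general is the chart default and the claim name is illustrative, so adjust both if your overrides differ.
#!/bin/bash
# Optional verification: the StorageClass name 'general' is the chart default
# and the PVC name is illustrative.
kubectl get storageclass
tee /tmp/test-rbd-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general
EOF
kubectl apply -f /tmp/test-rbd-pvc.yaml
kubectl get pvc -n openstack test-rbd-pvc   # should reach the Bound state
kubectl delete -f /tmp/test-rbd-pvc.yaml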
Deploy MariaDB¶
#!/bin/bash
#NOTE: Deploy command
tee /tmp/mariadb.yaml << EOF
pod:
  replicas:
    server: 3
    ingress: 2
EOF
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_MARIADB:="$(./tools/deployment/common/get-values-overrides.sh mariadb)"}
#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} mariadb
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install mariadb ${HELM_CHART_ROOT_PATH}/mariadb \
--namespace=openstack \
--set volume.use_local_path_for_single_pod_cluster.enabled=true \
--set volume.enabled=false \
--values=/tmp/mariadb.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_MARIADB}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status mariadb
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/050-mariadb.sh
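An optional way to confirm that all three Galera members joined the cluster is to query wsrep_cluster_size from one of the server pods; the pod name and the in-pod MYSQL_DBADMIN_PASSWORD variable below follow the chart defaults and are assumptions that may differ with overrides.
kubectl get pods -n openstack -l application=mariadb
kubectl exec -n openstack mariadb-server-0 -- bash -c \
  'mysql -u root -p"${MYSQL_DBADMIN_PASSWORD}" -e "SHOW STATUS LIKE \"wsrep_cluster_size\""'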
Deploy RabbitMQ¶
#!/bin/bash
#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_RABBITMQ:="$(./tools/deployment/common/get-values-overrides.sh rabbitmq)"}
#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} rabbitmq
#NOTE: Deploy command
helm upgrade --install rabbitmq ${HELM_CHART_ROOT_PATH}/rabbitmq \
--namespace=openstack \
--set volume.enabled=false \
--set pod.replicas.server=2 \
${OSH_EXTRA_HELM_ARGS:=} \
${OSH_EXTRA_HELM_ARGS_RABBITMQ}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status rabbitmq
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/060-rabbitmq.sh
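Optionally, cluster membership can be checked from one of the RabbitMQ pods; the application=rabbitmq label below follows the chart defaults and is an assumption.
RABBIT_POD=$(kubectl get pods -n openstack -l application=rabbitmq \
  --no-headers | awk '{ print $1; exit }')
kubectl exec -n openstack ${RABBIT_POD} -- rabbitmqctl cluster_status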
Deploy Memcached¶
#!/bin/bash
#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_MEMCACHED:="$(./tools/deployment/common/get-values-overrides.sh memcached)"}
#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} memcached
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install memcached ${HELM_CHART_ROOT_PATH}/memcached \
--namespace=openstack \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_MEMCACHED}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status memcached
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/070-memcached.sh
Deploy Keystone¶
#!/bin/bash
#NOTE: Get the over-rides to use
: ${OSH_EXTRA_HELM_ARGS_KEYSTONE:="$(./tools/deployment/common/get-values-overrides.sh keystone)"}
: ${RUN_HELM_TESTS:="yes"}
#NOTE: Lint and package chart
make keystone
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install keystone ./keystone \
--namespace=openstack \
--set pod.replicas.api=2 \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_KEYSTONE}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status keystone
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list
if [ "x${RUN_HELM_TESTS}" != "xno" ]; then
./tools/deployment/common/run-helm-tests.sh keystone
fi
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/080-keystone.sh
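With the openstack_helm cloud entry created earlier, a quick optional smoke test of the identity service can be run from the host:
export OS_CLOUD=openstack_helm
openstack token issue
openstack user list
openstack catalog list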
Deploy Rados Gateway for object store¶
#!/bin/bash
#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_CEPH_RGW:="$(./tools/deployment/common/get-values-overrides.sh ceph-rgw)"}
#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} ceph-rgw
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
tee /tmp/radosgw-openstack.yaml <<EOF
endpoints:
  identity:
    namespace: openstack
  object_store:
    namespace: openstack
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: true
bootstrap:
  enabled: false
conf:
  rgw_ks:
    enabled: true
pod:
  replicas:
    rgw: 1
EOF
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install radosgw-openstack ${OSH_INFRA_PATH}/ceph-rgw \
--namespace=openstack \
--values=/tmp/radosgw-openstack.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_CEPH_RGW}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status radosgw-openstack
#NOTE: Run Tests
export OS_CLOUD=openstack_helm
# Delete the test pod if it still exists
kubectl delete pods -l application=radosgw-openstack,release_group=radosgw-openstack,component=test --namespace=openstack --ignore-not-found
helm test radosgw-openstack
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/090-ceph-radosgateway.sh
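An optional object-store smoke test can be run through the Swift-compatible API exposed by the gateway; the container and object names are illustrative and assume the Keystone integration above succeeded.
export OS_CLOUD=openstack_helm
openstack container create test-container
echo "hello" > /tmp/test-object.txt
openstack object create test-container /tmp/test-object.txt
openstack object list test-container
openstack container delete --recursive test-container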
Deploy Glance¶
#!/bin/bash
#NOTE: Get the over-rides to use
: ${OSH_EXTRA_HELM_ARGS_GLANCE:="$(./tools/deployment/common/get-values-overrides.sh glance)"}
: ${RUN_HELM_TESTS:="yes"}
#NOTE: Lint and package chart
make glance
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
: ${GLANCE_BACKEND:="pvc"}
tee /tmp/glance.yaml <<EOF
storage: ${GLANCE_BACKEND}
pod:
  replicas:
    api: 2
    registry: 2
EOF
helm upgrade --install glance ./glance \
--namespace=openstack \
--values=/tmp/glance.yaml \
${OSH_EXTRA_HELM_ARGS:=} \
${OSH_EXTRA_HELM_ARGS_GLANCE}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'
if [ "x${RUN_HELM_TESTS}" == "xno" ]; then
exit 0
fi
./tools/deployment/common/run-helm-tests.sh glance
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/100-glance.sh
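To exercise the image service beyond the bootstrap image, an optional upload can be performed; the CirrOS version and download URL below are illustrative.
export OS_CLOUD=openstack_helm
curl -L -o /tmp/cirros.img http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create "cirros-test" \
  --disk-format qcow2 \
  --container-format bare \
  --public \
  --file /tmp/cirros.img
openstack image list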
Deploy Cinder¶
#!/bin/bash
: ${OSH_EXTRA_HELM_ARGS_CINDER:="$(./tools/deployment/common/get-values-overrides.sh cinder)"}
#NOTE: Lint and package chart
make cinder
#NOTE: Deploy command
tee /tmp/cinder.yaml << EOF
conf:
  ceph:
    pools:
      backup:
        replication: 1
        crush_rule: same_host
        chunk_size: 8
        app_name: cinder-backup
      # default pool used by rbd1 backend
      cinder.volumes:
        replication: 1
        crush_rule: same_host
        chunk_size: 8
        app_name: cinder-volume
      # secondary pool used by rbd2 backend
      cinder.volumes.gold:
        replication: 1
        crush_rule: same_host
        chunk_size: 8
        app_name: cinder-volume
  backends:
    # add an extra storage backend same values as rbd1 (see
    # cinder/values.yaml) except for volume_backend_name and rbd_pool
    rbd2:
      volume_driver: cinder.volume.drivers.rbd.RBDDriver
      volume_backend_name: rbd2
      rbd_pool: cinder.volumes.gold
      rbd_ceph_conf: "/etc/ceph/ceph.conf"
      rbd_flatten_volume_from_snapshot: false
      report_discard_supported: true
      rbd_max_clone_depth: 5
      rbd_store_chunk_size: 4
      rados_connect_timeout: -1
      rbd_user: cinder
      rbd_secret_uuid: 457eb676-33da-42ec-9a8c-9293d545c337
pod:
  replicas:
    api: 2
    volume: 1
    scheduler: 1
    backup: 1
EOF
helm upgrade --install cinder ./cinder \
--namespace=openstack \
--values=/tmp/cinder.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_CINDER}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack volume type list
openstack volume type list --default
# Delete the test pod if it still exists
kubectl delete pods -l application=cinder,release_group=cinder,component=test --namespace=openstack --ignore-not-found
helm test cinder --timeout 900
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/110-cinder.sh
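Optionally, a throw-away volume can be created against each backend to confirm that the rbd1 default and the rbd2 backend added in /tmp/cinder.yaml are functional; the volume type names assume the chart bootstrap created them.
export OS_CLOUD=openstack_helm
openstack volume create --size 1 test-volume
openstack volume create --size 1 --type rbd2 test-volume-gold
openstack volume list
openstack volume delete test-volume test-volume-gold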
Deploy OpenvSwitch¶
#!/bin/bash
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
: ${OSH_EXTRA_HELM_ARGS_OPENVSWITCH:="$(./tools/deployment/common/get-values-overrides.sh openvswitch)"}
#NOTE: Lint and package chart
make -C ${OSH_INFRA_PATH} openvswitch
#NOTE: Deploy command
helm upgrade --install openvswitch ${OSH_INFRA_PATH}/openvswitch \
--namespace=openstack \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_OPENVSWITCH}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
helm status openvswitch
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/120-openvswitch.sh
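An optional check that Open vSwitch is up on a node is to list its bridges from one of the vswitchd pods; the label and pod-name pattern below follow the chart defaults and are assumptions.
OVS_POD=$(kubectl get pods -n openstack -l application=openvswitch \
  --no-headers | awk '/vswitchd/ { print $1; exit }')
kubectl exec -n openstack ${OVS_POD} -- ovs-vsctl show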
Deploy Libvirt¶
#!/bin/bash
CEPH_ENABLED=false
if openstack service list -f value -c Type | grep -q "^volume" && \
openstack volume type list -f value -c Name | grep -q "rbd"; then
CEPH_ENABLED=true
fi
#NOTE: Get the over-rides to use
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
: ${OSH_EXTRA_HELM_ARGS_LIBVIRT:="$(./tools/deployment/common/get-values-overrides.sh libvirt)"}
#NOTE: Lint and package chart
make -C ${OSH_INFRA_PATH} libvirt
#NOTE: Deploy libvirt
helm upgrade --install libvirt ${OSH_INFRA_PATH}/libvirt \
--namespace=openstack \
--set conf.ceph.enabled=${CEPH_ENABLED} \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_LIBVIRT}
#NOTE(portdirect): We don't wait for libvirt pods to come up, as they depend
# on the neutron agents being up.
#NOTE: Validate Deployment info
helm status libvirt
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/130-libvirt.sh
Deploy Compute Kit (Nova and Neutron)¶
#!/bin/bash
export OS_CLOUD=openstack_helm
CEPH_ENABLED=false
if openstack service list -f value -c Type | grep -q "^volume" && \
openstack volume type list -f value -c Name | grep -q "rbd"; then
CEPH_ENABLED=true
fi
#NOTE: Get the overrides to use for placement, should placement be deployed.
case "${OPENSTACK_RELEASE}" in
"queens")
DEPLOY_SEPARATE_PLACEMENT="no"
;;
"rocky")
DEPLOY_SEPARATE_PLACEMENT="no"
;;
"stein")
DEPLOY_SEPARATE_PLACEMENT="yes"
;;
*)
DEPLOY_SEPARATE_PLACEMENT="yes"
;;
esac
if [[ "${DEPLOY_SEPARATE_PLACEMENT}" == "yes" ]]; then
# Get overrides
: ${OSH_EXTRA_HELM_ARGS_PLACEMENT:="$(./tools/deployment/common/get-values-overrides.sh placement)"}
# Lint and package
make placement
tee /tmp/placement.yaml << EOF
pod:
  replicas:
    api: 2
EOF
# Deploy
helm upgrade --install placement ./placement \
--namespace=openstack \
--values=/tmp/placement.yaml \
${OSH_EXTRA_HELM_ARGS:=} \
${OSH_EXTRA_HELM_ARGS_PLACEMENT}
fi
#NOTE: Get the over-rides to use
: ${OSH_EXTRA_HELM_ARGS_NOVA:="$(./tools/deployment/common/get-values-overrides.sh nova)"}
# TODO: Revert this reasoning when gates are pointing to more up to
# date openstack release. When doing so, we should revert the default
# values of the nova chart to NOT use placement by default, and
# have a ocata/pike/queens/rocky/stein override to enable placement in the nova chart deploy
if [[ "${DEPLOY_SEPARATE_PLACEMENT}" == "yes" ]]; then
OSH_EXTRA_HELM_ARGS_NOVA="${OSH_EXTRA_HELM_ARGS_NOVA} --values=./nova/values_overrides/train-disable-nova-placement.yaml"
fi
#NOTE: Lint and package chart
make nova
#NOTE: Deploy nova
tee /tmp/nova.yaml << EOF
pod:
  replicas:
    osapi: 2
    conductor: 2
    consoleauth: 2
EOF
if [[ "${DEPLOY_SEPARATE_PLACEMENT}" == "no" ]]; then
echo " placement: 2" >> /tmp/nova.yaml
fi
#NOTE: Deploy nova
: ${OSH_EXTRA_HELM_ARGS:=""}
if [ "x$(systemd-detect-virt)" == "xnone" ]; then
echo 'OSH is not being deployed in virtualized environment'
helm upgrade --install nova ./nova \
--namespace=openstack \
--values=/tmp/nova.yaml \
--set bootstrap.wait_for_computes.enabled=true \
--set conf.ceph.enabled=${CEPH_ENABLED} \
${OSH_EXTRA_HELM_ARGS:=} \
${OSH_EXTRA_HELM_ARGS_NOVA}
else
echo 'OSH is being deployed in virtualized environment, using qemu for nova'
helm upgrade --install nova ./nova \
--namespace=openstack \
--values=/tmp/nova.yaml \
--set bootstrap.wait_for_computes.enabled=true \
--set conf.ceph.enabled=${CEPH_ENABLED} \
--set conf.nova.libvirt.virt_type=qemu \
--set conf.nova.libvirt.cpu_mode=none \
${OSH_EXTRA_HELM_ARGS:=} \
${OSH_EXTRA_HELM_ARGS_NOVA}
fi
#NOTE: Get the over-rides to use
: ${OSH_EXTRA_HELM_ARGS_NEUTRON:="$(./tools/deployment/common/get-values-overrides.sh neutron)"}
#NOTE: Lint and package chart
make neutron
tee /tmp/neutron.yaml << EOF
network:
  interface:
    tunnel: docker0
pod:
  replicas:
    server: 2
conf:
  neutron:
    DEFAULT:
      l3_ha: False
      max_l3_agents_per_router: 1
      l3_ha_network_type: vxlan
      dhcp_agents_per_network: 1
  plugins:
    ml2_conf:
      ml2_type_flat:
        flat_networks: public
    openvswitch_agent:
      agent:
        tunnel_types: vxlan
      ovs:
        bridge_mappings: public:br-ex
    linuxbridge_agent:
      linux_bridge:
        bridge_mappings: public:br-ex
EOF
helm upgrade --install neutron ./neutron \
--namespace=openstack \
--values=/tmp/neutron.yaml \
${OSH_RELEASE_OVERRIDES_NEUTRON} \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_NEUTRON}
# If the compute kit is installed using Tungsten Fabric, it will come alive once Tungsten Fabric becomes active.
if [[ "$FEATURE_GATES" =~ (,|^)tf(,|$) ]]; then
exit 0
fi
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack compute service list
openstack network agent list
openstack hypervisor list
if [ "x${RUN_HELM_TESTS}" == "xno" ]; then
exit 0
fi
./tools/deployment/common/run-helm-tests.sh nova
./tools/deployment/common/run-helm-tests.sh neutron
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/140-compute-kit.sh
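As an optional end-to-end check once the helm tests pass, a small instance can be booted by hand; the network, flavor, and server names below are illustrative, and the image name assumes the CirrOS image bootstrapped during the Glance step.
export OS_CLOUD=openstack_helm
openstack network create test-net
openstack subnet create --network test-net --subnet-range 10.0.0.0/24 test-subnet
openstack flavor create --id auto --ram 256 --disk 1 --vcpus 1 m1.nano || true
openstack server create --image "Cirros 0.3.5 64-bit" --flavor m1.nano \
  --network test-net test-vm
openstack server list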
Deploy Heat¶
#!/bin/bash
: ${OSH_EXTRA_HELM_ARGS_HEAT:="$(./tools/deployment/common/get-values-overrides.sh heat)"}
#NOTE: Lint and package chart
make heat
tee /tmp/heat.yaml << EOF
pod:
  replicas:
    api: 2
    cfn: 2
    cloudwatch: 2
    engine: 2
EOF
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install heat ./heat \
--namespace=openstack \
--values=/tmp/heat.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_HEAT}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
openstack endpoint list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack --os-interface internal orchestration service list
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/150-heat.sh
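An optional way to exercise the orchestration engine end to end is to create a trivial stack; the template below is illustrative and uses only a built-in resource type.
export OS_CLOUD=openstack_helm
tee /tmp/test-stack.yaml << EOF
heat_template_version: 2015-10-15
resources:
  test_random:
    type: OS::Heat::RandomString
    properties:
      length: 16
outputs:
  value:
    value: { get_attr: [test_random, value] }
EOF
openstack stack create -t /tmp/test-stack.yaml test-stack --wait
openstack stack output show test-stack value
openstack stack delete test-stack --yes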
Deploy Barbican¶
#!/bin/bash
helm upgrade --install barbican ./barbican \
--namespace=openstack \
--set pod.replicas.api=2 \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_BARBICAN}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
# Delete the test pod if it still exists
kubectl delete pods -l application=barbican,release_group=barbican,component=test --namespace=openstack --ignore-not-found
helm test barbican
Alternatively, this step can be performed by running the script directly:
./tools/deployment/multinode/160-barbican.sh
Configure OpenStack¶
Configuring OpenStack for a particular production use-case is beyond the scope of this guide. Please refer to the OpenStack Configuration documentation for your selected version of OpenStack to determine what additional values overrides should be provided to the OpenStack-Helm charts to ensure appropriate networking, security, etc. is in place.