Kubernetes and Common Setup
Install Basic Utilities
To get started with OSH, we will need both git and curl.
sudo apt install git curl
Clone the OpenStack-Helm Repos
Once the host has been configured, the repos containing the OpenStack-Helm charts should be cloned:
#!/bin/bash
set -xe
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/openstack/openstack-helm.git
OSH Proxy & DNS Configuration
Note
If you are not deploying OSH behind a proxy, skip this step and continue with “Deploy Kubernetes & Helm”.
In order to deploy OSH behind a proxy, add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml:
proxy:
  http: http://PROXY_URL:PORT
  https: https://PROXY_URL:PORT
  noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
Note
Depending on your specific proxy, https_proxy may be the same as http_proxy. Refer to your specific proxy documentation.
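If the host's own shell sessions must also reach the internet through the proxy (for example, for the git clone step above), the standard proxy environment variables can be exported as well. This is only a minimal sketch using the same placeholder values as the entries above; adjust it to your environment:
# Hypothetical placeholders; replace PROXY_URL:PORT with your proxy endpoint.
export http_proxy=http://PROXY_URL:PORT
export https_proxy=https://PROXY_URL:PORT
export no_proxy=127.0.0.1,localhost,172.17.0.1,.svc.cluster.local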
By default, OSH will use the Google DNS server IPs (8.8.8.8, 8.8.4.4) and will update resolv.conf accordingly. If those IPs are blocked by your proxy, running the OSH scripts will result in the inability to connect to anything on the network. These DNS nameserver entries can be changed by updating the external_dns_nameservers entry in the file openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml:
external_dns_nameservers:
  - YOUR_PROXY_DNS_IP
  - ALT_PROXY_DNS_IP
These values can be retrieved by running:
systemd-resolve --status
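For example, the nameserver entries can be filtered out of that output with grep. This is only a rough sketch; the exact labels vary by systemd version, and newer releases expose the same information via resolvectl status:
systemd-resolve --status | grep -A 2 'DNS Servers'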
Deploy Kubernetes & Helm
You may now deploy Kubernetes and Helm onto your machine. First, move into the openstack-helm directory and then run the following:
#!/bin/bash
CURRENT_DIR="$(pwd)"
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
cd ${OSH_INFRA_PATH}
make dev-deploy setup-host
make dev-deploy k8s
cd ${CURRENT_DIR}
Alternatively, this step can be performed by running the script directly:
./tools/deployment/developer/common/010-deploy-k8s.sh
This command will deploy a single-node, kubeadm-administered cluster. It will use the parameters in ${OSH_INFRA_PATH}/playbooks/vars.yaml to control the deployment, which can be overridden by adding entries to ${OSH_INFRA_PATH}/tools/gate/devel/local-vars.yaml.
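Before moving on, it can be useful to confirm that the single-node cluster is healthy. A minimal check, assuming kubectl was installed and configured by the deployment above:
kubectl get nodes
kubectl get pods --all-namespaces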
Helm Chart Installation
Using the Helm packages previously pushed to the local Helm repository, run the following commands to instruct Tiller to create an instance of the given chart. During installation, the Helm client will print useful information about the resources created, the state of the Helm releases, and whether any additional configuration steps are necessary.
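At any point during the following steps, the Helm client can be used to inspect what has been deployed so far; for example (RELEASE_NAME is a placeholder for one of the releases created below, such as ingress-kube-system):
helm list
helm status RELEASE_NAME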
Install OpenStack-Helm
Note
The following commands all assume that they are run from the openstack-helm directory and that the repos have been cloned as above.
Set up clients on the host and assemble the charts
Setting up the OpenStack clients and Kubernetes RBAC rules, along with assembling the charts, can be performed by running the following commands:
#!/bin/bash
sudo -H -E pip3 install \
  -c${UPPER_CONSTRAINTS_FILE:=https://releases.openstack.org/constraints/upper/${OPENSTACK_RELEASE:-stein}} \
  cmd2 python-openstackclient python-heatclient --ignore-installed
sudo -H mkdir -p /etc/openstack
sudo -H chown -R $(id -un): /etc/openstack
#NOTE: Write a clouds.yaml so the openstack CLI can authenticate against Keystone
FEATURE_GATE="tls"; if [[ ${FEATURE_GATES//,/ } =~ (^|[[:space:]])${FEATURE_GATE}($|[[:space:]]) ]]; then
  tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    cacert: /etc/openstack-helm/certs/ca/ca.pem
    auth:
      username: 'admin'
      password: 'password'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'https://keystone.openstack.svc.cluster.local/v3'
EOF
else
  tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'password'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF
fi
#NOTE: Build helm-toolkit, most charts depend on helm-toolkit
make helm-toolkit
Alternatively, this step can be performed by running the script directly:
./tools/deployment/developer/common/020-setup-client.sh
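Once Keystone has been deployed later in the guide, the freshly installed clients and the clouds.yaml written above can be exercised as a sanity check; a minimal sketch (this will only succeed after the identity service is up):
export OS_CLOUD=openstack_helm
openstack endpoint list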
Deploy the ingress controller
#!/bin/bash
#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_INGRESS:="$(./tools/deployment/common/get-values-overrides.sh ingress)"}
#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} ingress
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
tee /tmp/ingress-kube-system.yaml << EOF
deployment:
  mode: cluster
  type: DaemonSet
network:
  host_namespace: true
EOF
touch /tmp/ingress-component.yaml
if [ -n "${OSH_DEPLOY_MULTINODE}" ]; then
  tee --append /tmp/ingress-kube-system.yaml << EOF
pod:
  replicas:
    error_page: 2
EOF
  tee /tmp/ingress-component.yaml << EOF
pod:
  replicas:
    ingress: 2
    error_page: 2
EOF
fi
helm upgrade --install ingress-kube-system ${HELM_CHART_ROOT_PATH}/ingress \
  --namespace=kube-system \
  --values=/tmp/ingress-kube-system.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_KUBE_SYSTEM}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh kube-system
#NOTE: Display info
helm status ingress-kube-system
#NOTE: Deploy namespace ingress
helm upgrade --install ingress-openstack ${HELM_CHART_ROOT_PATH}/ingress \
  --namespace=openstack \
  --values=/tmp/ingress-component.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_OPENSTACK}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack
#NOTE: Display info
helm status ingress-openstack
helm upgrade --install ingress-ceph ${HELM_CHART_ROOT_PATH}/ingress \
  --namespace=ceph \
  --values=/tmp/ingress-component.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_CEPH}
#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh ceph
#NOTE: Display info
helm status ingress-ceph
Alternatively, this step can be performed by running the script directly:
./tools/deployment/component/common/ingress.sh
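To verify the ingress controllers after either approach, the pods in each namespace can be listed; a minimal check:
kubectl get pods --namespace kube-system
kubectl get pods --namespace openstack
kubectl get pods --namespace ceph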
To continue to deploy OpenStack on Kubernetes via OSH, see Deploy NFS or Deploy Ceph.