Exercise the Cloud
Once OpenStack-Helm has been deployed, the cloud can be exercised either with the OpenStack client or with the same Heat templates that are used in the validation gates.
#!/bin/bash
: ${OSH_EXT_NET_NAME:="public"}
: ${OSH_EXT_SUBNET_NAME:="public-subnet"}
: ${OSH_EXT_SUBNET:="172.24.4.0/24"}
: ${OSH_BR_EX_ADDR:="172.24.4.1/24"}
openstack stack create --wait \
  --parameter network_name=${OSH_EXT_NET_NAME} \
  --parameter physical_network_name=public \
  --parameter subnet_name=${OSH_EXT_SUBNET_NAME} \
  --parameter subnet_cidr=${OSH_EXT_SUBNET} \
  --parameter subnet_gateway=${OSH_BR_EX_ADDR%/*} \
  -t ./tools/gate/files/heat-public-net-deployment.yaml \
  heat-public-net-deployment
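The ": ${VAR:=default}" lines above assign a value only when the variable is unset or empty, and "${OSH_BR_EX_ADDR%/*}" strips the prefix length from the CIDR so a bare gateway address can be passed to the stack. A minimal, self-contained sketch of both expansions:

```shell
#!/bin/bash
# ": ${VAR:=default}" assigns the default only when VAR is unset or empty
unset OSH_BR_EX_ADDR
: ${OSH_BR_EX_ADDR:="172.24.4.1/24"}
echo "${OSH_BR_EX_ADDR}"        # 172.24.4.1/24

# "${VAR%/*}" removes the shortest trailing "/..." match,
# turning a CIDR into a plain gateway address
echo "${OSH_BR_EX_ADDR%/*}"     # 172.24.4.1

# a value set in the environment beforehand is left untouched
OSH_EXT_SUBNET="192.0.2.0/24"
: ${OSH_EXT_SUBNET:="172.24.4.0/24"}
echo "${OSH_EXT_SUBNET}"        # 192.0.2.0/24
```

This is why the script can be driven entirely by exporting the OSH_* variables before running it.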
: ${OSH_PRIVATE_SUBNET_POOL:="10.0.0.0/8"}
: ${OSH_PRIVATE_SUBNET_POOL_NAME:="shared-default-subnetpool"}
: ${OSH_PRIVATE_SUBNET_POOL_DEF_PREFIX:="24"}
openstack stack create --wait \
  --parameter subnet_pool_name=${OSH_PRIVATE_SUBNET_POOL_NAME} \
  --parameter subnet_pool_prefixes=${OSH_PRIVATE_SUBNET_POOL} \
  --parameter subnet_pool_default_prefix_length=${OSH_PRIVATE_SUBNET_POOL_DEF_PREFIX} \
  -t ./tools/gate/files/heat-subnet-pool-deployment.yaml \
  heat-subnet-pool-deployment
: ${OSH_EXT_NET_NAME:="public"}
: ${OSH_VM_KEY_STACK:="heat-vm-key"}
: ${OSH_PRIVATE_SUBNET:="10.0.0.0/24"}
# NOTE(portdirect): We do this fancy, and seemingly pointless, footwork to get
# the full image name for the cirros Image without having to be explicit.
IMAGE_NAME=$(openstack image show -f value -c name \
  $(openstack image list -f csv | awk -F ',' '{ print $2 "," $1 }' | \
    grep "^\"Cirros" | head -1 | awk -F ',' '{ print $2 }' | tr -d '"'))
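The inner pipeline swaps the ID and Name columns of the CSV listing so the first image whose name starts with "Cirros" can be matched, and then emits that image's ID. Its field handling can be checked offline against a fabricated image list (the IDs and names below are made up for illustration):

```shell
#!/bin/bash
# fabricated CSV in the same shape as `openstack image list -f csv` output
SAMPLE='"ID","Name"
"11111111-2222-3333-4444-555555555555","Cirros 0.3.5 64-bit"
"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee","ubuntu-16.04"'

# same awk/grep/head/tr chain as the script above: swap the columns,
# match the Cirros row, take the (now second) ID field, drop the quotes
IMAGE_ID=$(echo "$SAMPLE" | awk -F ',' '{ print $2 "," $1 }' | \
  grep "^\"Cirros" | head -1 | awk -F ',' '{ print $2 }' | tr -d '"')
echo "$IMAGE_ID"   # 11111111-2222-3333-4444-555555555555
```

The outer `openstack image show -f value -c name` call then resolves that ID back to the full image name.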
# Setup SSH Keypair in Nova
mkdir -p ${HOME}/.ssh
openstack keypair create --private-key ${HOME}/.ssh/osh_key ${OSH_VM_KEY_STACK}
chmod 600 ${HOME}/.ssh/osh_key
openstack stack create --wait \
  --parameter public_net=${OSH_EXT_NET_NAME} \
  --parameter image="${IMAGE_NAME}" \
  --parameter ssh_key=${OSH_VM_KEY_STACK} \
  --parameter cidr=${OSH_PRIVATE_SUBNET} \
  --parameter dns_nameserver=${OSH_BR_EX_ADDR%/*} \
  -t ./tools/gate/files/heat-basic-vm-deployment.yaml \
  heat-basic-vm-deployment
FLOATING_IP=$(openstack stack output show \
  heat-basic-vm-deployment \
  floating_ip \
  -f value -c output_value)
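Before waiting on SSH it can be worth sanity-checking that the stack output really looks like an IPv4 address; an empty or malformed value usually means the stack did not come up cleanly. A small guard, shown here with an example address standing in for the real stack output:

```shell
#!/bin/bash
# example value; in practice this comes from the stack output lookup above
FLOATING_IP="172.24.4.10"

# a loose IPv4 shape check (four dot-separated 1-3 digit groups)
if ! echo "${FLOATING_IP}" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "unexpected floating IP value: '${FLOATING_IP}'" >&2
  exit 1
fi
echo "floating IP looks sane: ${FLOATING_IP}"
```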
function wait_for_ssh_port {
  # Default wait timeout is 300 seconds
  set +x
  end=$(date +%s)
  if [ -n "$2" ]; then
    end=$((end + $2))
  else
    end=$((end + 300))
  fi
  while true; do
    # Use Nmap as it's the same on Ubuntu and RHEL family distros
    nmap -Pn -p22 "$1" | awk '$1 ~ /22/ {print $2}' | grep -q 'open' && \
      break || true
    sleep 1
    now=$(date +%s)
    [ $now -gt $end ] && echo "Could not connect to $1 port 22 in time" && exit 1
  done
  set -x
}
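The awk/grep chain inside wait_for_ssh_port only inspects nmap's port table, so it can be exercised against canned output without scanning anything. The sample below mimics the shape of a typical single-port nmap report:

```shell
#!/bin/bash
# canned output in the shape nmap prints for `nmap -Pn -p22 <host>`
SAMPLE='Starting Nmap 7.80 ( https://nmap.org )
Nmap scan report for 172.24.4.10
PORT   STATE SERVICE
22/tcp open  ssh
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds'

# same extraction as the wait loop: for the row whose first field
# mentions port 22 ("22/tcp"), print the second field (the STATE column)
STATE=$(echo "$SAMPLE" | awk '$1 ~ /22/ {print $2}')
echo "$STATE"   # open
```

When the state is anything other than "open" the grep fails and the loop sleeps and retries until the deadline passes.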
wait_for_ssh_port $FLOATING_IP
# SSH into the VM and check it can reach the outside world
ssh-keyscan "$FLOATING_IP" >> ~/.ssh/known_hosts
ssh -i ${HOME}/.ssh/osh_key cirros@${FLOATING_IP} ping -q -c 1 -W 2 ${OSH_BR_EX_ADDR%/*}
# Check the VM can reach the metadata server
ssh -i ${HOME}/.ssh/osh_key cirros@${FLOATING_IP} curl --verbose --connect-timeout 5 169.254.169.254
# Check the VM can reach the keystone server
ssh -i ${HOME}/.ssh/osh_key cirros@${FLOATING_IP} curl --verbose --connect-timeout 5 keystone.openstack.svc.cluster.local
# Check to see if Cinder has been deployed; if it has, perform a volume attach.
if openstack service list -f value -c Type | grep -q "^volume"; then
  INSTANCE_ID=$(openstack stack output show \
    heat-basic-vm-deployment \
    instance_uuid \
    -f value -c output_value)
  # Get the devices that are present on the instance
  DEVS_PRE_ATTACH=$(mktemp)
  ssh -i ${HOME}/.ssh/osh_key cirros@${FLOATING_IP} lsblk > ${DEVS_PRE_ATTACH}
  # Create and attach a block device to the instance
  openstack stack create --wait \
    --parameter instance_uuid=${INSTANCE_ID} \
    -t ./tools/gate/files/heat-vm-volume-attach.yaml \
    heat-vm-volume-attach
  # Get the devices that are present on the instance after the attach
  DEVS_POST_ATTACH=$(mktemp)
  ssh -i ${HOME}/.ssh/osh_key cirros@${FLOATING_IP} lsblk > ${DEVS_POST_ATTACH}
  # Check that we have the expected number of extra devices on the instance post attach
  if ! [ "$(comm -13 ${DEVS_PRE_ATTACH} ${DEVS_POST_ATTACH} | wc -l)" -eq "1" ]; then
    echo "Volume not successfully attached"
    exit 1
  fi
fi
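The comm -13 comparison prints only the lines unique to the post-attach listing, which is how the script detects that exactly one new block device appeared (note that comm expects its inputs to be sorted, which lsblk's device-ordered output normally satisfies). A sketch with canned lsblk listings (device names and sizes are illustrative):

```shell
#!/bin/bash
PRE=$(mktemp)
POST=$(mktemp)
# canned lsblk output before and after a volume attach
printf 'NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nvda 253:0 0 1G 0 disk /\n' > "$PRE"
printf 'NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nvda 253:0 0 1G 0 disk /\nvdb 253:16 0 1G 0 disk\n' > "$POST"

# lines present only in the post-attach listing
NEW_DEVS=$(comm -13 "$PRE" "$POST")
echo "$NEW_DEVS"   # vdb 253:16 0 1G 0 disk
[ "$(echo "$NEW_DEVS" | wc -l)" -eq 1 ] && echo "exactly one new device"
rm -f "$PRE" "$POST"
```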
Alternatively, this step can be performed by running the script directly:
./tools/deployment/developer/common/900-use-it.sh
To run further commands from the CLI manually, execute the following to set up authentication credentials:
export OS_CLOUD=openstack_helm
Note that this command will only enable you to authenticate successfully with the python-openstackclient CLI. To use legacy clients such as python-novaclient from the CLI, reference the auth values in /etc/openstack/clouds.yaml and run:
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'
The example above uses the default values used by openstack-helm-infra.
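Before invoking a legacy client it can help to confirm that none of the exported variables were missed. A small completeness check, using the same documentation defaults shown above (substitute your own deployment's values):

```shell
#!/bin/bash
# the documentation defaults from above; replace with your deployment's values
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'

# fail fast if any variable a legacy client needs is unset or empty
for v in OS_USERNAME OS_PASSWORD OS_PROJECT_NAME \
         OS_PROJECT_DOMAIN_NAME OS_USER_DOMAIN_NAME OS_AUTH_URL; do
  [ -n "$(printenv "$v")" ] || { echo "missing $v" >&2; exit 1; }
done
echo "credential environment looks complete"
```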
Subsequent Runs & Post Clean-up
Execution of the 900-use-it.sh script results in the creation of four Heat stacks and a unique keypair enabling access to a newly created VM. Subsequent runs of the script require deletion of the stacks, the keypair, and the key files generated during the initial execution.
The following steps serve as a guide to cleaning up the client environment by deleting the stacks and related artifacts created by the 900-use-it.sh script:
List the stacks created during script execution that will need to be deleted:
sudo openstack --os-cloud openstack_helm stack list
# Sample results returned for "Stack Name" include:
# - heat-vm-volume-attach
# - heat-basic-vm-deployment
# - heat-subnet-pool-deployment
# - heat-public-net-deployment
Delete the stacks returned by the stack list command above:
sudo openstack --os-cloud openstack_helm stack delete heat-vm-volume-attach
sudo openstack --os-cloud openstack_helm stack delete heat-basic-vm-deployment
sudo openstack --os-cloud openstack_helm stack delete heat-subnet-pool-deployment
sudo openstack --os-cloud openstack_helm stack delete heat-public-net-deployment
List the keypair(s) generated during the script execution:
sudo openstack --os-cloud openstack_helm keypair list
# Sample results returned for "Name" include:
# - heat-vm-key
Delete the keypair(s) returned from the list command above:
sudo openstack --os-cloud openstack_helm keypair delete heat-vm-key
Manually remove the key files created by the script from the ~/.ssh directory:
cd ~/.ssh
rm osh_key
rm known_hosts
As a final validation step, re-run the stack list and keypair list commands and confirm that the returned results are empty:
sudo openstack --os-cloud openstack_helm stack list
sudo openstack --os-cloud openstack_helm keypair list
Alternatively, these steps can be performed by running the script directly:
./tools/deployment/developer/common/910-clean-it.sh