[ English | русский | 한국어 (대한민국) | Deutsch | Indonesia | English (United Kingdom) | français ]
Managing your cloud¶
This chapter documents OpenStack operations tasks that are integral to supporting an OpenStack-Ansible deployment.
It explains operations such as managing images, instances, or networks.
Managing images¶
An image represents the operating system, software, and any settings that instances may need depending on the project goals. Create images first before creating any instances.
Adding images can be done through the Dashboard or the command line. Another option available is the python-openstackclient tool, which can be installed on the controller node or on a workstation.
Adding an image using the Dashboard¶
To add an image using the Dashboard, prepare an image binary file, which must be accessible over HTTP using a valid and direct URL. Images can be compressed using .zip or .tar.gz.
Note
Uploading images through the Dashboard is available only to users with administrator privileges. Operators can set user access privileges.
Log in to the Dashboard.
Select the Admin tab in the navigation pane and click Images.
Click the Create Image button. The Create an Image dialog box will appear.
Enter the details of the image, including the Image Location, which is where the URL of the image is required.
Click the Create Image button. The newly created image may take some time before it is completely uploaded since the image arrives in an image queue.
Adding an image using the command line¶
The utility container provides a CLI environment for additional configuration and management.
Access the utility container:
$ lxc-attach -n `lxc-ls -1 | grep utility | head -n 1`
Use the openstack client within the utility container to manage all glance images. See the openstack client official documentation on managing images.
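As a sketch, an image can be uploaded from the utility container with a command such as the following; the image URL, file name, and image name below are illustrative examples only:

```shell
# Download a small test image (CirrOS) and upload it to the image service.
# The URL and image name are example values.
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
openstack image create \
    --file cirros-0.5.2-x86_64-disk.img \
    --disk-format qcow2 \
    --container-format bare \
    --public \
    cirros-0.5.2
```

The --disk-format and --container-format values must match the actual image file; qcow2 and bare are the common choice for cloud images.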
Managing instances¶
This chapter describes how to create and access instances.
Creating an instance using the Dashboard¶
Using an image, create a new instance via the Dashboard options.
Log into the Dashboard, and select the Compute project from the drop down list.
Click the Images option.
Locate the image that will act as the instance base from the Images table.
Click Launch from the Actions column.
Check the Launch Instance dialog, and find the Details tab. Enter the appropriate values for the instance.
In the Launch Instance dialog, click the Access & Security tab. Select the keypair. Set the security group as "default".
Click the Networking tab. This tab will be unavailable if OpenStack networking (neutron) has not been enabled. If networking is enabled, select the networks on which the instance will reside.
Click the Volume Options tab. This tab will only be available if a Block Storage volume exists for the instance. Select Don’t boot from a volume for now.
For more information on attaching Block Storage volumes to instances for persistent storage, see the Managing volumes for persistent storage section below.
Add customisation scripts, if needed, by clicking the Post-Creation tab. These run after the instance has been created. Some instances support user data, such as root passwords, or admin users. Enter the information specific to the instance here if required.
Click Advanced Options. Specify whether the instance uses a configuration drive to store metadata by selecting a disk partition type.
Click Launch to create the instance. The instance will start on a compute node. The Instance page will open and list the instance name, size, status, and task. Power state and public and private IP addresses are also listed there.
The process will take less than a minute to complete. Instance creation is complete when the status is listed as active. Refresh the page to see the new active instance.
The fields in the Launch Instance dialog are:
Availability Zone (Optional)
The availability zone in which the image service creates the instance. If no availability zone is defined, no instances will be found. The cloud provider sets the availability zone to a specific value.
Instance Name (Required)
The name of the new instance, which becomes the initial host name of the server. If the server name is changed via the API or changed directly, the name shown in the Dashboard remains unchanged.
Image (Required)
The type of container format, one of ami, ari, aki, bare, or ovf.
Flavor (Required)
The vCPU, Memory, and Disk configuration. Note that larger flavors can take a long time to create. If creating an instance for the first time and you want something small with which to test, select m1.small.
Instance Count (Required)
If creating multiple instances with this configuration, enter an integer up to the number permitted by the quota, which is 10 by default.
Instance Boot Source (Required)
Specify whether the instance will be based on an image or a snapshot. If it is the first time creating an instance, there will not yet be any snapshots available.
Image Name (Required)
The instance will boot from the selected image. This option will be pre-populated with the instance selected from the table. However, if Boot from Snapshot is chosen in Instance Boot Source, it will default to Snapshot instead.
Security Groups (Optional)
This option assigns security groups to an instance. The default security group activates when no customised group is specified here. Security groups, similar to a cloud firewall, define which incoming network traffic is forwarded to instances.
Keypair (Optional)
Specify a key pair with this option. If the image uses a static key set (not recommended), a key pair is not needed.
Selected Networks (Optional)
To add a network to an instance, click the + in the Networks field.
Customisation Script (Optional)
Specify a customisation script. This script runs after the instance launches and becomes active.
Creating an instance using the command line¶
On the command line, instance creation is managed with the openstack server create command. Before launching an instance, determine what images and flavors are available to create a new instance using the openstack image list and openstack flavor list commands.
Log in to any utility container.
Issue the openstack server create command with a name for the instance, along with the name of the image and flavor to use:
$ openstack server create --image precise-image --flavor 2 --key-name example-key example-instance
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000000d                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | ATSEfRY9fZPx                         |
| config_drive                        |                                      |
| created                             | 2012-08-02T15:43:46Z                 |
| flavor                              | m1.small                             |
| hostId                              |                                      |
| id                                  | 5bf46a3b-084c-4ce1-b06f-e460e875075b |
| image                               | precise-image                        |
| key_name                            | example-key                          |
| metadata                            | {}                                   |
| name                                | example-instance                     |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | b4769145977045e2a9279c842b09be6a     |
| updated                             | 2012-08-02T15:43:46Z                 |
| user_id                             | 5f2f2c28bdc844f9845251290b524e80     |
+-------------------------------------+--------------------------------------+
To check that the instance was created successfully, issue the openstack server list command:
$ openstack server list
+----------------+------------------+--------+------------------+---------------+
| ID             | Name             | Status | Networks         | Image Name    |
+----------------+------------------+--------+------------------+---------------+
| [ID truncated] | example-instance | ACTIVE | public=192.0.2.0 | precise-image |
+----------------+------------------+--------+------------------+---------------+
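To inspect a single instance in more detail, the openstack client can show its full properties and console output; the instance name below matches the example above:

```shell
# Show the full details of the example instance, including
# addresses, flavor, image, and status.
openstack server show example-instance

# Retrieve the instance console log, useful to verify the boot process.
openstack console log show example-instance
```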
Managing an instance¶
Log in to the Dashboard. Select one of the projects, and click Instances.
Select an instance from the list of available instances.
Check the Actions column, and click the More option. Select the desired instance state.
The Actions column includes the following options:
Resize or rebuild any instance
View the instance console log
Edit the instance
Modify security groups
Pause, resume, or suspend the instance
Soft or hard reset the instance
Note
Terminate the instance under the Actions column.
Managing volumes for persistent storage¶
Volumes attach to instances, enabling persistent storage. Volume storage provides a source of persistent, block-level storage for instances. Administrators can attach volumes to a running instance, or move a volume from one instance to another.
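A minimal sketch of creating and attaching a volume from the command line; the volume name, size, and instance name are example values:

```shell
# Create a 10 GiB volume; "my-volume" is a placeholder name.
openstack volume create --size 10 my-volume

# Attach the volume to a running instance.
openstack server add volume example-instance my-volume

# Verify the attachment; the volume status should become "in-use".
openstack volume show my-volume -c status
```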
Nova instances live migration¶
Nova is capable of live migrating instances from one host to another to support various operational tasks, including:
Host Maintenance
Host capacity management
Resizing and moving instances to better hardware
Nova configuration drive implication¶
Depending on the OpenStack-Ansible version in use, Nova can be configured to force configuration drive attachments to instances. In this case, an ISO9660 CD-ROM image will be made available to the instance via the /mnt mount point. This can be used by tools, such as cloud-init, to gain access to instance metadata. This is an alternative way of accessing the Nova EC2-style metadata.
To allow live migration of Nova instances, this forced provisioning of the config (CD-ROM) drive either needs to be turned off, or the format of the configuration drive needs to be changed to a disk format such as vfat, a format which both Linux and Windows instances can access.
This workaround is required for all Libvirt versions prior to 1.2.17.
To turn off the forced provisioning of the config drive, add the following override to the /etc/openstack_deploy/user_variables.yml file:
nova_force_config_drive: False
To change the format of the configuration drive to a hard disk style format, use the following configuration inside the same /etc/openstack_deploy/user_variables.yml file:
nova_nova_conf_overrides:
DEFAULT:
config_drive_format: vfat
force_config_drive: false
Tunneling versus direct transport¶
In the default configuration, Nova determines the correct transport URL for how to transfer the data from one host to the other. Depending on the nova_virt_type override, the following configurations are used:
kvm defaults to qemu+tcp://%s/system
qemu defaults to qemu+tcp://%s/system
xen defaults to xenmigr://%s/system
Libvirt uses this TCP port to transfer the migration data.
OpenStack-Ansible changes the default setting and uses an encrypted SSH connection to transfer the instance data.
live_migration_uri = "qemu+ssh://nova@%s/system?no_verify=1&keyfile={{ nova_system_home_folder }}/.ssh/id_rsa"
Other configurations can be configured inside the /etc/openstack_deploy/user_variables.yml file:
nova_nova_conf_overrides:
libvirt:
live_migration_completion_timeout: 0
live_migration_progress_timeout: 0
live_migration_uri: "qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1"
Executing the migration¶
The live migration is accessible via the nova client.
nova live-migration [--block-migrate] [--force] <uuid> [<host>]
Example live migration on local storage:
nova live-migration --block-migrate <uuid of the instance> <nova host>
Monitoring the status¶
Once the live migration request has been accepted, the status can be monitored with the nova client:
nova migration-list
+----+-------------+-----------+----------------+--------------+-----------+-----------+---------------+------------+------------+------------+------------+----------------+
| Id | Source Node | Dest Node | Source Compute | Dest Compute | Dest Host | Status    | Instance UUID | Old Flavor | New Flavor | Created At | Updated At | Type           |
+----+-------------+-----------+----------------+--------------+-----------+-----------+---------------+------------+------------+------------+------------+----------------+
| 6  | -           | -         | compute01      | compute02    | -         | preparing | f95ee17a-d09c | 7          | 7          | date       | date       | live-migration |
+----+-------------+-----------+----------------+--------------+-----------+-----------+---------------+------------+------------+------------+------------+----------------+
To filter the list, the options --host or --status can be used:
nova migration-list --status error
In cases where the live migration fails, both the source and destination compute nodes need to be checked for errors. Usually it is sufficient to search for the instance UUID only to find errors related to the live migration.
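As a sketch of such a search, assuming the nova-compute service logs to /var/log/nova/ (the exact log location depends on the deployment and may be the systemd journal instead), the instance UUID can be grepped for on each compute node:

```shell
# Search the compute log for entries mentioning the instance UUID;
# the UUID and the log path below are example values.
INSTANCE_UUID=f95ee17a-d09c
grep "$INSTANCE_UUID" /var/log/nova/nova-compute.log

# On hosts that log to the systemd journal, this may be used instead:
journalctl -u nova-compute | grep "$INSTANCE_UUID"
```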
Other forms of instance migration¶
Besides live migration, Nova offers the option to migrate entire hosts in an online (live) or offline (cold) migration.
The following nova client commands are provided:
host-evacuate-live
Live migrates all instances of the specified host to other hosts if resource utilisation allows. It is best to use shared storage, such as Ceph or NFS, for host evacuation.
host-servers-migrate
This command is similar to host evacuation, but migrates all instances off the specified host while they are shut down.
resize
Changes the flavor of a Nova instance (increase) while rebooting, and also cold migrates the instance to a new host to accommodate the new resource requirements. This operation can take a considerable amount of time, depending on the disk image sizes.
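As an illustration, the commands above could be invoked as follows; the host names, instance UUID, and flavor are example values:

```shell
# Live migrate every instance off compute01 (shared storage assumed).
nova host-evacuate-live compute01

# Cold migrate all instances off compute01 while they are shut down.
nova host-servers-migrate compute01

# Resize a single instance to a larger flavor, then confirm the resize.
nova resize f95ee17a-d09c m1.large
nova resize-confirm f95ee17a-d09c
```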
Managing networks¶
Operational considerations, such as compliance, can make it necessary to manage networks, for example by adding new provider networks to the OpenStack-Ansible managed cloud. The following sections outline the most common administrative tasks to accomplish this.
For more generic information on troubleshooting your network, see the Network Troubleshooting chapter in the Operations Guide.
For more in-depth information on Networking, see the Networking Guide.
Add provider bridges using new network interfaces¶
Each provider network added to your cloud must be made known to OpenStack-Ansible and the operating system before you can execute the necessary playbooks to complete the configuration.
OpenStack-Ansible configuration¶
All provider networks need to be added to the OpenStack-Ansible configuration.
Edit the file /etc/openstack_deploy/openstack_user_config.yml and add a new block underneath the provider_networks section:
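A sketch of such a block, with illustrative values only; the bridge name, interface name, network type, segmentation ID range, and group bindings must match your environment:

```yaml
- network:
    container_bridge: "br-provider"    # example bridge name on the host
    container_type: "veth"
    container_interface: "eth12"       # example interface name inside the container
    type: "vlan"
    range: "203:203"                   # example segmentation ID range
    net_name: "physnet-example"        # example Neutron physical network name
    group_binds:
      - neutron_linuxbridge_agent
```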
The container_bridge setting defines the physical network bridge used to connect the veth pair from the physical host to the container. Inside the container, the container_interface setting defines the name at which the physical network will be made available. The container_interface setting is not required when Neutron agents are deployed on bare metal. Make sure that both settings are uniquely defined across the provider networks and that the network interface is correctly configured inside your operating system.
The group_binds setting defines where this network is attached, either to containers or to physical hosts, and is ultimately dependent on the network stack in use, for example Linuxbridge versus OVS.
The range configuration defines the Neutron physical segmentation IDs which are automatically used by end users when creating networks, mainly via horizon and the Neutron API.
The same is true for the net_name configuration, which defines the addressable name inside the Neutron configuration. This configuration also needs to be unique across other provider networks.
For more information, see Configure the deployment in the OpenStack-Ansible Deployment Guide.
Updating the node with the new configuration¶
Run the appropriate playbooks depending on the group_binds section.
For example, if you update the networks requiring a change in all nodes with a Linux bridge agent, assuming you have infra nodes named infra01, infra02, and infra03, run:
# openstack-ansible containers-deploy.yml --limit localhost,infra01,infra01-host_containers
# openstack-ansible containers-deploy.yml --limit localhost,infra02,infra02-host_containers
# openstack-ansible containers-deploy.yml --limit localhost,infra03,infra03-host_containers
Then update the neutron configuration.
# openstack-ansible os-neutron-install.yml --limit localhost,infra01,infra01-host_containers
# openstack-ansible os-neutron-install.yml --limit localhost,infra02,infra02-host_containers
# openstack-ansible os-neutron-install.yml --limit localhost,infra03,infra03-host_containers
Then update your compute nodes if necessary.
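For compute nodes, the equivalent would be to limit the neutron playbook run to those hosts; the host names below are illustrative examples:

```shell
# Re-run the neutron playbook limited to the example compute hosts
# (run as root from the deployment host).
openstack-ansible os-neutron-install.yml --limit localhost,compute01
openstack-ansible os-neutron-install.yml --limit localhost,compute02
```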
Remove provider bridges from OpenStack¶
Similar to adding a provider network, the removal process uses the same procedure but in reverse order. The Neutron ports need to be removed prior to the removal of the OpenStack-Ansible configuration.
Unassign all Neutron floating IPs:
Note
Export the Neutron network that is about to be removed as a single UUID.
export NETWORK_UUID=<uuid>
for p in $( neutron port-list -c id --device_owner compute:nova --network_id=${NETWORK_UUID} | awk '/([A-Fa-f0-9]+-){3}/ {print $2}' ); do
    floatid=$( neutron floatingip-list -c id --port_id=$p | awk '/([A-Fa-z0-9]+-){3}/ { print $2 }' )
    if [ -n "$floatid" ]; then
        echo "Disassociating floating IP $floatid from port $p"
        neutron floatingip-disassociate $floatid
    fi
done
Remove all Neutron ports from the instances:
export NETWORK_UUID=<uuid>
for p in $( neutron port-list -c id -c device_id --device_owner compute:nova --network_id=${NETWORK_UUID} | awk '/([A-Fa-f0-9]+-){3}/ {print $2}' ); do
    echo "Removing Neutron compute port $p"
    neutron port-delete $p
done
Remove Neutron router ports and DHCP agents:
export NETWORK_UUID=<uuid>
for line in $( neutron port-list -c id -c device_id --device_owner network:router_interface --network_id=${NETWORK_UUID} | awk '/([A-Fa-f0-9]+-){3}/ {print $2 "+" $4}' ); do
    p=$( echo "$line" | cut -d'+' -f1 )
    r=$( echo "$line" | cut -d'+' -f2 )
    echo "Removing Neutron router port $p from $r"
    neutron router-interface-delete $r port=$p
done
for agent in $( neutron agent-list -c id --agent_type='DHCP Agent' --network_id=${NETWORK_UUID} | awk '/([A-Fa-f0-9]+-){3}/ {print $2}' ); do
    echo "Removing network $NETWORK_UUID from Neutron DHCP Agent $agent"
    neutron dhcp-agent-network-remove "${agent}" $NETWORK_UUID
done
Remove the Neutron network:
export NETWORK_UUID=<uuid>
neutron net-delete $NETWORK_UUID
Remove the provider network from the provider_networks section of the OpenStack-Ansible configuration file /etc/openstack_deploy/openstack_user_config.yml and re-run the following playbooks:
# openstack-ansible lxc-containers-create.yml --limit infra01:infra01-host_containers
# openstack-ansible lxc-containers-create.yml --limit infra02:infra02-host_containers
# openstack-ansible lxc-containers-create.yml --limit infra03:infra03-host_containers
# openstack-ansible os-neutron-install.yml --tags neutron-config
Restart a Networking agent container¶
Under some circumstances, such as configuration or temporary issues, one specific or all neutron agent containers need to be restarted.
This can be accomplished with multiple commands:
Example of rebooting still accessible containers. This example will issue a reboot to the container named neutron_agents_container_hostname_name from inside:
# ansible -m shell neutron_agents_container_hostname_name -a 'reboot'
Example of rebooting one container at a time, 60 seconds apart:
# ansible -m shell neutron_agents_container -a 'sleep 60; reboot' --forks 1
If the container does not respond, it can be restarted from the physical network host:
# ansible -m shell network_hosts -a 'for c in $(lxc-ls -1 |grep neutron_agents_container); do lxc-stop -n $c && lxc-start -d -n $c; done' --forks 1