# Ceph keyring from file example
OpenStack-Ansible (OSA) can deploy an OpenStack environment that uses an existing Ceph cluster as block storage for images, volumes, and instances. Interaction with the Ceph cluster normally happens over SSH to the Ceph MONs. To avoid SSH access to the Ceph cluster nodes, all necessary client configuration can instead be read from files. This example describes what these files need to contain.
This example has a single main requirement: a storage network in your OpenStack environment. Both kinds of Ceph services, the MONs and the OSDs, need to be connected to this storage network as well. On the OpenStack side, connect the affected services to the storage network: Glance to store images in Ceph, Cinder to create volumes in Ceph, and in most cases the compute nodes to use volumes and possibly store ephemeral disks in Ceph. A sketch of a possible network definition follows the IP assignments table below.
## Network configuration assumptions
The following CIDR assignments are used for this environment.
| Network | CIDR |
|---|---|
| Storage Network | 172.29.244.0/22 |
## IP assignments
The following host name and IP address assignments are used for this environment.
| Host name | Storage IP |
|---|---|
| ceph1 | 172.29.244.18 |
| ceph2 | 172.29.244.19 |
| ceph3 | 172.29.244.20 |
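To illustrate the storage network requirement, the following is a minimal sketch of how such a network could be declared in `/etc/openstack_deploy/openstack_user_config.yml`, using the CIDR from the table above. The bridge name `br-storage`, the container interface `eth2`, and the exact `group_binds` list are assumptions; adapt them to your deployment.

```yaml
---
# Sketch only: storage network matching the CIDR assignment above.
cidr_networks:
  storage: 172.29.244.0/22

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-storage"   # assumed bridge name
        container_type: "veth"
        container_interface: "eth2"      # assumed container interface
        type: "raw"
        ip_from_q: "storage"
        # Bind the services that talk to Ceph over the storage network.
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
```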
## Configuration

### Environment customizations
For a Ceph environment, you can run the cinder-volume service in a container. By default, cinder-volume runs on the host. See here for an example of how to run a service in a container.
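If you decide to run cinder-volume in a container, a minimal sketch of such an override in `/etc/openstack_deploy/env.d/cinder-volume.yml` could look like the following; the `container_skel` layout follows the usual OSA env.d convention and should be verified against your release:

```yaml
---
# Sketch only: run cinder-volume in an LXC container instead of on the
# metal host (the OSA default for cinder-volume is is_metal: true).
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false
```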
### User variables
The `/etc/openstack_deploy/user_variables.yml` file defines the global overrides for the default variables.
For this example environment, we configure an existing Ceph cluster that we want the OpenStack environment to connect to. Your `/etc/openstack_deploy/user_variables.yml` must have the following content to configure Ceph for images, volumes, and instances. If not all block storage should be provided by the Ceph backend, include only the parts you want to store in Ceph (a Glance-only variant is sketched after the full example):
```yaml
---
# OSA options for using an existing Ceph deployment. This example can be used
# if all configuration needs to come from OSA configuration files instead of
# the Ceph MONs.

# Directory containing the Ceph keyring files with access credentials.
ceph_keyrings_dir: /etc/openstack_deploy/ceph-keyrings

# General Ceph configuration file containing the information for Ceph clients
# to connect to the Ceph cluster.
ceph_conf_file: |
  [global]
  mon initial members = ceph1,ceph2,ceph3
  ## Ceph clusters starting with the Nautilus release support the v2 wire protocol
  mon host = [v2:172.29.244.18:3300,v1:172.29.244.18:6789],[v2:172.29.244.19:3300,v1:172.29.244.19:6789],[v2:172.29.244.20:3300,v1:172.29.244.20:6789]
  ## for a Ceph cluster not supporting the v2 wire protocol (before the Nautilus release)
  # mon host = [v1:172.29.244.18:6789],[v1:172.29.244.19:6789],[v1:172.29.244.20:6789]

# For configuring the Ceph backend for Glance to store images in Ceph.
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images

# For configuring a backend in Cinder to store volumes in Ceph. This
# configuration is also used by Nova compute and libvirt to access volumes.
cinder_ceph_client: cinder
cinder_backends:
  rbd:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: volumes
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_store_chunk_size: 8
    volume_backend_name: rbd
    rbd_user: "{{ cinder_ceph_client }}"
    rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
    report_discard_supported: true

# Configuration for Nova compute and libvirt to store ephemeral disks in Ceph.
nova_libvirt_images_rbd_pool: vms
```
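If, for instance, only Glance images should live in Ceph, a reduced variant of the example above might keep just the shared client configuration and the Glance options. This sketch reuses the `ceph_keyrings_dir` and `ceph_conf_file` values from the full example:

```yaml
---
# Sketch only: Ceph backs Glance images, while volumes and ephemeral
# disks stay on their default backends.
ceph_keyrings_dir: /etc/openstack_deploy/ceph-keyrings
ceph_conf_file: |
  [global]
  mon initial members = ceph1,ceph2,ceph3
  mon host = [v2:172.29.244.18:3300,v1:172.29.244.18:6789],[v2:172.29.244.19:3300,v1:172.29.244.19:6789],[v2:172.29.244.20:3300,v1:172.29.244.20:6789]

glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
```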
### Ceph keyrings
With the above settings in `/etc/openstack_deploy/user_variables.yml`, we configured OSA to read the credentials for accessing the Ceph cluster from the `/etc/openstack_deploy/ceph-keyrings/` directory. We now need to place the keyring files with the Ceph credentials into this directory. They need to be named after the Ceph client names, e.g. `glance.keyring` for `glance_ceph_client: glance`. See the following example for the file contents:
```
[client.glance]
key = AQC93h9fAAAAABAAUrAlQF+xJnjD6E8ChZkTaQ==
```
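Because the full example also sets `cinder_ceph_client: cinder`, a matching `cinder.keyring` is expected in the same directory. The key below is a placeholder; substitute the real key of your cluster's `client.cinder` user, which you can print on a MON node with `ceph auth get client.cinder`:

```
[client.cinder]
key = <key of the client.cinder user from your Ceph cluster>
```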