If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
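For reference, the driver is enabled per backend in `cinder.conf`. The following is a minimal sketch, not a complete deployment: the backend name `ceph`, the pool, the client name, and the secret UUID are illustrative placeholders that must match your environment.

```ini
[DEFAULT]
# Register the Ceph backend with the volume service.
enabled_backends = ceph

[ceph]
# RBD driver shipped with Cinder.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
# Pool, cluster config path, client name, and libvirt secret UUID
# are deployment-specific placeholders.
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```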
Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Because it is open source, you can install and use this portable storage platform in public or private clouds.
Ceph is based on Reliable Autonomic Distributed Object Store (RADOS). RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:

- **Object Storage Device (OSD) Daemon**: The storage daemon for the RADOS service, which interacts with the OSD (the physical or logical storage unit for your data). You must run this daemon on each server in your cluster.
- **Meta-Data Server (MDS)**: Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. If you do not use the Ceph file system, you do not need a metadata server.
- **Monitor (MON)**: A lightweight daemon that handles all communications with external applications and clients. It also provides consensus for distributed decision making in a Ceph/RADOS cluster. In an ideal setup, you run at least three `ceph-mon` daemons on separate servers.

To store and access your data, you can use the following storage systems:

- **RADOS**: Use as an object; the default storage mechanism.
- **RBD**: Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects.
- **CephFS**: Use as a file; a POSIX-compliant file system.

Ceph exposes RADOS; you can access it through the following interfaces:

- **RADOS Gateway**: An OpenStack Object Storage and Amazon S3 compatible RESTful interface to RADOS.
- **librados**: A library with C/C++ and other language bindings for direct access to RADOS.
- **RBD and QEMU-RBD**: Linux kernel and QEMU block devices that stripe data across multiple objects.
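The driver reaches the cluster through librados, which reads monitor addresses and authentication settings from the file referenced by `rbd_ceph_conf` (see the table below). A minimal `ceph.conf` might look like the following sketch; the `fsid` and monitor addresses are placeholders:

```ini
[global]
# Cluster identity and monitor addresses (placeholder values).
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_host = 192.168.0.10,192.168.0.11,192.168.0.12
# Require cephx authentication for cluster, service, and client traffic.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```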
The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Warning

Due to security concerns, it is recommended that deployers do not use the `rbd_keyring_conf` option. This configuration option has been deprecated and will be removed in the Victoria release. For more information, see OSSN-0085: Cinder configuration option can leak secret key from Ceph backend.
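Instead of pointing `rbd_keyring_conf` at a keyring, a common approach is to place the keyring where librados looks for it by default, for example `/etc/ceph/ceph.client.cinder.keyring` for a client named `cinder` (typically generated with `ceph auth get-or-create`). A keyring file has the following INI form; the key shown is a made-up placeholder:

```ini
# Keyring for the 'cinder' client (placeholder key, not a real secret).
[client.cinder]
    key = AQBnXvZdAAAAABAAexampleplaceholderkey0a==
```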
Configuration option = Default value | Description |
---|---|
`deferred_deletion_delay` = 0 | (Integer) Time delay in seconds before a volume is eligible for permanent removal after being tagged for deferred deletion. |
`deferred_deletion_purge_interval` = 60 | (Integer) Number of seconds between runs of the periodic task that purges volumes tagged for deletion. |
`enable_deferred_deletion` = False | (Boolean) Enable deferred deletion. Upon deletion, volumes are tagged for deletion but are only removed asynchronously at a later time. |
`rados_connect_timeout` = -1 | (Integer) Timeout value (in seconds) used when connecting to the Ceph cluster. If the value is less than 0, no timeout is set and the default librados value is used. |
`rados_connection_interval` = 5 | (Integer) Interval value (in seconds) between connection retries to the Ceph cluster. |
`rados_connection_retries` = 3 | (Integer) Number of retries if the connection to the Ceph cluster fails. |
`rbd_ceph_conf` = <> | (String) Path to the Ceph configuration file. |
`rbd_cluster_name` = ceph | (String) The name of the Ceph cluster. |
`rbd_exclusive_cinder_pool` = False | (Boolean) Set to True if the pool is used exclusively by Cinder. On exclusive use, the driver does not query images' provisioned size, as it will match the value calculated by the Cinder core code for allocated_capacity_gb. This reduces the load on the Ceph cluster as well as on the volume service. |
`rbd_flatten_volume_from_snapshot` = False | (Boolean) Flatten volumes created from snapshots to remove the dependency from volume to snapshot. |
`rbd_keyring_conf` = <> | (String) Path to the Ceph keyring file. Deprecated; see the warning above. |
`rbd_max_clone_depth` = 5 | (Integer) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. Note: lowering this value does not affect existing volumes whose clone depth already exceeds the new value. |
`rbd_pool` = rbd | (String) The RADOS pool where rbd volumes are stored. |
`rbd_secret_uuid` = None | (String) The libvirt UUID of the secret for the rbd_user volumes. |
`rbd_store_chunk_size` = 4 | (Integer) Volumes are chunked into objects of this size (in megabytes). |
`rbd_user` = None | (String) The RADOS client name for accessing rbd volumes. Set only when using cephx authentication. |
`replication_connect_timeout` = 5 | (Integer) Timeout value (in seconds) used when connecting to the Ceph cluster to demote or promote volumes. If the value is less than 0, no timeout is set and the default librados value is used. |
`report_dynamic_total_capacity` = True | (Boolean) Set to True for the driver to report total capacity as a dynamic value (used plus current free), or to False to report a static value (quota max bytes if defined, otherwise the global size of the cluster). |
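As an illustration of how the options above compose, the following hypothetical backend section enables deferred deletion and marks the pool as Cinder-exclusive. The values are examples only, not tuning recommendations:

```ini
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
# Skip per-image size queries because only Cinder writes to this pool.
rbd_exclusive_cinder_pool = True
# Tag deleted volumes, then purge them asynchronously after one hour.
enable_deferred_deletion = True
deferred_deletion_delay = 3600
deferred_deletion_purge_interval = 60
# Flatten clone chains once they nest five levels deep.
rbd_max_clone_depth = 5
```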