Storage Volumes, Disks

The cinder.volume.manager Module

The volume manager handles creating, attaching, and detaching persistent storage volumes.

Persistent storage volumes keep their state independent of instances. You can attach to an instance, terminate the instance, spawn a new instance (even one from a different image) and re-attach the volume with the same data intact.

Related Flags

volume_manager: The module name of a class derived from manager.Manager (default: cinder.volume.manager.Manager).
volume_driver: Used by Manager. Defaults to cinder.volume.drivers.lvm.LVMVolumeDriver.
volume_group: Name of the group that will contain exported volumes (default: cinder-volumes).
num_shell_tries: Number of times to attempt to run commands (default: 3).
class VolumeManager(volume_driver=None, service_name=None, *args, **kwargs)

Bases: cinder.manager.CleanableManager, cinder.manager.SchedulerDependentManager

Manages attachable block storage devices.

FAILBACK_SENTINEL = 'default'
RPC_API_VERSION = '3.15'
accept_transfer(context, volume_id, new_user, new_project)
attach_volume(context, volume_id, instance_uuid, host_name, mountpoint, mode, volume=None)

Updates db to show volume is attached.

attachment_delete(context, attachment_id, vref)

Delete/Detach the specified attachment.

Notifies the backend device that we’re detaching the specified attachment instance.

Parameters:
  • vref – Volume object associated with the attachment
  • attachment – Attachment reference object to remove

NOTE: if the attachment reference is None, we remove all existing attachments for the specified volume object.

attachment_update(context, vref, connector, attachment_id)

Update/Finalize an attachment.

This call updates a valid attachment record to associate with a volume and provides the caller with the proper connection info. Note that this call requires an attachment_ref. It is expected that, prior to this call, the volume and an attachment UUID have been reserved.

Parameters:
  • vref – Volume object to create attachment for
  • connector – Connector object to use for attachment creation
  • attachment_ref – ID of the attachment record to update

copy_volume_to_image(context, volume_id, image_meta)

Uploads the specified volume to Glance.

image_meta is a dictionary containing the following keys: ‘id’, ‘container_format’, ‘disk_format’
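For illustration, a minimal image_meta of that shape might look like the following (all values are hypothetical):

image_meta = {
    'id': '8a9e3b6c-0000-4000-8000-000000000000',  # Glance image UUID (made up)
    'container_format': 'bare',
    'disk_format': 'raw',
}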

create_group(context, group)

Creates the group.

create_group_from_src(context, group, group_snapshot=None, source_group=None)

Creates the group from source.

The source can be a group snapshot or a source group.

create_group_snapshot(context, group_snapshot)

Creates the group_snapshot.

create_snapshot(context, snapshot)

Creates and exports the snapshot.

create_volume(context, volume, request_spec=None, filter_properties=None, allow_reschedule=True)

Creates the volume.

delete_group(context, group)

Deletes group and the volumes in the group.

delete_group_snapshot(context, group_snapshot)

Deletes group_snapshot.

delete_snapshot(context, snapshot, unmanage_only=False, handle_quota=True)

Deletes and unexports snapshot.

delete_volume(context, volume, unmanage_only=False, cascade=False)

Deletes and unexports volume.

  1. Delete a volume (normal case): delete the volume and update quotas.
  2. Delete a migration volume: when deleting a volume that is part of a migration, skip quotas but still perform the database updates for the volume.
detach_volume(context, volume_id, attachment_id=None, volume=None)

Updates db to show volume is detached.

disable_replication(ctxt, group)

Disable replication.

enable_replication(ctxt, group)

Enable replication.

extend_volume(context, volume, new_size, reservations)
failover(context, secondary_backend_id=None)

Failover a backend to a secondary replication target.

Instructs a replication capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input and leaves it to the driver either to fail over to the only configured target or to choose a target on its own. All of the host's volumes will be passed on to the driver so that it can determine the replicated volumes on the host, if needed.

Parameters:
  • context – security context
  • secondary_backend_id – Specifies backend_id to fail over to
failover_completed(context, updates)

Finalize failover of this backend.

When a service is clustered and replicated the failover has 2 stages, one that does the failover of the volumes and another that finalizes the failover of the services themselves.

This method takes care of the last part and is called from the service doing the failover of the volumes after finished processing the volumes.

failover_host(context, secondary_backend_id=None)

Failover a backend to a secondary replication target.

Instructs a replication capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input and leaves it to the driver either to fail over to the only configured target or to choose a target on its own. All of the host's volumes will be passed on to the driver so that it can determine the replicated volumes on the host, if needed.

Parameters:
  • context – security context
  • secondary_backend_id – Specifies backend_id to fail over to
failover_replication(ctxt, group, allow_attached_volume=False, secondary_backend_id=None)

Failover replication.

finish_failover(context, service, updates)

Completion of the failover locally or via RPC.

freeze_host(context)

Freeze management plane on this backend.

Basically puts the control/management plane into a Read Only state. We should handle this in the scheduler, however this is provided to let the driver know in case it needs/wants to do something specific on the backend.

Parameters:context – security context
get_backup_device(ctxt, backup, want_objects=False)
get_capabilities(context, discover)

Get capabilities of backend storage.

get_manageable_snapshots(ctxt, marker, limit, offset, sort_keys, sort_dirs, want_objects=False)
get_manageable_volumes(ctxt, marker, limit, offset, sort_keys, sort_dirs, want_objects=False)
init_host(added_to_cluster=None, **kwargs)

Perform any required initialization.

init_host_with_rpc()
initialize_connection(context, volume, connector)

Prepare volume for connection from host represented by connector.

This method calls the driver initialize_connection and returns it to the caller. The connector parameter is a dictionary with information about the host that will connect to the volume in the following format:

{
    'ip': ip,
    'initiator': initiator,
}

ip: the ip address of the connecting machine

initiator: the iscsi initiator name of the connecting machine. This can be None if the connecting machine does not support iscsi connections.

driver is responsible for doing any necessary security setup and returning a connection_info dictionary in the following format:

{
    'driver_volume_type': driver_volume_type,
    'data': data,
}
driver_volume_type: a string to identify the type of volume. This can be used by the calling code to determine the strategy for connecting to the volume. This could be ‘iscsi’, ‘rbd’, ‘sheepdog’, etc.

data: this is the data that the calling code will use to connect to the volume. Keep in mind that this will be serialized to JSON in various places, so it should not contain any non-JSON data types.
initialize_connection_snapshot(ctxt, snapshot_id, connector)
is_working()

Return if Manager is ready to accept requests.

This is to inform Service class that in case of volume driver initialization failure the manager is actually down and not ready to accept any requests.

list_replication_targets(ctxt, group)

Provide a means to obtain replication targets for a group.

This method is used to find the replication_device config info. ‘backend_id’ is a required key in ‘replication_device’.

Response example for admin:

{
    'replication_targets': [
        {
            'backend_id': 'vendor-id-1',
            'unique_key': 'val1',
            ...
        },
        {
            'backend_id': 'vendor-id-2',
            'unique_key': 'val2',
            ...
        }
    ]
}

Response example for non-admin:

{
    'replication_targets': [
        {
            'backend_id': 'vendor-id-1'
        },
        {
            'backend_id': 'vendor-id-2'
        }
    ]
}

manage_existing(ctxt, volume, ref=None)
manage_existing_snapshot(ctxt, snapshot, ref=None)
migrate_volume(ctxt, volume, host, force_host_copy=False, new_type_id=None)

Migrate the volume to the specified host (called on source host).

migrate_volume_completion(ctxt, volume, new_volume, error=False)
publish_service_capabilities(context)

Collect driver status and then publish.

remove_export(context, volume_id)

Removes an export for a volume.

remove_export_snapshot(ctxt, snapshot_id)

Removes an export for a snapshot.

retype(context, volume, new_type_id, host, migration_policy='never', reservations=None, old_reservations=None)
revert_to_snapshot(context, volume, snapshot)

Revert a volume to a snapshot.

The process of reverting to a snapshot consists of several steps:

  1. Create a snapshot for backup (in case of data loss).
  2.1. Use the driver’s specific logic to revert the volume.
  2.2. Try the generic way to revert the volume if the driver’s method is missing.
  3. Delete the backup snapshot.

secure_file_operations_enabled(ctxt, volume)
target = <Target version=3.15>
terminate_connection(context, volume_id, connector, force=False)

Cleanup connection from host represented by connector.

The format of connector is the same as for initialize_connection.

terminate_connection_snapshot(ctxt, snapshot_id, connector, force=False)
thaw_host(context)

Unfreeze the management plane on this backend.

Basically puts the control/management plane back into a normal state. We should handle this in the scheduler, however this is provided to let the driver know in case it needs/wants to do something specific on the backend.

Parameters:context – security context
update_group(context, group, add_volumes=None, remove_volumes=None)

Updates group.

Update group by adding volumes to the group, or removing volumes from the group.

update_migrated_volume(ctxt, volume, new_volume, volume_status)

Finalize migration process on backend device.

The cinder.volume.driver Module

Drivers for volumes.

class BaseVD(execute=<function execute>, *args, **kwargs)

Bases: object

Executes commands relating to Volumes.

Base driver for the Cinder volume control path. This includes the supported/required implementation for API calls. It also provides generic implementations of core features such as cloning and copy_image_to_volume, so drivers that inherit from this base class and do not offer their own implementation can fall back on the general solution here.

The key thing to keep in mind is that drivers derived from this class are intended to implement ONLY control path details (create, delete, extend, …), while transport or data path related implementation should live in a member object that we call a connector. For example, the LVM driver should not implement iSCSI methods itself; instead it should call whatever connector it has configured via the conf file (iSCSI {LIO, TGT, IET}, FC, etc.).

In the base class and for example the LVM driver we do this via a has-a relationship and just provide an interface to the specific connector methods. How you do this in your own driver is of course up to you.

REPLICATION_FEATURE_CHECKERS = {'a/a': 'failover_completed', 'v2.1': 'failover_host'}
SUPPORTED = True
SUPPORTS_ACTIVE_ACTIVE = False
VERSION = 'N/A'
accept_transfer(context, volume, new_user, new_project)
after_volume_copy(context, src_vol, dest_vol, remote=None)

Driver-specific actions after copying volume data.

This method will be called after _copy_volume_data during volume migration.

attach_volume(context, volume, instance_uuid, host_name, mountpoint)

Callback for volume attached to instance or host.

backup_use_temp_snapshot()
before_volume_copy(context, src_vol, dest_vol, remote=None)

Driver-specific actions before copying volume data.

This method will be called before _copy_volume_data during volume migration.

check_for_setup_error()
clear_download(context, volume)

Clean up after an interrupted image copy.

clone_image(context, volume, image_location, image_meta, image_service)
copy_image_to_encrypted_volume(context, volume, image_service, image_id)

Fetch image from image_service and write to encrypted volume.

This attaches the encryptor layer when connecting to the volume.

copy_image_to_volume(context, volume, image_service, image_id)

Fetch image from image_service and write to unencrypted volume.

This does not attach an encryptor layer when connecting to the volume.

copy_volume_to_image(context, volume, image_service, image_meta)

Copy the volume to the specified image.

create_cloned_volume(volume, src_vref)

Creates a clone of the specified volume.

If volume_type extra specs includes ‘replication: <is> True’ the driver needs to create a volume replica (secondary) and setup replication between the newly created volume and the secondary volume.

create_export(context, volume, connector)

Exports the volume.

Can optionally return a Dictionary of changes to the volume object to be persisted.

create_export_snapshot(context, snapshot, connector)

Exports the snapshot.

Can optionally return a Dictionary of changes to the snapshot object to be persisted.

create_group(context, group)

Creates a group.

Parameters:
  • context – the context of the caller.
  • group – the Group object of the group to be created.
Returns:

model_update

model_update will be in this format: {‘status’: xxx, ……}.

If the status in model_update is ‘error’, the manager will throw an exception and it will be caught in the try-except block in the manager. If the driver throws an exception, the manager will also catch it in the try-except block. The group status in the db will be changed to ‘error’.

For a successful operation, the driver can either build the model_update and return it or return None. The group status will be set to ‘available’.
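A minimal sketch of that contract; the backend-specific call is omitted and the class name is hypothetical:

class ExampleGroupDriver(object):
    def create_group(self, context, group):
        # ... create the group on the backend (omitted) ...
        # Returning None or an explicit model_update both signal success;
        # returning {'status': 'error'} makes the manager raise.
        return {'status': 'available'}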

create_group_from_src(context, group, volumes, group_snapshot=None, snapshots=None, source_group=None, source_vols=None)

Creates a group from source.

Parameters:
  • context – the context of the caller.
  • group – the Group object to be created.
  • volumes – a list of Volume objects in the group.
  • group_snapshot – the GroupSnapshot object as source.
  • snapshots – a list of Snapshot objects in group_snapshot.
  • source_group – the Group object as source.
  • source_vols – a list of Volume objects in the source_group.
Returns:

model_update, volumes_model_update

The source can be group_snapshot or a source_group.

param volumes is a list of objects retrieved from the db. It cannot be assigned to volumes_model_update. volumes_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

To be consistent with other volume operations, the manager will assume the operation is successful if no exception is thrown by the driver. For a successful operation, the driver can either build the model_update and volumes_model_update and return them or return None, None.

create_group_snapshot(context, group_snapshot, snapshots)

Creates a group_snapshot.

Parameters:
  • context – the context of the caller.
  • group_snapshot – the GroupSnapshot object to be created.
  • snapshots – a list of Snapshot objects in the group_snapshot.
Returns:

model_update, snapshots_model_update

param snapshots is a list of Snapshot objects. It cannot be assigned to snapshots_model_update. snapshots_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

The driver should populate snapshots_model_update and model_update and return them.

The manager will check snapshots_model_update and update db accordingly for each snapshot. If the driver successfully deleted some snapshots but failed to delete others, it should set statuses of the snapshots accordingly so that the manager can update db correctly.

If the status in any entry of snapshots_model_update is ‘error’, the status in model_update will be set to the same if it is not already ‘error’.

If the status in model_update is ‘error’, the manager will raise an exception and the status of group_snapshot will be set to ‘error’ in the db. If snapshots_model_update is not returned by the driver, the manager will set the status of every snapshot to ‘error’ in the except block.

If the driver raises an exception during the operation, it will be caught by the try-except block in the manager and the statuses of group_snapshot and all snapshots will be set to ‘error’.

For a successful operation, the driver can either build the model_update and snapshots_model_update and return them or return None, None. The statuses of group_snapshot and all snapshots will be set to ‘available’ at the end of the manager function.

create_volume(volume)

Creates a volume.

Can optionally return a Dictionary of changes to the volume object to be persisted.

If volume_type extra specs includes ‘capabilities:replication <is> True’ the driver needs to create a volume replica (secondary), and setup replication between the newly created volume and the secondary volume. Returned dictionary should include:

volume['replication_status'] = 'copying'
volume['replication_extended_status'] = <driver specific value>
volume['driver_data'] = <driver specific value>
delete_group(context, group, volumes)

Deletes a group.

Parameters:
  • context – the context of the caller.
  • group – the Group object of the group to be deleted.
  • volumes – a list of Volume objects in the group.
Returns:

model_update, volumes_model_update

param volumes is a list of objects retrieved from the db. It cannot be assigned to volumes_model_update. volumes_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

The driver should populate volumes_model_update and model_update and return them.

The manager will check volumes_model_update and update db accordingly for each volume. If the driver successfully deleted some volumes but failed to delete others, it should set statuses of the volumes accordingly so that the manager can update db correctly.

If the status in any entry of volumes_model_update is ‘error_deleting’ or ‘error’, the status in model_update will be set to the same if it is not already ‘error_deleting’ or ‘error’.

If the status in model_update is ‘error_deleting’ or ‘error’, the manager will raise an exception and the status of the group will be set to ‘error’ in the db. If volumes_model_update is not returned by the driver, the manager will set the status of every volume in the group to ‘error’ in the except block.

If the driver raises an exception during the operation, it will be caught by the try-except block in the manager. The statuses of the group and all volumes in it will be set to ‘error’.

For a successful operation, the driver can either build the model_update and volumes_model_update and return them or return None, None. The statuses of the group and all volumes will be set to ‘deleted’ after the manager deletes them from db.
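For illustration, a delete_group result where one volume could not be removed might look like this (the IDs are hypothetical):

model_update = {'status': 'error_deleting'}
volumes_model_update = [
    {'id': '11111111-1111-1111-1111-111111111111', 'status': 'deleted'},
    {'id': '22222222-2222-2222-2222-222222222222', 'status': 'error_deleting'},
]
# The driver would return: model_update, volumes_model_update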

delete_group_snapshot(context, group_snapshot, snapshots)

Deletes a group_snapshot.

Parameters:
  • context – the context of the caller.
  • group_snapshot – the GroupSnapshot object to be deleted.
  • snapshots – a list of Snapshot objects in the group_snapshot.
Returns:

model_update, snapshots_model_update

param snapshots is a list of objects. It cannot be assigned to snapshots_model_update. snapshots_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

The driver should populate snapshots_model_update and model_update and return them.

The manager will check snapshots_model_update and update db accordingly for each snapshot. If the driver successfully deleted some snapshots but failed to delete others, it should set statuses of the snapshots accordingly so that the manager can update db correctly.

If the status in any entry of snapshots_model_update is ‘error_deleting’ or ‘error’, the status in model_update will be set to the same if it is not already ‘error_deleting’ or ‘error’.

If the status in model_update is ‘error_deleting’ or ‘error’, the manager will raise an exception and the status of group_snapshot will be set to ‘error’ in the db. If snapshots_model_update is not returned by the driver, the manager will set the status of every snapshot to ‘error’ in the except block.

If the driver raises an exception during the operation, it will be caught by the try-except block in the manager and the statuses of group_snapshot and all snapshots will be set to ‘error’.

For a successful operation, the driver can either build the model_update and snapshots_model_update and return them or return None, None. The statuses of group_snapshot and all snapshots will be set to ‘deleted’ after the manager deletes them from db.

delete_volume(volume)

Deletes a volume.

If volume_type extra specs includes ‘replication: <is> True’ then the driver needs to delete the volume replica too.

detach_volume(context, volume, attachment=None)

Callback for volume detached.

disable_replication(context, group, volumes)

Disables replication for a group and volumes in the group.

Parameters:
  • group – group object
  • volumes – list of volume objects in the group
Returns:

model_update - dict of group updates

Returns:

volume_model_updates - list of dicts of volume updates

do_setup(context)

Any initialization the volume driver does while starting.

enable_replication(context, group, volumes)

Enables replication for a group and volumes in the group.

Parameters:
  • group – group object
  • volumes – list of volume objects in the group
Returns:

model_update - dict of group updates

Returns:

volume_model_updates - list of dicts of volume updates

ensure_export(context, volume)

Synchronously recreates an export for a volume.

extend_volume(volume, new_size)
failover(context, volumes, secondary_id=None)

Like failover_host, but for a host that is clustered.

Most of the time this will be the exact same behavior as failover_host, so if it is not overridden, that is assumed to be the case.

failover_completed(context, active_backend_id=None)

This method is called after failover for clustered backends.

failover_host(context, volumes, secondary_id=None)

Failover a backend to a secondary replication target.

Instructs a replication capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input and leaves it to the driver either to fail over to the only configured target or to choose a target on its own. All of the host's volumes will be passed on to the driver so that it can determine the replicated volumes on the host, if needed.

The response is a tuple including the new target backend_id AND a list of dictionaries with volume_id and updates. Key fields to consider (for attaching failed-over volumes):
  • provider_location
  • provider_auth
  • provider_id
  • replication_status

Parameters:
  • context – security context
  • volumes – list of volume objects, in case the driver needs to take action on them in some way
  • secondary_id – Specifies rep target backend to fail over to
Returns:

ID of the backend that was failed-over to and model update for volumes
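A hypothetical return value illustrating that shape; the backend ID, volume ID, and update fields below are made up:

failover_result = (
    'secondary-backend-1',                 # new active backend_id (hypothetical)
    [{'volume_id': '11111111-1111-1111-1111-111111111111',
      'updates': {'replication_status': 'failed-over',
                  'provider_id': 'remote-lun-42'}}],
)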

failover_replication(context, group, volumes, secondary_backend_id=None)

Fails over replication for a group and volumes in the group.

Parameters:
  • group – group object
  • volumes – list of volume objects in the group
  • secondary_backend_id – backend_id of the secondary site
Returns:

model_update - dict of group updates

Returns:

volume_model_updates - list of dicts of volume updates

freeze_backend(context)

Notify the backend that it’s frozen.

This is used to prohibit the creation of any new resources on the backend, or any modifications to existing items on a backend. We set/enforce this by not allowing scheduling of new volumes to the specified backend, and by checking at the API for modifications to resources and failing them.

In most cases the driver may not need to do anything, but this provides a handle if they need it.

Parameters:context – security context
Response:True|False
get_backup_device(context, backup)

Get a backup device from an existing volume.

The function returns a volume or snapshot to the backup service; the backup service then attaches the device and performs the backup.

get_default_filter_function()

Get the default filter_function string.

Each driver could overwrite the method to return a well-known default string if it is available.

Returns:None
get_default_goodness_function()

Get the default goodness_function string.

Each driver could overwrite the method to return a well-known default string if it is available.

Returns:None
get_filter_function()

Get filter_function string.

Returns either the string from the driver instance or global section in cinder.conf. If nothing is specified in cinder.conf, then try to find the default filter_function. When None is returned the scheduler will always pass the driver instance.

Returns:a filter_function string or None
get_goodness_function()

Get the goodness_function string.

Returns either the string from the driver instance or global section in cinder.conf. If nothing is specified in cinder.conf, then try to find the default goodness_function. When None is returned the scheduler will give the lowest score to the driver instance.

Returns:a goodness_function string or None
get_pool(volume)

Return the name of the pool the volume resides on.

Parameters:volume – The volume hosted by the driver.
Returns:name of the pool where given volume is in.
get_prefixed_property(property)

Return prefixed property name

Returns:a prefixed property name string or None
get_replication_error_status(context, groups)

Returns error info for replicated groups and their volumes.

Returns:group_model_updates - list of dicts of group updates if an error happens. For example, a dict for a group can be as follows:

{'group_id': xxxx,
 'replication_status': fields.ReplicationStatus.ERROR}

Returns:volume_model_updates - list of dicts of volume updates if an error happens. For example, a dict for a volume can be as follows:

{'volume_id': xxxx,
 'replication_status': fields.ReplicationStatus.ERROR}
get_replication_updates(context)

Old replication update method; deprecated.

get_version()

Get the current version of this driver.

get_volume_stats(refresh=False)

Return the current state of the volume service.

If ‘refresh’ is True, run the update first.

For replication the following state should be reported: replication = True (None or false disables replication)

init_capabilities()

Obtain backend volume stats and capabilities list.

This stores a dictionary which consists of two parts. The first part includes static backend capabilities which are obtained by get_volume_stats(). The second part is properties, which includes parameters corresponding to extra specs. This properties part consists of cinder standard capabilities and vendor unique properties.

Using this capabilities list, an operator can manage/configure the backend using key/value pairs from the capabilities without specific knowledge of the backend.

initialize_connection(volume, connector)

Allow connection to connector and return connection info.

Parameters:
  • volume – The volume to be attached
  • connector – Dictionary containing information about what is being connected to.
Returns conn_info:
 

A dictionary of connection information.

initialize_connection_snapshot(snapshot, connector, **kwargs)

Allow connection to connector and return connection info.

Parameters:
  • snapshot – The snapshot to be attached
  • connector – Dictionary containing information about what is being connected to.
Returns conn_info:
 

A dictionary of connection information. This can optionally include an “initiator_updates” field.

The “initiator_updates” field must be a dictionary containing a “set_values” and/or “remove_values” field. The “set_values” field must be a dictionary of key-value pairs to be set/updated in the db. The “remove_values” field must be a list of keys, previously set with “set_values”, that will be deleted from the db.
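A hypothetical conn_info illustrating the optional “initiator_updates” field described above; the transport and the keys under set_values are driver specific and made up here:

conn_info = {
    'driver_volume_type': 'iscsi',
    'data': {'target_lun': 1},                      # trimmed; see initialize_connection
    'initiator_updates': {
        'set_values': {'temp_key': 'temp_value'},   # hypothetical key/value to store in the db
        'remove_values': ['old_temp_key'],          # keys previously set via set_values
    },
}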

initialized
manage_existing(volume, existing_ref)

Manage existing stub.

This is for drivers that don’t implement manage_existing().

migrate_volume(context, volume, host)

Migrate volume stub.

This is for drivers that don’t implement an enhanced version of this operation.

remove_export(context, volume)

Removes an export for a volume.

remove_export_snapshot(context, snapshot)

Removes an export for a snapshot.

retype(context, volume, new_type, diff, host)
secure_file_operations_enabled()

Determine if driver is running in Secure File Operations mode.

The Cinder Volume driver needs to query if this driver is running in a secure file operations mode. By default, it is False: any driver that does support secure file operations should override this method.

set_initialized()
set_throttle()
snapshot_remote_attachable()
supported
classmethod supports_replication_feature(feature)

Check if driver class supports replication features.

Feature is a string that must be one of:
  • v2.1
  • a/a
terminate_connection(volume, connector, **kwargs)

Disallow connection from connector.

Parameters:
  • volume – The volume to be disconnected.
  • connector – A dictionary describing the connection with details about the initiator. Can be None.
terminate_connection_snapshot(snapshot, connector, **kwargs)

Disallow connection from connector.

thaw_backend(context)

Notify the backend that it’s unfrozen/thawed.

Returns the backend to a normal state after a freeze operation.

In most cases the driver may not need to do anything, but this provides a handle if they need it.

Parameters:context – security context
Response:True|False
unmanage(volume)

Unmanage stub.

This is for drivers that don’t implement unmanage().

update_group(context, group, add_volumes=None, remove_volumes=None)

Updates a group.

Parameters:
  • context – the context of the caller.
  • group – the Group object of the group to be updated.
  • add_volumes – a list of Volume objects to be added.
  • remove_volumes – a list of Volume objects to be removed.
Returns:

model_update, add_volumes_update, remove_volumes_update

model_update is a dictionary that the driver wants the manager to update upon a successful return. If None is returned, the manager will set the status to ‘available’.

add_volumes_update and remove_volumes_update are lists of dictionaries that the driver wants the manager to update upon a successful return. Note that each entry requires a {‘id’: xxx} so that the correct volume entry can be updated. If None is returned, the volume will remain in its original status. Also note that you cannot directly assign add_volumes to add_volumes_update as add_volumes is a list of volume objects and cannot be used for db update directly. Same with remove_volumes.

If the driver throws an exception, the status of the group as well as those of the volumes to be added/removed will be set to ‘error’.
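A brief illustration of those return values, with hypothetical volume IDs:

model_update = None   # the manager will set the group status to 'available'
add_volumes_update = [{'id': '11111111-1111-1111-1111-111111111111'}]
remove_volumes_update = [{'id': '22222222-2222-2222-2222-222222222222'}]
# The driver would return: model_update, add_volumes_update, remove_volumes_update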

update_migrated_volume(ctxt, volume, new_volume, original_volume_status)

Return model update for migrated volume.

Each driver implementing this method needs to be responsible for the values of _name_id and provider_location. If None is returned or either key is not set, it means the volume table does not need to change the value(s) for the key(s). The return format is {“_name_id”: value, “provider_location”: value}.

Parameters:
  • volume – The original volume that was migrated to this backend
  • new_volume – The migration volume object that was created on this backend as part of the migration process
  • original_volume_status – The status of the original volume
Returns:

model_update to update DB with any needed changes
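For example, a driver that successfully renamed the backend object during migration might return something like the following (values are hypothetical):

model_update = {
    '_name_id': None,   # rename succeeded, so no _name_id mapping is needed
    'provider_location': 'pool-a/volume-11111111-1111-1111-1111-111111111111',
}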

update_provider_info(volumes, snapshots)

Get provider info updates from driver.

Parameters:
  • volumes – List of Cinder volumes to check for updates
  • snapshots – List of Cinder snapshots to check for updates
Returns:

tuple (volume_updates, snapshot_updates)

where each volume update is of the form {'id': uuid, 'provider_id': <provider-id>} and each snapshot update is of the form {'id': uuid, 'provider_id': <provider-id>}
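A hypothetical return value of that shape:

volume_updates = [
    {'id': '11111111-1111-1111-1111-111111111111', 'provider_id': 'lun-0001'},
]
snapshot_updates = [
    {'id': '22222222-2222-2222-2222-222222222222', 'provider_id': 'snap-0001'},
]
# The driver would return: (volume_updates, snapshot_updates)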

validate_connector(connector)

Fail if connector doesn’t contain all the data needed by driver.

static validate_connector_has_setting(connector, setting)
class CloneableImageVD

Bases: object

clone_image(volume, image_location, image_id, image_meta, image_service)

Create a volume efficiently from an existing image.

image_location is a string whose format depends on the image service backend in use. The driver should use it to determine whether cloning is possible.

image_id is a string which represents the id of the image. It can be used by the driver to introspect internal stores or the registry to do an efficient image clone.

image_meta is a dictionary that includes ‘disk_format’ (e.g. raw, qcow2) and other image attributes that allow drivers to decide whether they can clone the image without first requiring conversion.

image_service is the reference of the image_service to use. Note that this is needed to be passed here for drivers that will want to fetch images from the image service directly.

Returns a dict of volume properties (e.g. provider_location) and a boolean indicating whether cloning occurred.

class ConsistencyGroupVD

Bases: object

This class has been deprecated and should not be inherited.

class ExtendVD

Bases: object

This class has been deprecated and should not be inherited.

class FibreChannelDriver(*args, **kwargs)

Bases: cinder.volume.driver.VolumeDriver

Executes commands relating to Fibre Channel volumes.

get_volume_stats(refresh=False)

Get volume stats.

If ‘refresh’ is True, update the stats first.

initialize_connection(volume, connector)

Initializes the connection and returns connection info.

The driver returns a driver_volume_type of ‘fibre_channel’. The target_wwn can be a single entry or a list of wwns that correspond to the list of remote wwn(s) that will export the volume. Example return values:

{
    'driver_volume_type': 'fibre_channel',
    'data': {
        'target_discovered': True,
        'target_lun': 1,
        'target_wwn': '1234567890123',
        'discard': False,
    }
}

or

{
    'driver_volume_type': 'fibre_channel',
    'data': {
        'target_discovered': True,
        'target_lun': 1,
        'target_wwn': ['1234567890123', '0987654321321'],
        'discard': False,
    }
}
validate_connector(connector)

Fail if connector doesn’t contain all the data needed by driver.

Do a check on the connector and ensure that it has wwnns, wwpns.

static validate_connector_has_setting(connector, setting)

Test for non-empty setting in connector.

class ISCSIDriver(*args, **kwargs)

Bases: cinder.volume.driver.VolumeDriver

Executes commands relating to ISCSI volumes.

We make use of model provider properties as follows:

provider_location
if present, contains the iSCSI target information in the same format as an ietadm discovery i.e. ‘<ip>:<port>,<portal> <target IQN>’
provider_auth
if present, contains a space-separated triple: ‘<auth method> <auth username> <auth password>’. CHAP is the only auth_method in use at the moment.
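As an illustration of those formats (the target IQN, portal group tag, and CHAP credentials below are all hypothetical):

volume = {}
volume['provider_location'] = '10.0.0.1:3260,1 iqn.2010-10.org.openstack:volume-00000001'
volume['provider_auth'] = 'CHAP chap_user chap_password'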
get_volume_stats(refresh=False)

Get volume stats.

If ‘refresh’ is True, update the stats first.

initialize_connection(volume, connector)

Initializes the connection and returns connection info.

The iscsi driver returns a driver_volume_type of ‘iscsi’. The format of the driver data is defined in _get_iscsi_properties. Example return value:

{
    'driver_volume_type': 'iscsi',
    'data': {
        'target_discovered': True,
        'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
        'target_portal': '127.0.0.1:3260',
        'volume_id': 1,
        'discard': False,
    }
}

If the backend driver supports multiple connections for multipath and for single path with failover, “target_portals”, “target_iqns”, “target_luns” are also populated:

{
    'driver_volume_type': 'iscsi',
    'data': {
        'target_discovered': False,
        'target_iqn': 'iqn.2010-10.org.openstack:volume1',
        'target_iqns': ['iqn.2010-10.org.openstack:volume1',
                        'iqn.2010-10.org.openstack:volume1-2'],
        'target_portal': '10.0.0.1:3260',
        'target_portals': ['10.0.0.1:3260', '10.0.1.1:3260'],
        'target_lun': 1,
        'target_luns': [1, 1],
        'volume_id': 1,
        'discard': False,
    }
}
terminate_connection(volume, connector, **kwargs)
validate_connector(connector)
class ISERDriver(*args, **kwargs)

Bases: cinder.volume.driver.ISCSIDriver

Executes commands relating to ISER volumes.

We make use of model provider properties as follows:

provider_location
if present, contains the iSER target information in the same format as an ietadm discovery i.e. ‘<ip>:<port>,<portal> <target IQN>’
provider_auth
if present, contains a space-separated triple: ‘<auth method> <auth username> <auth password>’. CHAP is the only auth_method in use at the moment.
initialize_connection(volume, connector)

Initializes the connection and returns connection info.

The iser driver returns a driver_volume_type of ‘iser’. The format of the driver data is defined in _get_iser_properties. Example return value:

{
    'driver_volume_type': 'iser',
    'data': {
        'target_discovered': True,
        'target_iqn': 'iqn.2010-10.org.iser.openstack:volume-00000001',
        'target_portal': '127.0.0.1:3260',
        'volume_id': 1,
    }
}
class LocalVD

Bases: object

This class has been deprecated and should not be inherited.

class ManageableSnapshotsVD

Bases: object

get_manageable_snapshots(cinder_snapshots, marker, limit, offset, sort_keys, sort_dirs)

List snapshots on the backend available for management by Cinder.

Returns a list of dictionaries, each specifying a snapshot in the host, with the following keys:
  • reference (dictionary): The reference for a snapshot, which can be passed to “manage_existing_snapshot”.
  • size (int): The size of the snapshot according to the storage backend, rounded up to the nearest GB.
  • safe_to_manage (boolean): Whether or not this snapshot is safe to manage according to the storage backend. For example, is the snapshot in use or invalid for any reason.
  • reason_not_safe (string): If safe_to_manage is False, the reason why.
  • cinder_id (string): If already managed, provide the Cinder ID.
  • extra_info (string): Any extra information to return to the user
  • source_reference (string): Similar to “reference”, but for the snapshot’s source volume.
Parameters:
  • cinder_snapshots – A list of snapshots in this host that Cinder currently manages, used to determine if a snapshot is manageable or not.
  • marker – The last item of the previous page; we return the next results after this value (after sorting)
  • limit – Maximum number of items to return
  • offset – Number of items to skip after marker
  • sort_keys – List of keys to sort results by (valid keys are ‘identifier’ and ‘size’)
  • sort_dirs – List of directions to sort by, corresponding to sort_keys (valid directions are ‘asc’ and ‘desc’)
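A sketch of a single entry in the returned list; the reference keys are driver specific, source_reference here mirrors the shape of reference, and all values are hypothetical:

manageable_snapshot = {
    'reference': {'source-name': 'backend-snap-01'},
    'size': 1,                                              # GB, rounded up
    'safe_to_manage': False,
    'reason_not_safe': 'already managed',
    'cinder_id': '11111111-1111-1111-1111-111111111111',
    'extra_info': None,
    'source_reference': {'source-name': 'backend-vol-01'},
}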
manage_existing_snapshot(snapshot, existing_ref)

Brings an existing backend storage object under Cinder management.

existing_ref is passed straight through from the API request’s manage_existing_ref value, and it is up to the driver how this should be interpreted. It should be sufficient to identify a storage object that the driver should somehow associate with the newly-created cinder snapshot structure.

There are two ways to do this:

  1. Rename the backend storage object so that it matches the snapshot[‘name’] which is how drivers traditionally map between a cinder snapshot and the associated backend storage object.
  2. Place some metadata on the snapshot, or somewhere in the backend, that allows other driver requests (e.g. delete) to locate the backend storage object when required.

If the existing_ref doesn’t make sense, or doesn’t refer to an existing backend storage object, raise a ManageExistingInvalidReference exception.

Parameters:
  • snapshot – Cinder volume snapshot to manage
  • existing_ref – Driver-specific information used to identify a volume snapshot
manage_existing_snapshot_get_size(snapshot, existing_ref)

Return size of snapshot to be managed by manage_existing.

When calculating the size, round up to the next GB.

Parameters:
  • snapshot – Cinder volume snapshot to manage
  • existing_ref – Driver-specific information used to identify a volume snapshot
Returns size:

Volume snapshot size in GiB (integer)

unmanage_snapshot(snapshot)

Removes the specified snapshot from Cinder management.

Does not delete the underlying backend storage object.

For most drivers, this will not need to do anything. However, some drivers might use this call as an opportunity to clean up any Cinder-specific configuration that they have associated with the backend storage object.

Parameters:snapshot – Cinder volume snapshot to unmanage
class ManageableVD

Bases: object

get_manageable_volumes(cinder_volumes, marker, limit, offset, sort_keys, sort_dirs)

List volumes on the backend available for management by Cinder.

Returns a list of dictionaries, each specifying a volume in the host, with the following keys:
  • reference (dictionary): The reference for a volume, which can be passed to “manage_existing”.
  • size (int): The size of the volume according to the storage backend, rounded up to the nearest GB.
  • safe_to_manage (boolean): Whether or not this volume is safe to manage according to the storage backend. For example, is the volume in use or invalid for any reason.
  • reason_not_safe (string): If safe_to_manage is False, the reason why.
  • cinder_id (string): If already managed, provide the Cinder ID.
  • extra_info (string): Any extra information to return to the user
Parameters:
  • cinder_volumes – A list of volumes in this host that Cinder currently manages, used to determine if a volume is manageable or not.
  • marker – The last item of the previous page; we return the next results after this value (after sorting)
  • limit – Maximum number of items to return
  • offset – Number of items to skip after marker
  • sort_keys – List of keys to sort results by (valid keys are ‘identifier’ and ‘size’)
  • sort_dirs – List of directions to sort by, corresponding to sort_keys (valid directions are ‘asc’ and ‘desc’)
manage_existing(volume, existing_ref)

Brings an existing backend storage object under Cinder management.

existing_ref is passed straight through from the API request’s manage_existing_ref value, and it is up to the driver how this should be interpreted. It should be sufficient to identify a storage object that the driver should somehow associate with the newly-created cinder volume structure.

There are two ways to do this:

  1. Rename the backend storage object so that it matches the volume[‘name’], which is how drivers traditionally map between a cinder volume and the associated backend storage object.
  2. Place some metadata on the volume, or somewhere in the backend, that allows other driver requests (e.g. delete, clone, attach, detach…) to locate the backend storage object when required.

If the existing_ref doesn’t make sense, or doesn’t refer to an existing backend storage object, raise a ManageExistingInvalidReference exception.

The volume may have a volume_type, and the driver can inspect that and compare against the properties of the referenced backend storage object. If they are incompatible, raise a ManageExistingVolumeTypeMismatch, specifying a reason for the failure.

Parameters:
  • volume – Cinder volume to manage
  • existing_ref – Driver-specific information used to identify a volume
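A minimal sketch of the error path described above; ExampleDriver, the ‘source-name’ reference key, and _find_backend_object are all hypothetical and not part of the interface:

from cinder import exception


class ExampleDriver(object):
    def manage_existing(self, volume, existing_ref):
        name = existing_ref.get('source-name')      # driver-specific reference key
        if name is None or self._find_backend_object(name) is None:
            raise exception.ManageExistingInvalidReference(
                existing_ref=existing_ref,
                reason='No matching storage object found on the backend.')
        # ... rename the object or record metadata, then return any model update ...

    def _find_backend_object(self, name):
        # Hypothetical backend lookup; returns None when the object is missing.
        return None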
manage_existing_get_size(volume, existing_ref)

Return size of volume to be managed by manage_existing.

When calculating the size, round up to the next GB.

Parameters:
  • volume – Cinder volume to manage
  • existing_ref – Driver-specific information used to identify a volume
Returns size:

Volume size in GiB (integer)

unmanage(volume)

Removes the specified volume from Cinder management.

Does not delete the underlying backend storage object.

For most drivers, this will not need to do anything. However, some drivers might use this call as an opportunity to clean up any Cinder-specific configuration that they have associated with the backend storage object.

Parameters:volume – Cinder volume to unmanage
class MigrateVD

Bases: object

migrate_volume(context, volume, host)

Migrate the volume to the specified host.

Returns a boolean indicating whether the migration occurred, as well as model_update.

Parameters:
  • context – Context
  • volume – A dictionary describing the volume to migrate
  • host – A dictionary describing the host to migrate to, where host[‘host’] is its name, and host[‘capabilities’] is a dictionary of its reported capabilities.
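A brief sketch of the return contract; _can_reach is a hypothetical helper, and returning (False, None) typically lets the manager fall back to its generic host-copy migration:

class ExampleMigrateDriver(object):
    def migrate_volume(self, context, volume, host):
        if not self._can_reach(host):
            return False, None       # backend could not migrate; manager handles it
        # ... move the data on the backend ...
        return True, None            # migrated; no model_update needed

    def _can_reach(self, host):
        # Hypothetical check against host['capabilities'].
        return False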
class ProxyVD

Bases: object

Proxy Volume Driver to mark proxy drivers

If a driver uses a proxy class (e.g. by using __setattr__ and __getattr__) without directly inheriting from the base volume driver, this class can help mark it and retrieve the actual driver object in use.

class SnapshotVD

Bases: object

This class has been deprecated and should not be inherited.

class TransferVD

Bases: object

This class has been deprecated and should not be inherited.

class VolumeDriver(execute=<function execute>, *args, **kwargs)

Bases: cinder.volume.driver.ManageableVD, cinder.volume.driver.CloneableImageVD, cinder.volume.driver.ManageableSnapshotsVD, cinder.volume.driver.MigrateVD, cinder.volume.driver.BaseVD

accept_transfer(context, volume, new_user, new_project)
check_for_setup_error()
clear_download(context, volume)
clone_image(volume, image_location, image_id, image_meta, image_service)
create_cgsnapshot(context, cgsnapshot, snapshots)

Creates a cgsnapshot.

Parameters:
  • context – the context of the caller.
  • cgsnapshot – the dictionary of the cgsnapshot to be created.
  • snapshots – a list of snapshot dictionaries in the cgsnapshot.
Returns:

model_update, snapshots_model_update

param snapshots is retrieved directly from the db. It is a list of cinder.db.sqlalchemy.models.Snapshot to be precise. It cannot be assigned to snapshots_model_update. snapshots_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

The driver should populate snapshots_model_update and model_update and return them.

The manager will check snapshots_model_update and update db accordingly for each snapshot. If the driver successfully deleted some snapshots but failed to delete others, it should set statuses of the snapshots accordingly so that the manager can update db correctly.

If the status in any entry of snapshots_model_update is ‘error’, the status in model_update will be set to the same if it is not already ‘error’.

If the status in model_update is ‘error’, the manager will raise an exception and the status of cgsnapshot will be set to ‘error’ in the db. If snapshots_model_update is not returned by the driver, the manager will set the status of every snapshot to ‘error’ in the except block.

If the driver raises an exception during the operation, it will be caught by the try-except block in the manager and the statuses of cgsnapshot and all snapshots will be set to ‘error’.

For a successful operation, the driver can either build the model_update and snapshots_model_update and return them or return None, None. The statuses of cgsnapshot and all snapshots will be set to ‘available’ at the end of the manager function.

create_consistencygroup(context, group)

Creates a consistencygroup.

Parameters:
  • context – the context of the caller.
  • group – the dictionary of the consistency group to be created.
Returns:

model_update

model_update will be in this format: {‘status’: xxx, ……}.

If the status in model_update is ‘error’, the manager will throw an exception and it will be caught in the try-except block in the manager. If the driver throws an exception, the manager will also catch it in the try-except block. The group status in the db will be changed to ‘error’.

For a successful operation, the driver can either build the model_update and return it or return None. The group status will be set to ‘available’.

create_consistencygroup_from_src(context, group, volumes, cgsnapshot=None, snapshots=None, source_cg=None, source_vols=None)

Creates a consistencygroup from source.

Parameters:
  • context – the context of the caller.
  • group – the dictionary of the consistency group to be created.
  • volumes – a list of volume dictionaries in the group.
  • cgsnapshot – the dictionary of the cgsnapshot as source.
  • snapshots – a list of snapshot dictionaries in the cgsnapshot.
  • source_cg – the dictionary of a consistency group as source.
  • source_vols – a list of volume dictionaries in the source_cg.
Returns:

model_update, volumes_model_update

The source can be cgsnapshot or a source cg.

param volumes is retrieved directly from the db. It is a list of cinder.db.sqlalchemy.models.Volume to be precise. It cannot be assigned to volumes_model_update. volumes_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

To be consistent with other volume operations, the manager will assume the operation is successful if no exception is thrown by the driver. For a successful operation, the driver can either build the model_update and volumes_model_update and return them or return None, None.

create_export(context, volume, connector)
create_export_snapshot(context, snapshot, connector)
create_replica_test_volume(volume, src_vref)
create_snapshot(snapshot)

Creates a snapshot.

create_volume(volume)
create_volume_from_snapshot(volume, snapshot)

Creates a volume from a snapshot.

If volume_type extra specs includes ‘replication: <is> True’ the driver needs to create a volume replica (secondary), and setup replication between the newly created volume and the secondary volume.

delete_cgsnapshot(context, cgsnapshot, snapshots)

Deletes a cgsnapshot.

Parameters:
  • context – the context of the caller.
  • cgsnapshot – the dictionary of the cgsnapshot to be deleted.
  • snapshots – a list of snapshot dictionaries in the cgsnapshot.
Returns:

model_update, snapshots_model_update

param snapshots is retrieved directly from the db. It is a list of cinder.db.sqlalchemy.models.Snapshot to be precise. It cannot be assigned to snapshots_model_update. snapshots_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

The driver should populate snapshots_model_update and model_update and return them.

The manager will check snapshots_model_update and update db accordingly for each snapshot. If the driver successfully deleted some snapshots but failed to delete others, it should set statuses of the snapshots accordingly so that the manager can update db correctly.

If the status in any entry of snapshots_model_update is ‘error_deleting’ or ‘error’, the status in model_update will be set to the same if it is not already ‘error_deleting’ or ‘error’.

If the status in model_update is ‘error_deleting’ or ‘error’, the manager will raise an exception and the status of cgsnapshot will be set to ‘error’ in the db. If snapshots_model_update is not returned by the driver, the manager will set the status of every snapshot to ‘error’ in the except block.

If the driver raises an exception during the operation, it will be caught by the try-except block in the manager and the statuses of cgsnapshot and all snapshots will be set to ‘error’.

For a successful operation, the driver can either build the model_update and snapshots_model_update and return them or return None, None. The statuses of cgsnapshot and all snapshots will be set to ‘deleted’ after the manager deletes them from db.

delete_consistencygroup(context, group, volumes)

Deletes a consistency group.

Parameters:
  • context – the context of the caller.
  • group – the dictionary of the consistency group to be deleted.
  • volumes – a list of volume dictionaries in the group.
Returns:

model_update, volumes_model_update

param volumes is retrieved directly from the db. It is a list of cinder.db.sqlalchemy.models.Volume to be precise. It cannot be assigned to volumes_model_update. volumes_model_update is a list of dictionaries. It has to be built by the driver. An entry will be in this format: {‘id’: xxx, ‘status’: xxx, ……}. model_update will be in this format: {‘status’: xxx, ……}.

The driver should populate volumes_model_update and model_update and return them.

The manager will check volumes_model_update and update db accordingly for each volume. If the driver successfully deleted some volumes but failed to delete others, it should set statuses of the volumes accordingly so that the manager can update db correctly.

If the status in any entry of volumes_model_update is ‘error_deleting’ or ‘error’, the status in model_update will be set to the same if it is not already ‘error_deleting’ or ‘error’.

If the status in model_update is ‘error_deleting’ or ‘error’, the manager will raise an exception and the status of the group will be set to ‘error’ in the db. If volumes_model_update is not returned by the driver, the manager will set the status of every volume in the group to ‘error’ in the except block.

If the driver raises an exception during the operation, it will be caught by the try-except block in the manager. The statuses of the group and all volumes in it will be set to ‘error’.

For a successful operation, the driver can either build the model_update and volumes_model_update and return them or return None, None. The statuses of the group and all volumes will be set to ‘deleted’ after the manager deletes them from db.

delete_snapshot(snapshot)

Deletes a snapshot.

delete_volume(volume)
ensure_export(context, volume)
extend_volume(volume, new_size)
get_manageable_snapshots(cinder_snapshots, marker, limit, offset, sort_keys, sort_dirs)
get_manageable_volumes(cinder_volumes, marker, limit, offset, sort_keys, sort_dirs)
get_pool(volume)

Return the name of the pool the volume resides on.

Parameters:volume – The volume hosted by the driver.
Returns:name of the pool where given volume is in.
initialize_connection(volume, connector, **kwargs)
initialize_connection_snapshot(snapshot, connector, **kwargs)

Allow connection from connector for a snapshot.

local_path(volume)
manage_existing(volume, existing_ref)
manage_existing_get_size(volume, existing_ref)
manage_existing_snapshot(snapshot, existing_ref)
manage_existing_snapshot_get_size(snapshot, existing_ref)
migrate_volume(context, volume, host)
remove_export(context, volume)
remove_export_snapshot(context, snapshot)
retype(context, volume, new_type, diff, host)
revert_to_snapshot(context, volume, snapshot)

Revert volume to snapshot.

Note: the revert process should not change the volume’s current size; that means if the driver shrank the volume during the process, it should extend the volume internally.

terminate_connection(volume, connector, **kwargs)

Disallow connection from connector

Parameters:
  • volume – The volume to be disconnected.
  • connector – A dictionary describing the connection with details about the initiator. Can be None.
terminate_connection_snapshot(snapshot, connector, **kwargs)

Disallow connection from connector for a snapshot.

unmanage(volume)
unmanage_snapshot(snapshot)

Unmanage the specified snapshot from Cinder management.

update_consistencygroup(context, group, add_volumes=None, remove_volumes=None)

Updates a consistency group.

Parameters:
  • context – the context of the caller.
  • group – the dictionary of the consistency group to be updated.
  • add_volumes – a list of volume dictionaries to be added.
  • remove_volumes – a list of volume dictionaries to be removed.
Returns:

model_update, add_volumes_update, remove_volumes_update

model_update is a dictionary that the driver wants the manager to update upon a successful return. If None is returned, the manager will set the status to ‘available’.

add_volumes_update and remove_volumes_update are lists of dictionaries that the driver wants the manager to update upon a successful return. Note that each entry requires a {‘id’: xxx} so that the correct volume entry can be updated. If None is returned, the volume will remain in its original status. Also note that you cannot directly assign add_volumes to add_volumes_update as add_volumes is a list of cinder.db.sqlalchemy.models.Volume objects and cannot be used for db update directly. Same with remove_volumes.

If the driver throws an exception, the status of the group as well as those of the volumes to be added/removed will be set to ‘error’.

Tests

The cinder.tests.unit.volume Module

class BaseVolumeTestCase(*args, **kwargs)

Bases: cinder.test.TestCase

Test Case for volumes.

FAKE_UUID = 'e79161cd-5f9d-4007-8823-81a807a64332'
fake_get_all_volume_groups(obj, vg_name=None, no_suffix=True)
setUp(*args, **kwargs)

Old Docs

Cinder uses iSCSI to export storage volumes from multiple storage nodes. These iSCSI exports are attached (using libvirt) directly to running instances.

Cinder volumes are exported over the primary system VLAN (usually VLAN 1), and not over individual VLANs.

The underlying volumes by default are LVM logical volumes, created on demand within a single large volume group.