Share Replication¶
As of the Mitaka release of OpenStack, manila supports replication of shares between different pools for drivers that operate in driver_handles_share_servers=False mode. These pools may be on different backends or within the same backend. This feature can be used as a disaster recovery solution or as a load-sharing mirroring solution, depending on the replication style chosen, the capability of the driver and the configuration of backends.
This feature assumes and relies on share drivers being responsible for communicating with ALL storage controllers necessary to achieve any replication tasks, even if that involves sending commands to storage controllers in other Availability Zones (AZs).
End users can create and manage their replicas alongside their shares and snapshots.
Storage availability zones and replication domains¶
Replication is supported within the same availability zone, but ideally an Availability Zone should be treated as a single failure domain, so this feature provides the most value in inter-AZ replication use cases.
The replication_domain option is a backend-specific StrOpt option to be used within manila.conf. The value can be any ASCII string. Two backends that can replicate between each other must be configured with the same replication_domain. This comes from the premise that manila expects Share Replication to be performed between backends that have similar characteristics.
When scheduling new replicas, the scheduler takes the replication_domain option into account in order to match similar backends. It also ensures that only one replica can be scheduled per pool. When backends report multiple pools, manila allows replication between two pools on the same backend.
The replication_domain option is meant to be used in conjunction with the storage_availability_zone (or backend-specific backend_availability_zone) option to utilize this solution for Data Protection/Disaster Recovery.
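For example, two backends intended to replicate between each other could be configured as in the following minimal manila.conf sketch; the section names, backend names and availability zone names are purely illustrative:

    [london]
    share_backend_name = LONDON
    backend_availability_zone = zone-a
    replication_domain = replication_domain_1

    [paris]
    share_backend_name = PARIS
    backend_availability_zone = zone-b
    replication_domain = replication_domain_1

Because both backends report the same replication_domain, the scheduler may place replicas of a share on either of them.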
Replication types¶
When creating a share that is meant to have replicas in the future, the user must use a share_type with the extra-spec replication_type set to a valid replication type that manila supports. Drivers must report the replication type they support as the replication_type capability during the _update_share_stats() call (an illustrative sketch follows the list of types below).
Three types of replication are currently supported:
- writable
Synchronously replicated shares where all replicas are writable. Promotion is not supported and not needed.
- readable
Mirror-style replication with a primary (writable) copy and one or more secondary (read-only) copies which can become writable after a promotion.
- dr (for Disaster Recovery)
Generalized replication with secondary copies that are inaccessible until they are promoted to become the active replica.
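As an illustration of the capability reporting mentioned above, a driver supporting the dr style might implement _update_share_stats() as in the sketch below; the class name is hypothetical, and the base driver fills in the remaining stats:

    from manila.share import driver

    class HypotheticalShareDriver(driver.ShareDriver):

        def _update_share_stats(self):
            data = {
                # Report the replication style this backend supports so the
                # scheduler can match it against the replication_type
                # extra-spec (one of 'writable', 'readable' or 'dr').
                'replication_type': 'dr',
            }
            super(HypotheticalShareDriver, self)._update_share_stats(data)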
Health of a share replica¶
Apart from the status attribute, share replicas have the replica_state attribute to denote the state of the replica. The primary replica will have its replica_state attribute set to active. A secondary replica may have one of the following values as its replica_state:
- in_sync
The replica is up to date with the active replica (possibly within a backend-specific recovery point objective).
- out_of_sync
The replica has gone out of date (all new replicas start out in this replica_state).
- error
The scheduler failed to schedule this replica, or potentially irrecoverable damage occurred while updating data for this replica.
Manila requests a periodic update of the replica_state of all non-active replicas. The update interval is defined by the replica_state_update_interval option in manila.conf.
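For instance, to have the share manager poll drivers every five minutes, the option could be set as follows (the value shown is only illustrative):

    [DEFAULT]
    replica_state_update_interval = 300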
Administrators can initiate a resync of a secondary replica (for the readable and dr types of replication). This could be performed before a planned failover operation in order to have the most up-to-date data on the replica.
Promotion¶
For the readable and dr styles, we refer to the task of switching a non-active replica with the active replica as promotion. For the writable style of replication, promotion does not make sense since all replicas are active (or writable) at all times.
The status attribute of the non-active replica being promoted will be set to replication_change during its promotion. This is considered a busy state, and hence API interactions with the share are restricted while one of its replicas is in this state.
Promotion of replicas with replica_state set to error may not be fully supported by the backend. However, manila allows the action as an administrator feature, and such an attempt may be honored by backends if possible.
When multiple replicas exist, multiple replication relationships between shares may need to be redefined at the backend during the promotion operation. If the driver fails at this stage, the replicas may be left in an inconsistent state. The share manager will set the status attribute of all replicas to error. Recovery from this state requires administrator intervention.
Snapshots¶
If the driver supports snapshots, the replication of a snapshot is expected to be initiated simultaneously with the creation of the snapshot on the active replica. Manila tracks snapshots across replicas as separate snapshot instances. The aggregate snapshot object itself will be in the creating state until it is available across all of the share’s replicas that have their replica_state attribute set to active or in_sync.
Therefore, for a driver that supports snapshots, being in_sync with the primary means not only that data is ensured (within the recovery point objective), but also that any ‘available’ snapshots on the primary are ensured on the replica as well. If the snapshots cannot be ensured, the replica_state must be reported to manila as out_of_sync until the snapshots have been replicated.
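A driver’s update_replica_state() implementation could therefore take snapshots into account along the lines of the hypothetical helper below; _snapshot_exists_on_backend and _data_within_rpo are assumed backend-specific helpers, not part of the driver interface:

    def _compute_replica_state(self, replica, replica_snapshots):
        # Report 'in_sync' only when the data is within the RPO *and* every
        # 'available' snapshot on the active replica also exists on this
        # replica.
        for snapshot_pair in replica_snapshots:
            active_snapshot = snapshot_pair['active_replica_snapshot']
            local_snapshot = snapshot_pair['share_replica_snapshot']
            if (active_snapshot['status'] == 'available'
                    and not self._snapshot_exists_on_backend(local_snapshot)):
                return 'out_of_sync'
        if not self._data_within_rpo(replica):
            return 'out_of_sync'
        return 'in_sync'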
When a snapshot instance has its status attribute set to creating or deleting, manila will poll the respective drivers for a status update. As described earlier, the parent snapshot itself will be available only when its instances across the active and in_sync replicas of the share are available. The polling interval is the same as replica_state_update_interval.
Access Rules¶
Access rules are not meant to be different across the replicas of a share. Manila expects drivers to handle these access rules effectively depending on the style of replication supported. For example, the dr style of replication means that the non-active replicas are inaccessible, so if read-write rules are expected, the rules should be applied on the active replica only. Similarly, drivers that support the readable replication type should apply any read-write rules as read-only on the non-active replicas.
Drivers will receive all the access rules in the create_replica, delete_replica and update_replica_state calls and have ample opportunity to reconcile these rules effectively across replicas.
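For instance, a driver supporting the readable style could reconcile the rules with a hypothetical helper such as the one below (this helper is not part of the driver interface):

    def _rules_for_replica(self, replica, access_rules):
        # Apply the rules verbatim on the 'active' replica; downgrade any
        # read-write rules to read-only on non-active (readable) replicas.
        if replica['replica_state'] == 'active':
            return access_rules
        return [dict(rule, access_level='ro') for rule in access_rules]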
Understanding Replication Workflows¶
Creating a share that supports replication¶
Administrators can create a share type with the extra-spec replication_type matching the style of replication the desired backend supports. Users can use that share type to create a new share that allows/supports replication. A replicated share always starts out with one replica, the primary share itself.
The manila-scheduler service will filter and weigh available pools to find a suitable pool for the share being created. In particular (an illustrative matching example follows this list):
- The CapabilityFilter will match the replication_type extra_spec in the requested share_type with the replication_type capability reported by a pool.
- The ShareReplicationFilter will further ensure that the pool reports a non-empty replication_domain capability.
- The AvailabilityZoneFilter will ensure that the requested availability_zone matches the pool’s availability zone.
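As an illustration, a request using a share type with the extra-specs below could only land on a pool reporting capabilities like those that follow; all values are illustrative:

    # Extra-specs of the requested share_type:
    {'replication_type': 'readable'}

    # Capabilities a matching pool might report (abridged):
    {'pool_name': 'pool0',
     'replication_type': 'readable',
     'replication_domain': 'replication_domain_1',
     ...}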
Creating a replica¶
The user specifies the name or ID of the share to be replicated and, optionally, an availability zone for the replica to exist in. The replica inherits the parent share’s share_type and associated extra_specs. Scheduling of the replica is similar to that of the share.
- The ShareReplicationFilter will ensure that the pool is within the same replication_domain as the active replica and that the pool does not already have a replica for that share.
Drivers supporting the writable style must set the replica_state attribute to active when the replica has been created and is available.
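For example, a create_replica() implementation on a ‘writable’ backend might return a dictionary such as the following; the path and values are illustrative:

    {
        'export_locations': [
            {'path': '10.0.0.10:/shares/share-replica-example',
             'is_admin_only': False,
             'metadata': {}},
        ],
        # 'writable' backends report the new replica as 'active' right away.
        'replica_state': 'active',
        'access_rules_status': 'in_sync',
    }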
Deleting a replica¶
Users can remove replicas that have their status attribute set to error, in_sync or out_of_sync. They can even delete an active replica as long as another active replica exists (as could be the case with the writable replication style). Before the delete_replica call is made to the driver, an update_access call is made to ensure access rules are safely removed for the replica.
Administrators may also force-delete replicas. Any driver exceptions will only be logged and not re-raised; the replica will be purged from manila’s database.
Promoting a replica¶
Users can promote replicas that have their replica_state attribute set to in_sync. Administrators can attempt to promote replicas that have their replica_state attribute set to out_of_sync or error. During a promotion, if the driver raises an exception, all replicas will have their status attribute set to error, and recovery from this state will require administrator intervention.
Resyncing a replica¶
Prior to a planned failover, an administrator could attempt to update the data on the replica. The update_replica_state call will be made during such an action, giving drivers an opportunity to push the latest updates from the active replica to the secondaries.
Creating a snapshot¶
When a user takes a snapshot of a share that has replicas, manila creates as many snapshot instances as there are share replicas. These snapshot instances all begin with their status attribute set to creating. The driver is expected to create the snapshot of the active replica and then begin to replicate this snapshot as soon as the active replica’s snapshot instance is created and becomes available.
Deleting a snapshot¶
When a user deletes a snapshot, the snapshot instances corresponding to each replica of the share have their status attribute set to deleting. Drivers must update their secondaries as soon as the active replica’s snapshot instance is deleted.
Driver Interfaces¶
As part of the _update_share_stats() call, the base driver reports the replication_domain capability. Drivers are expected to update the replication_type capability.
Drivers must implement the methods enumerated below in order to support replication. promote_replica, update_replica_state and update_replicated_snapshot need not be implemented by drivers that support the writable style of replication. The snapshot methods create_replicated_snapshot, delete_replicated_snapshot and update_replicated_snapshot need not be implemented by a driver that does not support snapshots.
Each driver request is made on a specific host. Create/delete operations on secondary replicas are always made on the destination host. Create/delete operations on snapshots are always made on the active replica’s host. update_replica_state and update_replicated_snapshot calls are made on the host that the replica or snapshot resides on.
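Putting this together, a replication-capable driver would override the methods below. This skeleton is only a sketch: the class name is hypothetical and each body is a placeholder for backend-specific logic.

    from manila.share import driver


    class HypotheticalReplicationDriver(driver.ShareDriver):
        """Sketch of the replication-related driver interface."""

        def create_replica(self, context, replica_list, new_replica,
                           access_rules, replica_snapshots, share_server=None):
            raise NotImplementedError()

        def delete_replica(self, context, replica_list, replica_snapshots,
                           replica, share_server=None):
            raise NotImplementedError()

        def promote_replica(self, context, replica_list, replica, access_rules,
                            share_server=None, quiesce_wait_time=None):
            # Not needed by drivers that only support the 'writable' style.
            raise NotImplementedError()

        def update_replica_state(self, context, replica_list, replica,
                                 access_rules, replica_snapshots,
                                 share_server=None):
            # Not needed by drivers that only support the 'writable' style.
            raise NotImplementedError()

        def create_replicated_snapshot(self, context, replica_list,
                                       replica_snapshots, share_server=None):
            # Only needed if the driver supports snapshots.
            raise NotImplementedError()

        def delete_replicated_snapshot(self, context, replica_list,
                                       replica_snapshots, share_server=None):
            # Only needed if the driver supports snapshots.
            raise NotImplementedError()

        def update_replicated_snapshot(self, context, replica_list,
                                       share_replica, replica_snapshots,
                                       replica_snapshot, share_server=None):
            # Not needed for 'writable'; only needed if snapshots are
            # supported.
            raise NotImplementedError()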
Share Replica interfaces:¶
- class ShareDriver(driver_handles_share_servers, *args, **kwargs)¶
Class defines interface of NAS driver.
- create_replica(context, replica_list, new_replica, access_rules, replica_snapshots, share_server=None)¶
Replicate the active replica to a new replica on this backend.
Note
This call is made on the host that the new replica is being created upon.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. This list also contains the replica to be created. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
new_replica – The share replica dictionary.
Example:
{ 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS2', 'status': 'creating', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'out_of_sync', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [ models.ShareInstanceExportLocations, ], 'access_rules_status': 'out_of_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': 'e6155221-ea00-49ef-abf9-9f89b7dd900a', 'share_server': <models.ShareServer> or None, }
- Parameters:
access_rules – A list of access rules. These are rules that other instances of the share already obey. Drivers are expected to apply access rules to the new replica or disregard access rules that don’t apply.
Example:
[ { 'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676', 'deleted': False, 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'access_type': 'ip', 'access_to': '172.16.20.1', 'access_level': 'rw', } ]
- Parameters:
replica_snapshots – List of dictionaries of snapshot instances. This includes snapshot instances of every snapshot of the share whose ‘aggregate_status’ property was reported to be ‘available’ when the share manager initiated this request. Each list member will have two sub-dictionaries: ‘active_replica_snapshot’ and ‘share_replica_snapshot’. The ‘active’ replica snapshot corresponds to the instance of the snapshot on any of the ‘active’ replicas of the share, while share_replica_snapshot corresponds to the snapshot instance for the specific replica that will need to exist on the new share replica that is being created. The driver needs to ensure that this snapshot instance is truly available before transitioning the replica from ‘out_of_sync’ to ‘in_sync’. Snapshot instances for snapshots that have an ‘aggregate_status’ of ‘creating’ or ‘deleting’ will be polled for in the update_replicated_snapshot method.
Example:
[ { 'active_replica_snapshot': { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'status': 'available', 'provider_location': '/newton/share-snapshot-10e49c3e-aca9', ... }, 'share_replica_snapshot': { 'id': '', 'share_instance_id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'status': 'available', 'provider_location': None, ... }, } ]
- Parameters:
share_server – <models.ShareServer> or None Share server of the replica being created.
- Returns:
None or a dictionary. The dictionary can contain export_locations, replica_state and access_rules_status. export_locations is a list of paths and replica_state is one of ‘active’, ‘in_sync’, ‘out_of_sync’ or ‘error’.
Important
A backend supporting ‘writable’ type replication should return ‘active’ as the replica_state.
Export locations should be in the same format as returned during the create_share call. Example:
{ 'export_locations': [ { 'path': '172.16.20.22/sample/export/path', 'is_admin_only': False, 'metadata': {'some_key': 'some_value'}, }, ], 'replica_state': 'in_sync', 'access_rules_status': 'in_sync', }
- delete_replica(context, replica_list, replica_snapshots, replica, share_server=None)¶
Delete a replica.
Note
This call is made on the host that hosts the replica being deleted.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. This list also contains the replica to be deleted. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
replica – Dictionary of the share replica being deleted.
Example:
{ 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS2', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [ models.ShareInstanceExportLocations ], 'access_rules_status': 'out_of_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '53099868-65f1-11e5-9d70-feff819cdc9f', 'share_server': <models.ShareServer> or None, }
- Parameters:
replica_snapshots – List of dictionaries of snapshot instances. The list contains snapshot instances that are associated with the share replica being deleted. No model updates to snapshot instances are possible in this method. The driver should return when the cleanup is completed on the backend for both the snapshots and the replica itself. Drivers must handle situations where the snapshot may not yet have finished ‘creating’ on this replica.
Example:
[ { 'id': '89dafd00-0999-4d23-8614-13eaa6b02a3b', 'snapshot_id': '3ce1caf7-0945-45fd-a320-714973e949d3', 'status': 'available', 'share_instance_id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f' ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status': 'creating', 'share_instance_id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f' ... }, ... ]
- Parameters:
share_server – <models.ShareServer> or None Share server of the replica to be deleted.
- Returns:
None.
- Raises:
Exception. Any exception raised will set the share replica’s ‘status’ and ‘replica_state’ attributes to ‘error_deleting’. It will not affect snapshots belonging to this replica.
- promote_replica(context, replica_list, replica, access_rules, share_server=None, quiesce_wait_time=None)¶
Promote a replica to ‘active’ replica state.
Note
This call is made on the host that hosts the replica being promoted.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. This list also contains the replica to be promoted. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
replica – Dictionary of the replica to be promoted.
Example:
{ 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS2', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f', 'export_locations': [ models.ShareInstanceExportLocations ], 'access_rules_status': 'in_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': <models.ShareServer> or None, }
- Parameters:
access_rules – A list of access rules. These access rules are obeyed by other instances of the share.
Example:
[ { 'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676', 'deleted': False, 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'access_type': 'ip', 'access_to': '172.16.20.1', 'access_level': 'rw', } ]
- Parameters:
share_server – <models.ShareServer> or None Share server of the replica to be promoted.
quiesce_wait_time – time in seconds or None Share replica promote quiesce wait time.
- Returns:
updated_replica_list or None. The driver can return the updated list as in the request parameter. Changes that will be updated in the database are: ‘export_locations’, ‘access_rules_status’ and ‘replica_state’.
- Raises:
Exception. This can be any exception derived from BaseException. This is re-raised by the manager after some necessary cleanup. If the driver raises an exception during promotion, it is assumed that all of the replicas of the share are in an inconsistent state. Recovery is only possible through the periodic update call and/or administrator intervention to correct the ‘status’ of the affected replicas if they become healthy again.
- update_replica_state(context, replica_list, replica, access_rules, replica_snapshots, share_server=None)¶
Update the replica_state of a replica.
Note
This call is made on the host which hosts the replica being updated.
Drivers should fix replication relationships that were broken if possible inside this method.
This method is called periodically by the share manager, and whenever requested by the administrator through the ‘resync’ API.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. This list also contains the replica to be updated. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, { 'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
replica – Dictionary of the replica being updated. Its replica_state will always be ‘in_sync’, ‘out_of_sync’, or ‘error’. Replicas in ‘active’ state will not be passed via this parameter.
Example:
{ 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS1', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80', 'export_locations': [ models.ShareInstanceExportLocations, ], 'access_rules_status': 'in_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', }
- Parameters:
access_rules – A list of access rules. These access rules are obeyed by other instances of the share. The driver could attempt to sync on any un-applied access_rules.
Example:
[ { 'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676', 'deleted': False, 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'access_type': 'ip', 'access_to': '172.16.20.1', 'access_level': 'rw', } ]
- Parameters:
replica_snapshots – List of dictionaries of snapshot instances. This includes snapshot instances of every snapshot of the share whose ‘aggregate_status’ property was reported to be ‘available’ when the share manager initiated this request. Each list member will have two sub-dictionaries: ‘active_replica_snapshot’ and ‘share_replica_snapshot’. The ‘active’ replica snapshot corresponds to the instance of the snapshot on any of the ‘active’ replicas of the share, while share_replica_snapshot corresponds to the snapshot instance for the specific replica being updated. The driver needs to ensure that this snapshot instance is truly available before transitioning from ‘out_of_sync’ to ‘in_sync’. Snapshot instances for snapshots that have an ‘aggregate_status’ of ‘creating’ or ‘deleting’ will be polled for in the update_replicated_snapshot method.
Example:
[ { 'active_replica_snapshot': { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'status': 'available', 'provider_location': '/newton/share-snapshot-10e49c3e-aca9', ... }, 'share_replica_snapshot': { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'status': 'creating', 'provider_location': None, ... }, } ]
- Parameters:
share_server – <models.ShareServer> or None
- Returns:
replica_state: a str value denoting the replica_state. Valid values are ‘in_sync’, ‘out_of_sync’, or None (to leave the current replica_state unchanged).
Replicated Snapshot interfaces:¶
- class ShareDriver(driver_handles_share_servers, *args, **kwargs)
Class defines interface of NAS driver.
- create_replicated_snapshot(context, replica_list, replica_snapshots, share_server=None)
Create a snapshot on active instance and update across the replicas.
Note
This call is made on the ‘active’ replica’s host. Drivers are expected to transfer the snapshot created to the respective replicas.
The driver is expected to return model updates to the share manager. If it was able to confirm the creation of any number of the snapshot instances passed in this interface, it can set their status to ‘available’ as a cue for the share manager to set the progress attr to ‘100%’.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
replica_snapshots – List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. All the instances will have their status attribute set to ‘creating’.
Example:
[ { 'id': 'd3931a93-3984-421e-a9e7-d9f71895450a', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status': 'creating', 'progress': '0%', ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status': 'creating', 'progress': '0%', ... }, ... ]
- Parameters:
share_server – <models.ShareServer> or None
- Returns:
List of dictionaries of snapshot instances. The dictionaries can contain values that need to be updated on the database for the snapshot instances being created.
- Raises:
Exception. Any exception in this method will set all instances to ‘error’.
- delete_replicated_snapshot(context, replica_list, replica_snapshots, share_server=None)
Delete a snapshot by deleting its instances across the replicas.
Note
This call is made on the ‘active’ replica’s host, since drivers may not be able to delete the snapshot from an individual replica.
The driver is expected to return model updates to the share manager. If it was able to confirm the removal of any number of the snapshot instances passed in this interface, it can set their status to ‘deleted’ as a cue for the share manager to clean up that instance from the database.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
replica_snapshots – List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. All the instances will have their status attribute set to ‘deleting’.
Example:
[ { 'id': 'd3931a93-3984-421e-a9e7-d9f71895450a', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status': 'deleting', 'progress': '100%', ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'status': 'deleting', 'progress': '100%', ... }, ... ]
- Parameters:
share_server – <models.ShareServer> or None
- Returns:
List of dictionaries of snapshot instances. The dictionaries can contain values that need to be updated on the database for the snapshot instances being deleted. To confirm the deletion of the snapshot instance, set the ‘status’ attribute of the instance to ‘deleted’ (constants.STATUS_DELETED).
- Raises:
Exception. Any exception in this method will set the status attribute of all snapshot instances to ‘error_deleting’.
- update_replicated_snapshot(context, replica_list, share_replica, replica_snapshots, replica_snapshot, share_server=None)
Update the status of a snapshot instance that lives on a replica.
Note
For DR and Readable styles of replication, this call is made on the replica’s host and not the ‘active’ replica’s host.
This method is called periodically by the share manager. It will query for snapshot instances that track the parent snapshot across non-‘active’ replicas. Drivers can expect the status of the instance to be ‘creating’ or ‘deleting’. If the driver sees that a snapshot instance has been removed from the replica’s backend and the instance status was set to ‘deleting’, it is expected to raise a SnapshotResourceNotFound exception. All other exceptions will set the snapshot instance status to ‘error’. If the instance was not in ‘deleting’ state, raising a SnapshotResourceNotFound will set the instance status to ‘error’.
- Parameters:
context – Current context
replica_list – List of all replicas for a particular share. The ‘active’ replica will have its ‘replica_state’ attr set to ‘active’.
Example:
[ { 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'in_sync', ... 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', 'share_server': <models.ShareServer> or None, }, { 'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'replica_state': 'active', ... 'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094', 'share_server': <models.ShareServer> or None, }, ... ]
- Parameters:
share_replica – Share replica dictionary. This replica is associated with the snapshot instance whose status is being updated. Replicas in ‘active’ replica_state will not be passed via this parameter.
Example:
{ 'id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f', 'deleted': False, 'host': 'openstack2@cmodeSSVMNFS1', 'status': 'available', 'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58), 'terminated_at': None, 'replica_state': 'in_sync', 'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80', 'export_locations': [ models.ShareInstanceExportLocations, ], 'access_rules_status': 'in_sync', 'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f', 'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05', }
- Parameters:
replica_snapshots – List of dictionaries of snapshot instances. These snapshot instances track the snapshot across the replicas. This will include the snapshot instance being updated as well.
Example:
[ { 'id': 'd3931a93-3984-421e-a9e7-d9f71895450a', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', ... }, { 'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', ... }, ... ]
- Parameters:
replica_snapshot – Dictionary of the snapshot instance. This is the instance to be updated. It will be in ‘creating’ or ‘deleting’ state when sent via this parameter.
Example:
{ 'name': 'share-snapshot-18825630-574f-4912-93bb-af4611ef35a2', 'share_id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'share_name': 'share-d487b88d-e428-4230-a465-a800c2cce5f8', 'status': 'creating', 'id': '18825630-574f-4912-93bb-af4611ef35a2', 'deleted': False, 'created_at': datetime.datetime(2016, 8, 3, 0, 5, 58), 'share': <models.ShareInstance>, 'updated_at': datetime.datetime(2016, 8, 3, 0, 5, 58), 'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8', 'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40', 'progress': '0%', 'deleted_at': None, 'provider_location': None, }
- Parameters:
share_server – <models.ShareServer> or None
- Returns:
replica_snapshot_model_update: a dictionary. The dictionary must contain values that need to be updated on the database for the snapshot instance that represents the snapshot on the replica.
- Raises:
exception.SnapshotResourceNotFound. Raise this exception for snapshots that are not found on the backend and whose status was ‘deleting’.