NetApp unified driver
The NetApp unified driver is a Block Storage driver that supports multiple storage families and protocols. Currently, the only storage family supported by this driver is clustered Data ONTAP. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems, such as NVMe, iSCSI, and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol.
The NetApp unified driver also supports oversubscription (over-provisioning) when thin-provisioned Block Storage volumes are in use. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
Note
With the Juno release of OpenStack, Block Storage has introduced the concept of storage pools, in which a single Block Storage back end may present one or more logical storage resource pools from which Block Storage will select a storage location when provisioning volumes.
In releases prior to Juno, the NetApp unified driver contained some scheduling logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP) a new Block Storage volume would be placed into.
With the introduction of pools, all scheduling logic is performed completely within the Block Storage scheduler, as each NetApp storage container is directly exposed to the Block Storage scheduler as a storage pool. Previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the Block Storage volume would be provisioned into.
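Because each FlexVol volume is reported to the Block Storage scheduler as a pool, you can list the pools a back end exposes with the standard pools API. A minimal sketch; the pool name shown in the comment is illustrative:
# Each NetApp FlexVol appears as a pool named host@backend#flexvol,
# for example openstack@ontap-iscsi#vol_cinder_01 (illustrative).
$ cinder get-pools
$ cinder get-pools --detail   # include per-pool capacity and capability data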
NetApp clustered Data ONTAP storage family
The NetApp clustered Data ONTAP storage family represents a configuration group which provides Compute instances access to clustered Data ONTAP storage systems. At present it can be configured in Block Storage to work with the NVMe, iSCSI, and NFS storage protocols.
NetApp iSCSI configuration for clustered Data ONTAP
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi. Note that this is not the same value that is reported by the driver to the scheduler as storage_protocol, which is always iSCSI (case sensitive).
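When running more than one back end, the same options are typically placed in a named back-end stanza referenced by enabled_backends. A minimal sketch, in which the ontap-iscsi section name and the volume_backend_name value are illustrative:
[DEFAULT]
enabled_backends = ontap-iscsi

[ontap-iscsi]
volume_backend_name = ontap-iscsi
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_login = username
netapp_password = password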
Configuration option = Default value | Description
---|---
[DEFAULT] |
netapp_login = None | (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None | (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_lun_space_reservation = enabled | (String) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
netapp_password = None | (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) | (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use NVMe, iSCSI, or FC.
netapp_replication_aggregate_map = None | (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,…
netapp_server_hostname = None | (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None | (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS.
netapp_size_multiplier = 1.2 | (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release.
netapp_snapmirror_quiesce_timeout = 3600 | (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster | (String) The storage family type used on the storage system; the only valid value is ontap_cluster for using clustered Data ONTAP.
netapp_storage_protocol = None | (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http | (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None | (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
Note
If you specify an account in the netapp_login option that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
Note
The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.
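A minimal sketch of enabling CHAP in the back-end stanza; the ontap-iscsi section name is illustrative, while use_chap_auth is a standard Block Storage option:
[ontap-iscsi]
use_chap_auth = True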
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack website.
NetApp NVMe/TCP configuration for clustered Data ONTAP
The NetApp NVMe/TCP configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems. It provisions and manages the SAN block storage entity, which is a NetApp namespace that can be accessed using the NVMe/TCP protocol.
The NVMe/TCP configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and NVMe respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nvme
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the NVMe/TCP protocol, you must override the default value of netapp_storage_protocol with nvme. Note that this is not the same value that is reported by the driver to the scheduler as storage_protocol, which is always NVMe (case sensitive).
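To steer volumes of a particular type to this back end, a volume type can be keyed to the back end's volume_backend_name. A minimal sketch, in which the ontap-nvme back-end name, the nvme-tier type name, and the volume name are illustrative:
$ openstack volume type create nvme-tier
$ openstack volume type set --property volume_backend_name=ontap-nvme nvme-tier
$ openstack volume create --type nvme-tier --size 10 nvme-vol-01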
Note
If you specify an account in the netapp_login option that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
Note
The driver supports only a minimal set of Cinder driver features: create/delete volume and snapshots, extend volume, attach/detach volume, create volume from volume, and create volume from image/snapshot.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack website.
NetApp NFS configuration for clustered Data ONTAP
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system. It provisions and manages OpenStack volumes on NFS exports provided by the clustered Data ONTAP system, which are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
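The file referenced by nfs_shares_config lists one NFS export per line. A minimal sketch of /etc/cinder/nfs_shares, in which the addresses and export paths are illustrative:
# One Data ONTAP NFS export per line (addresses and paths illustrative)
192.168.1.100:/vol_cinder_01
192.168.1.100:/vol_cinder_02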
Configuration option = Default value | Description
---|---
[DEFAULT] |
expiry_thres_minutes = 720 | (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None | (String) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_host_type = None | (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None | (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None | (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_password = None | (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) | (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None | (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,…
netapp_server_hostname = None | (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None | (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 | (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster | (String) The storage family type used on the storage system; the only valid value is ontap_cluster for using clustered Data ONTAP.
netapp_storage_protocol = None | (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http | (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None | (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 | (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 | (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Description of NFS storage configuration options.
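For example, the NFS image cache cleanup behavior can be tuned in the back-end stanza. A minimal sketch whose values are simply the defaults shown in the table above:
# Purge cached images not accessed within 12 hours once cleaning starts
expiry_thres_minutes = 720
# Begin cleaning when free space on a share drops below 20 percent
thres_avl_size_perc_start = 20
# Stop cleaning once free space reaches 60 percent
thres_avl_size_perc_stop = 60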
Note
If you specify an account in the netapp_login option that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
NetApp NFS Copy Offload client
A feature was added in the Icehouse release of the NetApp unified driver that enables Image service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
The Image service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image service. Both FlexVols must be located within the same cluster.
The source image from the Image service has already been cached in an NFS image cache within a Block Storage back end. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
To use this feature, you must configure the Image service, as follows:
- Set the default_store configuration option to file.
- Set the filesystem_store_datadir configuration option to the path to the Image service NFS export.
- Set the show_image_direct_url configuration option to True.
- Set the show_multiple_locations configuration option to True.
- Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image service (see the sketch below).
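A minimal sketch of the corresponding Image service settings; the data directory, metadata file path, and the values inside the metadata file are illustrative assumptions to be adapted to your deployment:
# glance-api.conf
default_store = file
filesystem_store_datadir = /var/lib/glance/images
show_image_direct_url = True
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json

# /etc/glance/filesystem_store_metadata.json (illustrative values)
{
    "id": "abcdefg",
    "share_location": "nfs://192.168.0.1/myGlanceExport",
    "mountpoint": "/var/lib/glance/images",
    "type": "nfs"
}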
To use this feature, you must configure the Block Storage service, as follows:
- Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.
Important
This feature requires that:
- The storage system must have Data ONTAP v8.2 or greater installed.
- The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
- To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
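On the Block Storage side this reduces to a single line in the back-end stanza. A minimal sketch, in which the binary location is an illustrative assumption:
# Path to the NetApp copy offload binary (location illustrative)
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64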
Tip
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, please visit the Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack website.
NetApp-supported extra specs for clustered Data ONTAP
Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties, for example when you configure QoS, mirroring, or compression for a storage back end.
Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example back ends that have the requested available space and match the specified extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the openstack volume type set command, as shown in the sketch below.
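A minimal sketch of attaching NetApp extra specs to a volume type; the gold type name and silver policy group name are illustrative:
$ openstack volume type create gold
$ openstack volume type set --property netapp_mirrored=true gold
$ openstack volume type set --property netapp:qos_policy_group=silver gold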
Extra spec | Type | Description
---|---|---
netapp_raid_type | String | Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type | String | Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group [1] | String | Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp:qos_policy_group_is_adaptive | Boolean | Set to "<is> True" in order to instruct the driver to use an Adaptive QoS policy group for the netapp:qos_policy_group setting. Leave this unset or set to "<is> False" in order to use a standard QoS policy group for the netapp:qos_policy_group setting.
netapp_mirrored | Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored [2] | Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup | Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup [2] | Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression | Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression [2] | Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned | Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned [2] | Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
- 1 Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
- 2 In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using a deprecated negative-assertion extra spec (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
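For example, instead of the deprecated netapp_unmirrored=true, the equivalent positive-assertion usage would be (the type name is illustrative):
$ openstack volume type set --property netapp_mirrored=false standard-type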