- Supported OpenStack release
- System requirements
- Supported operations
- Preparation
- Backend configuration
- Authentication
- Restriction of deployment
- Restriction of volume extension
- Provisioning type (thin, thick, deduplicated and compressed)
- Fully automated storage tiering support
- FAST Cache support
- Storage group automatic deletion
- EMC storage-assisted volume migration
- Initiator auto registration
- Read-only volumes
- Multiple pools support
- FC SAN auto zoning
- Multi-backend configuration
EMC VNX direct driver (consisting of EMCCLIISCSIDriver and EMCCLIFCDriver) supports both the iSCSI and FC protocols. EMCCLIISCSIDriver (VNX iSCSI direct driver) and EMCCLIFCDriver (VNX FC direct driver) are based on the ISCSIDriver and FCDriver defined in Block Storage, respectively.
EMCCLIISCSIDriver and EMCCLIFCDriver perform volume operations by executing Navisphere CLI (NaviSecCLI), a command-line interface used for management, diagnostics, and reporting functions for VNX.
VNX Operational Environment for Block version 5.32 or higher.
VNX Snapshot and Thin Provisioning license should be activated for VNX.
Navisphere CLI v7.32 or higher is installed along with the driver.
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Clone a volume.
Extend a volume.
Migrate a volume.
Retype a volume.
Get volume statistics.
Create and delete consistency groups.
Create, list, and delete consistency group snapshots.
This section contains instructions to prepare the Block Storage nodes to use the EMC VNX direct driver. You install the Navisphere CLI, install the driver, ensure you have correct zoning configurations, and register the driver.
Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment.
For Ubuntu x64, a DEB package is available from the EMC OpenStack GitHub repository.
For all other variants of Linux, Navisphere CLI is available at Downloads for VNX2 Series or Downloads for VNX1 Series.
After installation, set the security level of Navisphere CLI to low:
$ /opt/Navisphere/bin/naviseccli security -certificate -setLevel low
Both EMCCLIISCSIDriver and EMCCLIFCDriver are provided in the installer package:
emc_vnx_cli.py
emc_cli_fc.py (for EMCCLIFCDriver)
emc_cli_iscsi.py (for EMCCLIISCSIDriver)
Copy the files above to the cinder/volume/drivers/emc/
directory of the OpenStack node(s) where
cinder-volume
is running.
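For example, on a node where the cinder package is installed under /usr/lib/python2.7/dist-packages (this path is an assumption; it varies by distribution and installation method), the copy might look like:
# cp emc_vnx_cli.py emc_cli_fc.py emc_cli_iscsi.py /usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/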
A storage administrator must enable FC SAN auto zoning between all OpenStack nodes and VNX if FC SAN auto zoning is not enabled.
Register the compute nodes with VNX to access the storage in VNX or enable initiator auto registration.
To perform "Copy Image to Volume" and "Copy Volume to Image"
operations, the nodes running the cinder-volume
service(Block Storage nodes) must be registered with the VNX as well.
Steps mentioned below are for a compute node. Please follow the same steps for the Block Storage nodes also. The steps can be skipped if initiator auto registration is enabled.
Steps for EMCCLIFCDriver:
Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose hostname and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
Log in to Unisphere and go to Hosts > Initiators.
Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
Click the Register button, select CLARiiON/VNX, and enter the hostname and IP address:
Hostname: myhost1
IP: 10.10.61.1
Click Register. Then the host 10.10.61.1 will appear under Hosts > Host List as well.
Register the WWN with more ports if needed.
Steps for EMCCLIISCSIDriver:
On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
Start the iSCSI initiator service on the node:
# /etc/init.d/open-iscsi start
Discover the iSCSI target portals on VNX
# iscsiadm -m discovery -t st -p 10.10.61.35
Enter /etc/iscsi:
# cd /etc/iscsi
Find out the iqn of the node
# more initiatorname.iscsi
Log in to VNX from the compute node using the target corresponding to the SPA port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
Log in to Unisphere and go to Hosts > Initiators.
Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
Click the Register button, select CLARiiON/VNX, and enter the hostname and IP address:
Hostname: myhost1
IP: 10.10.61.1
Click Register. Then the host 10.10.61.1 will appear under Hosts > Host List as well.
Log out of iSCSI on the node:
# iscsiadm -m node -u
Log in to VNX from the compute node using the target corresponding to the SPB port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
In Unisphere, register the initiator with the SPB port.
Log out of iSCSI on the node:
# iscsiadm -m node -u
Register the iqn with more ports if needed.
Make the following changes in the
/etc/cinder/cinder.conf
:
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
#VNX user name
#san_login = username
#VNX user password
#san_password = password
#VNX user type. Valid values are: global (default), local and ldap.
#storage_vnx_authentication_type = ldap
#Directory path of the VNX security file. Make sure the security file is generated first.
#VNX credentials are not necessary when using the security file.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in minutes
default_timeout = 10
#If deploying EMCCLIISCSIDriver:
#volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
#"node1hostname" and "node2hostname" should be the full hostnames of the nodes (try the command 'hostname').
#This option is for EMCCLIISCSIDriver only.
iscsi_initiators = {"node1hostname":["10.0.0.1", "10.0.0.2"],"node2hostname":["10.0.0.3"]}

[database]
max_pool_size = 20
max_overflow = 30
where san_ip is one of the SP IP addresses of the VNX array and san_secondary_ip is the other SP IP address of the VNX array. san_secondary_ip is an optional field, and it serves the purpose of providing a high availability (HA) design: in case one SP is down, the other SP can be connected automatically. san_ip is a mandatory field, which provides the main connection.
where Pool_01_SAS is the pool from which the user wants to create volumes. The pools can be created using Unisphere for VNX. Refer to the section called “Multiple pools support” on how to manage multiple pools.
where storage_vnx_security_file_dir is the directory path of the VNX security file. Make sure the security file is generated following the steps in the section called “Authentication”.
where iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on all OpenStack nodes which want to connect to VNX via iSCSI. If this option is configured, the driver will leverage this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal will be chosen in a relatively random way.
Restart the cinder-volume service to make the configuration change take effect.
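For example, on a system that manages the service with SysV init scripts (the service name and init system vary by distribution; this is only a sketch):
# service cinder-volume restart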
VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local and ldap scopes are supported. There are two approaches to provide the credentials.
The recommended approach is to use the Navisphere CLI security file, which avoids storing plain-text credentials in the configuration file. The following instructions describe how to do this.
Find out the Linux user id of the /usr/bin/cinder-volume processes. Assume the service /usr/bin/cinder-volume is running under the account cinder.
Switch to the root account.
Change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash in /etc/passwd (this temporary change is to make step 4 work).
Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the switch -secfilepath is used to specify the location to save the security file (assuming it is saved to the directory /etc/secfile/array1).
# su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath /etc/secfile/array1'
Save the security file to a different location for each array, except where the same credentials are shared between all arrays managed by the host. Otherwise, the credentials in the security file will be overwritten. If -secfilepath is not specified in the command above, the security file will be saved to the default location, which is the home directory of the executing user.
Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd.
Remove the credential options san_login, san_password and storage_vnx_authentication_type from cinder.conf (normally it is /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path supplied with the switch -secfilepath in step 4. Omit this option if -secfilepath is not used in step 4.
#Directory path that contains the VNX security file. Make sure the security file is generated first.
storage_vnx_security_file_dir = /etc/secfile/array1
Restart the cinder-volume service to make the change take effect.
Alternatively, the credentials can be specified in
/etc/cinder/cinder.conf
through the
three options below:
#VNX user name
san_login = username
#VNX user password
san_password = password
#VNX user type. Valid values are: global, local and ldap. global is the default value.
storage_vnx_authentication_type = ldap
It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume. Otherwise, cinder upload-to-image --force True will terminate the VM instance's data access to the volume.
VNX does not support extending a thick volume that has a snapshot. If the user tries to extend a volume that has a snapshot, the volume's status changes to error_extending.
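As a hedged illustration (the volume and snapshot IDs are placeholders, and this assumes the snapshot is no longer needed), the snapshot can be deleted before retrying the extension; an administrator may also need to reset a volume that is already in the error_extending state:
$ cinder snapshot-delete <snapshot-id>
$ cinder reset-state --state available <volume-id>
$ cinder extend <volume-id> <new-size-in-GB>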
The user can specify the extra spec key storagetype:provisioning in a volume type to set the provisioning type of a volume. The provisioning type can be thick, thin, deduplicated or compressed.
The thick provisioning type means the volume is fully provisioned.
The thin provisioning type means the volume is virtually provisioned.
The deduplicated provisioning type means the volume is virtually provisioned and deduplication is enabled on it. The administrator shall go to VNX to configure the system-level deduplication settings. To create a deduplicated volume, the VNX deduplication license should be activated on VNX first, and the user should specify the key deduplication_support=True to let the Block Storage scheduler find a volume backend which manages a VNX with the deduplication license activated.
The compressed provisioning type means the volume is virtually provisioned and compression is enabled on it. The administrator shall go to the VNX to configure the system-level compression settings. To create a compressed volume, the VNX compression license should be activated on VNX first, and the user should specify the key compression_support=True to let the Block Storage scheduler find a volume backend which manages a VNX with the compression license activated. VNX does not support creating a snapshot on a compressed volume. If the user tries to create a snapshot on a compressed volume, the operation fails and OpenStack shows the new snapshot in error state.
Here is an example of how to create a volume with a provisioning type. First create a volume type and specify the provisioning type in the extra spec, then create a volume with this volume type:
$ cinder type-create "ThickVolume" $ cinder type-create "ThinVolume" $ cinder type-create "DeduplicatedVolume" $ cinder type-create "CompressedVolume" $ cinder type-key "ThickVolume" set storagetype:provisioning=thick $ cinder type-key "ThinVolume" set storagetype:provisioning=thin $ cinder type-key "DeduplicatedVolume" set storagetype:provisioning=deduplicated deduplication_support=True $ cinder type-key "CompressedVolume" set storagetype:provisioning=compressed compression_support=True
In the example above, four volume types are created:
ThickVolume
, ThinVolume
,
DeduplicatedVolume
and CompressedVolume
.
For ThickVolume
, storagetype:provisioning
is set to thick
. Similarly for other volume types.
If storagetype:provisioning is not specified or is set to an invalid value, the default value thick is adopted.
Volume type name, such as ThickVolume
, is user-defined
and can be any name. Extra spec key storagetype:provisioning
shall be the exact name listed here. Extra spec value for
storagetype:provisioning
shall be
thick
, thin
, deduplicated
or compressed
. During volume creation, if the driver finds
storagetype:provisioning
in the extra spec of the volume type,
it will create the volume with the provisioning type accordingly. Otherwise, the
volume will be thick as the default.
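As a usage sketch (the volume name and size are arbitrary examples), a volume can then be created from one of these volume types:
$ cinder create --volume-type "ThinVolume" --display-name thin_volume_01 10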
VNX supports Fully automated storage tiering which requires the
FAST license activated on the VNX. The OpenStack administrator can
use the extra spec key storagetype:tiering
to set
the tiering policy of a volume and use the extra spec key
fast_support=True
to let Block Storage scheduler find a volume
backend which manages a VNX with FAST license activated. Here are the five
supported values for the extra spec key
storagetype:tiering
:
StartHighThenAuto (default option)
Auto
HighestAvailable
LowestAvailable
NoMovement
Tiering policy cannot be set for a deduplicated volume. The user can check the storage pool properties on VNX to know the tiering policy of a deduplicated volume.
Here is an example of how to create a volume with a tiering policy:
$ cinder type-create "AutoTieringVolume" $ cinder type-key "AutoTieringVolume" set storagetype:tiering=Auto fast_support=True $ cinder type-create "ThinVolumeOnLowestAvaibleTier" $ cinder type-key "CompressedVolumeOnLowestAvaibleTier" set storagetype:provisioning=thin storagetype:tiering=Auto fast_support=True
VNX has a FAST Cache feature which requires the FAST Cache license
activated on the VNX. The OpenStack administrator can use the extra
spec key fast_cache_enabled to choose whether to create
a volume on a volume backend which manages a pool with FAST Cache
enabled. This feature is only supported by pool-based backends (refer
to the section called “Multiple pools support”). The value of the
extra spec key fast_cache_enabled
is either
True
or False
. When creating
a volume, if the key fast_cache_enabled
is set to
True
in the volume type, the volume will be created by
a pool-based backend which manages a pool with FAST Cache enabled.
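For example, a volume type that requests FAST Cache could be defined as follows (the type name is arbitrary):
$ cinder type-create "FASTCacheVolume"
$ cinder type-key "FASTCacheVolume" set fast_cache_enabled=True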
For volume attaching, the driver has a storage group on VNX for each
compute node hosting the VM instances that are going to consume VNX Block
Storage (using the compute node's hostname as the storage group's name).
All the volumes attached to the VM instances in a compute node will be
put into the corresponding storage group. If
destroy_empty_storage_group=True, the driver will
remove the empty storage group when its last volume is detached. For data
safety, it is not recommended to set the option
destroy_empty_storage_group=True unless the VNX
is exclusively managed by one Block Storage node, because a consistent
lock_path is required for operation synchronization for
this behavior.
EMC VNX direct driver supports storage-assisted volume migration. When the user starts migrating with cinder migrate --force-host-copy False volume_id host or cinder migrate volume_id host, cinder will try to leverage the VNX's native volume migration functionality.
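A hedged usage example (the volume ID is a placeholder; the destination host typically takes the host@backend form reported by cinder service-list, here using the backend name from the multi-backend example below):
$ cinder migrate --force-host-copy False <volume-id> node2hostname@backendB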
In the following scenarios, VNX native volume migration will not be triggered:
Volume migration between backends with different storage protocol, ex, FC and iSCSI.
Volume migration from pool-based backend to array-based backend.
Volume is being migrated across arrays.
If initiator_auto_registration=True
,
the driver will automatically register iSCSI initiators with all
working iSCSI target ports on the VNX array during volume attaching (The
driver will skip those initiators that have already been registered).
If the user wants to register the initiators with some specific ports on VNX but not register with the other ports, this functionality should be disabled.
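For reference, a minimal sketch of enabling the option in a backend section of /etc/cinder/cinder.conf (the multi-backend example below shows the same option in context):
initiator_auto_registration = True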
OpenStack supports read-only volumes. Either of the following commands can be used to set a volume to read-only.
$ cinder metadata volume set 'attached_mode'='ro'
$ cinder metadata volume set 'readonly'='True'
After a volume is marked as read-only, the driver will forward the information when a hypervisor is attaching the volume and the hypervisor will have an implementation-specific way to make sure the volume is not written.
Normally, a single storage pool is configured for a Block Storage backend (referred to as a pool-based backend), so that only that storage pool will be used by that Block Storage backend.
If storage_vnx_pool_name is not given in the configuration file, the driver will allow the user to use the extra spec key storagetype:pool in the volume type to specify the storage pool for volume creation. If storagetype:pool is not specified in the volume type and storage_vnx_pool_name is not found in the configuration file, the driver will randomly choose a pool to create the volume. This kind of Block Storage backend is referred to as an array-based backend.
Here is an example configuration of an array-based backend:
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first.
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
destroy_empty_storage_group = False
volume_backend_name = vnx_41
In this configuration, if the user wants to create a volume on a certain storage pool, a volume type with an extra spec specifying the storage pool should be created first; then the user can use this volume type to create the volume.
Here is an example of creating the volume type:
$ cinder type-create "HighPerf" $ cinder type-key "HighPerf" set storagetype:pool=Pool_02_SASFLASH volume_backend_name=vnx_41
Multiple pool support is an experimental workaround from before the pool-aware-cinder-scheduler blueprint was introduced. It is NOT recommended to enable this feature, since Juno already supports pool-aware-cinder-scheduler. A later driver update will introduce the driver-side changes that cooperate with pool-aware-cinder-scheduler.
EMC direct driver supports FC SAN auto zoning when
ZoneManager is configured. Set zoning_mode
to fabric
in the backend configuration section to
enable this feature. For ZoneManager configuration, please refer
to the section called “Fibre Channel Zone Manager”.
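For example, in the backend configuration section of /etc/cinder/cinder.conf (alongside the other driver options shown earlier):
zoning_mode = fabric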
[DEFAULT]
enabled_backends = backendA, backendB

[backendA]
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True

[backendB]
storage_vnx_pool_name = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True

[database]
max_pool_size = 20
max_overflow = 30
For more details on multi-backend, see the OpenStack Cloud Administration Guide.
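To direct volumes to a particular backend, a volume type can be mapped through volume_backend_name. A minimal sketch, assuming volume_backend_name = backendA has been added to the [backendA] section above (the type name is arbitrary):
$ cinder type-create "VNXBackendA"
$ cinder type-key "VNXBackendA" set volume_backend_name=backendA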