The EMC volume drivers, EMCSMISISCSIDriver and EMCSMISFCDriver, can create and delete volumes, attach and detach volumes, create and delete snapshots, and so on.
The driver runs volume operations by communicating with the backend EMC storage. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for EMC storage operations.
The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports VMAX and VNX storage systems.
EMC SMI-S Provider V4.6.1 or higher is required. You can download SMI-S from EMC's support web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.
The EMC VMAX Family and VNX Series storage systems are supported.
VMAX and VNX arrays support these operations:
Create volume
Delete volume
Attach volume
Detach volume
Create snapshot
Delete snapshot
Create cloned volume
Copy image to volume
Copy volume to image
Only VNX supports the following operations:
Create volume from snapshot
Extend volume
Procedure 1.3. To set up the EMC SMI-S drivers
Install the python-pywbem package for your distribution. See the section called “Install the python-pywbem package”.
Download SMI-S from PowerLink and install it. Add your VNX/VMAX arrays to SMI-S.
For information, see the section called “Set up SMI-S” and the SMI-S release notes.
Register with VNX. See the section called “Register with VNX for the iSCSI driver” for the VNX iSCSI driver and the section called “Register with VNX for the FC driver” for the VNX FC driver.
Create a masking view on VMAX. See the section called “Create a masking view on VMAX”.
Install the python-pywbem package for your distribution, as follows:
On Ubuntu:
# apt-get install python-pywbem
On openSUSE:
# zypper install python-pywbem
On Fedora:
# yum install pywbem
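To verify the installation, import the module with the Python interpreter used by the Block Storage services; no output means the import succeeded:
$ python -c "import pywbem"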
You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.
Note
You must discover storage arrays on the SMI-S server before you can use the Cinder driver. Follow instructions in the SMI-S release notes.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC Cinder driver.
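It also helps to confirm that the Block Storage node can reach the ECOM server over the network before you configure the driver, for example with netcat if it is installed. The check below assumes the default ECOM CIM-XML ports, 5988 for HTTP and 5989 for HTTPS; replace x.x.x.x with the IP address of your SMI-S host:
$ nc -zv x.x.x.x 5988
$ nc -zv x.x.x.x 5989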
To export a VNX volume to a Compute node or a Volume node, you must register the node with VNX.
Procedure 1.4. Register the node
On the Compute node or Volume node 1.1.1.1, do the following (assume 10.10.61.35 is the iSCSI target):
# /etc/init.d/open-iscsi start
# iscsiadm -m discovery -t st -p 10.10.61.35
# cd /etc/iscsi
# more initiatorname.iscsi
# iscsiadm -m node
Log in to VNX from the node using the target corresponding to the SPA port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Where iqn.1992-04.com.emc:cx.apm01234567890.a0 is the target corresponding to the SPA port. Log in to Unisphere, go to VNX00000->Hosts->Initiators, click Refresh, and wait until the initiator name of the node (shown in /etc/iscsi/initiatorname.iscsi) appears with SP Port A-8v0. Register the initiator: select CLARiiON/VNX, and enter the host name myhost1 and the IP address of myhost1. Host 1.1.1.1 now also appears under Hosts->Host List.
Log out of VNX on the node:
# iscsiadm -m node -u
Log in to VNX from the node using the target corresponding to the SPB port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
In Unisphere register the initiator with the SPB port.
Log out:
# iscsiadm -m node -u
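At any point in this procedure, you can list the active iSCSI sessions on the node to confirm which targets it is currently logged in to:
# iscsiadm -m session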
For a VNX volume to be exported to a Compute node or a Volume node, SAN zoning needs to be configured on the node and WWNs of the node need to be registered with VNX in Unisphere.
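On a Linux node, one way to find the WWPNs to zone and register is to read them from sysfs; the exact paths can vary with the HBA driver and distribution:
# cat /sys/class/fc_host/host*/port_name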
For the VMAX iSCSI and FC drivers, you must perform initial setup in Unisphere for VMAX: create an initiator group, a storage group, and a port group, and put them in a masking view. The initiator group contains the initiator names of the OpenStack hosts. The storage group will contain the volumes provisioned by Block Storage.
Make the following changes in /etc/cinder/cinder.conf.
For the VMAX iSCSI driver, add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:
iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
For the VNX iSCSI driver, add the following entries, where 10.10.61.35 is the IP address of the VNX iSCSI target:
iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
For the VMAX and VNX FC drivers, add the following entries:
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
Restart the cinder-volume
service.
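The exact command depends on your distribution; for example, on Ubuntu-based systems the service is usually named cinder-volume:
# service cinder-volume restart
On RDO-based systems, the service is typically named openstack-cinder-volume.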
Create the /etc/cinder/cinder_emc_config.xml
file. You do not
need to restart the service for this change.
For VMAX, add the following lines to the XML file:
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <StorageType>xxxx</StorageType>
  <MaskingView>xxxx</MaskingView>
  <EcomServerIp>x.x.x.x</EcomServerIp>
  <EcomServerPort>xxxx</EcomServerPort>
  <EcomUserName>xxxxxxxx</EcomUserName>
  <EcomPassword>xxxxxxxx</EcomPassword>
  <Timeout>xx</Timeout>
</EMC>
For VNX, add the following lines to the XML file:
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <StorageType>xxxx</StorageType>
  <EcomServerIp>x.x.x.x</EcomServerIp>
  <EcomServerPort>xxxx</EcomServerPort>
  <EcomUserName>xxxxxxxx</EcomUserName>
  <EcomPassword>xxxxxxxx</EcomPassword>
  <Timeout>xx</Timeout>
</EMC>
Where:
StorageType is the thin pool from which the user wants to create the volume. Thin pools can be created using Unisphere for VMAX and VNX. If the StorageType tag is not defined, you have to define volume types and set the pool name in extra specs.
EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server, which is packaged with SMI-S.
EcomUserName and EcomPassword are credentials for the ECOM server.
Timeout specifies the maximum number of seconds you want to wait for an operation to finish.
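For illustration, a filled-in VNX configuration might look like the following. All values shown (pool name, ECOM address, credentials, and timeout) are placeholders and must be replaced with the values for your environment; 5988 is the usual ECOM non-SSL port:
<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <StorageType>Pool_Thin_1</StorageType>
  <EcomServerIp>192.168.1.20</EcomServerIp>
  <EcomServerPort>5988</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
  <Timeout>10</Timeout>
</EMC>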
Note
To attach VMAX volumes to an OpenStack VM, you must create a Masking View by using Unisphere for VMAX. The Masking View must have an Initiator Group that contains the initiator of the OpenStack compute node that hosts the VM.
Volume type support enables a single instance of
cinder-volume
to support multiple pools
and thick/thin provisioning.
When the StorageType tag in cinder_emc_config.xml is used, the pool name is specified in the tag. Only thin provisioning is supported in this case.
When the StorageType tag is not used in cinder_emc_config.xml, the volume type needs to be used to define a pool name and a provisioning type. The pool name is the name of a pre-created pool. The provisioning type can be either thin or thick.
Here is an example of how to set up volume types: first create the volume types, then define extra specs for each volume type.
Procedure 1.5. Set up volume types
Create the volume types:
$ cinder type-create "High Performance"
$ cinder type-create "Standard Performance"
Set up the volume type extra specs:
$ cinder type-key "High Performance" set storagetype:pool=smi_pool
$ cinder type-key "High Performance" set storagetype:provisioning=thick
$ cinder type-key "Standard Performance" set storagetype:pool=smi_pool2
$ cinder type-key "Standard Performance" set storagetype:provisioning=thin
In the above example, two volume types are created: High Performance and Standard Performance. For High Performance, storagetype:pool is set to smi_pool and storagetype:provisioning is set to thick. Similarly, for Standard Performance, storagetype:pool is set to smi_pool2 and storagetype:provisioning is set to thin. If storagetype:provisioning is not specified, it defaults to thin.
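You can verify the volume types and their extra specs, and then reference a type when creating a volume; the volume name and size below are only examples:
$ cinder type-list
$ cinder extra-specs-list
$ cinder create --volume-type "High Performance" --display-name test_vol 1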
Note
The volume type names High Performance and Standard Performance are only examples; you can choose any names for your volume types.