The HDS driver supports the concept of differentiated services: a volume type can be associated with the fine-tuned performance characteristics of an HDP, the dynamic pool where volumes are created. For instance, an HDP can consist of fast SSDs to provide speed, or can provide a certain level of reliability based on characteristics such as its RAID level. The HDS driver maps each volume type to the `volume_type` option in its configuration file.
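As an illustration of this mapping, a service label in the driver's XML configuration file associates a volume type name with an HDP. In this sketch, the `gold` type name, iSCSI address, and pool number are made-up placeholder values, not values from the examples in this section:

```xml
<svc_0>
  <volume_type>gold</volume_type>
  <iscsi_ip>10.0.0.10</iscsi_ip>
  <hdp>5</hdp>
</svc_0>
```

Volumes created with the `gold` volume type would then be allocated from HDP 5.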
Configuration is read from an XML-format file. Examples are shown for single and multi back-end cases.
| Configuration option = Default value | Description |
|---|---|
| **[DEFAULT]** | |
| `hds_cinder_config_file = /opt/hds/hus/cinder_hus_conf.xml` | (StrOpt) The configuration file for the Cinder HDS driver for HUS |
Before using iSCSI services, use the HUS UI to create an iSCSI domain for each EVS providing iSCSI services.
In a single back-end deployment, only one OpenStack Block Storage instance runs on the OpenStack Block Storage server and controls one HUS array. This deployment requires the following configuration steps:
Set the `hds_cinder_config_file` option in the `/etc/cinder/cinder.conf` file to use the HDS volume driver. This option points to a configuration file.

```
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
```
Configure `hds_cinder_config_file` at the location specified previously. For example, `/opt/hds/hus/cinder_hds_conf.xml`:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <hus_cmd>hus-cmd</hus_cmd>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
  <lun_start>3000</lun_start>
  <lun_end>4000</lun_end>
</config>
```
In a multi back-end deployment, more than one OpenStack Block Storage instance runs on the same server. In this example, two HUS arrays are used, possibly providing different storage performance:
Configure `/etc/cinder/cinder.conf`: create the `hus1` and `hus2` configuration blocks. Set the `hds_cinder_config_file` option to point to a unique configuration file for each block, and set the `volume_driver` option for each back-end to `cinder.volume.drivers.hds.hds.HUSDriver`.

```
enabled_backends=hus1,hus2

[hus1]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
volume_backend_name=hus-1

[hus2]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
volume_backend_name=hus-2
```
Configure `/opt/hds/hus/cinder_hus1_conf.xml`:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <hus_cmd>hus-cmd</hus_cmd>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>regular</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
  <lun_start>3000</lun_start>
  <lun_end>4000</lun_end>
</config>
```
Configure the `/opt/hds/hus/cinder_hus2_conf.xml` file:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.20</mgmt_ip0>
  <mgmt_ip1>172.17.44.21</mgmt_ip1>
  <hus_cmd>hus-cmd</hus_cmd>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>platinum</volume_type>
    <iscsi_ip>172.17.30.130</iscsi_ip>
    <hdp>2</hdp>
  </svc_0>
  <snapshot>
    <hdp>3</hdp>
  </snapshot>
  <lun_start>2000</lun_start>
  <lun_end>3000</lun_end>
</config>
```
If you use volume types, you must configure them in the configuration file and set the `volume_backend_name` option to the appropriate back-end. In the previous multi back-end example, the `platinum` volume type is served by hus-2, and the `regular` volume type is served by hus-1.
```
cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
```
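The `type-key` commands assume the volume types already exist. If they do not, a sketch of creating them first with the `cinder type-create` command, using the type names from this example:

```
cinder type-create regular
cinder type-create platinum
```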
You can deploy multiple OpenStack Block Storage instances that each control a separate HUS array. Each instance has no volume type associated with it. The OpenStack Block Storage filtering algorithm selects the HUS array with the largest available free space. In each configuration file, you must define the default `volume_type` in the service labels.
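In that deployment, each array's configuration file might carry a service label like the following, with `default` as the `volume_type`. The iSCSI address and pool number here are placeholders, not values from the earlier examples:

```xml
<svc_0>
  <volume_type>default</volume_type>
  <iscsi_ip>10.0.0.20</iscsi_ip>
  <hdp>7</hdp>
</svc_0>
```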