This OpenStack Block Storage volume driver provides iSCSI and NFS support for Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080 and 4100.
The NFS and iSCSI drivers support these operations:
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
Get volume statistics.
Before using iSCSI and NFS services, use the HNAS configuration and
management GUI (SMU) or SSC CLI to create storage pool(s), file system(s),
and assign an EVS. Make sure that the file systems used are not
created as replication targets. Additionally:
- For NFS: create NFS exports and choose a path for them (it must be different from "/"). Set the Show snapshots option to hide and disable access. Also, configure the norootsquash option as "* (rw, norootsquash)" so that the cinder services can change the permissions of their volumes. To use the hardware-accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.
- For iSCSI: you need to set an iSCSI domain.
- The HNAS driver is supported for Red Hat, SUSE Cloud and Ubuntu Cloud. The following packages must be installed:
nfs-utils for Red Hat
nfs-client for SUSE
nfs-common, libc6-i386 for Ubuntu (libc6-i386 only required on Ubuntu 12.04)
If you are not using SSH, you need the HDS SSC package (hds-ssc-v1.0-1) to communicate with an HNAS array using the SSC command. This utility package is available in the RPM package distributed with the hardware through physical media or it can be manually copied from the SMU to the Block Storage host.
If you are installing the driver from an RPM or DEB package, follow the steps below:
Install SSC:
In Red Hat:
# rpm -i hds-ssc-v1.0-1.rpm
Or in SUSE:
# zypper install hds-ssc-v1.0-1.rpm
Or in Ubuntu:
# dpkg -i hds-ssc_1.0-1_all.deb
Install the dependencies:
In Red Hat:
# yum install nfs-utils nfs-utils-lib
Or in Ubuntu:
# apt-get install nfs-common
Or in SUSE:
# zypper install nfs-client
If you are using Ubuntu 12.04, you also need to install libc6-i386
# apt-get install libc6-i386
Configure the driver as described in the "Driver Configuration" section.
Restart all cinder services (volume, scheduler and backup).
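For example, on a Red Hat based host the services can typically be restarted as follows (service names may differ depending on your distribution and installation method):
# service openstack-cinder-volume restart
# service openstack-cinder-scheduler restart
# service openstack-cinder-backup restart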
The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS.
HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types and the use of multiple back ends. The driver maps up to four volume types into separate exports or file systems, and can support any number if using multiple back ends.
The configuration for the driver is read from an XML-formatted file (one per back end), which you need to create and whose path you must set in the cinder.conf configuration file. Below is the configuration needed in the cinder.conf configuration file [1]:
[DEFAULT]
enabled_backends = hnas_iscsi1, hnas_nfs1
For the HNAS iSCSI driver, create this section:
[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI
For the HNAS NFS driver, create this section:
[hnas_nfs1]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS
The XML file has the following format:
<?xml version="1.0" encoding="UTF-8"?>
<config>
  <mgmt_ip0>172.24.44.15</mgmt_ip0>
  <hnas_cmd>ssc</hnas_cmd>
  <chap_enabled>False</chap_enabled>
  <ssh_enabled>False</ssh_enabled>
  <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
  <username>supervisor</username>
  <password>supervisor</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.24.44.20</iscsi_ip>
    <hdp>fs01-husvm</hdp>
  </svc_0>
  <svc_1>
    <volume_type>platinum</volume_type>
    <iscsi_ip>172.24.44.20</iscsi_ip>
    <hdp>fs01-platinum</hdp>
  </svc_1>
</config>
An OpenStack Block Storage node using HNAS drivers can have up to four services. Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [2], for example). These are the configuration options available for each service label:
Option | Type | Default | Description |
volume_type | Required | | When a create_volume call with a certain volume type occurs, the volume type is matched against this tag (see the volume type configuration below). |
iscsi_ip | Required only for iSCSI | | An iSCSI IP address dedicated to the service. |
hdp | Required | | For iSCSI driver: virtual file system label associated with the service. For NFS driver: path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added in the file used to list available NFS shares (the file referenced by the nfs_shares_config option in the cinder.conf configuration file). |
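For example, a service entry for the NFS driver could look like the following sketch, where the EVS IP address and the /nfs_cinder export path are illustrative values:
<svc_0>
  <volume_type>default</volume_type>
  <hdp>172.24.44.20:/nfs_cinder</hdp>
</svc_0>
The same 172.24.44.20:/nfs_cinder entry would also have to be listed in the NFS shares file read by the driver.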
These are the configuration options available to the config section of the XML config file:
Option | Type | Default | Description |
mgmt_ip0 | Required | | Management Port 0 IP address. Should be the IP address of the "Admin" EVS. |
hnas_cmd | Optional | ssc | Command to communicate to HNAS array. |
chap_enabled | Optional (iSCSI only) | | Boolean tag used to enable CHAP authentication protocol. |
username | Required | supervisor | Username is always required on HNAS. |
password | Required | supervisor | Password is always required on HNAS. |
svc_0, svc_1, svc_2, svc_3 | Optional (at least one label has to be defined) | | Service labels: these four predefined names define four different sets of configuration options. Each can specify HDP and a unique volume type. |
cluster_admin_ip0 | Optional if ssh_enabled is True | | The address of HNAS cluster admin. |
ssh_enabled | Optional | | Enables SSH authentication between Block Storage host and the SMU. |
ssh_private_key | Required if ssh_enabled is True | | Path to the SSH private key used to authenticate in HNAS SMU. The public key must be uploaded to HNAS SMU using the ssh-register-public-key command (see the SSH configuration steps below). |
The HNAS driver supports differentiated types of service using the service labels. It is possible to create up to four types of them, such as gold, platinum, silver, and ssd, for example.
After creating the services in the XML configuration file, you
must configure one volume_type
per service.
Each volume_type
must have the metadata
service_label
with the same name configured in the
<volume_type>
section of that
service. If this is not set, OpenStack Block Storage will schedule the volume creation to the pool with the largest available free space or according to other criteria configured in volume filters.
$ cinder type-create 'default'
$ cinder type-key 'default' set service_label='default'
$ cinder type-create 'platinum'
$ cinder type-key 'platinum' set service_label='platinum'
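You can check the keys associated with each volume type with the cinder extra-specs-list command, for example:
$ cinder extra-specs-list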
If you use multiple back ends and intend to enable the creation of a volume in a specific back end, you must set the volume_backend_name option for each back end and then create volume types with a volume_backend_name key that matches the desired back end.
$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name='HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name='HNAS-NFS'
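A volume created with one of these types is then scheduled to the matching back end; for example, the following creates a 10 GB volume on the HNAS-NFS back end:
$ cinder create --volume-type 'nfs' 10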
You can deploy multiple OpenStack HNAS driver instances that each control a separate HNAS array. Each service (svc_0, svc_1, svc_2, svc_3) on the instances needs to have a volume_type and service_label metadata associated with it. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.
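For instance, two NFS back ends, each pointing to an XML file that describes a different array, could be configured as in the following sketch (the back-end names and file paths are illustrative):
[DEFAULT]
enabled_backends = hnas_array1, hnas_array2

[hnas_array1]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /opt/hds/hnas/array1.xml
volume_backend_name = HNAS-NFS-1

[hnas_array2]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /opt/hds/hnas/array2.xml
volume_backend_name = HNAS-NFS-2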
Instead of using the SSC on the Block Storage host and storing its credentials in the XML configuration file, the HNAS driver supports SSH authentication. To configure it:
If you do not have an SSH key pair already generated, create one on the Block Storage host (leave the passphrase empty):
$ mkdir -p /opt/hds/ssh
$ ssh-keygen -f /opt/hds/ssh/hnaskey
Change the owner of the key to cinder (or the user under which the volume service runs):
# chown -R cinder:cinder /opt/hds/ssh
Create the directory "ssh_keys" on the SMU server:
$ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
Copy the public key to the "ssh_keys" directory:
$ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
Access the SMU server:
$ ssh [manager|supervisor]@<smu-ip>
Run the command to register the SSH keys:
$ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
Check the communication with HNAS in the Block Storage host:
$ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
<cluster_admin_ip0> is "localhost" for single-node deployments. This should return a list of available file systems on HNAS.
Set the "username".
Enable SSH adding the line
"<ssh_enabled> True</ssh_enabled>"
under"<config>"
session.Set the private key path:
"<ssh_private_key> /opt/hds/ssh/hnaskey</ssh_private_key>"
under"<config>"
session.If the HNAS is in a multi-cluster configuration set
"<cluster_admin_ip0>"
to the cluster node admin IP. In a single node HNAS, leave it empty.Restart the cinder service.
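After these steps, the relevant part of the XML configuration file would look similar to the following sketch (the IP addresses are illustrative and other options are omitted):
<config>
  <mgmt_ip0>172.24.44.15</mgmt_ip0>
  <username>supervisor</username>
  <ssh_enabled>True</ssh_enabled>
  <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
  <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0> <!-- only in a multi-cluster configuration -->
  ...
</config>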
The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
After changing the configuration on the storage array, the OpenStack Block Storage driver must be restarted.
Due to an HNAS limitation, the iSCSI driver allows only 32 volumes per target.
On Red Hat, if the system is configured to use SELinux, you need to set the virt_use_nfs boolean to on for the NFS driver to work properly.
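For example, the boolean can be enabled persistently with:
# setsebool -P virt_use_nfs on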