Hitachi NAS (HNAS) driver¶
The HNAS driver provides NFS and CIFS shared file systems to OpenStack.
Requirements¶
Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100.
HNAS/SMU software version is 12.2 or higher.
HNAS configuration and management utilities to create a storage pool (span) and an EVS.
GUI (SMU).
SSC CLI.
Driver options¶
This table contains the configuration options specific to the share driver.
Configuration option = Default value | Description
---|---
[DEFAULT] |
hitachi_hnas_admin_network_ip = None | (String) Specify IP for mounting shares in the Admin network.
hitachi_hnas_allow_cifs_snapshot_while_mounted = False | (Boolean) By default, CIFS snapshots are not allowed to be taken when the share has clients connected because a consistent point-in-time replica cannot be guaranteed for all files. Enabling this might cause inconsistent snapshots on CIFS shares.
hitachi_hnas_cluster_admin_ip0 = None | (String) The IP of the cluster's admin node. Only set in HNAS multinode clusters.
hitachi_hnas_driver_helper = manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend | (String) Python class to be used for driver helper.
hitachi_hnas_evs_id = None | (Integer) Specify which EVS this backend is assigned to.
hitachi_hnas_evs_ip = None | (String) Specify IP for mounting shares.
hitachi_hnas_file_system_name = None | (String) Specify file-system name for creating shares.
hitachi_hnas_ip = None | (String) HNAS management interface IP for communication between the Manila controller and HNAS.
hitachi_hnas_password = None | (String) HNAS user password. Required only if private key is not provided.
hitachi_hnas_ssh_private_key = None | (String) RSA/DSA private key value used to connect into HNAS. Required only if password is not provided.
hitachi_hnas_stalled_job_timeout = 30 | (Integer) The time (in seconds) to wait for stalled HNAS jobs before aborting.
hitachi_hnas_user = None | (String) HNAS username (Base64 string) used to perform tasks such as creating file systems and network interfaces.
[hnas1] |
share_backend_name = None | (String) The backend name for a given driver implementation.
share_driver = manila.share.drivers.generic.GenericShareDriver | (String) Driver to use for share creation.
Pre-configuration on OpenStack deployment¶
Install the OpenStack environment with manila. See the OpenStack installation guide.
Configure the OpenStack networking so it can reach HNAS Management interface and HNAS EVS Data interface.
Note
In the driver mode used by the HNAS driver (DHSS = False), the driver does not handle network configuration; it is up to the administrator to configure it.
Configure the network of the manila-share node to reach the HNAS management interface through the admin network.
Configure the network of the Compute and Networking nodes to reach HNAS EVS data interface through the data network.
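Before proceeding, it can help to confirm basic reachability from each node. A minimal sketch, assuming the example management and EVS IP addresses used later in this guide (172.24.44.15 and 10.0.1.20):

```shell
# From the manila-share node: the HNAS management interface must be
# reachable over the admin network (IP is an example from this guide).
ping -c 3 172.24.44.15

# From a Compute or Networking node: the EVS data interface must be
# reachable over the data network (IP is an example from this guide).
ping -c 3 10.0.1.20
```

If either check fails, fix routing and security rules before configuring the driver.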
Example of networking architecture: the admin network connects the manila-share node to the HNAS management interface, while the data network connects the Compute and Networking nodes to the HNAS EVS data interface.
Edit the
/etc/neutron/plugins/ml2/ml2_conf.ini
file and update the following settings in their respective sections. If you use Linux Bridge instead of Open vSwitch, update the bridge mappings in the Linux Bridge section:
Important
It is mandatory that the HNAS management interface is reachable from the Shared File Systems node through the admin network, and that the selected EVS data interface is reachable from the OpenStack cloud, for example through Neutron flat networking.
[ml2]
type_drivers = flat,vlan,vxlan,gre
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = physnet1,physnet2

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:1500,physnet2:2000:2500

[ovs]
bridge_mappings = physnet1:br-ex,physnet2:br-eth1
You may have to repeat the last line above in another file on the Compute node; if that file exists, it is located at /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.
If you use Open vSwitch as the neutron agent, run the following on the network node:
# ifconfig eth1 0
# ovs-vsctl add-br br-eth1
# ovs-vsctl add-port br-eth1 eth1
# ifconfig eth1 up
Restart all neutron processes.
Create the data HNAS network in OpenStack:
List the available projects:
$ openstack project list
Create a network in the given project (DEMO), providing the project name, a name for the network, the name of the physical network over which the virtual network is implemented, and the type of the physical mechanism by which the virtual network is implemented:
$ openstack network create --project DEMO \
    --provider-network-type flat \
    --provider-physical-network physnet2 hnas_network
Optional: List available networks:
$ openstack network list
Create a subnet in the same project (DEMO), providing the gateway IP of the subnet, a name for the subnet, the network created before, and the CIDR of the subnet:
$ openstack subnet create --project DEMO --gateway GATEWAY \
    --subnet-range SUBNET_CIDR --network NETWORK HNAS_SUBNET
Optional: List available subnets:
$ openstack subnet list
Add the subnet interface to a router, providing the router name and subnet name created before:
$ openstack router add subnet SUBNET ROUTER
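Putting the steps above together with concrete example values (the gateway, CIDR, subnet name, and router name are assumptions for illustration, not required values; the gateway matches the HNAS route example later in this guide):

```shell
$ openstack project list
$ openstack network create --project DEMO \
    --provider-network-type flat \
    --provider-physical-network physnet2 hnas_network
$ openstack subnet create --project DEMO --gateway 192.168.1.1 \
    --subnet-range 192.168.1.0/24 --network hnas_network hnas_subnet
$ openstack router add subnet hnas_subnet router1
```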
Pre-configuration on HNAS¶
Create a file system on HNAS. See the Hitachi HNAS reference.
Important
Make sure that the filesystem is not created as a replication target. For more information, refer to the official HNAS administration guide.
Prepare the HNAS EVS network.
Create a route in HNAS to the project network:
$ console-context --evs <EVS_ID_IN_USE> route-net-add \
    --gateway <FLAT_NETWORK_GATEWAY> <TENANT_PRIVATE_NETWORK>
Important
Make sure multi-tenancy is enabled and routes are configured per EVS.
$ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
    10.0.0.0/24
Configure the CIFS security.
Before using CIFS shares with the HNAS driver, make sure to configure a security service in the back end. For details, refer to the Hitachi HNAS reference.
Back end configuration¶
Configure HNAS driver.
Configure HNAS driver according to your environment. This example shows a minimal HNAS driver configuration:
[DEFAULT]
enabled_share_backends = hnas1
enabled_share_protocols = NFS,CIFS

[hnas1]
share_backend_name = HNAS1
share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
driver_handles_share_servers = False
hitachi_hnas_ip = 172.24.44.15
hitachi_hnas_user = supervisor
hitachi_hnas_password = supervisor
hitachi_hnas_evs_id = 1
hitachi_hnas_evs_ip = 10.0.1.20
hitachi_hnas_file_system_name = FS-Manila
hitachi_hnas_allow_cifs_snapshot_while_mounted = True
Note
The hitachi_hnas_allow_cifs_snapshot_while_mounted parameter allows snapshots to be taken while CIFS shares are mounted. This parameter is set to False by default, which prevents a snapshot from being taken if the share is mounted or in use.
Optional. HNAS multi-backend configuration.
Update the
enabled_share_backends
flag with the names of the back ends separated by commas.
Add a section for every back end according to the example below:
[DEFAULT]
enabled_share_backends = hnas1,hnas2
enabled_share_protocols = NFS,CIFS

[hnas1]
share_backend_name = HNAS1
share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
driver_handles_share_servers = False
hitachi_hnas_ip = 172.24.44.15
hitachi_hnas_user = supervisor
hitachi_hnas_password = supervisor
hitachi_hnas_evs_id = 1
hitachi_hnas_evs_ip = 10.0.1.20
hitachi_hnas_file_system_name = FS-Manila1
hitachi_hnas_allow_cifs_snapshot_while_mounted = True

[hnas2]
share_backend_name = HNAS2
share_driver = manila.share.drivers.hitachi.hnas.driver.HitachiHNASDriver
driver_handles_share_servers = False
hitachi_hnas_ip = 172.24.44.15
hitachi_hnas_user = supervisor
hitachi_hnas_password = supervisor
hitachi_hnas_evs_id = 1
hitachi_hnas_evs_ip = 10.0.1.20
hitachi_hnas_file_system_name = FS-Manila2
hitachi_hnas_allow_cifs_snapshot_while_mounted = True
Disable DHSS for HNAS share type configuration:
Note
The Shared File Systems service requires that the share type includes the driver_handles_share_servers extra-spec. This ensures that the share will be created on a back end that supports the requested driver_handles_share_servers capability.
$ manila type-create hitachi False
Optional: Add extra-specs for enabling HNAS-supported features:
These commands will enable various snapshot-related features that are supported in HNAS.
$ manila type-key hitachi set snapshot_support=True
$ manila type-key hitachi set mount_snapshot_support=True
$ manila type-key hitachi set revert_to_snapshot_support=True
$ manila type-key hitachi set create_share_from_snapshot_support=True
In a multiple back end setup, to control which HNAS back end a share is created on, add a share_backend_name extra-spec to each share type so that it matches a specific back end. This makes it possible to specify which back end the Shared File Systems service will use when creating a share.
$ manila type-key hitachi set share_backend_name=hnas1
$ manila type-key hitachi2 set share_backend_name=hnas2
Restart all Shared File Systems services (manila-share, manila-scheduler and manila-api).
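On a systemd-based deployment, restarting the services might look like the following (unit names vary by distribution and are assumptions here); afterwards, manila service-list can confirm that the services and configured back ends are up:

```shell
# Unit names are distribution-dependent; these are common examples.
$ sudo systemctl restart openstack-manila-api \
    openstack-manila-scheduler openstack-manila-share

# Verify that the services report status "enabled" and state "up".
$ manila service-list
```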
Manage and unmanage snapshots¶
The Shared File Systems service also has the ability to manage share
snapshots. Existing HNAS snapshots can be managed, as long as the snapshot
directory is located in /snapshots/share_ID
. New snapshots created through
the Shared File Systems service are also created according to this specific
folder structure.
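For illustration, the expected folder structure can be sketched locally (the root directory, share ID, and snapshot ID below are made-up examples; on HNAS this layout lives inside the configured file system):

```shell
# Made-up IDs for illustration only.
SHARE_ID=aa4a7710-f326-41fb-ad18-b4ad587fc87a
SNAPSHOT_ID=3377b015-a695-4a5a-8aa5-9b931b023380

# A manageable snapshot must live under /snapshots/<share_ID>:
mkdir -p "/tmp/fs-manila/snapshots/${SHARE_ID}/${SNAPSHOT_ID}"

# The provider_location passed to "manila snapshot-manage" would then be:
echo "/snapshots/${SHARE_ID}/${SNAPSHOT_ID}"
```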
To manage a snapshot, use:
$ manila snapshot-manage [--name <name>] [--description <description>]
[--driver_options [<key=value> [<key=value> ...]]]
<share> <provider_location>
Where:
Parameter | Description
---|---
<share> | ID or name of the share to be managed. A list of shares can be fetched with manila list.
<provider_location> | Location of the snapshot on the back end, such as /snapshots/share_ID/snapshot_ID.
--driver_options | Driver-related configuration, passed as key=value pairs, for example size=2.
Note
The mandatory provider_location
parameter uses the same syntax for both
NFS and CIFS shares. This is only the case for snapshot management.
Note
The size key of the --driver_options parameter is required for the HNAS driver. Administrators need to know the size of the to-be-managed snapshot beforehand.
Note
If the mount_snapshot_support=True
extra-spec is set in the share type,
the HNAS driver will automatically create an export when managing a snapshot
if one does not already exist.
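A complete invocation might look like this (the share name, snapshot name, IDs, and size are placeholder assumptions):

```shell
$ manila snapshot-manage --name managed-snap \
    --driver_options size=2 \
    share_1 /snapshots/aa4a7710-f326-41fb-ad18-b4ad587fc87a/3377b015-a695-4a5a-8aa5-9b931b023380
```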
To unmanage a snapshot, use:
$ manila snapshot-unmanage <snapshot>
Where:
Parameter | Description
---|---
<snapshot> | Name or ID of the snapshot(s).
Additional notes¶
HNAS has some restrictions about the number of EVSs, filesystems, virtual-volumes, and simultaneous SSC connections. Check the manual specification for your system.
Shares and snapshots are thin provisioned. Only the space actually used in HNAS is reported to the Shared File Systems service. Also, a snapshot does not initially take any space in HNAS: it only stores the difference between the share and the snapshot, so it grows as the share data changes.
Administrators should manage the project’s quota (manila quota-update) to control the back end usage.
Shares will need to be remounted after a revert-to-snapshot operation.
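For an NFS share, remounting on the client might look like this (the export path and mount point are placeholder assumptions; use the actual export reported by manila share-export-location-list):

```shell
# Export path and mount point are placeholder assumptions.
$ sudo umount /mnt/myshare
$ sudo mount -t nfs 10.0.1.20:/shares/share-aa4a7710-f326-41fb-ad18-b4ad587fc87a /mnt/myshare
```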