Hitachi block storage driver

Hitachi block storage driver provides Fibre Channel and iSCSI support for Hitachi VSP storage systems.

System requirements

Supported storage systems:

Storage model                   Firmware version
==============================  =================
VSP E590, E790                  93-03-22 or later
VSP E990                        93-01-01 or later
VSP E1090, E1090H               93-06-2x or later
VSP F350, F370, F700, F900      88-01-04 or later
VSP G350, G370, G700, G900
VSP F400, F600, F800            83-04-43 or later
VSP G200, G400, G600, G800
VSP N400, N600, N800            83-06-01 or later
VSP 5100, 5500, 5100H, 5500H    90-01-41 or later
VSP 5200, 5600, 5200H, 5600H    90-08-0x or later
VSP F1500                       80-05-43 or later
VSP G1000, VSP G1500

Required storage licenses:

  • Hitachi Storage Virtualization Operating System (SVOS)

    • Hitachi LUN Manager

    • Hitachi Dynamic Provisioning

  • Hitachi Local Replication (Hitachi Thin Image)

Optional storage licenses:

  • Deduplication and compression

  • Global-Active Device

Supported operations

  • Create, delete, attach, and detach volumes.

  • Create, list, and delete volume snapshots.

  • Create a volume from a snapshot.

  • Create, list, update, and delete consistency groups.

  • Create, list, and delete consistency group snapshots.

  • Copy a volume to an image.

  • Copy an image to a volume.

  • Clone a volume.

  • Extend a volume.

  • Migrate a volume (host assisted).

  • Migrate a volume (storage assisted).

  • Get volume statistics.

  • Efficient non-disruptive volume backup.

  • Manage and unmanage a volume.

  • Attach a volume to multiple instances at once (multi-attach; see the example after this list).

  • Revert a volume to a snapshot.
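Multi-attach does not require a driver-specific option; it is enabled through the standard Cinder multiattach volume-type property. A minimal sketch, assuming a hypothetical volume type named multiattach_type:

$ openstack volume type create multiattach_type
$ openstack volume type set --property multiattach="<is> True" multiattach_type
$ openstack volume create --type multiattach_type --size 10 shared_volume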

Hitachi block storage driver also supports the following additional features:

  • Global-Active Device

  • Maximum number of copy pairs and consistency groups

  • Data deduplication and compression

  • Port scheduler

  • Port assignment using extra spec

  • Configuring Quality of Service (QoS) settings

Note

  • A volume that has snapshots cannot be extended with this driver.

  • Storage-assisted volume migration is supported only within the same storage system.

Configuration

Set up Hitachi storage

Specify the settings described below on the storage systems. For details about each setting, see the user’s guide for your storage system.

Common resources:

  1. All resources

    The name of a storage resource, such as a DP pool or a host group, must not contain any whitespace characters; otherwise, the driver cannot use the resource.

  2. User accounts

    Create a storage device account belonging to the Administrator User Group.

  3. DP Pool

    Create a DP pool that is used by the driver.

  4. Resource group

    If using a new resource group for exclusive use by an OpenStack system, create a new resource group and assign the necessary resources, such as LDEVs, ports, and host groups (iSCSI targets), to it.

  5. Ports

    Enable Port Security for the ports used by the driver.

If you use iSCSI:

  1. Ports

    Assign an IP address and a TCP port number to the port.

Note

  • Do not change the LDEV nickname of LDEVs created by Hitachi block storage driver. The nickname is referenced when deleting a volume or a snapshot to avoid the risk of data loss. See bug #2072317 for details.

Set up Hitachi storage volume driver and volume operations

Set the volume driver to Hitachi block storage driver by setting the volume_driver option in the cinder.conf file as follows:

If you use Fibre Channel:

[hitachi_vsp]
volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
volume_backend_name = hitachi_vsp
san_ip = 1.2.3.4
san_login = hitachiuser
san_password = password
hitachi_storage_id = 123456789012
hitachi_pools = pool0

If you use iSCSI:

[hitachi_vsp]
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = hitachi_vsp
san_ip = 1.2.3.4
san_login = hitachiuser
san_password = password
hitachi_storage_id = 123456789012
hitachi_pools = pool0, pool1

Configuration options

This table shows configuration options for Hitachi block storage driver.

Description of Hitachi block storage driver configuration options

Configuration option = Default value

Description

hitachi_async_copy_check_interval = 10

(Integer(min=1, max=600)) Interval in seconds to check asynchronous copying status during a copy pair deletion or data restoration.

hitachi_compute_target_ports = []

(List of String) IDs of the storage ports used to attach volumes to compute nodes. To specify multiple ports, separate them with commas (e.g. CL1-A,CL2-A).

hitachi_copy_check_interval = 3

(Integer(min=1, max=600)) Interval in seconds to check copying status during a volume copy.

hitachi_copy_speed = 3

(Integer(min=1, max=15)) Copy speed of storage system. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed.

hitachi_discard_zero_page = True

(Boolean) Enable or disable zero page reclamation in a DP-VOL.

hitachi_exec_retry_interval = 5

(Integer) Retry interval in seconds for REST API execution.

hitachi_extend_timeout = 600

(Integer) Maximum wait time in seconds for a volume extension to complete.

hitachi_group_create = False

(Boolean) If True, the driver will create host groups or iSCSI targets on storage ports as needed.

hitachi_group_delete = False

(Boolean) If True, the driver will delete host groups or iSCSI targets on storage ports as needed.

hitachi_group_name_format = None

(String) Format of host groups, iSCSI targets, and server objects.

hitachi_host_mode_options = []

(List of Integer) Host mode option for host group or iSCSI target.

hitachi_ldev_range = None

(String) Range of the LDEV numbers in the format of ‘xxxx-yyyy’ that can be used by the driver. Values can be in decimal format (e.g. 1000) or in colon-separated hexadecimal format (e.g. 00:03:E8).

hitachi_lock_timeout = 7200

(Integer) Maximum wait time in seconds for storage login or unlock operations to complete.

hitachi_lun_retry_interval = 1

(Integer) Retry interval in seconds for REST API calls that add a LUN mapping to the server.

hitachi_lun_timeout = 50

(Integer) Maximum wait time in seconds for adding a LUN mapping to the server.

hitachi_mirror_auth_password = None

(String) iSCSI authentication password

hitachi_mirror_auth_user = None

(String) iSCSI authentication username

hitachi_mirror_compute_target_ports = []

(List of String) Target port names of compute node for host group or iSCSI target

hitachi_mirror_ldev_range = None

(String) Logical device range of secondary storage system

hitachi_mirror_pair_target_number = 0

(Integer(min=0, max=99)) Number used in the pair target name of the host group or iSCSI target

hitachi_mirror_pool = None

(String) Pool of secondary storage system

hitachi_mirror_rest_api_ip = None

(String) IP address of REST API server

hitachi_mirror_rest_api_port = 443

(Port(min=0, max=65535)) Port number of REST API server

hitachi_mirror_rest_pair_target_ports = []

(List of String) Target port names for pair of the host group or iSCSI target

hitachi_mirror_rest_password = None

(String) Password of secondary storage system for REST API

hitachi_mirror_rest_user = None

(String) Username of secondary storage system for REST API

hitachi_mirror_snap_pool = None

(String) Thin pool of secondary storage system

hitachi_mirror_ssl_cert_path = None

(String) Can be used to specify a non-default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend endpoint

hitachi_mirror_ssl_cert_verify = False

(Boolean) If set to True the http client will validate the SSL certificate of the backend endpoint.

hitachi_mirror_storage_id = None

(String) ID of secondary storage system

hitachi_mirror_target_ports = []

(List of String) Target port names for host group or iSCSI target

hitachi_mirror_use_chap_auth = False

(Boolean) Whether or not to use iSCSI authentication

hitachi_pair_target_number = 0

(Integer(min=0, max=99)) Number used in the pair target name of the host group or iSCSI target

hitachi_path_group_id = 0

(Integer(min=0, max=255)) Path group ID assigned to the remote connection for remote replication

hitachi_pools = []

(List of String) Pool number[s] or pool name[s] of the DP pool.

hitachi_port_scheduler = False

(Boolean) Enable port scheduling of WWNs to the configured ports so that WWNs are registered to ports in a round-robin fashion.

hitachi_quorum_disk_id = None

(Integer(min=0, max=31)) ID of the Quorum disk used for global-active device

hitachi_replication_copy_speed = 3

(Integer(min=1, max=15)) Remote copy speed of storage system. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed.

hitachi_replication_number = 0

(Integer(min=0, max=255)) Instance number for REST API

hitachi_replication_status_check_long_interval = 600

(Integer) Interval at which remote replication pair status is checked. This parameter is applied if the status has not changed to the expected status after the time indicated by this parameter has elapsed.

hitachi_replication_status_check_short_interval = 5

(Integer) Initial interval at which remote replication pair status is checked

hitachi_replication_status_check_timeout = 86400

(Integer) Maximum wait time before the remote replication pair status changes to the expected status

hitachi_rest_another_ldev_mapped_retry_timeout = 600

(Integer) Retry time in seconds when a new LUN allocation request fails.

hitachi_rest_connect_timeout = 30

(Integer) Maximum wait time in seconds for connecting to REST API session.

hitachi_rest_disable_io_wait = True

(Boolean) If True, volumes can be detached immediately. If set to False, the storage system may take a few minutes to detach a volume after I/O.

hitachi_rest_get_api_response_timeout = 1800

(Integer) Maximum wait time in seconds for a response to synchronous methods, for example GET.

hitachi_rest_job_api_response_timeout = 1800

(Integer) Maximum wait time in seconds for a response to asynchronous methods of the REST API, for example PUT and DELETE.

hitachi_rest_keep_session_loop_interval = 180

(Integer) Loop interval in seconds for keeping the REST API session alive.

hitachi_rest_pair_target_ports = []

(List of String) Target port names for pair of the host group or iSCSI target

hitachi_rest_server_busy_timeout = 7200

(Integer) Maximum wait time in seconds when REST API returns busy.

hitachi_rest_tcp_keepalive = True

(Boolean) Enables or disables use of TCP keepalive for REST API connections

hitachi_rest_tcp_keepcnt = 4

(Integer) Maximum number of TCP keepalive probes to send before dropping the connection.

hitachi_rest_tcp_keepidle = 60

(Integer) Wait time in seconds before sending the first TCP keepalive packet.

hitachi_rest_tcp_keepintvl = 15

(Integer) Interval in seconds between TCP keepalive packets.

hitachi_rest_timeout = 30

(Integer) Maximum wait time in seconds for each REST API request.

hitachi_restore_timeout = 86400

(Integer) Maximum wait time in seconds for the restore operation to complete.

hitachi_set_mirror_reserve_attribute = True

(Boolean) Whether or not to set the mirror reserve attribute

hitachi_snap_pool = None

(String) Pool number or pool name of the snapshot pool.

hitachi_state_transition_timeout = 900

(Integer) Maximum wait time in seconds for a volume transition to complete.

hitachi_storage_id = None

(String) Product number of the storage system.

hitachi_target_ports = []

(List of String) IDs of the storage ports used to attach volumes to the controller node. To specify multiple ports, separate them with commas (e.g. CL1-A,CL2-A).

hitachi_zoning_request = False

(Boolean) If True, the driver will configure FC zoning between the server and the storage system provided that FC zoning manager is enabled.

Required options

  • san_ip

    IP address of SAN controller

  • san_login

    Username for SAN controller

  • san_password

    Password for SAN controller

  • hitachi_storage_id

    Product number of the storage system.

  • hitachi_pools

    Pool number(s) or pool name(s) of the DP pool.
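Putting the required options together, the following is a minimal cinder.conf sketch for a single Fibre Channel backend. The enabled_backends line is the standard Cinder option for activating a backend section; all addresses, credentials, and IDs are placeholder values:

[DEFAULT]
enabled_backends = hitachi_vsp

[hitachi_vsp]
volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
volume_backend_name = hitachi_vsp
san_ip = 1.2.3.4
san_login = hitachiuser
san_password = password
hitachi_storage_id = 123456789012
hitachi_pools = pool0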

Set up and operation for additional features

Set up Global-Active Device and volume operation

Beginning with the 2023.1 release, if you use Global-Active Device (GAD), you can make the data of individual volumes redundant between two storage systems, thereby improving the availability of the storage systems. For details, see the Global-Active Device User Guide.

Note

  • You cannot apply Global-Active Device configuration and remote replication configuration to the same backend.

  • You cannot use Asymmetric Logical Unit Access (ALUA).

Storage firmware versions for GAD

If you are using a VSP F350, F370, F700, F900 storage system or a VSP G350, G370, G700, G900 storage system in a Global-Active Device configuration, make sure the firmware version is 88-03-21 or later.

Creating a Global-Active Device environment

Before using Global-Active Device, create the prerequisite environment, such as connecting remote paths, configuring a quorum disk, and creating a virtual storage machine (VSM), using other storage system management tools. Hitachi block storage driver supports the following configurations:

  • Configuration where the P-VOL is not registered to a VSM

  • Configuration where the P-VOL is registered to a VSM

For details, see Workflow for creating a GAD environment in the Global-Active Device User Guide.

Hitachi block storage driver automatically performs the following procedures, which are described in the section Workflow for creating a GAD environment:

  • The following steps of Setting up the secondary storage system:

    • Setting the GAD reserve attribute on the S-VOL

    • Creating a host group (Only if the configuration option hitachi_group_create is True)

    • Creating the S-VOL

    • Adding an LU path to the S-VOL

  • Updating the CCI configuration definition files

  • Creating the GAD pair

  • Adding an alternate path to the S-VOL

You must register the information about the secondary storage system to the REST API server in the primary site and register the information about the primary storage system to the REST API server in the secondary site. For details about how to register the information, see the Hitachi Command Suite Configuration Manager REST API Reference Guide or the Hitachi Ops Center API Configuration Manager REST API Reference Guide.

Note

  • The users specified for both configuration options san_login and hitachi_mirror_rest_user must have the following roles:

    • Storage Administrator (View & Modify)

    • Storage Administrator (Remote Copy)

  • Reserve unused host group IDs (iSCSI target IDs) for the resource groups related to the VSM. Reserve the IDs in ascending order. The number of IDs you need to reserve is 1 plus the sum of the number of controller nodes and the number of compute nodes. For details on how to reserve a host group ID (iSCSI target ID), see the Global-Active Device User Guide.

  • The LUNs of the host groups (iSCSI targets) of the specified ports on the primary storage system must match the LUNs of the host groups (iSCSI targets) of the specified ports on the secondary storage system. If they do not match, match the LUNs for the primary storage system with those for the secondary storage system.

  • When you use the same storage system both as the secondary storage system of a Global-Active Device configuration and as a backend storage system for general use, you cannot share ports between the different backend storage systems. Specify different ports in the configuration options hitachi_target_ports, hitachi_compute_target_ports, or hitachi_rest_pair_target_ports for each backend storage system.
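The following is a minimal sketch of a backend section for a Global-Active Device configuration, combining the primary-storage options shown earlier with the hitachi_mirror_* options from the configuration table above. All addresses, credentials, IDs, and port names are placeholder values:

[hitachi_vsp]
volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
volume_backend_name = hitachi_vsp
san_ip = 1.2.3.4
san_login = hitachiuser
san_password = password
hitachi_storage_id = 123456789012
hitachi_pools = pool0
hitachi_target_ports = CL1-A
hitachi_rest_pair_target_ports = CL1-A
hitachi_quorum_disk_id = 0
hitachi_mirror_storage_id = 210987654321
hitachi_mirror_pool = pool0
hitachi_mirror_rest_api_ip = 5.6.7.8
hitachi_mirror_rest_user = hitachiuser2
hitachi_mirror_rest_password = password2
hitachi_mirror_target_ports = CL2-A
hitachi_mirror_rest_pair_target_ports = CL2-A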

Create volume in a Global-Active Device configuration

When you create a Cinder volume in a Global-Active Device configuration, each Cinder volume is mapped to a Global-Active Device pair.

In order for you to create volumes with the Global-Active Device attribute specified, you must first create a volume type that contains the hbsd:topology=active_active_mirror_volume extra-spec. You can do this as follows:

$ openstack volume type create <volume type name>
$ openstack volume type set --property \
hbsd:topology=active_active_mirror_volume <volume type name>

You can then create GAD volumes as follows:

$ openstack volume create --type <volume type name> --size <size> \
<volume name>

Note

  • In this case, the following restrictions apply:

    • You cannot create a volume for which the deduplication and compression function is enabled; volume creation will fail with the error MSGID0753-E: Failed to create a volume in a GAD environment because deduplication is enabled for the volume type.

  • Note the following if the configuration is “P-VOL registered to a VSM”:

    • Do not create volumes whose volume types lack the hbsd:topology=active_active_mirror_volume extra-spec.

    • While setting up the environment, use storage management software to set a virtual LDEV ID for every LDEV in the range specified by the configuration option hitachi_ldev_range on the primary storage system, because virtual LDEV IDs are necessary for GAD pair creation.

Unavailable Cinder functions

The following Cinder functions are unavailable in a Global-Active Device configuration:

  • Migrate a volume (storage assisted)

  • Manage Volume

  • Unmanage Volume

Note

In addition, if the configuration is “P-VOL registered to a VSM”, the backup creation command of the Backup Volume functions cannot be run with the --snapshot option or the --force option specified.

Maximum number of copy pairs and consistency groups

The maximum number of Thin Image pairs that can be created for each LDEV assigned to a volume (or snapshot) is restricted on a per-storage-system basis. If the number of pairs exceeds the maximum, copying cannot proceed normally.

For information about the maximum number of copy pairs and consistency groups that can be created, see the Hitachi Thin Image User Guide.

Configuring Quality of Service (QoS) settings

By configuring Quality of Service (QoS) settings, you can restrict the I/O processing of each volume, thereby maintaining the required performance and quality levels.

In Hitachi block storage driver, you can configure the following settings for each volume. However, you cannot configure these settings for journal volumes.

  • Throughput (IOPS, amount of data transferred in MB/s)

    You can set the upper and lower limits on throughput. If an upper limit is exceeded, I/O is suppressed. If a lower limit is not met, I/O is adjusted so that the lower limit is met.

  • Priority level of the I/O processing

    You can set priority levels for the I/O processing of multiple volumes. I/O is adjusted for faster I/O response, starting with high-priority volumes.

System requirements for QoS

Storage firmware versions

Storage model                   Firmware version
==============================  =================
VSP F350, F370, F700, F900      88-06-01 or later
VSP G350, G370, G700, G900
VSP 5100, 5500, 5100H, 5500H    90-04-01 or later

Storage management software

Configuration Manager REST API version 10.2.0-00 or later is required.

Configuring QoS settings and creating volumes

Create QoS specs that define QoS settings, and then associate the QoS specs with a volume type. You can configure QoS settings for a volume by running the following functions with this volume type specified.

  • Create Volume

  • Create Snapshot

  • Create Volume from Snapshot

  • Create Volume from Volume (Clone)

  • Consistency Group

  • Generic volume group

The following example describes the procedure for configuring QoS settings when creating a new volume using the Create Volume function.

Before you begin, check the following information.

  • QoS settings

    • Upper or lower limit on throughput (IOPS, amount of data transferred in MB/s)

    • Priority level of I/O processing

  • ID and name of the volume type

    A volume type is needed in order to associate it with the QoS specs. If no volume types exist, create one in advance.

Procedure

  1. Create the QoS specs.

    a. If you use the cinder command:

$ cinder qos-create <name-of-the-QoS-specs> [consumer=back-end] \
<name-of-a-QoS-specs-property>=<value-of-the-QoS-specs-property> \
[<name-of-a-QoS-specs-property>=<value-of-the-QoS-specs-property> ...]

    b. If you use the openstack command:

$ openstack volume qos create [--consumer back-end] \
--property \
<name-of-a-QoS-specs-property>=<value-of-the-QoS-specs-property> \
[--property \
<name-of-a-QoS-specs-property>=<value-of-the-QoS-specs-property> ...] \
<name-of-the-QoS-specs>

Specify a name for <name-of-the-QoS-specs>.

Specify <name-of-a-QoS-specs-property> and <value-of-the-QoS-specs-property> as follows. For details on the range of values you can specify, see the overview of QoS operations in the Performance Guide.

QoS specs property    Description
====================  ===========================================================
upperIops             The upper limit on IOPS.
upperTransferRate     The upper limit on the amount of data transferred in MB/s.
lowerIops             The lower limit on IOPS.
lowerTransferRate     The lower limit on the amount of data transferred in MB/s.
responsePriority      The priority level of the I/O processing.

The following is an example of running the command.

    a. If you use the cinder command:

$ cinder qos-create test_qos consumer=back-end upperIops=2000

    b. If you use the openstack command:

$ openstack volume qos create --consumer back-end \
--property upperIops=2000 test_qos

When you run this command, the ID of the created QoS specs is also output. Record this ID, because you will need it in a later step.
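If you need to look up the ID again later, the standard QoS specs listing command displays it:

$ openstack volume qos list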

  2. Associate the QoS specs with a volume type.

    a. If you use the cinder command:

$ cinder qos-associate <ID-of-the-QoS-specs> <ID-of-the-volume-type>

    b. If you use the openstack command:

$ openstack volume qos associate <name-of-the-QoS-specs> \
<name-of-the-volume-type>

  3. Specify the volume type that is associated with the QoS specs, and then create a volume.

    a. If you use the cinder command:

$ cinder create --volume-type <name-of-the-volume-type> <size>

    b. If you use the openstack command:

$ openstack volume create --size <size> --type <name-of-the-volume-type> \
<name>

Changing QoS settings

To change the QoS settings, use the Retype function to change the volume type to one that has different QoS specs.

You can also change a volume type for which no QoS specs are set to a volume type for which QoS specs are set, and vice versa.
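For example, you can retype a volume with the standard commands below; the volume and volume type names are placeholders.

    a. If you use the cinder command:

$ cinder retype <name-of-the-volume> <name-of-the-new-volume-type>

    b. If you use the openstack command:

$ openstack volume set --type <name-of-the-new-volume-type> \
<name-of-the-volume>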

Clearing QoS settings

To clear the QoS settings, clear the association between the volume type and QoS specs, and then delete the QoS specs.
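For example (placeholder names; these are standard Cinder commands):

    a. If you use the cinder command:

$ cinder qos-disassociate <ID-of-the-QoS-specs> <ID-of-the-volume-type>
$ cinder qos-delete <ID-of-the-QoS-specs>

    b. If you use the openstack command:

$ openstack volume qos disassociate --volume-type <name-of-the-volume-type> \
<name-of-the-QoS-specs>
$ openstack volume qos delete <name-of-the-QoS-specs>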

Data deduplication and compression

Use deduplication and compression to improve storage utilization through data reduction.

For details, see Capacity saving function: data deduplication and compression in the Provisioning Guide.

Enabling deduplication and compression

To use deduplication and compression, your storage administrator must first enable it for the DP pool.

For details about how to enable this setting, see the description of pool management in the Hitachi Command Suite Configuration Manager REST API Reference Guide or the Hitachi Ops Center API Configuration Manager REST API Reference Guide.

Note

  • Do not set a subscription limit (virtualVolumeCapacityRate) for the DP pool.

Creating a volume with deduplication and compression enabled

To create a volume with the deduplication and compression setting enabled, enable deduplication and compression for the relevant volume type.

Procedure

1. To enable the deduplication and compression setting, specify the value deduplication_compression for hbsd:capacity_saving in the extra specs for the volume type (see the example after this procedure).

2. Create a volume of the volume type from the previous step to create a volume with the deduplication and compression function enabled.
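For example, step 1 can be performed as follows; the volume type name is a placeholder:

$ openstack volume type set --property \
hbsd:capacity_saving=deduplication_compression <volume type name>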

Deleting a volume with deduplication and compression enabled

The cinder delete command finishes when the storage system starts the LDEV deletion process. The LDEV cannot be reused until the LDEV deletion process is completed on the storage system.

Port scheduler

You can use the port scheduler function to reduce the number of WWNs, which are a limited storage system resource.

In Hitachi block storage driver, if host groups are created automatically, host groups are created for each compute node or VM (in an environment that has a WWN for each VM). If you do not use the port scheduler function, host groups are created and the same WWNs are registered in all of the ports that are specified for the configuration option hitachi_compute_target_ports or for the configuration option hitachi_target_ports. For Hitachi storage devices, a maximum of 255 host groups and 255 WWNs can be registered for one port. When volumes are attached, the upper limit on the number of WWNs that can be registered might be unexpectedly exceeded.

For the port scheduler function, when the cinder-volume service starts, the Fibre Channel Zone Manager obtains the WWNs of active compute nodes and of active VMs. When volumes are attached, the WWNs are registered in a round-robin fashion, in the same order as the ports specified for the configuration option hitachi_compute_target_ports or the configuration option hitachi_target_ports.

If you want to use the port scheduler function, set the configuration option hitachi_port_scheduler to True, as in the sketch below.
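A minimal sketch of the relevant backend options; hitachi_group_create = True is included because the port scheduler applies when the driver creates host groups automatically, and the port names are placeholder values:

[hitachi_vsp]
hitachi_group_create = True
hitachi_port_scheduler = True
hitachi_compute_target_ports = CL1-A,CL2-A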

Note

  • Only Fibre Channel is supported. For details about ports, see Fibre Channel connectivity.

  • If a host group already exists in any of the ports specified for the configuration option hitachi_compute_target_ports or for the configuration option hitachi_target_ports, no new host group will be created on those ports.

  • Restarting the cinder-volume service re-initializes the round-robin scheduling determined by the configuration option hitachi_compute_target_ports or the configuration option hitachi_target_ports.

  • The port scheduler function divides up the active WWNs from each fabric controller and registers them to each port. For this reason, the number of WWNs registered may vary from port to port.

Port assignment using extra specs

Defining particular ports in the Hitachi-supported extra spec hbsd:target_ports determines which of the ports specified by the configuration option hitachi_target_ports or the configuration option hitachi_compute_target_ports are used to create LUN paths during volume attach operations for each volume type.
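For example, to limit a volume type to two ports (the volume type name is a placeholder):

$ openstack volume type set --property \
hbsd:target_ports=CL1-A,CL2-A <volume type name>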

Note

  • Use a comma to separate multiple ports.

  • In a Global-Active Device configuration, use the extra spec hbsd:target_ports for the primary storage system and the extra spec hbsd:remote_target_ports for the secondary storage system.

  • In a Global-Active Device configuration, the ports specified for the extra spec hbsd:target_ports must be specified for both the configuration options for the primary storage system (hitachi_target_ports or hitachi_compute_target_ports) and for the secondary storage system (hitachi_mirror_target_ports or hitachi_mirror_compute_target_ports).