Supported NetApp Storage Combinations¶
About this task
StarlingX OpenStack supports running one or more NetApp storage access protocols on the same system at the same time. This section describes the supported combinations and how each OpenStack service uses storage in each scenario.
Supported Storage Combinations¶
The following table shows the supported NetApp storage combinations and how OpenStack services (Cinder, Glance, Nova) use storage.
| Infrastructure | Cinder Volumes | Cinder Backup | Glance | Nova |
|---|---|---|---|---|
| NFS only | NFS | NFS | Cinder or PVC | NFS or PVC |
| iSCSI only | iSCSI | iSCSI | Cinder or PVC | N/A |
| FC only | FC | FC | Cinder or PVC | N/A |
| NFS + iSCSI | iSCSI | NFS | Cinder or PVC | NFS or PVC |
| NFS + FC | FC | NFS | Cinder or PVC | NFS or PVC |
| iSCSI + FC | FC or iSCSI | FC or iSCSI | Cinder or PVC | N/A |
Key Design Considerations¶
- The `volume_storage_class_priority` list determines which backend is used as the default: the first available backend in the list is selected.
- iSCSI and FC both use the `ontap-san` Trident driver and are differentiated by the `sanType` parameter (`iscsi` or `fcp`). Running iSCSI and FC simultaneously is supported only when separate TridentBackendConfig objects are configured for each SAN type.
- For backups, NFS is generally preferred when available, because iSCSI and FC backups rely on the `PosixBackupDriver`, which enforces single-replica constraints due to the lack of RWX support on `ontap-san`.
- When deploying iSCSI and FC together, each TridentBackendConfig and StorageClass must explicitly define `sanType`. In single-SAN deployments, `sanType` is optional and StarlingX OpenStack falls back to matching on `backendType` only.
- Although Glance supports images stored in NetApp PVCs, this approach is not recommended. To avoid issues related to PVC resizing, backup, and restore, configure Glance to use Cinder as the image storage backend instead.
- Nova supports PVCs for storing ephemeral volumes; however, this approach is not recommended. Whenever possible, use inline NFS to avoid issues related to PVC resizing, backup, and restore. NetApp PVCs are required only in IPv6 environments, where inline NFS is not yet supported.
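To illustrate the dual-SAN point above, a pair of TridentBackendConfig objects, one per SAN type, might look like the following sketch. All names, the management LIF, the SVM, and the secret are placeholders; verify the exact CRD fields against your Trident release:

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: ontap-san-iscsi        # placeholder name
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-san
  sanType: iscsi               # differentiates this backend from the FC one
  managementLIF: 10.0.0.10     # placeholder
  svm: svm0                    # placeholder
  credentials:
    name: ontap-san-secret     # placeholder secret holding SVM credentials
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: ontap-san-fcp          # placeholder name
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-san
  sanType: fcp                 # FC variant of the same driver
  managementLIF: 10.0.0.10     # placeholder
  svm: svm0                    # placeholder
  credentials:
    name: ontap-san-secret
```

Each StorageClass would then select its backend with parameters that include the matching `sanType`, as described above.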
Multipath Configuration (iSCSI and FC)¶
Multipath configuration refers to the use of multiple physical storage paths between OpenStack compute nodes and NetApp SAN storage. Both iSCSI and FC support multiple paths for redundancy and performance, and these paths are managed through MPIO.
In StarlingX OpenStack, multipath configuration is integrated with the platform and is applied automatically when iSCSI or FC backends are enabled. During the system application-apply operation, StarlingX OpenStack configures the environment so that all OpenStack services access block storage through multipath devices rather than single paths.
How Multipath Works¶
- The host operating system runs `multipathd`, which detects all available iSCSI or FC paths and assembles them into a single logical multipath device.
- OpenStack containers mount the host `/run` directory, which exposes the `multipathd` and `iscsid` sockets, and `/dev`, which provides access to the multipath block devices.
- Path failover and ALUA prioritization are handled by the host, ensuring continued I/O if a path, interface, or switch fails.
- Cinder, Nova, and Glance (if it uses the Cinder backend) are configured to use multipath devices when attaching or accessing volumes.
When a volume is attached (for example, through `openstack server add volume`), the typical flow is:

1. Cinder maps the LUN on the NetApp backend.
2. The host kernel detects new SCSI paths from iSCSI sessions or FC targets.
3. The host's `multipathd` assembles the paths into a single multipath device with ALUA priorities.
4. Nova and Cinder access the multipath device via the `/dev` mount, while communication with `multipathd` and `iscsid` happens over sockets exposed through `/run`.
Automatic Storage and Multipath Configuration¶
When you enable iSCSI or Fibre Channel storage backends, StarlingX OpenStack automatically applies the required configuration to ensure block storage works across OpenStack services.
You do not need to manually configure multipath or make host-level changes for these services. StarlingX OpenStack applies these settings (shown below) during the application deployment process and manages them for you.
Cinder (Volume and Backup Pods)
For Cinder volume and backup services, StarlingX OpenStack:
- Enables iSCSI support in the service configuration, allowing the containers to access the host's storage services: `conf.enable_iscsi` is set to `true`.
- Mounts the host's `/run` directory into the containers so they can communicate with the host-managed multipath and iSCSI services. Setting `conf.enable_iscsi` to `true` triggers the OpenStack Helm charts to mount the host's `/run` directory into the containers, giving them access to the host's `multipathd` socket and `iscsid` service.
- Runs volume and backup pods using the host network to allow direct access to block devices and iSCSI or FC adapters: `useHostNetwork` is enabled for volume and backup pods, allowing direct access to host-level block devices and iSCSI/FC HBAs.
- Runs backup pods in privileged mode so they can perform block-level operations safely.
- Allows the use of `multipath` and `multipathd` commands through controlled rootwrap permissions.
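The Cinder settings above can be pictured as a values-style override for the Cinder chart. This is a sketch only; the exact key layout is an assumption and depends on the openstack-helm chart version in use:

```yaml
conf:
  enable_iscsi: true      # mounts the host's /run into the containers
pod:
  useHostNetwork:
    volume: true          # volume pods see host block devices and HBAs
    backup: true          # backup pods likewise run on the host network
```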
Nova (Compute Pods)
For Nova compute services, StarlingX OpenStack:
- Enables iSCSI support and mounts the host's `/run` directory into the nova-compute container: `conf.enable_iscsi` is set to `true`, mounting the host `/run` directory inside the nova-compute container.
- Exposes the host's multipath daemon socket so Nova can correctly detect and use multipath block devices.
- Configures Nova to attach volumes using multipath device paths rather than single-path devices: `nova.conf` `[libvirt] volume_use_multipath` is set to `true`, allowing Nova's libvirt driver to use multipath device paths when attaching volumes to VMs.
- Permits the required multipath commands through rootwrap filters.
This configuration ensures that virtual machines can attach and use block storage even if individual storage paths fail.
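For reference, the multipath setting described above corresponds to this nova.conf fragment. StarlingX OpenStack applies it automatically; it is shown here only for illustration:

```ini
[libvirt]
# Attach Cinder volumes via multipath device-mapper paths (dm-*)
# instead of individual single-path SCSI devices.
volume_use_multipath = true
```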
Glance (API Pods Using Cinder Store with iSCSI)
When Glance uses Cinder for image storage with iSCSI backends, StarlingX OpenStack:
- Enables host networking for the Glance API pod: `hostNetwork` and `privileged` mode are enabled on the Glance API pod to access the host's `iscsid` service for block-level image operations.
- Runs the pod in privileged mode so it can access the host's iSCSI services.
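As a values-style sketch of the Glance settings above (the key paths are assumptions and should be checked against the Glance chart in use):

```yaml
# Sketch only: exact key paths vary by chart version.
pod:
  useHostNetwork:
    api: true    # Glance API pod joins the host network
  # The API container also runs privileged so it can reach the
  # host's iscsid service for block-level image operations.
```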
Verify Multipath Configuration¶
To verify multipath is working, do the following:
For Nova, run the following commands from the nova-compute pod:
```shell
$ NOVA_POD=$(kubectl get pods -n openstack -l application=nova,component=compute -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n openstack $NOVA_POD -c nova-compute -- /usr/local/sbin/multipath -ll
```

Expected output for a healthy multipath device:
```
3600a0980383141765a2b59706a6a426d dm-14 NETAPP,LUN C-Mode
size=5.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 14:0:6:0 sdp 8:240 active ready running
| `- 14:0:0:0 sdd 8:48  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:7:0 sdv 65:80 active ready running
  `- 14:0:1:0 sdj 8:144 active ready running
```
Key indicators of a healthy configuration:
- `prio=50` indicates ALUA-optimized paths.
- `prio=10` indicates non-optimized (failover) paths.
- All paths show `active ready running`.
- `hwhandler='1 alua'` confirms the ALUA hardware handler is active.
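These indicators can also be checked mechanically. The following sketch embeds the sample output shown above for illustration; on a live system, set `OUTPUT=$(sudo multipath -ll)` instead of using the here-document:

```shell
# Sketch: check `multipath -ll` output for path health by comparing
# the total number of SCSI paths against the healthy ones.
OUTPUT=$(cat <<'EOF'
3600a0980383141765a2b59706a6a426d dm-14 NETAPP,LUN C-Mode
size=5.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 14:0:6:0 sdp 8:240 active ready running
| `- 14:0:0:0 sdd 8:48  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:7:0 sdv 65:80 active ready running
  `- 14:0:1:0 sdj 8:144 active ready running
EOF
)

# Total SCSI paths (lines carrying an H:C:T:L address) vs. healthy paths
total=$(printf '%s\n' "$OUTPUT" | grep -cE '[0-9]+:[0-9]+:[0-9]+:[0-9]+')
healthy=$(printf '%s\n' "$OUTPUT" | grep -c 'active ready running')

echo "paths=$total healthy=$healthy"
if [ "$total" -gt 0 ] && [ "$total" -eq "$healthy" ]; then
    echo "multipath: healthy"
else
    echo "multipath: degraded"
fi
```

For the sample output this prints `paths=4 healthy=4` followed by `multipath: healthy`; any path in a state other than `active ready running` reduces the healthy count and flags the device as degraded.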
For Cinder, run the following commands from the cinder-volume pod:
```shell
$ CINDER_POD=$(kubectl get pods -n openstack -l application=cinder,component=volume -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n openstack $CINDER_POD -c cinder-volume -- multipath -ll
```

To verify the host's multipath and iSCSI infrastructure:
```shell
# Check multipathd is running on the host
$ sudo systemctl status multipathd

# List all multipath devices on the host
$ sudo multipath -ll

# For iSCSI: verify active sessions
$ sudo iscsiadm -m session

# For FC: verify HBA ports are online
$ grep -H . /sys/class/fc_host/host*/{port_state,port_name,fabric_name}

# Check kernel-detected SCSI paths and ALUA states (from dmesg)
$ dmesg | grep -E "alua|NETAPP" | tail -20
```
Note
Nova volume attach operations typically complete in approximately 5-6 seconds. If operations exceed 30 seconds, check the following:

- Network connectivity to the Data LIF
- `multipathd` service health
- iSCSI session state (`iscsiadm -m session`)
- FC HBA port state (`grep -H . /sys/class/fc_host/host*/{port_state,port_name,fabric_name}`)