The GlusterFS driver uses GlusterFS, an open source distributed file system, as the storage backend for serving file shares to manila clients.
The following parameters in manila's configuration file need to be set:
The following configuration parameters are optional:
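As an illustration, a glusterfs backend section of manila.conf might look like the following minimal sketch. The backend name, the share_backend_name value and the volume address are placeholders; glusterfs_nfs_server_type is shown with its default value, and glusterfs_target belongs to the default directory mapped layout described under Layouts below.

[glusterfs1]
share_backend_name = glusterfs1
share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
glusterfs_target = gluster1.example.net:/glustervol
glusterfs_nfs_server_type = Gluster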
If the Ganesha NFS server is used (glusterfs_nfs_server_type = Ganesha), then by default the Ganesha server is supposed to run on the manila host and is managed by local commands. If it's deployed somewhere else, then it's managed via ssh, which can be configured by the following parameters:
If glusterfs_ganesha_server_password is not given, ssh access falls back to key-based authentication, using the key specified by glusterfs_path_to_private_key or, failing that, a key at one of the OpenSSH-style default key locations (~/.ssh/id_{r,d,ecd}sa).
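For example, a sketch of the ssh-related settings for a Ganesha server running on a separate host might look like this; the address, user name and key path are placeholders, and glusterfs_ganesha_server_ip and glusterfs_ganesha_server_username are the driver's options for addressing the remote Ganesha host.

glusterfs_nfs_server_type = Ganesha
glusterfs_ganesha_server_ip = 203.0.113.12
glusterfs_ganesha_server_username = root
glusterfs_path_to_private_key = /etc/manila/ssh/id_rsa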
For further (non-driver-specific) configuration of Ganesha, see Ganesha Library. It is also recommended to consult Ganesha Library: Known Issues.
Layouts also have their own set of parameters; see Layouts for details.
New in Liberty, multiple share layouts can be used with the glusterfs driver. A layout is a strategy of allocating storage from GlusterFS backends for shares. Currently there are two layouts implemented:
directory mapped layout (or directory layout, or dir layout for short): a share is backed by top-level subdirectories of a given GlusterFS volume.
Directory mapped layout is the default and backward compatible with Kilo. The following setting explicitly specifies its usage:

glusterfs_share_layout = layout_directory.GlusterfsDirectoryMappedLayout
Options:

glusterfs_target: address of the GlusterFS volume that backs the shares. If it's of the format <glustervolserver>:/<glustervolid>, then the volume is mounted and managed locally through the gluster utility. If it's of the format <username>@<glustervolserver>:/<glustervolid>, then we ssh to <username>@<glustervolserver> to execute gluster (<username> is supposed to have administrative privileges on <glustervolserver>).

glusterfs_mount_point_base: base path at which the GlusterFS volume is mounted on the manila host (optional; defaults to $state_path/mnt, where $state_path defaults to /var/lib/manila)
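To illustrate, a directory mapped layout backend using the ssh form of glusterfs_target might be configured roughly as follows; the user, host, volume and mount path values are placeholders.

glusterfs_share_layout = layout_directory.GlusterfsDirectoryMappedLayout
glusterfs_target = root@gluster1.example.net:/glustervol
glusterfs_mount_point_base = /var/lib/manila/mnt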
Limitations:
volume mapped layout (or volume layout, or vol layout for short): a share is backed by a whole GlusterFS volume.
Volume mapped layout is new in Liberty. It can be chosen by setting:

glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout
Options (required):
Volume mapped layout is implemented as a common backend of the glusterfs and glusterfs-native drivers; see the description of these options in GlusterFS Native driver: Manila driver configuration setting.
A special configuration choice is

glusterfs_nfs_server_type = Gluster
glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout

that is, Gluster NFS used to export whole volumes.
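A sketch of such a backend configuration might look like the following; the server address is a placeholder, and glusterfs_servers together with the volume pattern option are the volume layout options described in the GlusterFS Native driver documentation referenced above.

share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
glusterfs_nfs_server_type = Gluster
glusterfs_share_layout = layout_volume.GlusterfsVolumeMappedLayout
glusterfs_servers = root@gluster1.example.net
# glusterfs_volume_pattern = <pattern matching the volumes set aside for manila shares>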
All other GlusterFS backend configurations (including GlusterFS set up with glusterfs-native) require the nfs.export-volumes = off GlusterFS setting. Gluster NFS with volume layout requires nfs.export-volumes = on. nfs.export-volumes is a cluster-wide setting, so a given GlusterFS cluster cannot host a share backend with Gluster NFS + volume layout and other share backend configurations at the same time.
There is another caveat with nfs.export-volumes: setting it to on without enough care is a security risk, as the default access control for the volume exports is "allow all". For this reason, while the nfs.export-volumes = off setting is automatically applied by manila for all other share backend configurations, nfs.export-volumes = on is not set by manila in the case of a Gluster NFS with volume layout setup. It's left to the GlusterFS admin to make this setting in conjunction with the associated safeguards (that is, for those volumes of the cluster which are not used by manila, access restrictions have to be manually configured through the nfs.rpc-auth-{allow,reject} options).
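For instance, the GlusterFS admin might apply the setting and the associated safeguards with gluster volume set commands along these lines; the volume names are placeholders, manila_vol standing for the volume exported by the Gluster NFS + volume layout backend and other_vol for a volume of the same cluster not used by manila.

gluster volume set manila_vol nfs.export-volumes on
gluster volume set other_vol nfs.rpc-auth-reject '*'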