CephFS Native driver¶
The CephFS Native driver enables the Shared File Systems service to export shared file systems to guests using the Ceph network protocol. Guests require a Ceph client in order to mount the file system.
Access is controlled via Ceph’s cephx authentication system. When a user requests share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret key, if they do not already exist, and authorizes the ID to access the share. The client can then mount the share using the ID and the secret key.
To learn more about configuring Ceph clients to access the shares created using this driver, please see the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel client rather than the FUSE client, the share size limits set in the Shared File Systems service may not be obeyed.
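As an illustration of the workflow above, the following sketch grants a cephx ID access to a share and mounts it from a guest with ceph-fuse. The share name demo-share, the ID alice, and all paths are placeholders rather than defaults; the real export path of a share is reported by the Shared File Systems service (for example via manila share-export-location-list).

# Grant the cephx ID "alice" access to the share (names here are examples only).
manila access-allow demo-share cephx alice

# Export alice's keyring from the cluster and copy it to the guest
# (requires admin access to the Ceph cluster).
ceph auth get client.alice -o ceph.client.alice.keyring

# On the guest, mount the share with ceph-fuse. The conf/keyring paths and the
# /volumes/... mountpoint are placeholders; use the export path of your share.
sudo ceph-fuse /mnt/demo-share \
    --id=alice \
    --conf=/etc/ceph/ceph.conf \
    --keyring=/etc/ceph/ceph.client.alice.keyring \
    --client-mountpoint=/volumes/_nogroup/<share-uuid>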
Requirements¶
Mitaka or later versions of manila.
Jewel or later versions of Ceph.
A Ceph cluster with a file system configured (http://docs.ceph.com/docs/master/cephfs/createfs/); a sketch of the required commands follows this list.
ceph-common package installed in the servers running the manila-share service.
Ceph client installed in the guest, preferably the FUSE-based client, ceph-fuse.
Network connectivity between your Ceph cluster’s public network and the servers running the manila-share service.
Network connectivity between your Ceph cluster’s public network and guests.
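If the cluster does not yet have a file system, the following is a minimal sketch of creating one along the lines of the createfs document linked above. The pool names and placement-group counts are illustrative only and should be sized for your cluster:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls    # verify the new file system is listed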
Important
A manila share backed onto CephFS is only as good as the underlying file system. Take care when configuring your Ceph cluster, and consult the latest guidance on the use of CephFS in the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/).
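Before configuring the back end, it is worth confirming that the cluster is healthy and that CephFS is serviceable; the usual status commands are enough for a quick check:

ceph -s         # overall cluster health
ceph fs ls      # the file system the driver will use should appear here
ceph mds stat   # CephFS needs at least one active MDS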
Configure CephFS back end in manila.conf¶
Add CephFS to enabled_share_protocols (enforced at the Shared File Systems service’s API layer). In this example we leave NFS and CIFS enabled, although you can remove these if you only use CephFS:

enabled_share_protocols = NFS,CIFS,CEPHFS
Refer to the following table for the list of all the cephfs_native driver-specific configuration options.

Configuration option = Default value
    Description

[DEFAULT]

cephfs_auth_id = manila
    (String) The name of the ceph auth identity to use.

cephfs_cluster_name = None
    (String) The name of the cluster in use, if it is not the default (‘ceph’).

cephfs_conf_path =
    (String) Fully qualified path to the ceph.conf file.
Create a section to define a CephFS back end:
[cephfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFS1
share_driver = manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
Also set driver_handles_share_servers to False, as the driver does not manage the lifecycle of share servers.

Edit enabled_share_backends to point to the driver’s back-end section using the section name. In this example we are also including another back end (generic1); you would include whatever other back ends you have configured:

enabled_share_backends = generic1,cephfs1
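The cephfs_auth_id option above names a Ceph auth identity (client.manila in this example) that the manila-share service uses to talk to the cluster, and cephfs_conf_path must point to a ceph.conf readable by that service. A minimal sketch of creating the identity is shown below; the exact capabilities required differ between Ceph and manila releases, so treat these as an assumption and check the driver documentation for your versions:

ceph auth get-or-create client.manila \
    mon 'allow r' \
    mds 'allow *' \
    osd 'allow rw' \
    -o /etc/ceph/ceph.client.manila.keyring

After editing manila.conf, restart the manila-share service so that the new back end is loaded.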
Known restrictions¶
Consider the driver as a building block for supporting multi-tenant workloads in the future. However, it can be used in private cloud deployments.
The guests have direct access to Ceph’s public network.
Snapshots are read-only. A user can read a snapshot’s contents from the .snap/{manila-snapshot-id}_{unknown-id} folder within the mounted share.

To restrict share sizes, CephFS uses quotas that are enforced on the client side. The CephFS clients are relied on to respect quotas.
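For reference, the share size limit is applied as a CephFS quota (the ceph.quota.max_bytes attribute) on the share's directory; a cooperating client can inspect it with getfattr (the mount path below is a placeholder):

getfattr -n ceph.quota.max_bytes /mnt/demo-share

As noted earlier, the kernel client does not enforce this quota, so only ceph-fuse clients are expected to respect the configured share size.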
Security¶
Each share’s data is mapped to a distinct Ceph RADOS namespace. A guest is restricted to access only that particular RADOS namespace.
An additional level of resource isolation can be provided by mapping a share’s contents to a separate RADOS pool. This layout would be preferred only for cloud deployments with a limited number of shares needing strong resource separation. You can do this by setting a share type specification, cephfs:data_isolated, for the share type used by the cephfs driver:

manila type-key cephfstype set cephfs:data_isolated=True
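A share created with that type then gets its own data pool. For example, assuming the share type is named cephfstype as above (the share name and 1 GB size are placeholders):

manila create --share-type cephfstype --name isolated-share CEPHFS 1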
Untrusted manila guests pose security risks to the Ceph storage cluster as they would have direct access to the cluster’s public network.