Jewel Series Release Notes¶
2.4.2-9¶
Bug Fixes¶
Added the rgw_keystone_implicit_tenants parameter to ceph::rgw::keystone. Setting it to true creates a new tenant per user.
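A minimal sketch of enabling this, assuming ceph::rgw::keystone is declared per gateway instance; the instance title and the Keystone URL are illustrative placeholders, only rgw_keystone_implicit_tenants comes from this note:

```puppet
# Placeholder title and URL -- adjust to your deployment.
ceph::rgw::keystone { 'radosgw.gateway':
  rgw_keystone_url              => 'http://keystone.example.com:5000',
  rgw_keystone_implicit_tenants => true,
}
```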
2.4.1¶
New Features¶
Introduced the ability to set up ceph-mgr instances, which are required in the latest stable release of Ceph. This can be done using the ceph::mgr define or the ceph::profile::mgr profile.
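A usage sketch, under the assumption that the ceph::mgr define takes the manager instance ID as its title (it may require further parameters, such as an authentication key) and that the profile reads its settings from Hiera:

```puppet
# Option 1: declare a manager instance directly (title = mgr instance ID).
ceph::mgr { 'mon0': }

# Option 2: use the profile class, typically driven by Hiera data.
include ceph::profile::mgr
```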
2.4.0¶
New Features¶
We can now deploy CentOS SIG repos from an external mirror by re-using the $ceph_mirror parameter in ceph::repo.
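Pointing ceph::repo at an external mirror might look like the following sketch; the mirror URL is a placeholder:

```puppet
# Deploy the CentOS SIG repos from a local mirror instead of the default.
# The URL below is a placeholder for your own mirror.
class { 'ceph::repo':
  ceph_mirror => 'http://mirror.example.com/centos-ceph-jewel/',
}
```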
Bug Fixes¶
Bug 1665697 non-existent block device should make deploy fail, not create directory whose name starts with /dev
Bug 1687114 puppet-ceph does not configure devices like /dev/nvme0n1 or HP Smart Array controllers (/dev/cciss/c0d0) as OSDs (only as journals)
2.3.0¶
Prelude¶
Improves support for the MDS service and adds a profile class for it.
New Features¶
On OSD nodes you can run out of PIDs during a recovery. Therefore it is recommended that you raise pid_max. This change makes that possible with an enablement flag and the ability to set the value.
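This note does not name the enablement flag or the value parameter, so the names below are hypothetical stand-ins; check the module's documentation for the actual parameters introduced by this change:

```puppet
# Hypothetical parameter names -- only the pid_max concept comes from
# this release note, not these identifiers or the class that holds them.
class { 'ceph::profile::params':
  manage_pid_max => true,
  pid_max        => 4194303,
}
```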
The MDS class is extended to allow for binding address and instance ID configuration. It now also ensures that the needed packages for the MDS daemon are installed and that the service is manageable by Puppet.
A new MDS profile class is added which, in addition to deploying the MDS service, will create a new keyring for it, allowing the MDS to authenticate and access the OSD pools.
Adds support for the rbd-mirror service, including package installation, service enable and service start. Does not include support for generating the client key, as ceph::key or other processes create this key.
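Deploying the MDS via the new profile might look like the following sketch. Per the notes below, the keyring is only created when a key is supplied through ceph::profile::params::mds_key; the key material shown is a placeholder:

```puppet
# Hiera-style sketch (the key value is a placeholder, not real key material):
#   ceph::profile::params::mds_key: 'AQD...=='
include ceph::profile::mds
```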
Deprecation Notes¶
The ceph::rgw::syslog parameter is unused and will be removed.
Other Notes¶
The package installed by default for the MDS service can be configured via ceph::params::pkg_mds
The keyring for the MDS service is only created if a key is given; the key to use can be configured via ceph::profile::params::mds_key
2.2.0¶
New Features¶
The ceph cluster FSID is explicitly added as a cluster option (--cluster-uuid) to ceph-disk prepare per OSD
An additional check is done prior to preparing an OSD to verify the OSD is not already prepared with a different FSID, which is a symptom of trying to add an OSD from a different ceph cluster
Prior to this change, a deploy might report success even if all of the OSDs failed to activate. The logs will now indicate that OSD activation failed because a different FSID was found, so that the user may then choose to zap away the old deploy
Bug Fixes¶
Bug 1604728 Puppet should exit with error if disk activate fails
2.1.0¶
New Features¶
Added options to configure some OSD parameters directly (instead of via the generic ceph::conf); this behavior is disabled by default. The parameters are osd/osd_max_backfills, osd_max_scrubs, osd/osd_recovery_max_active, osd_recovery_max_single_start, osd/osd_recovery_op_priority, osd/osd_op_threads
Updates ceph::rgw::keystone to integrate with Keystone v3. Adds new parameters rgw_keystone_admin_domain, rgw_keystone_admin_project, rgw_keystone_admin_user and rgw_keystone_admin_password. Extends rgw_keystone_version to accept ‘v3’ as a valid option
Use mon_enable and rgw_enable to manage services on boot.
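A sketch of the new Keystone v3 parameters, again assuming ceph::rgw::keystone is declared per gateway instance; the title and all credential values are placeholders:

```puppet
# Placeholder title and credentials -- adjust to your deployment.
ceph::rgw::keystone { 'radosgw.gateway':
  rgw_keystone_version        => 'v3',
  rgw_keystone_admin_domain   => 'default',
  rgw_keystone_admin_project  => 'services',
  rgw_keystone_admin_user     => 'radosgw',
  rgw_keystone_admin_password => 'secret',
}
```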
Known Issues¶
The OSD define’s ensure statement has been updated to require an explicit absent for the OSD to be removed.
The MON define’s ensure statement has been updated to require an explicit absent for the MON to be removed.
If ensure is set to any value other than present or absent, an error is generated for both OSD and MON resources.
At this time radosgw uses PKI to verify Keystone revocation lists. ‘keystone::enable_pki_setup’ must be set to true to provide the needed Keystone support.
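On the Keystone side this amounts to setting the flag named above; the rest of the class declaration is elided here, since it depends on your deployment:

```puppet
# Required for radosgw to verify Keystone revocation lists via PKI.
class { 'keystone':
  enable_pki_setup => true,
  # ... other keystone parameters omitted ...
}
```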
Upgrade Notes¶
When upgrading, move any usage of ceph::conf to ceph::init for the following parameters: osd/osd_max_backfills, osd_max_scrubs, osd/osd_recovery_max_active, osd_recovery_max_single_start, osd/osd_recovery_op_priority, osd/osd_op_threads
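Assuming ceph::init corresponds to the top-level ceph class (init.pp), moving these settings off generic ceph::conf entries might look like this sketch; the values are illustrative, and note that per the notes above this behavior is disabled by default:

```puppet
# After the upgrade: pass the OSD tuning values directly to the ceph class
# instead of writing them through generic ceph::conf entries.
class { 'ceph':
  osd_max_backfills             => 1,
  osd_max_scrubs                => 1,
  osd_recovery_max_active       => 3,
  osd_recovery_max_single_start => 1,
  osd_recovery_op_priority      => 3,
  osd_op_threads                => 2,
}
```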
Deprecation Notes¶
The ceph::init parameter set_osd_params is introduced as already deprecated, to support backwards compatibility in this release; it will be removed in the next release. Please inspect your usage of ceph::conf for osd/osd_max_backfills, osd_max_scrubs, osd/osd_recovery_max_active, osd_recovery_max_single_start, osd/osd_recovery_op_priority and osd/osd_op_threads, as these will become actively defined from ceph::init
Other Notes¶
Removal of support for sysv service management for ceph-mon and ceph-radosgw, since master currently only supports Jewel and the current recommended Ceph platforms are either systemd or upstart based. http://docs.ceph.com/docs/jewel/start/os-recommendations/#platforms
2.0.0¶
Prelude¶
This is the first release that will support Ceph Jewel deployments.
New Features¶
Full support of Ceph Jewel on both Ubuntu Xenial and Red Hat systems.
Bug Fixes¶
Update the minimum requirement for stdlib to 4.10.0, which includes the service_provider fact that is required to ensure the services are managed correctly.