Rocky Series Release Notes¶
9.0.2¶
Bug Fixes¶
Fixed several bugs which prevented sahara-image-pack from generating Ambari-based Ubuntu images.
The command hdfs fs has been deprecated in favour of hdfs dfs. This fix allows the HBase service to be used.
Fixed an issue with NTP configuration where a preferred server provided by the user was added to the end of the file and the defaults were not removed. The preferred server is now added to the top of the file.
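As a rough illustration of the corrected behaviour, prepending the preferred server could look like the following sketch; the file path and server name are assumptions, not taken from the Sahara source:

    # Illustrative sketch only: put the user's preferred NTP server at the top
    # of the configuration file instead of appending it to the end.
    # The path and server name below are assumptions.
    NTP_CONF = "/etc/ntp.conf"
    preferred = "ntp.example.org"

    with open(NTP_CONF) as conf:
        existing = conf.read()

    with open(NTP_CONF, "w") as conf:
        conf.write("server {} iburst\n{}".format(preferred, existing))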
9.0.1¶
New Features¶
Adding the ability to change the default timeout parameter for Ambari agent package installation.
9.0.0¶
Prelude¶
Sahara APIv2 is reaching a point of maturity. Therefore, new deployments should include an unversioned endpoint in the service catalogue for the “data-processing” service, to allow more intuitive version discovery. Existing deployments should eventually switch to an unversioned endpoint as well, but only after enough time has passed that older clients are unlikely to still be in use.
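For illustration, a client could discover the available API versions from such an unversioned endpoint roughly as follows; the host and port are assumptions, and in practice the URL comes from the Keystone service catalogue:

    # Minimal sketch of version discovery against an unversioned
    # data-processing endpoint (host and port are assumptions).
    import requests

    UNVERSIONED_ENDPOINT = "http://sahara.example.org:8386/"

    resp = requests.get(UNVERSIONED_ENDPOINT)
    resp.raise_for_status()
    # The root document is expected to list the exposed API versions,
    # letting clients choose one without a hard-coded versioned URL.
    for version in resp.json().get("versions", []):
        print(version.get("id"), version.get("status"))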
With every new release of Sahara we update our plugin list: some new versions are added, some are removed, and others are marked as deprecated. For Rocky we are deprecating CDH 5.7.0, Spark 1.6.0 and 2.1, as well as Storm 1.0.1. We are also removing CDH 5.5.0, MapR 5.1.0, Spark 1.3.1 and Storm 0.9.2.
New Features¶
Adding the ability to create Ambari 2.6 images with sahara-image-pack.
Adding the ability to boot a Sahara cluster from volumes instead of images.
Adding support for CDH 5.13.0 in the CDH plugin.
The experimental APIv2 supports simultaneous creation of multiple clusters only through POST /v2/clusters (using the count parameter). The POST /v2/clusters/multiple endpoint has been removed.
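A minimal sketch of such a request is shown below; only the count parameter is taken from this note, while the endpoint host, the token handling, and the other body keys are illustrative assumptions:

    # Sketch of creating several clusters at once through POST /v2/clusters
    # with the "count" parameter. Apart from "count", the body keys and
    # values are illustrative assumptions.
    import json
    import requests

    SAHARA = "http://sahara.example.org:8386/v2"
    HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

    body = {
        "name": "my-cluster",
        "plugin_name": "vanilla",
        "plugin_version": "2.7.5",
        "cluster_template_id": "<cluster-template-uuid>",
        "default_image_id": "<image-uuid>",
        "count": 3,  # request three clusters in a single call
    }

    resp = requests.post(SAHARA + "/clusters", headers=HEADERS,
                         data=json.dumps(body))
    print(resp.status_code)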
Operators can now update the running configuration of Sahara processes by sending the parent process a “HUP” signal. Note: The configuration option must support mutation.
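A minimal sketch of triggering such a reload from Python is shown below; the pid-file path is an assumption and depends on how the deployment tool starts the services:

    # Send SIGHUP to the parent Sahara process so that mutable configuration
    # options are re-read. The pid-file location below is an assumption.
    import os
    import signal

    PID_FILE = "/var/run/sahara/sahara-api.pid"

    with open(PID_FILE) as f:
        pid = int(f.read().strip())

    # Only options that support mutation are updated; everything else still
    # requires a full service restart.
    os.kill(pid, signal.SIGHUP)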
The behaviour of force deletion of clusters (APIv2) has changed. Stack-abandon is no longer used. The response from the force-delete API call now includes the name of the stack that underlay the deleted cluster.
Implemented support for HDP 2.6 in the Ambari plugin.
A new keypair can now be used to access the running cluster when the cluster’s keypair has been deleted.
An EDP data source may reference a file stored in an S3-like object store.
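A hedged sketch of what such a data source definition might look like follows; apart from the s3:// URL form, the credential key names shown are assumptions for illustration:

    # Illustrative EDP data source definition pointing at an object in an
    # S3-like store. The credential key names are assumptions.
    data_source = {
        "name": "input-data",
        "type": "s3",
        "url": "s3://mybucket/input/part-00000",
        "credentials": {
            "accesskey": "<access-key>",
            "secretkey": "<secret-key>",
            "endpoint": "http://s3.example.org",
        },
    }
    # This dict would then be sent to the data source creation call of the API.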
Support for deploying Hadoop 2.7.5 with the vanilla plugin.
Known Issues¶
The step “upload httpclient to oozie/sharelib” has been removed from the Sahara code. Users should use the latest vanilla-2.8.2 image, which is built with SIE (Change-Id: I3a25ee8c282849911089adf6c3593b1bb50fd067).
Upgrade Notes¶
Adding Spark 2.3 to the list of supported plugins.
Adding new versions of Storm, 1.2.0 and 1.2.1. Both will exist under the same tag 1.2.
We are removing some plugin versions: CDH 5.5.0, MapR 5.1.0, Spark 1.3.1 and Storm 0.9.2.
Deprecation Notes¶
The sahara-all entry point is now deprecated. Please use the sahara-api and sahara-engine entry points instead.
We are deprecating CDH 5.7.0, Spark 1.6.0 and 2.1, and Storm 1.0.1.
Bug Fixes¶
Hadoop is now better configured to use the proper Keystone domain for interaction with Swift; previously the ‘default’ domain may have been incorrectly used.
Other Notes¶
A few responses in the experimental (but nearly-stable) APIv2 have been tweaked. Specifically, the key hadoop_version has been replaced with plugin_version, the key job has been replaced with job_template, the key job_execution has been replaced with job, and the key oozie_job_id has been replaced with engine_job_id. These changes were previously only partially implemented and are now complete.
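For quick reference, the renames can be captured in a simple mapping; the helper below is only an illustration and is not part of Sahara or its clients:

    # Key renames between the older responses and APIv2, as listed above.
    V2_KEY_RENAMES = {
        "hadoop_version": "plugin_version",
        "job": "job_template",
        "job_execution": "job",
        "oozie_job_id": "engine_job_id",
    }

    def rename_keys(payload):
        """Rename top-level keys of an old-style response dict to APIv2 names."""
        return {V2_KEY_RENAMES.get(k, k): v for k, v in payload.items()}

    print(rename_keys({"hadoop_version": "2.7.5", "oozie_job_id": "42"}))
    # -> {'plugin_version': '2.7.5', 'engine_job_id': '42'}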
The URL of an S3 data source may use either the s3:// or s3a:// prefix; the two are equivalent.