Rocky Series Release Notes¶
9.0.2¶
Bug Fixes¶
Fixed several bugs which prevented sahara-image-pack from generating Ambari-based Ubuntu images.
The command hadoop dfs has been deprecated in favor of hdfs dfs. This fix allows the use of the HBase service.
This fixes an issue with NTP configuration where a preferred server provided by the user was appended to the end of the file without removing the defaults. The preferred server is now added to the top of the file.
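The NTP fix described above can be sketched roughly as follows (a minimal illustration, not Sahara's actual code; the file contents and server name are placeholders):

```python
def put_preferred_server_first(ntp_conf: str, preferred: str) -> str:
    """Place the user's preferred NTP server before the default entries.

    Previously the preferred server was appended after the distribution
    defaults, so ntpd could keep selecting a default server instead.
    """
    lines = ntp_conf.splitlines()
    # Drop any existing entry for the preferred server to avoid duplicates.
    kept = [line for line in lines if line.strip() != "server %s" % preferred]
    return "\n".join(["server %s" % preferred] + kept) + "\n"


conf = "server 0.pool.ntp.org\nserver 1.pool.ntp.org\n"
print(put_preferred_server_first(conf, "ntp.example.org").splitlines()[0])
# → server ntp.example.org
```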
9.0.1¶
New Features¶
Added the ability to change the default timeout parameter for Ambari agent package installation.
9.0.0¶
Prelude¶
Sahara APIv2 is reaching a point of maturity. Therefore, new deployments should include an unversioned endpoint in the service catalog for the "data-processing" service, for the purposes of more intuitive version discovery. Eventually existing deployments should switch to an unversioned endpoint too, but only after enough time has passed that older clients are unlikely to still be in use.
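Version discovery against an unversioned endpoint typically returns a document listing the available API versions, from which a client selects one. A hedged sketch of that selection step (the response shape and values below are illustrative, not captured from a real deployment):

```python
# A document of the shape commonly returned by OpenStack version
# discovery on an unversioned endpoint (illustrative values only).
discovery_doc = {
    "versions": [
        {"id": "v1.1", "status": "SUPPORTED"},
        {"id": "v2", "status": "CURRENT"},
    ]
}


def pick_current_version(doc):
    """Return the id of the version marked CURRENT, if any."""
    for version in doc.get("versions", []):
        if version.get("status") == "CURRENT":
            return version["id"]
    return None


print(pick_current_version(discovery_doc))  # → v2
```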
With every new release of Sahara we update our plugin list: some new versions are added, some are removed, and others are marked as deprecated. For Rocky we are deprecating CDH 5.7.0, Spark 1.6.0 and 2.1, as well as Storm 1.0.1. We are also removing CDH 5.5.0, MapR 5.1.0, Spark 1.3.1, and Storm 0.9.2.
New Features¶
Added the ability to create Ambari 2.6 images with sahara-image-pack.
Added the ability to boot a Sahara cluster from volumes instead of images.
Added support for CDH 5.13.0 in the CDH plugin.
The experimental APIv2 supports simultaneous creation of multiple clusters only through POST /v2/clusters (using the count parameter). The POST /v2/clusters/multiple endpoint has been removed.
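A request body for creating several identical clusters through POST /v2/clusters might look like the following. This is a sketch only: the count field comes from the note above, while the remaining field names and values are illustrative placeholders rather than a verified APIv2 schema.

```python
import json

# Body for POST /v2/clusters; "count" requests several identical
# clusters in one call.  Other fields are illustrative placeholders.
body = {
    "name": "my-cluster",
    "plugin_name": "vanilla",
    "plugin_version": "2.7.5",
    "cluster_template_id": "TEMPLATE-UUID",
    "count": 3,
}
print(json.dumps(body, indent=2))
```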
Operators can now update the running configuration of Sahara processes by sending the parent process a HUP signal. Note: the configuration option must support mutation.
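The reload-on-HUP pattern can be sketched in a few lines (illustrative only; Sahara's actual reload goes through its service and configuration machinery rather than a hand-rolled handler like this):

```python
import os
import signal

config = {"log_level": "INFO"}


def reload_config(signum, frame):
    # A real service would re-read its mutable options from disk here;
    # this sketch just flips a value to show the handler ran.
    config["log_level"] = "DEBUG"


signal.signal(signal.SIGHUP, reload_config)
os.kill(os.getpid(), signal.SIGHUP)  # equivalent of: kill -HUP <pid>
print(config["log_level"])  # → DEBUG
```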
The behavior of force deletion of clusters (APIv2) has changed. Stack-abandon is no longer used. The response from the force-delete API call now includes the name of the stack that underlay the deleted cluster.
Implemented support for HDP 2.6 in the Ambari plugin.
A new keypair can be used to access the running cluster when the cluster's keypair has been deleted.
An EDP data source may reference a file stored in an S3-like object store.
Support for deploying Hadoop 2.7.5 with the vanilla plugin.
Known Issues¶
Removed the step "upload httpclient to oozie/sharelib" from the Sahara code. Users should use the latest vanilla-2.8.2 image, which is built on SIE "Change-ID: I3a25ee8c282849911089adf6c3593b1bb50fd067".
Upgrade Notes¶
Added Spark 2.3 to the supported plugins list.
Added new versions of Storm: 1.2.0 and 1.2.1. Both exist under the same tag, 1.2.
We are removing some plugin versions: CDH 5.5.0, MapR 5.1.0, Spark 1.3.1, and Storm 0.9.2.
Deprecation Notes¶
The sahara-all entry point is now deprecated. Please use the sahara-api and sahara-engine entry points instead.
We are deprecating CDH 5.7.0, Spark 1.6.0 and 2.1, and Storm 1.0.1.
Bug Fixes¶
Hadoop is now better configured to use the proper Keystone domain for interaction with Swift; previously the “default” domain may have been incorrectly used.
Other Notes¶
A few responses in the experimental (but nearly stable) APIv2 have been tweaked. Specifically, the key hadoop_version has been replaced with plugin_version, the key job has been replaced with job_template, the key job_execution has been replaced with job, and the key oozie_job_id has been replaced with engine_job_id. These changes had all previously been partially implemented and are now complete.
The URL of an S3 data source may use either s3:// or s3a://; the two are treated equivalently.
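The scheme equivalence can be sketched as a small normalization helper (an illustrative sketch, not Sahara's actual code; it canonicalizes to s3a://, the scheme used by Hadoop's S3A connector):

```python
def normalize_s3_url(url: str) -> str:
    """Treat s3:// and s3a:// URLs as equivalent by canonicalizing
    the scheme to s3a:// (illustrative helper only)."""
    if url.startswith("s3://"):
        return "s3a://" + url[len("s3://"):]
    return url


print(normalize_s3_url("s3://bucket/path/file.csv"))
# → s3a://bucket/path/file.csv
```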