Yoga Series Release Notes
11.0.0
New Features
When the load balancer policy is attached to a cluster, members are now added to the pool by member name.
Upgrade Notes
The default value of the [oslo_policy] policy_file config option has been changed from policy.json to policy.yaml. Operators who are utilizing customized or previously generated static policy JSON files (which are not needed by default) should generate new policy files or convert them to YAML format. Use the oslopolicy-convert-json-to-yaml tool to convert a JSON policy file to YAML format in a backward-compatible way.
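As a rough illustration of what that conversion produces for a flat policy file, here is a minimal Python sketch; the supported route is the oslopolicy-convert-json-to-yaml tool, which also handles comments and other details this sketch ignores.

```python
import json

def policy_json_to_yaml(json_text):
    """Minimal sketch of a JSON-to-YAML policy conversion.

    Illustrative only: use the oslopolicy-convert-json-to-yaml tool
    in practice. Assumes a flat mapping of rule name to rule string.
    """
    rules = json.loads(json_text)
    return "\n".join('"%s": "%s"' % (name, rule)
                     for name, rule in rules.items())

print(policy_json_to_yaml('{"clusters:create": "rule:deny_stack_user"}'))
# -> "clusters:create": "rule:deny_stack_user"
```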
Deprecation Notes
Use of JSON policy files was deprecated by the oslo.policy library during the Victoria development cycle. As a result, this deprecation is being noted in the Wallaby cycle with an anticipated future removal of support by oslo.policy. Operators will need to convert to YAML policy files. Please see the upgrade notes for details on migration of any custom policy files.
Bug Fixes
Pass in correct port id parameter when calling interface create on a server.
Find security group profiles by project scope.
Other Notes
Fixed the hacking lower constraint to 3.0.1. Fixed the jsonschema lower constraint to 3.2.0. Removed the Babel requirement. Removed the six requirement. Removed the mock requirement; unittest.mock is used instead.
Add Python3 victoria unit tests
9.0.0.0rc1
Prelude
The Senlin-Engine was responsible for a large number of threaded tasks. To help lower the number of potential threads per process and to make the Engine more resilient, starting with OpenStack Ussuri, the Engine service has been split into three services: senlin-conductor, senlin-engine and senlin-health-manager.
New Features
Added cluster_id as a parameter to the action query APIs. This allows the results returned from the API to be filtered, instead of receiving a large number of unrelated actions.
Add availability_zone option for loadbalancers. This is supported by Octavia starting in the Ussuri release.
Added a new config option to specify the timeout for Nova API calls.
Admin role users can now access and modify all resources (clusters, nodes, etc.) regardless of which project they belong to.
Add tainted field to nodes. A node with tainted set to True will be selected first for scale-in operations.
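The selection rule described above can be sketched as follows; the Node class here is a hypothetical stand-in for Senlin's node objects, not the actual implementation.

```python
# Sketch of the documented rule: nodes with tainted=True are chosen
# first when a cluster scales in. Node is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    tainted: bool = False

def pick_victims(nodes, count):
    # Stable sort: tainted nodes (key False) come first; take `count`.
    ordered = sorted(nodes, key=lambda n: not n.tainted)
    return [n.name for n in ordered[:count]]

nodes = [Node("n1"), Node("n2", tainted=True), Node("n3")]
print(pick_victims(nodes, 1))  # the tainted node n2 is selected first
```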
Upgrade Notes
Python 2.7 support has been dropped. Last release of Senlin to support python 2.7 is OpenStack Train. The minimum version of Python now supported by Senlin is Python 3.6.
Two new services have been introduced that will need to be started after the upgrade: senlin-conductor and senlin-health-manager. With the introduction of these new services, new configuration options were added to allow operators to change the number of processes to spawn.
[conductor] workers = 1
[engine] workers = 1
[health_manager] workers = 1
Security Issues
Removed the restriction for admin role users that prevented access/changes to resources (clusters, nodes, etc) belonging to projects not matching the project used for authentication. Access for non-admin users is still isolated to their project used for authentication.
Bug Fixes
Loadbalancers incorrectly required a VIP subnet, when they should actually accept either a VIP subnet or VIP network. Now either/both is acceptable.
8.0.0
Prelude
Updated tests to work with updated cluster delete.
New Features
Admin users can now see the details of any cluster profile.
Allows the cluster delete actions to detach policies and delete receivers for the cluster being deleted. This simplifies deleting clusters by not having to detach or delete all dependencies beforehand.
Added a new list config option to allow trust roles to be overridden.
Allow the cluster delete action to detach policies and delete receivers instead of erroring.
Bypass lb project restriction for get_details in LBaaS driver.
Added webhook v2 support: Previously, the webhook API introduced microversion 1.10 to allow callers to pass arbitrary data in the body along with the webhook call. This was done so that webhooks would work with aodh again. However, aodh and most webhook callers cannot pass in the header necessary to specify the microversion. Thus, webhook v2 is introduced so that webhooks behave as in microversion 1.10 but without the need to specify the microversion header.
Added Python 3 Train unit tests. This is one of the global goals in the Train cycle.
Bug Fixes
Fixed a bug where the webhook rejected additional parameters in the body for microversions less than 1.10. With the new webhook version 2, additional parameters in the body are always accepted regardless of the API microversion passed in.
Various fixes to the user doc, developer doc and API documentation: fixed api-ref and docs building; fixed the keystone_authtoken config in docs; updated docs and examples for health policy v1.1; updated the api-ref location; updated the Cirros example file.
Fixed an issue where, if a cluster created nodes during a resize/scale operation and the physical ID of a node was not found, the cluster could no longer perform health checks.
Fixed retrieving node details when VM creation has failed.
Fixed a node leak when node creation failed.
Updates should still be allowed in a DEGRADED state lest LB policy becomes unable to operate on any partially operational cluster.
Other Notes
Introduces webhook version 2 that is returned when creating new webhook receivers. Webhook version 1 receivers are still valid and will continue to be accepted.
Simply update the nova server key/value pairs that we need to update rather than completely deleting and recreating the dictionary from scratch.
All the integration testing has been moved to Bionic now, and py3.5 is no longer a tested runtime for Train or stable/stein.
Updated the sphinx dependency in line with global requirements. It is capped for Python 2 since Sphinx 2.0 no longer supports Python 2.7. Updated the hacking version to latest.
7.0.0
Prelude
This release alters the cluster_scale_in and cluster_scale_out actions to no longer place the action into the actions table when a conflict is detected. This behavior is an improvement on the old way actions are processed as the requester will now receive immediate feedback from the API when an action cannot be processed. This release also honors the scaling action cooldown in the same manner by erring via the API when a scaling action cannot be processed due to cooldown.
This release alters the behavior of cluster and node APIs which create, update or delete either resource. In the previous release those API calls would be accepted even if the target resource was already locked by another action. The old implementation would wait until the other action released the lock and then continue to execute the desired action. With the new implementation any API calls for cluster or node that modify said resource will be rejected with 409 conflict.
Health policy v1.1 implements multiple detection modes. This implementation is incompatible with health policy v1.0.
Added a new tool, senlin-status upgrade check.
New Features
An action_purge subcommand is added to the senlin-manage tool for purging actions from the actions table.
[blueprint action-update] A new action update API is added to allow the action status to be updated. The only valid status value for update is CANCELLED.
[bug 1815540] Cluster recovery and node recovery API request bodies are changed to only accept a single operation. Optional parameters for this operation are set in operation_params.
[blueprint scaling-action-acceptance] Scaling actions (IN or OUT) now validate that there is no conflicting action already being processed and will return an error via the API informing the end user if a conflict is detected. A conflicting action is detected when new action of either CLUSTER_SCALE_IN or CLUSTER_SCALE_OUT is attempted while there is already cluster scaling action in the action table in a pending status (READY, RUNNING, WAITING, ACTION_WAITING_LIFECYCLE_COMPLETION). Additionally the cooldown will be checked and enforced when a scaling action is requested. If the cooldown is being observed the requester will be informed of this when submitting the action via an error.
[blueprint fail-fast-locked-resource] POST, PATCH or DELETE API calls for clusters or nodes that require a lock are rejected with 409 resource conflict if another action is already holding a lock on the target resource.
[blueprint multiple-detection-modes] Health policy v1.1 now supports multiple detection types. The user can combine node status poll and node poll url types in the health policy in order to have both checked before a node is considered unhealthy.
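A health policy spec combining both detection types might look like the following sketch; the exact property names (detection_modes, poll_url, the example URL) are assumptions to be checked against your Senlin release's policy documentation.

```yaml
# Hypothetical health policy v1.1 spec with two detection modes.
type: senlin.policy.health
version: 1.1
properties:
  detection:
    interval: 600
    detection_modes:
      - type: NODE_STATUS_POLL
      - type: NODE_STATUS_POLL_URL
        options:
          poll_url: "http://health.example.com/node/{nodename}"
  recovery:
    actions:
      - name: RECREATE
```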
A new framework for the senlin-status upgrade check command is added. This framework allows adding various checks which can be run before a Senlin upgrade to ensure the upgrade can be performed safely.
Upgrade Notes
This release makes changes to the health policy properties that are incompatible with health policy v1.0. Any existing policies of type health policy v1.0 must be removed before upgrading to this release. After upgrading, the health policies conforming to v1.0 must be recreated following health policy v1.1 format.
Operators can now use the new CLI tool senlin-status upgrade check to check if a Senlin deployment can be safely upgraded from the N-1 to the N release.
Bug Fixes
[bug 1789488] Perform deep validation of profile and policy schemas so that errors in spec properties are detected.
[bug 1811161] Perform policy post-op even if the action failed. This allows the health policy to re-enable health checks even if an action failed.
[bug 1811294] Set owner field for actions created to wait for lifecycle completion. This allows these actions to be cleaned up when the engine is restarted.
[bug 1813089] This change picks the address used when adding a node to a load balancer based on the subnet IP version. This fix adds support for nodes with dual-stack networks.
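The address-selection logic of that fix can be illustrated with a small sketch using Python's ipaddress module; this shows the idea only, not Senlin's actual code.

```python
# Pick the node address whose IP version matches the load balancer
# subnet's IP version, so dual-stack nodes are handled correctly.
import ipaddress

def pick_member_address(addresses, subnet_cidr):
    subnet = ipaddress.ip_network(subnet_cidr)
    for addr in addresses:
        if ipaddress.ip_address(addr).version == subnet.version:
            return addr
    return None  # no address of the right IP version

addrs = ["2001:db8::10", "192.0.2.10"]
print(pick_member_address(addrs, "192.0.2.0/24"))  # -> 192.0.2.10
```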
[bug 1817379] Delete ports before recovering a node.
[bug 1817604] Fixes major performance bugs within senlin by improving database interaction. This was completed by updating the database models to properly take advantage of relationships. Additionally removes unnecessary database calls and prefers joins instead to retrieve object data.
Fixes the logic within the health manager to prevent duplicate health checks from running on the same cluster.
Other Notes
Adds a configuration option to the health manager to control the maximum amount of threads that can be created by the health manager.
6.0.0
New Features
Added a cluster entity refresh to the cluster action execute wrapper which will make sure the state of the action does not become stale while in queue.
Added a scheduler thread pool size.
Added a new boolean cluster config option to stop node before delete for all cluster.
All REST calls that involve a DB interaction are now automatically retried upon deadlock exceptions.
Added operation support to start a docker container.
Supported update name operation for docker profile.
The engine has been augmented to send event notifications only when a node is active and has a physical ID associated. This targets lifecycle hooks and possibly other notifications.
Health policy now contains NODE_STATUS_POLL_URL detection type. This detection type queries the URL specified in the health policy for node health status. This allows the user to integrate Senlin health checks with an external health service.
Added a new detection type that actively polls the node health using a URL specified in the health policy. That way the user can integrate Senlin's health policy with another custom or third-party health check service.
Added a dependency relationship between the master cluster and the worker cluster created for Kubernetes.
New version of deletion policy (v1.1) is implemented which supports the specification of lifecycle hooks to be invoked before shrinking the size of a cluster. For details, please check the policy documentation.
A new configuration option is exposed for the message topic to use when sending event notifications.
New configuration option "database_retry_limit" is added for customizing the maximum number of retries for failed operations on the database. The default value is 10.
New configuration option "database_retry_interval" is added for specifying the number of seconds between database operation retries. The default value is 0.1.
New configuration option "database_max_retry_interval" is added for specifying the maximum number of seconds between database operation retries. The default value is 2.
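Taken together, the three options might appear in senlin.conf as below; the [DEFAULT] section placement is an assumption, so check your release's configuration reference.

```ini
# Sketch of senlin.conf with the new DB retry options at their defaults.
# Section placement assumed to be [DEFAULT].
[DEFAULT]
database_retry_limit = 10
database_retry_interval = 0.1
database_max_retry_interval = 2
```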
Added retry logic to post_lifecycle_hook_message when posting a lifecycle hook to Zaqar.
The policy attach and detach actions are improved to automatically retry on failed attempts.
The action scheduler has been refactored so that no premature sleeping will be performed and no unwanted exceptions will be thrown when shutting down workers.
The lifecycle hooks feature added during the Queens cycle is improved to handle cases where a node no longer exists. The lifecycle hook is only effective when the target node exists and is active.
Added support to lock and unlock a nova server node.
Added operation support to migrate a nova server node.
Added operation support to pause and unpause a nova server node.
Added operation support to rescue and unrescue a nova server node.
Added operation support to start and stop a nova server node.
Added operation support for suspending and resuming a nova server node.
Known Issues
There are cases where the event listener based health management cannot successfully stop all listeners.
Upgrade Notes
The API microversion 1.10 has fixed the webhook trigger API for easier integration with Aodh. In previous microversions, the query parameters are used as action inputs. Starting from 1.10, the key-value pairs in the request body are also considered as request inputs.
Bug Fixes
The UUID used by the block_device_mapping_v2 in nova.server profile is validated.
Fixed cluster lock primary key conflict problem.
Fixed the example of the "aodh alarm create" command.
Senlin API/functional/integration tests were previously moved to the senlin-tempest-plugin project; the documentation has been fixed to reflect this change.
Fixed an error when restarting a docker container node.
Fixed a bug that occurred when node deletion errored.
Fixed a bug in health checking which was introduced by oslo.context changes.
Fixed bug when checking if health policy is attached already.
In the openstacksdk 0.14.0 release, a bug related to SDK exceptions was fixed (https://review.openstack.org/#/c/571101/). With that change, an SDK exception will contain the detailed message only if the message string is equal to "Error". Fixed test_parse_exception_http_exception_no_details to use "Error" as the exception message to make the test case pass.
Enable old versions of builtin policy types to be listed and used.
Fixed openstack-tox-cover which was broken as part of the switch to stestr.
Fixed the error in token generation for kubeadm.
Fixed cluster and node lock management so that failed lock acquire operations are automatically retried. This is an important fix for running multiple service engines.
Node creation request that might break cluster size constraints now results in node ERROR status.
Added exception handling for node-join and node-leave operations.
Fixed the return value from a node operation call.
Fixed defects in node recover operation to ensure node status is properly handled.
Improved logic in rebooting and rebuilding nova server nodes so that exceptions are caught and handled.
Fixed the "role" field used when creating/updating a node.
Fixed nova profile logic when updating image. We will always use the current image as the effective one.
Added scheduler thread pool size configuration value and changed default thread pool size for scheduler from 10 to 1000. This fix prevents problems when a large number of cluster operations are executed simultaneously.
Added exception handling for service status update. This is making service management more stable.
The data type problem related to action start time and end time is fixed. We now use decimal type instead of float for these columns.
The “V” query parameter when triggering a webhook receiver is strictly required.
Fixed a bug where API version negotiation is not effective when invoked via OpenStack SDK. The API impacted is limited to webhook triggering.
Other Notes
Health policy v1.0 was moved from EXPERIMENTAL to SUPPORTED status.
5.0.0
New Features
Added support for forced deletion of clusters and nodes.
Added support to Octavia as the load-balancer driver.
Node details view now includes attached_volumes.
Added cluster config property "node.name.format" where users can specify how cluster nodes are automatically named. Users can use placeholders like "$nI" for the node index padded with 0s to the left, or "$nR" for a random string of length n.
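The placeholder expansion described above can be sketched as follows; expand_name_format is a hypothetical helper for illustration, not Senlin's implementation.

```python
# Illustrative expansion of the "node.name.format" placeholders:
# "$nI" -> node index zero-padded to width n,
# "$nR" -> random lowercase string of length n.
import random
import re
import string

def expand_name_format(fmt, index):
    def repl(match):
        width, kind = int(match.group(1)), match.group(2)
        if kind == "I":
            return str(index).zfill(width)
        return "".join(random.choices(string.ascii_lowercase, k=width))
    return re.sub(r"\$(\d+)([IR])", repl, fmt)

print(expand_name_format("web-$3I", 7))  # -> web-007
```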
Senlin now supports policy-in-code, which means that if users do not modify any policy rules, they can leave the policy file (in JSON or YAML format) empty or not deploy it at all. Senlin now keeps all default policies in the senlin/common/policies module. Users can modify/generate a policy.yaml file, which will override the in-code policy rules for any rules that appear in the file. Users can also still use a policy.json file, but the oslo team recommends using the newer YAML format instead.
Added support to unicode availability zone names.
Added support to use Unicode string for cluster names.
Upgrade Notes
The Octavia service must be properly installed and configured to enable load-balancing policy.
Bug Fixes
Fixed a bug related to an oslo.versionedobjects change that prevented cluster actions from being properly encoded in JSON requests.
Fixed bug related to reacting to nova vm lifecycle event notifications. The recover flow is no longer called twice when a VM is deleted.
Fixed various defects in managing node pools for loadbalancer policy.
DB lock contentions are alleviated by allowing lock retries.
Fixed a bug related to force delete nodes.
Fixed an error where the action name was not passed to the backend service.
Fixed an error introduced by an oslo.versionedobjects change that led to failures when creating a receiver.
Other Notes
Improved Nova VM server health check for cases where physical id is invalid.
The default policy.json file is now removed, as Senlin generates the default policies from code. Please be aware of this when using that file in your environment.
4.0.0
New Features
When a cluster or a node is deleted, the action records associated with them are now automatically deleted from database.
A new configuration option check_interval_max is added (default=3600) for cluster health check intervals.
The health manager is improved to use dynamic timers instead of fixed-interval timers when polling a cluster's status.
New logic was added to the event-list operation so that users can specify the name or short-id of a cluster for filtering.
An event_purge subcommand is added to the senlin-manage tool for purging events generated in a specific project.
When a node cannot be added to a load-balancer although desired, or it can not be removed from a load-balancer when requested, the node will be marked as in WARNING status.
When an engine is detected to be dead, the actions (and the clusters/nodes locked by those actions) are now unlocked. Such clusters and nodes can be operated again.
A new recovery action "REBOOT" has been added to the health policy.
Added support to listen to heat event notifications for stack failure detection.
The load-balancing policy now properly supports the CLUSTER_RECOVER action and NODE_RECOVER action.
Added support to adopt an existing object as Senlin node given the UUID and profile type to use.
API microversion 1.6 comes with an optional parameter “check” that tells the engine to perform a health check before doing actual recovery. This applies to both clusters and nodes.
Relaxed the constraint on the node physical_id property. Any string value is now treated as valid, even if it is not a UUID.
A new feature is introduced in API microversion 1.6 which permits a cluster update operation to change the profile used by the cluster only without actually updating the existing nodes (if any). The new profile will be used when new nodes are created as members of the cluster.
New operation introduced for updating the parameters of a receiver.
The numeric properties in the spec for a scaling policy now have stricter validations.
New API introduced to list the running service engines.
The setup-service script now supports the customization of service project name and service role name.
Deprecation Notes
Support for the CLUSTER_DELETE action in the experimental batch policy is dropped due to issues with cluster locking. This could be resurrected in the future when a proper workaround is identified.
Support for py3.4 is dropped. Please use py3.5 instead.
Bug Fixes
The bug where the availability zone info from a nova server deployment was not available has been fixed.
Fixed cluster-recover operation in engine so that it accepts parameters from API requests in addition to policy decision (if any).
Fixed an error in the built-in deletion policy which failed to process NODE_DELETE action.
Various bug fixes to the user manual and sample profiles/policies.
When an action was marked as RETRY, its status is reset to READY for a reschedule. A bug related to this behavior is now fixed.
Fixed a premature return from the policy cooldown check.
Fixed a bug related to desired_capacity when creating a cluster. The old behavior was having it default to 1, however, the correct behavior should be having it default to min_size if provided.
Fixed a problem related to duplicated event dumps during action execution.
Fixed error in the return value of node-check which prevents node-recover from being triggered.
Fixed a problem when claiming a cluster from health registry if service engine is stopped (killed) and restarted quickly.
Fixed an error in updating stack tags when the stack joins or leaves a cluster.
Fixed an error in the built-in load-balancing policy that was caused by a regression in getting node details for IP addresses.
Fixed various problems in load-balancer policy so that it can handle node-recover and cluster-recover operations properly.
Fixed an error in parameter checking logic for node-recover operation which prevented valid parameters from being accepted.
Fixed an error raised when no operation is provided during node health recovery.
Fixed an error introduced by openstacksdk when checking/setting the availability zone of a nova server.
The parameter checking for the cluster update operation may incorrectly parse the provided value(s). This bug has been fixed.
When attaching a policy (especially a health policy) to a cluster, users may choose to keep the policy disabled. This has to be considered in the health manager and other places. This issue is fixed.
Fixed bugs in deletion zone policy and region policy which were not able to correctly parse node reference.
Fixed a bug related to webhook ID in the channel info of a receiver. The channel info now always contains valid webhook ID.
A nova server, if booted from volume, will not return a valid image ID. This situation is now taken care of.
Other Notes
DB layer operations now feature some retries if there are transient errors.
Sample health policy file was using 60 seconds as the interval which could be misleading. This has been tuned to 600 seconds.
Built-in policies are optimized for reducing DB transactions.
The parameter checking for the cluster-resize operation is revised so that min_step will be ignored if the adjustment type is not CHANGE_IN_PERCENTAGE.
3.0.0.0b3
New Features
A new API "cluster-op" is introduced to trigger a profile type specific operation on all nodes in a cluster. This API is available since API micro-version 1.4.
Docker container profile now supports operations like restart, pause and unpause.
A new, optional parameter « destroy_after_deletion » is added to the cluster-del-nodes request since API micro-version 1.4.
Error messages returned from API requests are now unified. All parameter validation failures of the same reason returns a similar message.
A configuration option "exclude_derived_actions" is introduced into the "dispatchers" group for controlling whether derived actions should lead to event notifications and/or DB records.
Health policy recovery actions now contains a list of dictionaries instead of a list of simple names. This is to make room for workflow invocations.
Many new operations are added to the os.nova.server profile type. These operations can be shown using the "profile-type-ops" API.
Added new node-operation API for performing profile type supported operations on a node.
New API "node-op" is introduced for triggering profile type specific operations on a node. This is available since API micro-version 1.4.
Event notifications (versioned) are added to enable senlin-engine to send out messaging events when configured. The old event repo is adapted to follow the same design.
Versioned request support in API, RPC and engine layers.
Basic support for event/notification.
Enables osprofiler support.
Rally plugin for cluster scaling in.
Batch policy support for cluster actions.
Integration test for message receiver.
A new API "profile-type-ops" is introduced to expose the profile type specific operations' schema to end users.
Profile type list and policy type list now return the support status for each type since API micro-version 1.5.
RPC requests from the API service to the engine service are fully managed using versioned objects now. This will enable a smooth upgrade for the service in future.
Upgrade Notes
For resources which have a user, a project and a domain property, the lengths of these columns are increased from 32 chars to 64 chars for better conformance with Keystone.
New setup configuration items are provided to enable the "message" and/or "database" event generation.
Critical Issues
The problem of having clusters or nodes still locked by actions executed by a dead engine is fixed.
Security Issues
Multi-tenancy is enhanced so that an admin role user has to respect project isolation unless explicitly asking for an exception.
Bug Fixes
Fixed the problem that health manager related configuration options were not properly exposed.
Removed LB_STATUS_POLLING from health policy since LBaaS still cannot provide reliable node status update.
The health policy recovery actions are designed to be a list, but the current implementation can only handle one action. This is now explicitly checked.
Fixed the problem that the "updated_at" timestamp of a node was not correctly updated.
The notifications of profile type specific operations were not properly reporting the operation’s name. This has been fixed.
Fixed the notification logic so that it uses the proper transport obtained from oslo.messaging.
Fixed bug related to cluster-collect API where the path parameter is None.
Other Notes
The retrieval of some resources such as actions and policies is optimized to avoid object instantiation.
3.0.0.0b1
New Features
Integrated OSProfiler into Senlin, supporting the use of OSProfiler to measure Senlin's performance.
2.0.0
New Features
Improved the action scheduler so that it can decide how many node actions will be fired in each batch. Batch control is a throttling measure to avoid raising too many requests in a short interval to the backend services.
A new
cluster_collect
API is added.
The senlin-engine now supports fencing a corrupted VM instance by deleting it forcibly.
A new profile type “container.dockerinc.docker-1.0” is added to support creation and management of docker clusters. This is still an experimental feature. Please use with caution.
The deletion policy is enhanced to handle “NODE_DELETE” actions which derives from a standalone “node_delete” request.
The cluster health manager has gained a new feature where nova server instance failures can be detected and handled, with and without a health policy attached to a cluster.
The health policy was improved so that it suspends itself when a node deletion comes from the senlin-engine or a client request. The policy only takes effect when the node failure is "unexpected".
The load-balancing policy is improved to handle "NODE_CREATE" and "NODE_DELETE" actions that derive from "node_create" or "node_delete" RPC requests directly.
A new "lb_status_timeout" option is added to the LB policy to cope with load-balancers that are not so responsive.
Added a new type of receiver (i.e. message) which is based on Zaqar message queue.
The region placement policy and the zone placement policy have been augmented with spec validation support.
The affinity policy is improved to handle NODE_CREATE actions which are derived from “node_create” RPC requests.
The availability-zone placement policy is improved to handle NODE_CREATE actions which are derived from “node_create” RPC requests.
The region placement policy is improved to handle the NODE_CREATE action which derives from a “node_create” RPC request.
With the new “profile-validate” API, the nova server profile now supports the validation of its “flavor”, “image” (if provided), “availability_zone” and block device driver properties.
Added support for oslo.versionedobjects so that DB interactions are abstracted. It is now possible to do a live upgrade of the senlin service.
A new policy-validate API has been added to validate the spec of a policy without actually creating an instance of it.
The affinity policy, loadbalancing policy now support spec validation. Invalid properties can be detected using policy-validate API.
A new profile-validate API has been added to validate the spec of a profile without actually creating an instance of it.
Engine scheduler was redesigned to work in a "tickless" way.
Tempest API test for all Senlin API interfaces for both positive and negative cases.
Reimplement functional test using tempest.
Added “template_url” support to heat stack profile.
Zaqar resources including "queue", "message", "subscription" and "claim" are now supported in the Senlin driver.
Upgrade Notes
The cluster delete API calls may return a 409 status code if there are policies and/or receivers associated with it. Previously, we return a 400 status code.
DB columns obj_id, obj_type and obj_name in the event table are now renamed to oid, otype and oname correspondingly.
The "details/addresses" property of a node output for a nova server used to contain only some trimmed information. This has been changed to a faithful dumping of the "addresses" property.
Several configuration options are consolidated into the “senlin_api” group in “senlin.conf” file (“api_paste_config”, “wsgi_keep_alive”, “client_socket_timeout”, “max_json_body_size”).
With the newly added “message” type of receivers, the “cluster” and the “action” property are not always required when creating a receiver. They are still required if the receiver type is “webhook” (the default).
Security Issues
The configuration option “service_password” is marked as secret so that its value won’t get leaked into log files.
Bug Fixes
Fixed a bug in the affinity policy where the calls to the nova driver were wrong.
The new API documentation includes fixes to headers like "location" and "OpenStack-Request-Id" and to responses during version negotiation.
Fixed a bug related to the desired_capacity calculation. The base number used now is the current capacity of the cluster instead of the previous "desired" capacity. This includes all actions that change cluster capacity and all related policies.
The "desired_capacity" reflects the expectation from a requester's viewpoint. The engine now changes the "desired_capacity" after the request is validated/sanitized, before the action is actually implemented. This means the "desired_capacity" will change even if an action fails.
Fixed cluster status update logic so that cluster status is solely determined by the status of its member nodes. The status is updated each time a cluster operation has completed.
Fix cluster next_index update when adding nodes to cluster.
Fixed DB layer dead lock issue that surfaced recently during concurrent DB operations.
Fixed resource delete operations which should return 204 status code with body length of zero.
Fixed error handling when network is not found in nova server creation.
Fixed node recover operation behavior so that unsupported operations can be detected and handled.
A cluster in the middle of an on-going action should not be deletable. The engine service has been improved to detect this situation.
The unimplemented properties for health policy are masked out.
The node action execution logic is fixed so that it will skip cluster checking for orphan nodes and policy checking will be skipped for derived node actions.
Fixed bug introduced by openstacksdk when updating nova server metadata.
Fixed dead service clean-up logic so that the clean-up operation can be retried.
The “senlin-manage” command has been fixed so that it will report the senlin service status correctly.
The “tools/setup-service” script has been fixed so that it works under keystone v3.
Other Notes
Senlin API/Engine configuration options are now documented and published online.
Reworked API documentation which is now published at https://developer.openstack.org/api-ref/clustering