Deletion Policy V1.1

The deletion policy is designed to be enforced when a cluster’s size is about to be shrunk.

Spec

Latest Version

1.1

Available Versions
Version | Status | Supported Since
---|---|---
1.0 | SUPPORTED | 2016.04
1.1 | SUPPORTED | 2018.01
Applicable Profile Types
ANY
Policy Triggers
Action | Phase
---|---
CLUSTER_DEL_NODES | BEFORE
CLUSTER_RESIZE | BEFORE
CLUSTER_SCALE_IN | BEFORE
NODE_DELETE | BEFORE
Properties

- timeout: Number of seconds to wait before the actual deletion happens.
- type: Type of lifecycle hook.
- params:
  - queue: Name of the Zaqar queue to receive lifecycle hook messages.
  - url: URL sink to which to send lifecycle hook messages.
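For illustration, a spec fragment using the hook properties above might look like the following. This is a minimal sketch written as a Python dict; it assumes the properties above live under a hooks key (as in Senlin’s lifecycle hook support), and the concrete values are placeholders, not defaults.

```python
# Hypothetical illustration of a deletion policy spec using the hook
# properties listed above; all concrete values are placeholders.
deletion_policy_spec = {
    "type": "senlin.policy.deletion",
    "version": "1.1",
    "properties": {
        "hooks": {                    # assumed grouping key for the hook properties
            "type": "zaqar",          # type of lifecycle hook
            "timeout": 120,           # seconds before the actual deletion
            "params": {
                "queue": "my-queue",  # Zaqar queue receiving hook messages
            },
        },
    },
}
```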
Actions Handled

The policy is capable of handling the following actions:

- CLUSTER_SCALE_IN: an action that carries an optional integer value named count in its inputs.
- CLUSTER_DEL_NODES: an action that carries a list value named candidates in its inputs value.
- CLUSTER_RESIZE: an action that carries various key-value pairs as arguments to the action in its inputs value.
- NODE_DELETE: an action that has a node associated with it. This action has to originate directly from an RPC request so that it will be processed by the deletion policy. The node ID associated with the action naturally becomes the ‘candidate’ node for deletion.

The policy will be checked BEFORE any of the above actions is executed.
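As a rough sketch of what this BEFORE check has to work with, the following hypothetical helper (not Senlin’s actual implementation) shows where the number of nodes to delete would come from for each handled action type; the scenarios below walk through the real logic in detail.

```python
# Hypothetical sketch (not Senlin's actual code): where the number of
# nodes to delete comes from for each handled action type.
def get_deletion_count(action_type, inputs):
    if action_type == "CLUSTER_SCALE_IN":
        # 'count' is optional; the default of 1 is explained in S2
        return inputs.get("count", 1)
    if action_type == "CLUSTER_DEL_NODES":
        # candidates are given explicitly, so the count follows from them
        return len(inputs.get("candidates", []))
    if action_type == "NODE_DELETE":
        # exactly one node is associated with the action (see S8)
        return 1
    # CLUSTER_RESIZE needs the current and desired cluster sizes; see S4
    return None
```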
Scenarios
Under different scenarios, the policy works by checking different properties of the action.
S1: CLUSTER_DEL_NODES

This is the simplest case. A CLUSTER_DEL_NODES action carries a list of UUIDs for the nodes to be removed from the cluster. The deletion policy steps in before the actual deletion happens so as to help determine the following details:

- whether the nodes should be destroyed after being removed from the cluster;
- whether the nodes should be granted a grace period before being destroyed;
- whether the desired_capacity of the cluster in question should be reduced after the node removal.

After the policy check, the data field is updated with contents similar to the following example:
{
  "status": "OK",
  "reason": "Candidates generated",
  "deletion": {
    "count": 2,
    "candidates": ["<node-id-1>", "<node-id-2>"],
    "destroy_after_deletion": true,
    "grace_period": 0
  }
}
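A minimal sketch of how this data entry could be assembled from the action’s inputs follows; the helper name is hypothetical and the defaults mirror the example above.

```python
# Hypothetical sketch: turn the candidates carried in the action's
# inputs into the 'deletion' entry written to the action's data field.
def build_deletion_data(inputs, destroy_after_deletion=True, grace_period=0):
    candidates = inputs.get("candidates", [])
    return {
        "status": "OK",
        "reason": "Candidates generated",
        "deletion": {
            "count": len(candidates),
            "candidates": candidates,
            "destroy_after_deletion": destroy_after_deletion,
            "grace_period": grace_period,
        },
    }
```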
S2: CLUSTER_SCALE_IN without Scaling Policy

When the request is about scaling in the target cluster, the Senlin engine expects the action to carry a count key in its inputs. If the count key doesn’t exist, it means the requester does not know (or does not care about) the number of nodes to remove. The decision is left to the scaling policy (if any) or to the Senlin engine.

When there is no scaling policy attached to the cluster, the Senlin engine assumes that the expectation is to remove 1 node from the cluster. This is equivalent to the case where count is specified as 1.

The policy then continues to evaluate the cluster nodes to select count victim node(s) based on the criteria property of the policy. Finally, it updates the action’s data field with the list of node candidates along with other properties, as described in scenario S1.
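The selection step might look like the following sketch. The criteria values shown (OLDEST_FIRST, YOUNGEST_FIRST, RANDOM) are common settings for this policy’s criteria property, and each node is simplified to a dict carrying id and created_at keys rather than a full Senlin node object.

```python
import random

# Hypothetical sketch of criteria-based victim selection; each node is
# assumed to be a dict with 'id' and 'created_at' keys.
def select_victims(nodes, count, criteria="RANDOM"):
    if criteria == "OLDEST_FIRST":
        ranked = sorted(nodes, key=lambda n: n["created_at"])
    elif criteria == "YOUNGEST_FIRST":
        ranked = sorted(nodes, key=lambda n: n["created_at"], reverse=True)
    else:  # RANDOM
        ranked = random.sample(list(nodes), len(nodes))
    return [n["id"] for n in ranked[:count]]
```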
S3: CLUSTER_SCALE_IN with Scaling Policy

If there is a scaling policy attached to the cluster, that policy will yield into the action’s data property contents similar to the following example:

{
  "deletion": {
    "count": 2
  }
}

The Senlin engine will use the value of the deletion.count field in the data property as the number of nodes to remove from the cluster. It selects victim nodes from the cluster based on the criteria specified and then updates the action’s data property along with other properties, as described in scenario S1.
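In other words, the deletion policy defers to the upstream decision whenever one exists. A one-line sketch of that precedence (hypothetical helper):

```python
# Hypothetical sketch: prefer a count decided by an upstream scaling
# policy (recorded in the action's data) over the engine default of 1.
def resolve_count(action_data, default=1):
    return action_data.get("deletion", {}).get("count", default)
```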
S4: CLUSTER_RESIZE without Scaling Policy

If there is no scaling policy attached to the cluster, the deletion policy won’t be able to find a deletion.count field in the action’s data property. It then checks the inputs property of the action directly and generates a deletion.count field if the request turns out to be a scaling-in operation. If the request is not a scaling-in operation, the policy check aborts immediately.

After having determined the number of nodes to remove, the policy proceeds to select victim nodes based on its criteria property value. Finally, it updates the action’s data field with the list of node candidates along with other properties, as described in scenario S1.
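A sketch of that derivation, assuming the resize request expresses its goal as an absolute desired capacity; Senlin’s actual CLUSTER_RESIZE inputs support several adjustment types, which this simplification ignores.

```python
# Hypothetical sketch for CLUSTER_RESIZE without a scaling policy:
# derive a deletion count from the action's inputs, or signal that the
# policy check should abort when the request is not a scale-in.
def resize_deletion_count(inputs, current_size):
    desired = inputs.get("desired_capacity")
    if desired is None or desired >= current_size:
        return None  # not a scale-in request; the policy check aborts
    return current_size - desired
```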
S5: CLUSTER_RESIZE with Scaling Policy

In the case where a scaling policy is already attached to the cluster, the scaling policy will be evaluated before the deletion policy, so the deletion policy works in the same way as described in scenario S3.
S6: Deletion across Multiple Availability Zones

When you have a zone placement policy attached to a cluster, that policy decides in which availability zone(s) new nodes will be placed and from which availability zone(s) old nodes should be deleted so as to maintain the expected node distribution. Such a zone placement policy is evaluated before this deletion policy, according to its built-in priority value.
When scaling in a cluster, a zone placement policy yields a decision into the action’s data property that looks like:
{
  "deletion": {
    "count": 3,
    "zones": {
      "AZ-1": 2,
      "AZ-2": 1
    }
  }
}
The above data indicate how many nodes should be deleted globally and how many nodes should be removed from each availability zone. The deletion policy then evaluates the nodes in each availability zone to select the specified number of nodes as candidates. This selection process is also based on the criteria property of the deletion policy.

After the evaluation, the deletion policy completes by modifying the data property to something like:
{
  "status": "OK",
  "reason": "Candidates generated",
  "deletion": {
    "count": 3,
    "candidates": ["node-id-1", "node-id-2", "node-id-3"],
    "destroy_after_deletion": true,
    "grace_period": 0
  }
}
In the deletion.candidates list, two of the nodes are from availability zone AZ-1, and one is from availability zone AZ-2.
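The per-zone selection could be sketched as below, assuming OLDEST_FIRST as the criteria for brevity and the same simplified node representation used in the S2 sketch.

```python
# Hypothetical sketch: pick victims zone by zone according to the counts
# decided by the zone placement policy. nodes_by_zone maps a zone name
# to a list of node dicts carrying 'id' and 'created_at' keys.
def select_across_zones(nodes_by_zone, zone_counts):
    candidates = []
    for zone, count in zone_counts.items():
        oldest_first = sorted(nodes_by_zone[zone], key=lambda n: n["created_at"])
        candidates.extend(n["id"] for n in oldest_first[:count])
    return candidates
```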
S7: Deletion across Multiple Regions

When you have a region placement policy attached to a cluster, that policy decides in which region(s) new nodes will be placed and from which region(s) old nodes should be deleted so as to maintain the expected node distribution. Such a region placement policy is evaluated before this deletion policy, according to its built-in priority value.
When scaling in a cluster, a region placement policy yields a decision into the action’s data property that looks like:
{
  "deletion": {
    "count": 3,
    "regions": {
      "R-1": 2,
      "R-2": 1
    }
  }
}
The above data indicate how many nodes should be deleted globally and how many nodes should be removed from each region. The deletion policy then evaluates the nodes in each region to select the specified number of nodes as candidates. This selection process is also based on the criteria property of the deletion policy.

After the evaluation, the deletion policy completes by modifying the data property to something like:
{
  "status": "OK",
  "reason": "Candidates generated",
  "deletion": {
    "count": 3,
    "candidates": ["node-id-1", "node-id-2", "node-id-3"],
    "destroy_after_deletion": true,
    "grace_period": 0
  }
}
In the deletion.candidates list, two of the nodes are from region R-1, and one is from region R-2.
S8: Handling the NODE_DELETE Action

If the action that triggered the policy check is a NODE_DELETE action, the action has an associated node as its property. When the deletion policy detects this action type, it copies the policy specification values into the action’s data field, even though the count and candidates values are obvious. For example:
{
  "status": "OK",
  "reason": "Candidates generated",
  "deletion": {
    "count": 1,
    "candidates": ["node-id-1"],
    "destroy_after_deletion": true,
    "grace_period": 0
  }
}