Example 2. - Split Cell controller/compute Architecture in Train release¶
Warning
Multi-cell deployments are only supported in Stein or later versions. This guide addresses the Train release and later!
This guide assumes that you are ready to deploy a new overcloud, or have already installed an overcloud (minimum Train release).
Note
Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI to be used in the following steps.
In this scenario the cell computes get split off into their own stack, e.g. to manage the computes from each edge site in a separate stack.
This section only explains the differences to the Example 1. - Basic Cell Architecture in Train release.
Like before, the following example uses six nodes and the split control plane method to deploy a distributed cell deployment. The first Heat stack deploys the controller cluster. The second Heat stack deploys the cell controller. The computes are then again split off into their own stack.
Extract deployment information from the overcloud stack¶
Again, as in Export EndpointMap, HostsEntry, AllNodesConfig, GlobalConfig and passwords information, the required data from the control plane stack needs to be exported:
source stackrc
mkdir cell1
export DIR=cell1
openstack overcloud cell export cell1-ctrl -o cell1/cell1-ctrl-input.yaml
Create roles file for the cell stack¶
The same roles get exported as in Create roles file for the cell stack.
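For reference, the roles file can be generated the same way as in Example 1 (a sketch, assuming the default roles location under the tripleo-heat-templates directory; adjust the path to your environment):
openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController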
Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)¶
The cell parameter file remains the same as in Create cell parameter file for additional customization (e.g. cell1/cell1.yaml), with the only difference that ComputeCount gets set to 0. This is required as the roles file contains both the CellController and the Compute role, and the default count for the Compute role is 1 (e.g. cell1/cell1.yaml):
parameter_defaults:
...
# number of controllers/computes in the cell
CellControllerCount: 1
ComputeCount: 0
...
Create the network configuration for cellcontroller and add to environment file¶
Depending on the network configuration of the hardware used and the network architecture, it is required to register a resource for the CellController role.
resource_registry:
OS::TripleO::CellController::Net::SoftwareConfig: single-nic-vlans/controller.yaml
Note
For details on network configuration consult Configuring Network Isolation guide, chapter Customizing the Interface Templates.
Deploy the cell¶
Create new flavor used to tag the cell controller¶
Follow the instructions in Create new flavor used to tag the cell controller on how to create a new flavor and tag the cell controller.
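As a rough sketch of those instructions (the flavor properties and the node selection are assumptions; follow the linked section for the exact values):
source stackrc
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 cellcontroller
openstack flavor set --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="cellcontroller" cellcontroller
openstack baremetal node set --property \
  capabilities='profile:cellcontroller,boot_option:local' <node UUID>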
Run cell deployment¶
To deploy the cell controller stack we use the same overcloud deploy command that was used to deploy the overcloud stack, and add the created export environment files:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
--stack cell1-ctrl \
-r $HOME/$DIR/cell_roles_data.yaml \
-e $HOME/$DIR/cell1-ctrl-input.yaml \
-e $HOME/$DIR/cell1.yaml
Wait for the deployment to finish:
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | cell1-ctrl | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Create the cell¶
As in Create the cell and discover compute nodes (ansible playbook), create the cell, but skip the final host discovery step as the computes are not yet deployed.
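A condensed sketch of that procedure, assuming the create-nova-cell-v2.yaml playbook and podman as container CLI as in Example 1 (see the linked section for the full steps):
source stackrc
mkdir inventories
for i in $(openstack stack list -f value -c 'Stack Name'); do \
  /usr/bin/tripleo-ansible-inventory --static-yaml-inventory \
  inventories/${i}.yaml --stack ${i}; \
done
ansible-playbook -i inventories \
  /usr/share/ansible/tripleo-playbooks/create-nova-cell-v2.yaml \
  -e tripleo_cellv2_cell_name=cell1 \
  -e tripleo_cellv2_containercli=podman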
Extract deployment information from the cell controller stack¶
The cell compute stack again requires input information from both the control plane stack (overcloud) and the cell controller stack (cell1-ctrl):
source stackrc
export DIR=cell1
Export EndpointMap, HostsEntry, AllNodesConfig, GlobalConfig and passwords information¶
As before, the openstack overcloud cell export functionality of the tripleo-client is used to export the required data from the cell controller stack.
openstack overcloud cell export cell1-cmp -o cell1/cell1-cmp-input.yaml -e cell1-ctrl
cell1-cmp is the chosen name for the new compute stack. This parameter is used to set the default export file name, which is then stored in the current directory. In this case a dedicated export file was set via -o. In addition it is required to use the --cell-stack <cell stack> or -e <cell stack> parameter to point the export command to the cell controller stack and to indicate that this is a compute child stack. This is required as the input information for the cell controller and the cell compute stack is not the same.
Note
If the export file already exists, it can be forced to be overwritten using --force-overwrite or -f.
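For example, to re-run the export from above and overwrite the existing file:
openstack overcloud cell export cell1-cmp -o cell1/cell1-cmp-input.yaml -e cell1-ctrl --force-overwrite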
Note
The services from the cell stacks use the same passwords as the control plane services.
Create cell compute parameter file for additional customization¶
A new parameter file is used to overwrite or customize settings which differ from the cell controller stack. Add the following content into a parameter file for the cell compute stack, e.g. cell1/cell1-cmp.yaml:
resource_registry:
# Since the compute stack deploys only compute nodes ExternalVIPPorts
# are not required.
OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
parameter_defaults:
# number of controllers/computes in the cell
CellControllerCount: 0
ComputeCount: 1
The above file overwrites the values from cell1/cell1.yaml so that no controller gets deployed in the cell compute stack. Since the cell compute stack uses the same roles file, CellControllerCount would otherwise remain 1 as set in cell1/cell1.yaml. If there are other differences for the computes, like network config, parameters, etc., add them here.
Deploy the cell computes¶
Run cell deployment¶
To deploy the cell computes we use the same overcloud deploy command that was used to deploy the cell1-ctrl stack, and add the created export environment files:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
--stack cell1-cmp \
-n $HOME/$DIR/cell1-cmp/network_data.yaml \
-r $HOME/$DIR/cell_roles_data.yaml \
-e $HOME/$DIR/cell1-ctrl-input.yaml \
-e $HOME/$DIR/cell1-cmp-input.yaml \
-e $HOME/$DIR/cell1.yaml \
-e $HOME/$DIR/cell1-cmp.yaml
Wait for the deployment to finish:
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 790e4764-2345-4dab-7c2f-7ed853e7e778 | cell1-cmp | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | cell1-ctrl | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Perform cell host discovery¶
The final step is to discover the computes deployed in the cell. Run the host discovery as explained in Add a compute to a cell.
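As a minimal sketch, one way to trigger the discovery manually, assuming podman and a controller node named overcloud-controller-0 (both names are assumptions; see the linked section for the authoritative steps), is to run nova-manage inside a nova container that has access to the API database:
source stackrc
CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
ssh heat-admin@${CTRL_IP} sudo podman exec -i -u root nova_api \
  nova-manage cell_v2 discover_hosts --by-service --verbose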
Create and add the node to an Availability Zone¶
After a cell got provisioned, it is required to create an availability zone for the compute stack; it is not enough to just create an availability zone for the complete cell. In this use case we want to make sure an instance created in the compute group stays in it when performing a migration. Check Availability Zones (AZ) for more on how to create an availability zone and add the node.
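A minimal sketch of creating such an availability zone and adding the compute node (the aggregate/zone name and the compute hostname are assumptions):
source overcloudrc
openstack aggregate create --zone cell1-cmp cell1-cmp
openstack aggregate add host cell1-cmp cell1-cmp-compute-0.localdomain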
After that the cell is deployed and can be used.
Note
Migrating instances between cells is not supported. To move an instance to a different cell it needs to be re-created in the new target cell.