This document describes the layout of a deployment with Cells version 2, including deployment considerations for security and scale. It is focused on code present in Pike and later, and while it is geared towards people who want to have multiple cells for whatever reason, the nature of the cellsv2 support in Nova means that it applies in some way to all deployments.
Note
The concepts laid out in this document do not in any way relate to CellsV1, which includes the nova-cells service and the [cells] section of the configuration file. For more information on the differences, see the main Cells page.
A basic Nova system consists of the following components:
- The nova-api service, which provides the external REST API to users.
- The nova-scheduler and placement services, which track resources and decide which compute node an instance should land on.
- An "API database", used primarily by nova-api and nova-scheduler (the API-level services) to track location information about instances.
- The nova-conductor service, which offloads long-running tasks for the API-level services and insulates compute nodes from direct database access.
- The nova-compute service, which manages the virt driver and hypervisor host.
- A "cell database", used by the API, conductor, and compute services, which houses the majority of the information about instances.
- A "cell0 database", which is just like the cell database but contains only instances that failed to be scheduled.
- A message queue, which allows the services to communicate with each other via RPC.
All deployments have at least the above components. Small deployments likely have a single message queue that all services share, and a single database server which hosts the API database, a single cell database, as well as the required cell0 database. This is considered a "single-cell deployment" because it only has one "real" cell. The cell0 database mimics a regular cell, but has no compute nodes and is used only as a place to put instances that fail to land on a real compute node (and thus a real cell).
The purpose of the cells functionality in nova is specifically to allow larger deployments to shard their many compute nodes into cells, each of which has a database and message queue. The API database is always and only global, but there can be many cell databases (where the bulk of the instance information lives), each with a portion of the instances for the entire deployment within.
All of the nova services use a configuration file, all of which will at a minimum specify a message queue endpoint (i.e. [DEFAULT]/transport_url). Most of the services also require configuration of database connection information (i.e. [database]/connection). API-level services that need access to the global routing and placement information will also be configured to reach the API database (i.e. [api_database]/connection).
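As a sketch, a service living in a cell (such as a cell conductor or a compute node) might carry configuration like the following; the hostnames, credentials, and database names are placeholders, not values from this document:

```ini
# nova.conf for a service living in a cell (e.g. nova-conductor, nova-compute)
[DEFAULT]
# Message queue endpoint for this cell (placeholder URL)
transport_url = rabbit://nova:secret@cell1-rabbit.example.com:5672/

[database]
# Cell database holding this cell's instance records (placeholder URL)
connection = mysql+pymysql://nova:secret@cell1-db.example.com/nova_cell1

# API-level services (e.g. nova-api) would additionally configure:
[api_database]
connection = mysql+pymysql://nova:secret@api-db.example.com/nova_api
```

Note that the [api_database] section is only meaningful for API-level services; cell-level services deliberately have no route to the API database.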
Note
The pair of transport_url and [database]/connection configured for a service defines what cell a service lives in.
API-level services need to be able to contact other services in all of the cells. Since they only have one configured transport_url and [database]/connection, they look up the information for the other cells in the API database, with records called cell mappings.
Note
The API database must have cell mapping records that match the transport_url and [database]/connection configuration elements of the lower-level services. See the nova-manage Nova Cells v2 commands for more information about how to create and examine these records.
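For illustration, cell mapping records can be created and inspected with the nova-manage cell_v2 commands; the cell name and connection URLs below are placeholders and must match the configuration actually used by that cell's services:

```shell
# Register cell0 (its database connection is derived from the configured
# [database]/connection unless given explicitly).
nova-manage cell_v2 map_cell0

# Create a mapping record for a real cell. The URLs shown here are
# placeholders; they must match the transport_url and [database]/connection
# of the services in that cell.
nova-manage cell_v2 create_cell --name cell1 \
    --transport-url rabbit://nova:secret@cell1-rabbit.example.com:5672/ \
    --database_connection mysql+pymysql://nova:secret@cell1-db.example.com/nova_cell1

# List the registered cells to verify the records.
nova-manage cell_v2 list_cells

# Map any unmapped compute hosts into their cells.
nova-manage cell_v2 discover_hosts
```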
The services generally have a well-defined communication pattern that dictates their layout in a deployment. In a small/simple scenario, the rules do not have much of an impact as all the services can communicate with each other on a single message bus and in a single cell database. However, as the deployment grows, scaling and security concerns may drive separation and isolation of the services.
This is a diagram of the basic services that a simple (single-cell) deployment would have, as well as the relationships (i.e. communication paths) between them:
All of the services are configured to talk to each other over the same message bus, and there is only one cell database where live instance data resides. The cell0 database is present (and required) but as no compute nodes are connected to it, this is still a "single cell" deployment.
In order to shard the services into multiple cells, a number of things must happen. First, the message bus must be split into pieces along the same lines as the cell database. Second, a dedicated conductor must be run for the API-level services, with access to the API database and a dedicated message queue. We call this super conductor to distinguish its place and purpose from the per-cell conductor nodes.
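As a sketch of this split, the super conductor is pointed at the API-level message queue and the API database, while each per-cell conductor is pointed at its own cell's queue and database. The two fragments below belong in two separate nova.conf files, and all hostnames and credentials are placeholders:

```ini
# nova.conf for the super conductor (API level)
[DEFAULT]
transport_url = rabbit://nova:secret@api-rabbit.example.com:5672/

[api_database]
connection = mysql+pymysql://nova:secret@api-db.example.com/nova_api
```

```ini
# nova.conf for a per-cell conductor
[DEFAULT]
transport_url = rabbit://nova:secret@cell1-rabbit.example.com:5672/

[database]
connection = mysql+pymysql://nova:secret@cell1-db.example.com/nova_cell1
```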
It is important to note that services in the lower cell boxes do not have the ability to call back to the API-layer services via RPC, nor do they have access to the API database for global visibility of resources across the cloud. This is intentional and provides security and failure domain isolation benefits, but also has impacts on some things that would otherwise require this any-to-any communication style. Check the release notes for the version of Nova you are using for the most up-to-date information about any caveats that may be present due to this limitation.
Currently it is not possible to migrate an instance from a host in one cell to a host in another cell. This may be possible in the future, but it is currently unsupported. This impacts cold migration, resizes, live migrations, evacuate, and unshelve operations.
With multiple cells, the instance list operation may not sort and paginate results properly when crossing multiple cell boundaries. Further, the performance of a sorted list operation will be considerably slower than with a single cell.
With a multi-cell environment with multiple message queues, it is likely that operators will want to configure a separate connection to a unified queue for notifications. This can be done in the configuration file of all nodes. See the oslo.messaging configuration documentation for more details.
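Assuming the standard oslo.messaging option names, a separate notification transport pointed at a unified queue would look like the following in each node's configuration file (the URL is a placeholder):

```ini
[oslo_messaging_notifications]
# Send notifications to a single shared queue instead of the per-cell
# RPC transport configured in [DEFAULT]/transport_url.
transport_url = rabbit://nova:secret@notify-rabbit.example.com:5672/
```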
The Neutron metadata API proxy should be global across all cells, and thus be configured as an API-level service with access to the [api_database]/connection information.
The consoleauth service should be global across all cells and thus be configured as an API-level service with access to the [api_database]/connection information. The various console proxies should also be global across all cells, but they don't need access to the API database.
Future work will deprecate the consoleauth service, store token authorizations in the cell databases, and require console proxies running per cell instead of globally.
If you deploy multiple cells with a superconductor as described above, computes and cell-based conductors will not have the ability to speak to the scheduler as they are not connected to the same MQ. This is by design for isolation, but currently the processes are not in place to implement some features without such connectivity. Thus, anything that requires a so-called "upcall" will not function. This impacts the following:
The first is simple: if you boot an instance and it gets scheduled to a compute node but fails there, it would normally be re-scheduled to another node. That requires scheduler intervention, and thus it will not work in Pike with a multi-cell layout. If you do not rely on reschedules to cover up transient compute-node failures, then this will not affect you. To ensure you do not make futile attempts at rescheduling, you should set [scheduler]/max_attempts=1 in nova.conf.
The second two are related. The summary is that some of the facilities Nova has for ensuring that affinity/anti-affinity is preserved between instances do not function in Pike with a multi-cell layout. If you don't use affinity operations, then this will not affect you. To make sure you don't make futile attempts at the affinity check, you should set [workarounds]/disable_group_policy_check_upcall=True and [filter_scheduler]/track_instance_changes=False in nova.conf.
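Collecting the settings recommended above for a Pike multi-cell layout into a single nova.conf sketch:

```ini
[scheduler]
# Disable reschedules; they require an upcall to the scheduler, which
# cell-level services cannot make in a multi-cell layout.
max_attempts = 1

[workarounds]
# Skip the late affinity/anti-affinity check, which also requires an upcall.
disable_group_policy_check_upcall = True

[filter_scheduler]
# Stop computes from trying to report instance changes back to the scheduler.
track_instance_changes = False
```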
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.