This guide is intended for users who use Magnum to deploy and manage clusters of hosts for a Container Orchestration Engine. It describes the infrastructure that Magnum creates and how to work with it.
Sections 1-3 describe Magnum itself, including an overview, the CLI and the Horizon interface. Sections 4-8 describe the supported Container Orchestration Engines, along with a guide on how to select the one that best meets your needs. Sections 9-14 describe the low-level OpenStack infrastructure that is created and managed by Magnum to support the Container Orchestration Engines.
To be filled in
Magnum rationale, concept, compelling features
To be filled in
To be filled in
Follow the instructions in the OpenStack Installation Guide to enable the repositories for your distribution:
Install using distribution packages for RHEL/CentOS/Fedora:
$ sudo yum install python-magnumclient
Install using distribution packages for Ubuntu/Debian:
$ sudo apt-get install python-magnumclient
Install using distribution packages for OpenSuSE and SuSE Enterprise Linux:
$ sudo zypper install python-magnumclient
Execute the magnum command with the --version argument to confirm that the client is installed and in the system path:
$ magnum --version
1.1.0
Note that the version returned may differ from the above; 1.1.0 was the latest available version at the time of writing.
Refer to the OpenStack Command-Line Interface Reference for a full list of the commands supported by the magnum command-line client.
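For example, assuming your OpenStack credentials have already been loaded into the environment, the following commands list the baymodels and bays that currently exist:
$ magnum baymodel-list
$ magnum bay-list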
To be filled in with screenshots
To be filled in
To be filled in
To be filled in
To be filled in
There are two components that make up the networking in a cluster: the Neutron infrastructure for the cluster and the networking model for the particular COE.
The two components are deployed and managed separately. The Neutron infrastructure is the integration with OpenStack; therefore, it is stable and more or less similar across different COE types. The networking model, on the other hand, is specific to the COE type and is still under active development in the various COE communities, for example, Docker libnetwork and Kubernetes Container Networking. As a result, the implementation for the networking models is evolving and new models are likely to be introduced in the future.
For the Neutron infrastructure, the following configuration can be set in the baymodel:
For the container networking model, the following configuration can be set in the baymodel:
The network driver name for instantiating container networks. Currently, the following network drivers are supported:
Driver | Kubernetes | Swarm | Mesos |
---|---|---|---|
Flannel | supported | supported | unsupported |
Docker | unsupported | supported | supported |
If not specified, the default driver is Flannel for Kubernetes, and Docker for Swarm and Mesos.
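As a sketch of how the network driver is selected when creating a baymodel (the image, keypair, network and flavor names below are placeholders for your own environment, not fixed values), a Kubernetes baymodel using Flannel could be created as follows:
$ magnum baymodel-create --name k8s-baymodel \
    --image-id fedora-21-atomic-5 \
    --keypair-id mykey \
    --external-network-id public \
    --dns-nameserver 8.8.8.8 \
    --flavor-id m1.small \
    --coe kubernetes \
    --network-driver flannel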
A particular network driver may require its own set of configuration parameters, and these parameters are specified through labels in the baymodel. Labels are arbitrary key=value pairs.
When Flannel is specified as the network driver, the following optional labels can be added:
To be filled in
To be filled in
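As a sketch of how such labels are passed (the flannel_network_cidr label and its value are an assumption about a typical Flannel option rather than a definitive reference; the other parameters are placeholders as in the earlier example), labels are supplied to baymodel-create as comma-separated key=value pairs:
$ magnum baymodel-create --name k8s-baymodel \
    --image-id fedora-21-atomic-5 \
    --keypair-id mykey \
    --external-network-id public \
    --flavor-id m1.small \
    --coe kubernetes \
    --network-driver flannel \
    --labels flannel_network_cidr=10.100.0.0/16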
When a COE is deployed, an image from Glance is used to boot the nodes in the cluster, and the software is then configured and started on the nodes to bring up the full cluster. An image is based on a particular distro, such as Fedora or Ubuntu, and is prebuilt with the software specific to the COE, such as Kubernetes, Swarm, or Mesos. The image is tightly coupled with the following in Magnum:
Collectively, they constitute the driver for a particular COE and a particular distro; therefore, developing a new image needs to be done in conjunction with developing these other components. Images can be built by various methods, such as diskimagebuilder, or in some cases a stock distro image can be used directly. A number of drivers and their associated images are supported in Magnum as reference implementations. In this section, we focus mainly on the supported images.
All images must include support for cloud-init and the heat software configuration utility:
The additional software for each image is described below.
This image is built manually following the instructions provided in this Atomic guide. The Fedora site hosts the current image fedora-21-atomic-5.qcow2. This image has the following OS/software:
OS/software | version |
---|---|
Fedora | 21 |
Docker | 1.8.1 |
Kubernetes | 1.0.4 |
etcd | 2.0.10 |
Flannel | 0.5.0 |
The following software is managed as systemd services:
The following software is managed as Docker containers:
The login for this image is minion.
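As a sketch of how this image is made available to Magnum (assuming the standard Glance CLI and that Magnum identifies the image's driver through the os_distro property), the downloaded qcow2 file could be registered as follows:
$ glance image-create --name fedora-21-atomic-5 \
    --disk-format qcow2 \
    --container-format bare \
    --property os_distro=fedora-atomic \
    --file fedora-21-atomic-5.qcow2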
CoreOS publishes a stock image that is used to deploy Kubernetes. This image has the following OS/software:
OS/software | version |
---|---|
CoreOS | 4.3.6 |
Docker | 1.9.1 |
Kubernetes | 1.0.6 |
etcd | 2.2.3 |
Flannel | 0.5.5 |
The following software is managed as systemd services:
The following software is managed as Docker containers:
The login for this image is core.
This image is built manually using diskimagebuilder. The scripts and instructions are included in the Magnum code repo. Currently, Ironic is not yet fully supported; more details will be provided when this driver has been fully tested.
This image is the same as the image for Kubernetes on Fedora Atomic and is built manually following the instructions provided in this Atomic guide. The Fedora site hosts the current image fedora-21-atomic-5.qcow2. This image has the following OS/software:
OS/software | version |
---|---|
Fedora | 21 |
Docker | 1.8.1 |
Kubernetes | 1.0.4 |
etcd | 2.0.10 |
Flannel | 0.5.0 |
The login for this image is fedora.
This image is built manually using diskimagebuilder. The instructions are provided in this Mesos guide. The Fedora site hosts the current image ubuntu-14.04.3-mesos-0.25.0.qcow2. This image has the following OS/software:
OS/software | version |
---|---|
Ubuntu | 14.04 |
Docker | 1.8.1 |
Mesos | 0.25.0 |
Marathon |
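As a sketch of using this image to stand up a Mesos cluster (the image, keypair, network and flavor names are placeholders for your own environment), a baymodel and a bay could be created as follows:
$ magnum baymodel-create --name mesos-baymodel \
    --image-id ubuntu-14.04.3-mesos-0.25.0 \
    --keypair-id mykey \
    --external-network-id public \
    --dns-nameserver 8.8.8.8 \
    --flavor-id m1.small \
    --coe mesos
$ magnum bay-create --name mesos-bay --baymodel mesos-baymodel --node-count 1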