Install kuryr-k8s-controller in a virtualenv:
$ mkdir kuryr-k8s-controller
$ cd kuryr-k8s-controller
$ virtualenv env
$ git clone http://git.openstack.org/openstack/kuryr-kubernetes
$ . env/bin/activate
$ pip install -e kuryr-kubernetes
In Neutron or Horizon, create a subnet for pods, a subnet for services and a security group for pods. You may use existing ones if you like.
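For example, these resources could be created from the CLI. This is only a sketch; the names pod_net, pod_subnet, service_net, service_subnet and pod_sg, as well as the CIDRs, are placeholders chosen for illustration, so adjust them to your environment:
$ openstack network create pod_net
$ openstack subnet create --network pod_net --subnet-range 10.1.0.0/16 pod_subnet
$ openstack network create service_net
$ openstack subnet create --network service_net --subnet-range 10.2.0.0/16 service_subnet
$ openstack security group create pod_sg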
Create /etc/kuryr/kuryr.conf:
$ cd kuryr-kubernetes
$ ./tools/generate_config_file_samples.sh
$ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf
Edit kuryr.conf:
[DEFAULT]
use_stderr = true
bindir = {path_to_env}/libexec/kuryr
[kubernetes]
api_root = http://{ip_of_kubernetes_apiserver}:8080
[neutron]
auth_url = http://127.0.0.1:35357/v3/
username = admin
user_domain_name = Default
password = ADMIN_PASSWORD
project_name = service
project_domain_name = Default
auth_type = password
[neutron_defaults]
ovs_bridge = br-int
pod_security_groups = {id_of_security_group_for_pods}
pod_subnet = {id_of_subnet_for_pods}
project = {id_of_project}
service_subnet = {id_of_subnet_for_k8s_services}
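The IDs needed to fill those placeholders can be looked up with the CLI. A minimal sketch, assuming the example resource names pod_subnet, service_subnet and pod_sg from above and the k8s_cluster_project project used later in this guide (substitute your own names):
$ openstack subnet show pod_subnet -f value -c id
$ openstack subnet show service_subnet -f value -c id
$ openstack security group show pod_sg -f value -c id
$ openstack project show k8s_cluster_project -f value -c id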
Note that the service_subnet and the pod_subnet should be routable and that the pods should allow service subnet access.
Octavia supports two ways of performing the load balancing between the Kubernetes load balancers and their members: L2 and L3.
To support the L3 mode (both for Octavia and for the deprecated Neutron-LBaaSv2):
There should be a router between the two subnets (a sample router setup is sketched below).
The pod_security_groups setting should include a security group with a rule granting access to the whole CIDR of the service subnet, e.g.:
openstack security group create --project k8s_cluster_project \
service_pod_access_sg
openstack --project k8s_cluster_project security group rule create \
--remote-ip cidr_of_service_subnet --ethertype IPv4 --protocol tcp \
service_pod_access_sg
The UUID of this security group should be added to the comma-separated list of pod security groups, i.e. pod_security_groups in [neutron_defaults].
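For example, the router requirement above could be satisfied as follows (the router name k8s_router is an assumption; reuse an existing router if one already connects the two subnets):
$ openstack router create k8s_router
$ openstack router add subnet k8s_router {id_of_subnet_for_pods}
$ openstack router add subnet k8s_router {id_of_subnet_for_k8s_services}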
Alternatively, to support Octavia L2 mode:
The pod_security_groups setting should include a security group with a rule granting access to the whole CIDR of the pod subnet, e.g.:
openstack security group create --project k8s_cluster_project \
octavia_pod_access_sg
openstack --project k8s_cluster_project security group rule create \
--remote-ip cidr_of_pod_subnet --ethertype IPv4 --protocol tcp \
octavia_pod_access_sg
The UUID of this security group should be added to the comma-separated list of pod security groups, i.e. pod_security_groups in [neutron_defaults].
Run kuryr-k8s-controller:
$ kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d
Alternatively you may run it in screen:
$ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d
On every Kubernetes minion node (and on the master, if you intend to run containers there) you need to configure kuryr-cni.
Install kuryr-cni in a virtualenv:
$ mkdir kuryr-k8s-cni
$ cd kuryr-k8s-cni
$ virtualenv env
$ . env/bin/activate
$ git clone http://git.openstack.org/openstack/kuryr-kubernetes
$ pip install -e kuryr-kubernetes
Create /etc/kuryr/kuryr.conf:
$ cd kuryr-kubernetes
$ ./tools/generate_config_file_samples.sh
$ cp etc/kuryr.conf.sample /etc/kuryr/kuryr.conf
Edit kuryr.conf:
[DEFAULT]
use_stderr = true
bindir = {path_to_env}/libexec/kuryr
[kubernetes]
api_root = http://{ip_of_kubernetes_apiserver}:8080
Link the CNI binary into the CNI directory, where kubelet will look for it:
$ mkdir -p /opt/cni/bin
$ ln -s $(which kuryr-cni) /opt/cni/bin/
Create the CNI config file for kuryr-cni: /etc/cni/net.d/10-kuryr.conf.
Kubelet only uses the lexicographically first file in that directory, so
make sure that it is kuryr's config file:
{
"cniVersion": "0.3.0",
"name": "kuryr",
"type": "kuryr-cni",
"kuryr_conf": "/etc/kuryr/kuryr.conf",
"debug": true
}
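To double-check that kuryr's file will be picked up first, list the directory and verify that 10-kuryr.conf sorts before any other config files:
$ ls /etc/cni/net.d/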
Install the os-vif and oslo.privsep libraries globally. These modules are used
to plug interfaces and are run with raised privileges. os-vif uses sudo to
raise privileges, so both libraries need to be installed globally to work
correctly:
$ deactivate
$ sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'
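You can verify that both libraries are now available outside the virtualenv, e.g.:
$ pip show os-vif oslo.privsep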
The Kuryr CNI Daemon is an optional service designed to increase the scalability of the Kuryr operations performed on Kubernetes nodes. More information can be found on the CNI Daemon page.
If you want to use the Kuryr CNI Daemon, it needs to be installed on every Kubernetes node, so the following steps need to be repeated on each of them.
Edit kuryr.conf:
[cni_daemon]
daemon_enabled=True
Note
You can tweak the configuration of some timeouts to match your environment. This is crucial for the scalability of the whole deployment. In general, the timeout to serve a CNI request from kubelet to Kuryr is 180 seconds. After that time kubelet will retry the request. Additionally, there are two configuration options:
[cni_daemon]
vif_annotation_timeout=60
pyroute2_timeout=10
vif_annotation_timeout is the time the Kuryr CNI Daemon will wait for the
Kuryr Controller to create a port in Neutron and add information about it to
the Pod's metadata. If either Neutron or the Kuryr Controller doesn't keep up
with a high number of requests, it's advised to increase this timeout. Please
note that increasing it over 180 seconds will not have any effect, as the
request will time out anyway and be retried (which is safe).
pyroute2_timeout is the internal timeout of the pyroute2 library, which is
responsible for making modifications to the Linux kernel networking stack
(e.g. moving interfaces into the Pod's namespace, adding routes and ports, or
assigning addresses to interfaces). When serving a lot of ADD/DEL CNI requests
on a regular basis, it's advised to increase that timeout. Please note that
the value denotes the maximum time to wait for the kernel to complete the
operations. If an operation succeeds earlier, the request isn't delayed.
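For example, on a heavily loaded deployment you might raise both timeouts in /etc/kuryr/kuryr.conf; the values below are purely illustrative, not recommendations:
[cni_daemon]
daemon_enabled = True
vif_annotation_timeout = 120
pyroute2_timeout = 30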
Run kuryr-daemon:
$ kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d
Alternatively you may run it in screen:
$ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d