Rocky Series Release Notes
7.2.0
New Features
To support better cluster template versioning and relieve the pain of maintaining public cluster templates, the name of a cluster template can now be changed.
Add the heat_container_agent_tag label to allow users to select the heat-agent container tag. Rocky default: rocky-stable.
cloud-provider-openstack for Kubernetes now provides a webhook to support Keystone authorization and authentication. With this feature, users can set the new label ‘keystone-auth-enabled’ to enable Keystone authN and authZ.
Add a new option ‘octavia’ for the label ‘ingress_controller’, along with a new label ‘octavia_ingress_controller_tag’, to enable deployment of the octavia-ingress-controller in a Kubernetes cluster. The ‘ingress_controller_role’ label is not used with this option.
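As a sketch, assuming a Rocky python-magnumclient, these labels could be set at cluster creation time (the cluster name, template name and tag value below are placeholders, not values from this release):

```shell
# Hypothetical example: enable the octavia ingress controller via labels.
# "my-cluster", "k8s-template" and the tag value are placeholders.
openstack coe cluster create my-cluster \
    --cluster-template k8s-template \
    --labels ingress_controller=octavia,octavia_ingress_controller_tag=<tag>
```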
k8s_fedora_atomic_v1: add a PodSecurityPolicy for privileged pods. The privileged PSP is used for calico and node-problem-detector, and the upstream PSP is added for flannel.
Bug Fixes
Fixes a problem with Mesos cluster creation where nodes_affinity_policy was not properly conveyed; it is required in order to create the corresponding server group in Nova. https://storyboard.openstack.org/#!/story/2005116
Add a systemd unit that runs iptables -P FORWARD ACCEPT. On node reboot, kubelet and kube-proxy set iptables -P FORWARD DROP, which does not work with flannel in the way Magnum uses it. The unit sets the policy back to ACCEPT after flannel, docker, kubelet and kube-proxy have started.
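A minimal sketch of such a unit (the unit name and service ordering are illustrative, not the exact unit Magnum ships):

```ini
# iptables-forward-accept.service (illustrative name)
[Unit]
# Run after the services that may reset the FORWARD policy to DROP
After=flanneld.service docker.service kubelet.service kube-proxy.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT

[Install]
WantedBy=multi-user.target
```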
In a Kubernetes cluster, a floating IP is created and associated with the VIP of the load balancer that is created for a Service of type LoadBalancer inside Kubernetes; this floating IP is now deleted when the cluster is deleted.
7.1.0
New Features
Add Octavia client code so that Magnum can interact with the Octavia component of OpenStack.
Start installing Kubernetes workers right after the master instances are created, rather than waiting for all the services inside the masters, which can decrease the Kubernetes cluster launch time significantly.
Use the external cloud provider in k8s_fedora_atomic. The cloud_provider_tag label can be used to select the container tag for it, together with the cloud_provider_enabled label. The cloud provider runs as a DaemonSet on all master nodes.
Upgrade Notes
The cloud config for Kubernetes has been renamed from /etc/kubernetes/kube_openstack_config to /etc/kubernetes/cloud-config, as the kubelet expects this exact name when the external cloud provider is used. A copy of /etc/kubernetes/kube_openstack_config is kept in place for applications developed for previous versions of Magnum.
7.0.2
New Features
Deploy kubelet on master nodes for the k8s_fedora_atomic driver. Previously this was done only for calico; now kubelet runs in all cases. This is useful for monitoring the master nodes (e.g. deploying fluentd) or running the Kubernetes control plane self-hosted.
Add ‘grafana_tag’ and ‘prometheus_tag’ labels for the k8s_fedora_atomic driver. Grafana defaults to 5.1.5 and Prometheus defaults to v1.8.2.
Bug Fixes
Add a new label service_cluster_ip_range for Kubernetes so that users can set the IP range for service portals and avoid conflicts with the pod IP range.
When doing a cluster update, Magnum now passes the existing parameter to Heat, which makes Heat use the templates stored in its database. This prevents Heat from replacing all nodes when the Heat templates change, for example after an upgrade of the Magnum server code. https://storyboard.openstack.org/#!/story/1722573
Fixed a bug where --live-restore was passed to the Docker daemon, causing swarm init to fail. Magnum now ensures --live-restore is not passed to the Docker daemon if it is the default in an image.
7.0.1
Deprecation Notes
Currently, Magnum runs periodic tasks to push k8s cluster metrics to the message bus. Unfortunately, it collects pod info only from the “default” namespace, which makes this function of little use. Moreover, even if Magnum could get all pods from all namespaces, it makes little sense to keep this function in Magnum, because operators mainly care about the health of cluster nodes; if they want to know the status of pods, they can use heapster or other tools. The feature is therefore deprecated now and will be removed in the Stein release. The default value has been changed to False, which means the metrics are not sent.
7.0.0
New Features
k8s_fedora_atomic clusters are deployed with RBAC support. Along with RBAC, Node authorization is added, so the appropriate certificates are generated.
This release introduces ‘federations’ endpoint to Magnum API, which allows an admin to create and manage federations of clusters through Magnum. As the feature is still under development, the endpoints are not bound to any driver yet. For more details, please refer to bp/federation-api [1].
[1] https://review.openstack.org/#/q/topic:bp/federation-api
This allows a multi-master configuration to be used without floating IPs, relying on the load balancers instead.
Add new label ‘cert_manager_api’ enabling the kubernetes certificate manager api.
Embed certificates in the Kubernetes config file when issuing ‘cluster config’, instead of generating additional files with the certificates. This is now the default behavior. To get the old behavior and still generate cert files, pass --output-certs.
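For example, with a recent python-magnumclient (the cluster name is a placeholder):

```shell
# Default: certificates embedded directly in the generated kubeconfig
openstack coe cluster config my-cluster

# Old behaviour: also write the certificate files next to the config
openstack coe cluster config my-cluster --output-certs
```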
Add ‘cloud_provider_enabled’ label for the k8s_fedora_atomic driver. Defaults to true. For specific kubernetes versions if ‘cinder’ is selected as a ‘volume_driver’, it is implied that the cloud provider will be enabled since they are combined.
Add new labels ‘ingress_controller’ and ‘ingress_controller_role’ enabling the deployment of a Kubernetes Ingress Controller backend for clusters. Default for ‘ingress_controller’ is ‘’ (meaning no controller deployed), with possible values being ‘traefik’. Default for ‘ingress_controller_role’ is ‘ingress’.
In OpenStack deployments with the Octavia service enabled, Octavia is now used not only for master node high availability, but also to implement Kubernetes Services of type LoadBalancer.
Update the Kubernetes dashboard to v1.8.3, which is compatible with kubectl proxy. Additionally, heapster is deployed as a standalone deployment, and the user can enable a grafana-influx stack with the influx_grafana_dashboard_enabled label. See the Kubernetes dashboard documentation for more details. https://github.com/kubernetes/dashboard/wiki
Update the k8s_fedora_atomic driver to the latest Fedora Atomic 27 release, and run etcd and flanneld in system containers, as they have been removed from the base OS.
Known Issues
Add ‘calico’ as a network driver for Kubernetes, supporting network isolation between namespaces with k8s network policy.
Currently, the number of coreDNS pod replicas is hardcoded to 1, which is not reasonable for such a critical service; without DNS, most workloads running on the k8s cluster would break. Magnum now autoscales the coreDNS pods based on the number of nodes and cores.
Upgrade Notes
Using the queens (>=2.9.0) python-magnumclient, when a user executes openstack coe cluster config, the client certificate has admin as Common Name (CN) and system:masters for Organization which are required for authorization with RBAC enabled clusters. This change in the client is backwards compatible, so old clusters (without RBAC enabled) can be reached with certificates generated by the new client. However, old magnum clients will generate certificates that will not be able to contact RBAC enabled clusters. This issue affects only k8s_fedora_atomic clusters and clients <=2.8.0, note that 2.8.0 is still a queens release but only 2.9.0 includes the relevant patch. Finally, users can always generate and sign the certificates using this [0] procedure even with old clients since only the cluster config command is affected. [0] https://docs.openstack.org/magnum/latest/user/index.html#interfacing-with-a-secure-cluster
New clusters should be created with kube_tag=v1.9.3 or later. v1.9.3 is the default version in the queens release.
New clusters will be created with kube_tag=v1.11.1 or later. v1.11.1 is the default version in the Rocky release.
Security Issues
k8s_fedora: remove the cluster role from the kubernetes-dashboard service account. When accessing the dashboard with authentication skipped, users log in as the kubernetes-dashboard service account; if that service account has the cluster role, users get admin access without authentication. An admin service account is created for this use case and others.
Bug Fixes
Add a region parameter to the Global configuration section of the Kubernetes configuration file. Setting this parameter allows Magnum clusters to be created in multi-regional OpenStack installations.
Add the optional trustee_keystone_region_name parameter to the trust section. This parameter is useful for multi-regional OpenStack installations with a different Identity service for every region; in such installations it is necessary to specify a region when searching for the auth_url to authenticate the trustee user.
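A sketch of the corresponding magnum.conf fragment (the region name is a placeholder):

```ini
# magnum.conf
[trust]
# Region whose Identity service is used to authenticate the trustee user
trustee_keystone_region_name = RegionOne
```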
Users can now update labels in a cluster template. Previously a string was passed as the value for labels, but labels can only hold dictionary values; the string is now parsed and stored as a dictionary in the cluster template.
Fix the etcd configuration in the k8s_fedora_atomic driver. Explicitly enable client and peer authentication and set the trusted CA (ETCD_TRUSTED_CA_FILE, ETCD_PEER_TRUSTED_CA_FILE, ETCD_CLIENT_CERT_AUTH, ETCD_PEER_CLIENT_CERT_AUTH). Only new clusters will benefit from the fix.
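The resulting etcd environment settings look roughly like this (the certificate paths are illustrative, not the exact paths the driver uses):

```ini
# etcd environment fragment; certificate paths are illustrative
ETCD_TRUSTED_CA_FILE=/etc/etcd/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/certs/ca.crt
ETCD_PEER_CLIENT_CERT_AUTH=true
```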
When creating a multi-master cluster, all master nodes attempt to create Kubernetes resources (coredns, the dashboard, calico, etc.) at the same time. This race condition should not be a problem with declarative calls instead of imperative ones (kubectl apply instead of create). However, due to [1], kubectl fails to apply the changes and the deployment scripts fail, causing cluster creation to fail in the case of Heat SoftwareDeployments. This patch passes the ResourceGroup index of every master so that resource creation is attempted only from the first master node. [1] https://github.com/kubernetes/kubernetes/issues/44165
Create the admin cluster role for k8s_fedora_atomic; it was defined in the configuration but was not applied.
Fix bug #1758672 [1] to protect the kubelet in the k8s_fedora_atomic driver. Before this patch the kubelet was listening on 0.0.0.0, and for clusters with floating IPs it was exposed publicly; even on clusters without floating IPs, the kubelet was exposed inside the cluster. This patch allows access to the kubelet only over HTTPS and with the appropriate roles; the apiserver and heapster have the roles needed to access it. Finally, all read-only ports have been closed so as not to expose any cluster data. The only remaining ports open without authentication are for healthz. [1] https://bugs.launchpad.net/magnum/+bug/1758672
Strip the signed certificate. The certificate (ca.crt) has to be stripped for some application parsers, as they may require a pure base64 representation of the certificate itself, without empty characters at the beginning or the end of the file.
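A minimal sketch of the stripping step (the certificate content here is a dummy placeholder, not a real certificate):

```shell
# Create a dummy ca.crt with stray blank lines around the PEM block
printf '\n-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n\n' > ca.crt

# Remove empty lines so strict parsers see pure PEM/base64 content
sed -i '/^[[:space:]]*$/d' ca.crt
```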
Multi-master deployments for the k8s driver used different service account keys for each api/controller manager server, which led to 401 errors for service accounts. This patch explicitly creates a signed certificate and private key for the Kubernetes service account keys, dedicated to the cluster, to avoid the inconsistent-keys issue.
6.0.1
Known Issues
The Kubernetes client is incompatible with eventlet and breaks the periodic tasks. With kubernetes client 4.0.0 and later, Magnum is affected by the bug below. https://github.com/eventlet/eventlet/issues/147 Magnum has three periodic tasks: one to sync the magnum service, one to update the cluster status, and one to send cluster metrics. The send_metrics task uses the kubernetes client for kubernetes clusters and crashes the sync_cluster_status and send_cluster_metrics tasks. https://bugs.launchpad.net/magnum/+bug/1746510 Additionally, the kubernetes scale manager needs to be disabled so that the scale down command is not completely broken. Note that when Magnum scales down, the cluster will pick the nodes to remove at random.
Upgrade Notes
In the Magnum configuration, set send_cluster_metrics = False in the [drivers] section to avoid collecting metrics using the Kubernetes client, which crashes the periodic tasks.
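In magnum.conf this looks like:

```ini
# magnum.conf
[drivers]
# Stop the periodic task that collects k8s metrics via the kubernetes client
send_cluster_metrics = False
```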