Victoria Series Release Notes
11.2.0
New Features
Support the hyperkube_prefix label, which defaults to k8s.gcr.io/. Users now have the option to define an alternative hyperkube image source, since the default source has discontinued publication of hyperkube images for kube_tag greater than 1.18.x. Note that if the container_infra_prefix label is defined, it still takes precedence over this label.
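A minimal sketch of how this label might be used (registry, tag, image and network names are all illustrative; any registry that publishes hyperkube images for your kube_tag will do):

    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels hyperkube_prefix=docker.io/rancher/,kube_tag=v1.19.10-rancher1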
11.1.0
Upgrade Notes
The default admission controller list has been updated to “NodeRestriction, PodSecurityPolicy, NamespaceLifecycle, LimitRanger, ServiceAccount, ResourceQuota, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass”.
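Deployments that need a different set can override it with the existing admission_control_list label; a sketch, with a deliberately shortened list and placeholder names:

    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels admission_control_list="NodeRestriction,NamespaceLifecycle,ServiceAccount,ResourceQuota"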
The default containerd version has been updated to 1.4.3.
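A sketch of pinning a specific containerd build via the existing containerd labels (values are illustrative, checksum elided):

    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels containerd_version=1.4.3,containerd_tarball_sha256=<sha256>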
Bug Fixes
Fixes a regression which left behind trustee user accounts and certificates when a cluster is deleted.
Fixes database migrations with SQLAlchemy 1.3.20.
Fixes an issue with cluster deletion if load balancers do not exist. See story 2008548 (https://storyboard.openstack.org/#!/story/2008548) for details.
11.0.0
New Features
Users can enable or disable master_lb_enabled when creating a cluster.
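For example, assuming a python-magnumclient recent enough to expose this field as a --master-lb-enabled flag (names are placeholders), a load-balanced control plane can be requested at creation time:

    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --master-count 3 \
        --master-lb-enabled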
The default 10-second health polling interval is too frequent for most cases, so it has been changed to 60 seconds. A new config option, health_polling_interval, makes the interval configurable. Cloud admins can disable health polling entirely by setting a negative value for this option.
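A sketch of the option in magnum.conf (the 300-second value is only an example):

    [kubernetes]
    # Poll cluster health every 5 minutes; a negative value disables polling.
    health_polling_interval = 300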
Expose the cluster autoscaler's Prometheus metrics on the pod port named metrics (8085).
Add a new label, master_lb_allowed_cidrs, to control the IP ranges that can access the Kubernetes API and etcd load balancers of the master. This feature requires at least Heat stable/ussuri and Octavia stable/train.
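For illustration (the CIDR and names are examples), the label can be set when creating a cluster:

    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --labels master_lb_allowed_cidrs=192.168.1.0/24 \
        --merge-labels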
A new boolean flag is introduced in the Cluster and Nodegroup create API calls. Using this flag, users can override label values when clusters or nodegroups are created without having to specify all the inherited values. To do so, users specify only the labels whose values they want to change and pass the --merge-labels flag. At the same time, three new fields are added to the cluster and nodegroup show outputs, showing the differences between the actual and the inherited labels.
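A sketch of the workflow (the label and values are examples; the diff fields are assumed to be named labels_overridden, labels_added and labels_skipped):

    # Override a single inherited label; --merge-labels keeps the rest intact.
    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --labels auto_healing_enabled=true \
        --merge-labels

    # The show output now also reports how the labels differ from the inherited ones.
    openstack coe cluster show my-cluster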
Magnum now cascade-deletes all the load balancers before deleting the cluster: not only the load balancers for cluster services and ingresses, but also those for the Kubernetes API and etcd endpoints.
Support the Helm v3 client to install Helm charts. To use this feature, users will need helm_client_tag>=v3.0.0 (default helm_client_tag=v3.2.1). All the existing charts that used to depend on Helm v2, e.g. the nginx ingress controller, metrics server, prometheus operator and prometheus adapter, are now also installable using the v3 client. Also introduces helm_client_sha256 and helm_client_url, which users can specify to install a non-default Helm client version (https://github.com/helm/helm/releases).
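A sketch of selecting a non-default Helm v3 client via these labels (version, URL and checksum are illustrative; the URL follows the naming used on the releases page above):

    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --merge-labels \
        --labels helm_client_tag=v3.2.4 \
        --labels helm_client_url=https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz \
        --labels helm_client_sha256=<sha256>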
Kubernetes cluster owners can now rotate the CA certificate to re-generate the cluster's CA; the service account keys and the certificates of all nodes are regenerated as well. Cluster users need to fetch a new kubeconfig to access the Kubernetes API. This function is only supported by the Fedora CoreOS driver.
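For example (cluster name and directory are placeholders):

    openstack coe ca rotate my-cluster
    # The old kubeconfig stops working after rotation; fetch a new one.
    openstack coe cluster config my-cluster --dir ~/.kube/my-cluster --force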
Cloud admin users can now perform a rolling upgrade on behalf of end users in order to apply urgent security patches when necessary.
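For instance, an admin operating in the user's project could trigger the upgrade (names are placeholders; the target must be an upgrade-compatible cluster template):

    openstack coe cluster upgrade my-cluster new-template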
Add the cluster_uuid label to the metrics exported for Prometheus federation.
Upgrade Notes
If a 10-second health polling interval is still preferred for Kubernetes clusters, it can be set via the health_polling_interval config option under the [kubernetes] section.
The cinder_csi_enabled label defaults to True from the Victoria cycle.
The default version of the Kubernetes dashboard has been upgraded to v2.0.0, which now supports metrics-server.
The default tiller_tag is set to v2.16.7. The charts remain compatible, but helm_client_tag also needs to be set to the same value as tiller_tag, i.e. v2.16.7. In this case, the user also needs to provide helm_client_sha256 for the Helm client binary intended for use.
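A sketch of the matching labels (names are placeholders, checksum elided):

    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --merge-labels \
        --labels tiller_tag=v2.16.7,helm_client_tag=v2.16.7,helm_client_sha256=<sha256>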
Bumped the prometheus-operator chart tag to 8.12.13. Applied container_infra_prefix to the prometheusOperator images that were missing it.
Deprecation Notes
Deprecate the in-tree Cinder volume driver for removal in the X cycle, in favour of the out-of-tree Cinder CSI plugin.
The devicemapper and overlay storage drivers are deprecated in favour of overlay2 and will be removed from Docker in a future release. Users of the devicemapper and overlay storage drivers are recommended to migrate to a different storage driver, such as overlay2, as shown below. overlay2 will be set as the default storage driver from the Victoria cycle.
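The storage driver is chosen on the cluster template; for example (image and network names are placeholders):

    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --docker-storage-driver overlay2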
Support for the Helm v2 client will be removed in the X release.
Bug Fixes
Deploy Traefik from the heat-agent.
Traefik deployment manifests are now applied with kubectl from the Heat agent; previously, a systemd unit was created to send the manifests to the API. This leaves a single way of applying manifests to the API.
This change addresses the kubectl change [0], which no longer uses 127.0.0.1:8080 as the default Kubernetes API address.
[0] https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#kubectl
Fixes an edge case where patching a cluster that has additional nodegroups with health_status and health_status_reason led to the default-worker nodegroup being resized.
The fixed_network_cidr label has been renamed to fixed_subnet_cidr, and it can now be passed in and set correctly.
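For illustration (the CIDR is an example), the renamed label can now be passed at creation time:

    openstack coe cluster create my-cluster \
        --cluster-template k8s-template \
        --labels fixed_subnet_cidr=10.0.16.0/24 \
        --merge-labels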
Fixes an issue with private clusters getting stuck in CREATE_IN_PROGRESS status when floating_ip_enabled=True in the cluster template but the setting is disabled when the cluster is created.
Fixes a corner case where the cluster template had floating_ip_enabled=False, master_lb_enabled=True and master_lb_floating_ip_enabled=False, but floating_ip_enabled=True was set when creating the cluster, which left the cluster's api_address without an IP address.
The Prometheus server now scrapes metrics from the Traefik proxy and from the cluster autoscaler.
Scrape metrics from kube-controller-manager and kube-scheduler, and disable the PrometheusRule for etcd.