Automated High Availability in kubeadm v1.15: Batteries Included But Swappable
Authors:
- Lucas Käldström
- Fabrizio Pandini
SIG Cluster Lifecycle has been developing kubeadm since 2016, and graduated it from beta to general availability (GA) in Kubernetes v1.13.
After this important milestone, the kubeadm team is now focused on the stability of the core feature set and on maturing existing features.
With this post, we are introducing the improvements made in the v1.15 release of kubeadm.
The scope of kubeadm
kubeadm is focused on performing the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way. kubeadm's scope is limited to the local machine’s filesystem and the Kubernetes API, and it is intended to be a composable building block for higher-level tools.
The core of the kubeadm interface is quite simple: new control plane nodes are created by running kubeadm init, and worker nodes are joined to the control plane by running kubeadm join. Also included are common utilities for managing already bootstrapped clusters, such as control plane upgrades and token and certificate renewal.
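As a minimal sketch of that workflow (the endpoint, token, and hash below are placeholders; kubeadm init prints the real join command for your cluster):

```shell
# On the first machine: bootstrap a minimum viable, secure cluster.
kubeadm init

# kubeadm init prints a join command containing a fresh bootstrap token
# and the CA certificate hash; the values below are placeholders.
# On each worker node:
kubeadm join 10.0.0.10:6443 \
    --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>

# Common utilities for an already bootstrapped cluster:
kubeadm upgrade plan    # show which Kubernetes versions you can upgrade to
kubeadm token create    # mint a new bootstrap token for joining more nodes
```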
To keep kubeadm lean, focused, and vendor/infrastructure agnostic, the following tasks are out of scope:
- Infrastructure provisioning
- Third-party networking
- Non-critical add-ons, e.g. monitoring, logging, and visualization
- Specific cloud provider integrations
Those tasks are addressed by other SIG Cluster Lifecycle projects, such as the Cluster API for infrastructure provisioning.
Instead, kubeadm covers only the common denominator in every Kubernetes cluster: the control plane.

What's new in kubeadm v1.15?

High Availability to Beta

We are delighted to announce that automated support for High Availability clusters is graduating to Beta in kubeadm v1.15. Let's give a great shout out to all the contributors that helped in this effort, and to the early adopter users for the great feedback received so far!

But how does automated High Availability work in kubeadm? The great news is that you can use the familiar kubeadm init and kubeadm join workflow for creating high availability clusters as well; the only difference is that you pass the --control-plane flag to kubeadm join when adding more control plane nodes. A 3-minute screencast of this feature is here:

In a nutshell:

- Set up a load balancer. You need an external load balancer; providing one, however, is out of scope for kubeadm.
- Run kubeadm init on the first control plane node, with small modifications.
- Run kubeadm join --control-plane whenever you want to expand the set of control plane nodes. The join command is printed by kubeadm init above, and is of the form:

    kubeadm join [LB endpoint] \
        --token ... \
        --discovery-token-ca-cert-hash sha256:... \
        --control-plane --certificate-key ...

For those interested in the details, there are many things that make this functionality possible. Most notably:

- Automated certificate transfer. kubeadm implements an automatic certificate copy feature to automate the distribution of all the certificate authorities and keys that must be shared across all the control plane nodes in order for your cluster to work. This feature can be activated by passing --upload-certs to kubeadm init.
- Dynamically growing etcd cluster. When you're not providing an external etcd cluster, kubeadm automatically adds a new etcd member, running as a static pod. All the etcd members are joined in a "stacked" etcd cluster that grows together with your high availability control plane.
- Concurrent joining. Similarly to what is already implemented for worker nodes, you can join control plane nodes at any time, in any order, or even in parallel.
- Upgradable. The kubeadm upgrade workflow was improved to properly handle the HA scenario: after starting the upgrade with kubeadm upgrade apply as usual, users can now complete the upgrade process by running kubeadm upgrade node on the remaining control plane nodes and on the worker nodes.

Finally, it is also worth noting that an entirely new test suite has been created specifically to ensure that High Availability in kubeadm stays stable over time.

Certificate Management

Certificate management has become simpler and more robust in kubeadm v1.15.

If you perform Kubernetes version upgrades regularly, kubeadm will now take care of keeping your cluster up to date and reasonably secure by automatically renewing all your certificates at kubeadm upgrade time.

If you prefer to renew your certificates manually instead, you can opt out of automatic certificate renewal by passing --certificate-renewal=false to kubeadm upgrade commands.

But there is more. A new command, kubeadm alpha certs check-expiration, was introduced to allow users to check when their certificates expire:

    CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
    admin.conf                 May 15, 2020 13:03 UTC   364d            false
    apiserver                  May 15, 2020 13:00 UTC   364d            false
    apiserver-etcd-client      May 15, 2020 13:00 UTC   364d            false
    apiserver-kubelet-client   May 15, 2020 13:00 UTC   364d            false
    controller-manager.conf    May 15, 2020 13:03 UTC   364d            false
    etcd-healthcheck-client    May 15, 2020 13:00 UTC   364d            false
    etcd-peer                  May 15, 2020 13:00 UTC   364d            false
    etcd-server                May 15, 2020 13:00 UTC   364d            false
    front-proxy-client         May 15, 2020 13:00 UTC   364d            false
    scheduler.conf             May 15, 2020 13:03 UTC   364d            false

You should expect more work around certificate management in kubeadm in the next releases, with the introduction of ECDSA keys and improved support for CA key rotation. Additionally, the commands staged under kubeadm alpha are expected to move top-level soon.

Improved Configuration File Format

You can argue that there are hardly two Kubernetes clusters that are configured equally, and hence there is a need to customize how the cluster is set up depending on the environment. One way of configuring a component is via flags. However, this has some scalability limitations; for example, many configurations are not expressible with a --key=value syntax. This is a key problem for Kubernetes components in general, as some components have 150+ flags.

With kubeadm we're pioneering the ComponentConfig effort: providing users with a small set of flags but, most importantly, a declarative and versioned configuration file for advanced use cases. We call this ComponentConfig.

In kubeadm v1.15, we have improved the structure and are releasing the new v1beta2 format. It is important to note that the existing v1beta1 format released in v1.13 will continue to work for several releases. This means you can upgrade kubeadm to v1.15 and still use your existing v1beta1 configuration files. When you're ready to take advantage of the improvements made in v1beta2, you can perform an automatic schema migration using the kubeadm config migrate command.

During the course of the year, we're looking forward to graduating the schema to General Availability (v1). If you're interested in this effort, you can also join in.

What's next?

2019 plans

We are focusing our efforts around graduating the configuration file format to GA (kubeadm.k8s.io/v1), graduating this super-easy High Availability flow to stable, and providing better tools for automatically rotating the certificates needed to run the cluster.

In addition to these three key milestones of our charter, we want to keep improving other areas of kubeadm. We make no guarantees that these deliverables will ship this year, though, as this is a community effort. If you want to see these things happen, please join our SIG and start contributing! The ComponentConfig issues in particular need more attention.

kubeadm now has a logo!

Contributing

If this all sounds exciting, join us! Some handy links if you want to start contributing:

Thank You

This release wouldn't have been possible without the help of the great people that have been contributing to SIG Cluster Lifecycle and kubeadm. We would like to thank all the kubeadm contributors and the companies making it possible for their developers to work on Kubernetes!