Developing on Kubernetes
Authors:
How do you develop a Kubernetes app? That is, how do you write and test an app that is supposed to run on Kubernetes? This article focuses on the challenges, tools, and methods you might want to be aware of to successfully write Kubernetes apps alone or in a team setting.

We're assuming you are a developer, you have a favorite programming language, editor/IDE, and a testing framework available. The overarching goal is to introduce minimal changes to your current workflow when developing the app for Kubernetes. For example, if you're a Node.js developer used to a hot-reload setup (that is, on save in your editor the running app gets automagically updated), then dealing with containers and container images, with container registries, Kubernetes deployments, triggers, and more can not only be overwhelming but really take all the fun out of it.

In the following, we'll first discuss the overall development setup, then review tools of the trade, and last but not least do a hands-on walkthrough of three exemplary tools that allow for iterative, local app development against Kubernetes.

As a developer you want to think about where the Kubernetes cluster you're developing against runs, as well as where the development environment sits. Conceptually, there are four development modes:

A number of tools support pure offline development, including Minikube, Docker for Mac/Windows, Minishift, and the ones we discuss in detail below. Sometimes, for example in a microservices setup where certain microservices already run in the cluster, a proxied setup (forwarding traffic into and from the cluster) is preferable; Telepresence is an example tool in this category. The live mode essentially means you're building and/or deploying against a remote cluster, and, finally, the pure online mode means both your development environment and the cluster are remote, as is the case with, for example,
Minikube is a popular choice for those who prefer to run Kubernetes in a local VM. More recently, Docker for
Running a local cluster allows folks to work offline and means you don't have to pay for cloud resources. Cloud provider costs are often rather affordable and free tiers exist; however, some folks prefer to avoid having to approve those costs with their manager, as well as potentially incurring unexpected costs, for example when leaving a cluster running over the weekend.

Some developers prefer to use a remote Kubernetes cluster, usually to allow for larger compute and storage capacity and to enable collaborative workflows more easily: it's easier for you to pull in a colleague to help with debugging or to share access to an app within the team. Additionally, for some developers it can be critical to mirror the production environment as closely as possible, especially when it comes down to external cloud services, say, proprietary databases, object stores, message queues, external load balancers, or mail delivery systems.

In summary, there are good reasons to develop against a local cluster as well as a remote one. It very much depends on which phase you are in: from early prototyping and/or developing alone to integrating a set of more stable microservices.

Now that you have a basic idea of the options around the runtime environment, let's move on to how to iteratively develop and deploy your app. We are now going to review tooling that allows you to develop apps on Kubernetes, with the focus on having minimal impact on your existing workflow. We strive to provide an unbiased description, including the implications of using each of the tools in general terms. Note that this is a tricky area, since even for established technologies such as, for example, JSON vs. YAML vs. XML or REST vs. gRPC vs. SOAP, a lot depends on your background, your preferences, and organizational settings.
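In practice, hopping between a local and a remote cluster comes down to switching kubectl contexts. A small sketch of convenience helpers for this; the context names here are assumptions, list yours with `kubectl config get-contexts`:

```shell
# Hypothetical helpers for switching between a local Minikube cluster and a
# shared remote one; adjust the context names to match your kubeconfig.
use_local()  { kubectl config use-context minikube; }
use_remote() { kubectl config use-context my-remote-cluster; }
```

Defining these as shell functions (or aliases) keeps the switch to a single short command, which matters when you iterate many times per hour.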
It’s even harder to compare tooling in the Kubernetes ecosystem as things evolve very rapidly and new tools are announced almost on a weekly basis; during the preparation of this post alone, for example,
Implications: More info:
Implications: More info:
Implications: More info:
Implications: More info:
Implications: More info:

The app we will be using for the hands-on walkthroughs of the tools in the following is a simple one, consisting of two microservices (described below).
Overall, the default setup of the app looks as follows:

In the following we'll do a hands-on walkthrough for a representative selection of the tools discussed above: ksync, Minikube with local build, as well as Skaffold. For each of the tools we do the following: Note that for the target Kubernetes cluster we've been using Minikube locally, but you can also use a remote cluster for ksync and Skaffold if you want to follow along. As a preparation, install
Each walkthrough follows the same pattern: set up the tool, deploy the stock generator and the stock consumer microservice, forward the stock-con service for local consumption, verify it by regularly checking the response of the healthz endpoint, and then change the code and observe how the running app gets updated. For ksync, this additionally involves telling ksync's local client to watch a certain Kubernetes namespace and creating a spec to define what we want to sync. For the Minikube walkthrough you will need to have Minikube up and running, and we will leverage the Minikube-internal Docker daemon for building images locally.
The Skaffold walkthrough, finally, deploys the stock generator (but not the stock consumer microservice; that one is handled by Skaffold), patches stock-con/app.yaml slightly to make it work with Skaffold, configures Skaffold itself, and then kicks off iterative development against the cluster.

By now you should have a feeling for how different tools enable you to develop apps on Kubernetes. If you're interested in learning more about tools and/or methods, check out the following resources:

With that, we wrap up this post on how to go about developing apps on Kubernetes. We hope you learned something, and if you have feedback and/or want to point out a tool that you found useful, please let us know via Twitter:
Where to run your cluster?
The tools of the trade
Draft
Skaffold
Squash
Telepresence
Ksync
Hands-on walkthroughs
* The stock-gen microservice is written in Go; it generates stock data randomly and exposes it via the HTTP endpoint /stockdata.
* A second microservice, stock-con, is a Node.js app that consumes the stream of stock data from stock-gen and provides an aggregation in the form of a moving average via the HTTP endpoint /average/$SYMBOL, as well as a health-check endpoint at /healthz.
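To make the moving-average idea concrete, here is a toy sketch in shell; this is our own illustration with made-up sample prices, not the app's actual Node.js code:

```shell
# Toy sketch of the aggregation stock-con performs: average a window of prices.
# The sample prices below are made up for illustration.
prices="10 12 11 13 14"
avg=$(printf '%s\n' $prices | awk '{ sum += $1; n++ } END { printf "%.1f", sum/n }')
echo "$avg"
```

In the real app the window slides as new stock data streams in from stock-gen; the sketch only shows the averaging step over one window.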
In each walkthrough, we then change the code in the stock-con microservice, check the /healthz endpoint in the stock-con microservice, and observe the updates.

Walkthrough: ksync
$ mkdir -p $(pwd)/ksync
$ kubectl create namespace dok
$ ksync init -n dok
We tell ksync's local client to watch the namespace and create a spec defining what we want to sync: the directory $(pwd)/ksync locally with /app in the container. Note that the target pod is specified via the selector parameter:

$ ksync watch -n dok
$ ksync create -n dok --selector=app=stock-con $(pwd)/ksync /app
$ ksync get -n dok
Now we deploy the stock generator and the stock consumer microservice:
$ kubectl -n=dok apply \
-f https://raw.githubusercontent.com/kubernauts/dok-example-us/master/stock-gen/app.yaml
$ kubectl -n=dok apply \
-f https://raw.githubusercontent.com/kubernauts/dok-example-us/master/stock-con/app.yaml
Once both deployments are created and the pods are running, we forward the stock-con service for local consumption (in a separate terminal session):

$ kubectl get -n dok po --selector=app=stock-con \
-o=custom-columns=:metadata.name --no-headers | \
xargs -IPOD kubectl -n dok port-forward POD 9898:9898
With that we should be able to consume the stock-con service from our local machine; we do this by regularly checking the response of the healthz endpoint like so (in a separate terminal session):

$ watch curl localhost:9898/healthz
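Instead of eyeballing the watch output, you could poll the endpoint until it answers successfully. A small sketch of such a helper; the URL matches the port-forward above, and the roughly 30-second timeout is an arbitrary choice of ours:

```shell
# Hypothetical readiness helper: poll the forwarded healthz endpoint until it
# responds successfully, or give up after about 30 seconds.
wait_healthy() {
  for _ in $(seq 1 30); do
    if curl -fsS localhost:9898/healthz >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    sleep 1
  done
  echo "timeout"
  return 1
}
```

This is handy in scripts, where you want to block until the service is reachable before firing off test requests.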
Now change the code in the ksync/stock-con directory, for example update the
Walkthrough: Minikube with local build

For the following you will need to have Minikube up and running, and we will leverage the Minikube-internal Docker daemon for building images locally. As a preparation, do the following:
$ git clone https://github.com/kubernauts/dok-example-us.git && cd dok-example-us
$ eval $(minikube docker-env)
$ kubectl create namespace dok
Now we deploy the stock generator and the stock consumer microservice:
$ kubectl -n=dok apply -f stock-gen/app.yaml
$ kubectl -n=dok apply -f stock-con/app.yaml
Once both deployments are created and the pods are running, we forward the stock-con service for local consumption (in a separate terminal session) and check the response of the healthz endpoint:

$ kubectl get -n dok po --selector=app=stock-con \
-o=custom-columns=:metadata.name --no-headers | \
xargs -IPOD kubectl -n dok port-forward POD 9898:9898 &
$ watch curl localhost:9898/healthz
Now change the code in the stock-con directory, for example, update the

Overall, you should have something like the following in the end:
$ docker build -t stock-con:dev -f Dockerfile .
$ kubectl -n dok set image deployment/stock-con *=stock-con:dev
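The build-and-set-image pair above lends itself to a tiny helper. A sketch of one, assuming Minikube's Docker daemon is active via eval $(minikube docker-env); the timestamped tag is our own convention, chosen so that set image always points at a tag the deployment has not seen before and therefore triggers a rollout:

```shell
# Hypothetical helper combining the two steps above: build the image inside
# Minikube's Docker daemon under a fresh tag, then point the deployment at it.
redeploy() {
  tag="stock-con:dev-$(date +%s)"   # fresh tag each time, so a rollout is triggered
  docker build -t "$tag" -f Dockerfile . &&
    kubectl -n dok set image deployment/stock-con "*=$tag"
}
```

With this in your shell profile, the inner loop shrinks to edit, save, redeploy.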
Walkthrough: Skaffold
To perform this walkthrough you first need to install Skaffold.
$ git clone https://github.com/kubernauts/dok-example-us.git && cd dok-example-us
$ kubectl create namespace dok
Now we deploy the stock generator (but not the stock consumer microservice; that is done via Skaffold):
$ kubectl -n=dok apply -f stock-gen/app.yaml
Note that initially we experienced an authentication error when doing skaffold dev and needed to apply a fix as described in

{
  "auths": {}
}

Next, we had to patch stock-con/app.yaml slightly to make it work with Skaffold:

* Add a namespace field to both the stock-con deployment and the service with the value of dok.
* Change the image field of the container spec to quay.io/mhausenblas/stock-con, since Skaffold manages the container image tag on the fly.

The resulting app.yaml file for stock-con looks as follows:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: stock-con
  name: stock-con
  namespace: dok
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stock-con
    spec:
      containers:
      - name: stock-con
        image: quay.io/mhausenblas/stock-con
        env:
        - name: DOK_STOCKGEN_HOSTNAME
          value: stock-gen
        - name: DOK_STOCKGEN_PORT
          value: "9999"
        ports:
        - containerPort: 9898
          protocol: TCP
        livenessProbe:
          initialDelaySeconds: 2
          periodSeconds: 5
          httpGet:
            path: /healthz
            port: 9898
        readinessProbe:
          initialDelaySeconds: 2
          periodSeconds: 5
          httpGet:
            path: /healthz
            port: 9898
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: stock-con
  name: stock-con
  namespace: dok
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9898
  selector:
    app: stock-con
The final step before we can start development is to configure Skaffold. So, create a file skaffold.yaml in the stock-con/ directory with the following content:

apiVersion: skaffold/v1alpha2
kind: Config
build:
  artifacts:
  - imageName: quay.io/mhausenblas/stock-con
    workspace: .
    docker: {}
  local: {}
deploy:
  kubectl:
    manifests:
    - app.yaml
Now we're ready to kick off the development. For that, execute the following in the stock-con/ directory:

$ skaffold dev

The above command triggers a build of the stock-con image and then a deployment. Once the pod of the stock-con deployment is running, we again forward the stock-con service for local consumption (in a separate terminal session) and check the response of the healthz endpoint:

$ kubectl get -n dok po --selector=app=stock-con \
  -o=custom-columns=:metadata.name --no-headers | \
  xargs -IPOD kubectl -n dok port-forward POD 9898:9898 &
$ watch curl localhost:9898/healthz

If you now change the code in the stock-con directory, for example, by updating the