Kong Ingress Controller and Service Mesh: Setting up Ingress to Istio on Kubernetes
Author: Kevin Chen, Kong
Kubernetes has become the de facto way to orchestrate containers and the services within them. But how do we give clients outside our cluster access to the services inside it? Kubernetes provides the Ingress API object, which manages external access to services within a cluster.
Ingress is a group of rules that proxy inbound connections to endpoints defined by a backend. However, Kubernetes does not know what to do with Ingress resources without an Ingress controller, which is where an open-source controller comes into play. In this post, we are going to use one option: the Kong Ingress Controller. The Kong Ingress Controller was open-sourced a year ago and recently reached one million downloads. The recent 0.7 release added service mesh support. Other features of this release include:
- Built-In Kubernetes Admission Controller - Validates Custom Resource Definitions (CRDs) as they are created or updated and rejects any invalid configurations.
- In-Memory Mode - Each pod’s controller actively configures the Kong container in its own pod, which limits the blast radius of a failure in either the Kong container or the controller container to that pod only.
- Native gRPC Routing - gRPC traffic can now be routed via Kong Ingress Controller natively, with support for method-based routing (see the sketch after this list).
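To give a feel for the gRPC routing feature, here is a minimal sketch of an Ingress that routes a single gRPC method through Kong. The grpc-service backend, its port, and the /helloworld.Greeter/SayHello path are placeholders, and the konghq.com/protocols annotation key is taken from current Kong Ingress Controller documentation, so double-check the annotation names against the controller version you are running:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    kubernetes.io/ingress.class: "kong"
    konghq.com/protocols: "grpc,grpcs"      # ask Kong to proxy this route as gRPC
spec:
  rules:
  - http:
      paths:
      - path: /helloworld.Greeter/SayHello  # method path, i.e. method-based routing
        backend:
          serviceName: grpc-service         # placeholder backend Service
          servicePort: 9000                 # placeholder gRPC port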
If you would like a deeper dive into Kong Ingress Controller 0.7, please check out the release announcement.
But let’s get back to the service mesh support since that will be the main focal point of this blog post. Service mesh allows organizations to address microservices challenges related to security, reliability, and observability by abstracting inter-service communication into a mesh layer. But what if our mesh layer sits within Kubernetes and we still need to expose certain services beyond our cluster? Then you need an Ingress controller such as the Kong Ingress Controller. In this blog post, we’ll cover how to deploy Kong Ingress Controller as your Ingress layer to an Istio mesh. Let’s dive right in:
Part 0: Set up Istio on Kubernetes
This blog will assume you already have Istio set up on Kubernetes. If you need to catch up to this point, please check out the official Istio installation guide.
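As a quick sanity check, assuming Istio was installed into the usual istio-system namespace, you can confirm that its control-plane pods are running before moving on:

$ kubectl get pods -n istio-system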
1. Install the Bookinfo Application

First, we need to label the namespaces that will host our application and the Kong proxy. To label the default namespace, where the Bookinfo app sits, run this command:
$ kubectl label namespace default istio-injection=enabled
namespace/default labeled
Then create a new namespace that will host our Kong gateway and the Ingress controller:

$ kubectl create namespace kong
namespace/kong created
Because Kong will be sitting outside the default namespace, be sure to label the kong namespace with istio-injection=enabled as well:

$ kubectl label namespace kong istio-injection=enabled
namespace/kong labeled
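If you want to confirm that both labels took effect, kubectl can print the istio-injection label as an extra column (the -L flag is standard kubectl; exact output will vary by cluster):

$ kubectl get namespaces -L istio-injection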
Having both namespaces labeled with istio-injection=enabled is necessary; otherwise, the default configuration will not inject a sidecar container into the pods of those namespaces.

Now deploy your Bookinfo application with the following command:

$ kubectl apply -f http://bit.ly/bookinfoapp
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
Let’s double-check our Services and Pods to make sure that we have it all set up correctly:

$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.125.254    <none>        9080/TCP   29s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    29h
productpage   ClusterIP   10.97.62.68      <none>        9080/TCP   28s
ratings       ClusterIP   10.96.15.180     <none>        9080/TCP   28s
reviews       ClusterIP   10.104.207.136   <none>        9080/TCP   28s
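To verify that sidecar injection is actually happening, you can also list the Pods: with the label in place, each Bookinfo pod should report two ready containers, the application container plus the istio-proxy sidecar.

$ kubectl get pods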