Inside JD.com's Shift to Kubernetes from OpenStack
Editor's note: Today’s post is by the Infrastructure Platform Department team at JD.com about their transition from OpenStack to Kubernetes. JD.com is one of China’s largest companies and the first Chinese Internet company to make the Global Fortune 500 list.
History of cluster building
The era of physical machines (2004-2014)
Before 2014, our company's applications were all deployed on physical machines. In the physical-machine era, we had to wait an average of one week for a machine to be allocated before an application could go online. Because of the lack of isolation, applications affected each other, creating many potential risks. At that time, the average number of Tomcat instances on each physical machine was no more than nine. Physical-machine resources were seriously wasted and scheduling was inflexible. When a physical machine broke down, migrating its applications took hours, and auto-scaling could not be achieved. To enhance the efficiency of application deployment, we developed systems for compilation and packaging, automatic deployment, log collection, resource monitoring, and more.
Containerized era (2014-2016)
| Function | Product |
| --- | --- |
| Source Code Management | GitLab |
| Container Tool | Docker |
| Container Networking | Cane |
| Container Orchestration | Kubernetes |
| Image Registry | Harbor |
| CI Tool | Jenkins |
| Log Management | Logstash + Elasticsearch |
| Monitoring | Prometheus |
In JDOS 2.0, we define two levels: system and application. A system consists of several applications, and an application consists of several Pods that provide the same service. In general, a department can apply for one or more systems, each of which directly corresponds to a Kubernetes namespace. This means that the Pods of the same system are in the same namespace.
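As a minimal sketch of this mapping (the system name, application name, and image reference below are hypothetical, not JD.com's actual configuration), a system becomes a namespace and each of its applications runs as a set of Pods inside it:

```yaml
# Hypothetical example: the "order" system maps to one namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: order-system
---
# An application of that system lives in the same namespace,
# backed by several Pods that provide the same service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-api
  namespace: order-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-api
  template:
    metadata:
      labels:
        app: order-api
    spec:
      containers:
      - name: order-api
        image: harbor.example.com/order-system/order-api:1.0.0
```

A second application of the same system would simply be another Deployment in the `order-system` namespace.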
Most of the JDOS 2.0 components (GitLab / Jenkins / Harbor / Logstash / Elastic Search / Prometheus) are also containerized and deployed on the Kubernetes platform.
One-Stop Solution
JDOS 2.0 takes the Docker image as the core to implement continuous integration and continuous deployment:

1. The developer pushes code to Git.
2. Git triggers the Jenkins master to generate a build job.
3. The Jenkins master invokes Kubernetes to create a Jenkins slave Pod.
4. The Jenkins slave pulls the source code, then compiles and packages it.
5. The Jenkins slave sends the package and the Dockerfile to the image build node with Docker.
6. The image build node builds the image.
7. The image build node pushes the image to the image registry, Harbor.
8. Users create or update app Pods in different zones.
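To picture how the Jenkins master obtains a build agent from Kubernetes, here is a minimal sketch of such a short-lived agent Pod (the Pod name, labels, and Jenkins URL are hypothetical, and the image shown is the community Jenkins inbound agent, not necessarily what JDOS uses):

```yaml
# Hypothetical short-lived Jenkins build agent Pod, created by the
# Jenkins master for a single build job and discarded afterwards.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave-build-42
  labels:
    role: jenkins-slave
spec:
  restartPolicy: Never          # the agent exits when the build finishes
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest
    env:
    - name: JENKINS_URL         # where the agent connects back to
      value: http://jenkins-master:8080
```

Because each build gets a fresh Pod, build environments stay clean and agent capacity scales with the cluster rather than with a fixed pool of build machines.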
The Docker image in JDOS 1.0 consisted primarily of the operating system and the runtime software stack of the application, so the deployment of applications still depended on the auto-deployment system and other tools. In JDOS 2.0, the deployment of the application is done during image building, and the image contains the complete software stack, including the application itself. With the image, we can achieve the goal of running the application as designed in any environment.
Networking and External Service Load Balancing
JDOS 2.0 inherits the network solution of JDOS 1.0, which is implemented with the VLAN model of OpenStack Neutron. This solution enables highly efficient communication between containers, making it ideal for a cluster environment within a company. Each Pod occupies a port in Neutron, with a separate IP. Based on the Container Network Interface (CNI) standard, the Cane project integrates Kubernetes with this Neutron-based network.
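External services are exposed through Kubernetes Services of type LoadBalancer. The following is a generic sketch (the names, namespace, and ports are hypothetical, not Cane's actual configuration); a controller such as Cane can watch for Service objects of this type and provision the external load balancer accordingly:

```yaml
# Generic LoadBalancer Service; a controller watching these objects
# provisions the external load balancer (in JDOS 2.0, via Neutron
# LBaaS) when they are created, modified, or deleted.
apiVersion: v1
kind: Service
metadata:
  name: order-api-lb
  namespace: order-system
spec:
  type: LoadBalancer
  selector:
    app: order-api      # backend Pods are selected by label
  ports:
  - port: 80            # port exposed by the load balancer
    targetPort: 8080    # container port on the backend Pods
```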
At the same time, Cane is also responsible for managing the LoadBalancer in the Kubernetes Service. When a LoadBalancer is created, deleted, or modified, Cane calls the corresponding create, remove, or modify interface of the LBaaS service in Neutron. In addition, the Hades component in the Cane project provides an internal DNS resolution service for the Pods. The source code of the Cane project is currently being finished and will be released on GitHub soon.

Flexible Scheduling