This is the first post in a series that intends to provide context and practical information about emerging enterprise technologies. First up is Kubernetes, the container orchestration platform that is sweeping the cloud-native world.

What is Kubernetes?

According to its own homepage, Kubernetes (AKA “K8s”) is “an open-source system for automating deployment, scaling, and management of containerized applications.” To pick that careful wording apart in reverse: Kubernetes focuses on applications constructed from containers (especially Docker containers). The deployment, scaling, and management focus is operational, but the automation aspect underlines the affinity of K8s-based systems with organizations adopting a DevOps approach. Lastly, the fact that Kubernetes is an open source project maximizes its accessibility and extensibility.

Notably, Kubernetes is aimed at solving issues for the largest infrastructure environments — “planet scale” according to the site. Kelsey Hightower, Google’s leading Kubernetes evangelist, calls it “a platform for building platforms.” This hints at the implementation complexity of the platform, a consequence of its flexibility and ambition.

Aren’t containers enough?

Containers – specifically Docker – became popular because they provide an optimal abstraction that allows developers to build modules of code that can be deployed consistently across many platforms and environments. Those containers can be assembled to form distributed applications. This approach has benefits over the old monolithic style, which would require you to compile all of your code together into one hard-to-manage-and-deploy chunk of executable code. Operations people also like containers, since they can be deployed easily without having to know all the details of what’s inside. Docker’s rapid adoption helped drive the microservices craze, and my former colleague Irakli Nadareishvili dubbed containers the “gateway drug to microservices.”

However, once people started breaking things down into containers and distributing them around dynamic cloud infrastructure, they realized that it took a lot of work to manage them. How many containers would you need? How can they talk to each other? Which containers are running right now, and where? Moreover, what we would consider a full “microservice” would likely be made up of more than one container (e.g. one for core logic, one for persisting data, a few more for handling things like security and logging). So just having containers wasn’t enough.

Where did Kubernetes come from, and why is that important?

Containers had been in use at Google for quite a while; they are how Google was able to use highly distributed commodity hardware to run its massively scaled systems. Google built an internal system called “Borg” to manage its containers. Recognizing that the rest of the industry was now facing problems it had already solved, Google distilled the lessons of Borg into the open source Kubernetes project.

The fact that Kubernetes evolved out of Google means it is clearly battle-tested. Furthermore, the legion of Google engineers familiar with the K8s approach created a ready-made community of experts and advocates to promote its usage. However, Google’s workload profile is unique in its composition of activities, and Google’s is possibly the highest-scale computing environment on the planet. That means Kubernetes may feel over-engineered in smaller environments.

Why is everyone so excited about Kubernetes?

As mentioned above, the need for container orchestration is a logical next step when adopting containers. The continued excitement toward microservice architecture and its affinity with containers is steering microservices implementers toward Kubernetes as their default platform. In response to this demand – and also driving it – every major cloud and cloud-native middleware vendor offers support for Kubernetes in some form. Red Hat’s OpenShift platform was built from the outset using Kubernetes as its foundation. Amazon, Microsoft, and (obviously) Google offer hosted versions. VMware recently acquired Heptio, a company founded by two Kubernetes co-creators, Joe Beda and Craig McLuckie. Even Docker-skeptic Pivotal’s container service is built using Kubernetes. So there is clearly a lot of hype around Kubernetes, but also a lot of support and information out there.

People also view Kubernetes as a way of normalizing the cloud landscape, paving the road for hybrid cloud computing that can help organizations in transition to cloud and also avoid vendor lock-in. However, I think the biggest reason to get excited about Kubernetes is the abstractions it provides for distributed software systems. K8s’s organic evolution has honed these abstractions, as proven by the industry uptake. The most important concept in Kubernetes is the “pod,” a mechanism for grouping a set of containers into a single deployment unit. You can think of a pod as a single microservice. Other useful concepts include “services” as a logical, reusable representation of a set of pods, the self-explanatory “cluster,” and the “ingress controller” for regulating inbound traffic to a cluster. Used together, these abstractions provide a means for modeling a distributed system architecture to handle many different workloads and processing patterns.
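To make those abstractions concrete, here is a minimal sketch of a Pod and a Service declared in Kubernetes YAML. The names, labels, and container images are invented for illustration; a real deployment would also specify resource limits, probes, and likely use a Deployment rather than a bare Pod.

```yaml
# A Pod: two containers grouped into a single deployment unit
# (one for core logic, one sidecar for logging)
apiVersion: v1
kind: Pod
metadata:
  name: orders-service        # hypothetical name
  labels:
    app: orders
spec:
  containers:
  - name: core-logic
    image: example.com/orders:1.0      # hypothetical image
    ports:
    - containerPort: 8080
  - name: logging-sidecar
    image: example.com/log-agent:1.0   # hypothetical image
---
# A Service: a stable, reusable network identity for any pods
# whose labels match the selector below
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
```

Note how the Service never names the Pod directly; it selects on labels, which is what lets pods come and go underneath it as the cluster scales and reschedules work.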

Who maintains the Kubernetes open source project?

The Cloud Native Computing Foundation was founded in 2015 as a home for container-related open source projects. The CNCF is the steward of the Kubernetes open source project, as well as the ecosystem of Kubernetes-related technologies. Check out the (daunting) CNCF cloud-native landscape.

What other technologies are commonly used with Kubernetes?

Docker containers are the most obvious one. Docker donated its runtime to the CNCF as the containerd project. Other CNCF projects commonly used in conjunction with Kubernetes include Prometheus for monitoring, Fluentd for logging, Envoy as a service proxy (possibly within a service mesh), the gRPC transport protocol, Helm for package management, and the OpenTracing standard. Although not a CNCF project, Zipkin is often used to implement distributed tracing. This list gives you an idea of the diversity – and complexity – of the Kubernetes and cloud-native tools ecosystem.

What does Kubernetes have to do with APIs and API management?

First off, Kubernetes’ primary interface is itself a web API. K8s administrators typically interact through the kubectl command line, but may need to automate activities directly through the web API. The Kubernetes administrative API is robust from a security standpoint and offers a consistent experience across the many actions and objects it exposes.
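As a quick illustration of that relationship, here is a sketch of the same query made both ways. This assumes a running cluster and a valid kubeconfig; it is not runnable standalone.

```
# List pods via the kubectl CLI (the usual administrative path)
kubectl get pods -n default

# The same operation against the web API directly: kubectl proxy
# opens an authenticated local tunnel to the API server, and plain
# HTTP requests can then hit the REST endpoints kubectl itself uses
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
```

Every kubectl verb ultimately resolves to calls like the second one, which is why automation tools and client libraries can do anything the CLI can.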

More importantly in the context of API management, many of the business applications deployed to enterprise Kubernetes environments are API-fronted microservices. That means that the capabilities provided by a full lifecycle API management solution – access control for APIs, traffic management, transformation and protocol bridging, dynamic routing – are all relevant in K8s systems. It is therefore vital to use an API management solution that can handle the application patterns in these environments, and also run natively in Kubernetes.

Is there anything that can stop Kubernetes from becoming the foundation of enterprise computing?

As mentioned a few times above, Kubernetes and its surrounding ecosystem of technologies can be quite complex, and the number of accompanying capabilities and tools needed to make everything hang together can be intimidating for an enterprise that lacks the engineering fortitude of a web-native shop. The managed or packaged versions offered by the major cloud providers help with this complexity, but there are likely many enterprise applications, and even whole systems of applications, where the flexibility/scalability vs. complexity tradeoff is not worth it.

The counter-movement to Kubernetes is the growth of the functions-as-a-service paradigm, also known as “serverless” computing. Serverless takes a very different approach to distributed, cloud-native software architecture, one that hides some of the complexity inherent in K8s systems but introduces new complexity of its own when deployed at scale. It’s too early to tell whether one will prevail over the other, though that’s not stopping some prognosticators from making predictions. If the history of computer science teaches us anything, it’s that both approaches will likely have their place. Hopefully, this article has given you some background on Kubernetes that will help you make informed decisions for your organization.