
Kubernetes 101: What Is It and What Value Does It Offer Me?

By Pete Johnson

If you remember a time when the de facto target for application deployments transitioned from bare metal to virtual machines (VMs), this story is going to sound familiar to you. That’s not to say you should think of containers as VMs, only smaller, but that’s a decent enough place to start as long as you’re willing to expand your mind a bit after accepting the initial definitions. The truth of the matter is, there’s a revolution going on in the way that large-scale applications get designed and deployed. And Kubernetes, the open source container clustering technology created by Google, is at the center of it.

If you haven’t been using Kubernetes—known as K8S to the cool kids—you probably have some questions like:

  • What is K8S exactly?
  • How does it benefit you?
  • Why is it fundamentally different from VM-based deployments?
  • Where do microservices come into play?

Welcome to Kubernetes 101!

In the Beginning, There Was the Linux Kernel

Don’t get me wrong, VMs are great, and they will continue to be part of a deployment strategy for many years in the same way that people still make money off of AM radio, but they have their limitations. Specifically, I’m talking about the time it takes to spin one up.

You can create a new VM in roughly 10 minutes, which once amazed us all because it was several orders of magnitude faster than the three months it took to order, receive, and configure a bare metal server. But the hypervisor is 20-plus-year-old technology at this point, and where that speed used to be a huge benefit, it is now the slow lane for making a unit of compute available to a developer, whether in a production, staging, test, or dev environment.

The culprit that prevents faster creation speeds for VMs is the same hero that gave them to us in the first place: the hypervisor. Whether it’s KVM, Hyper-V, or something else, the core responsibility of the hypervisor is to use the physical hardware under its control to provide strong lines of resource separation between VMs. This is what prevents a memory leak in one VM from impacting another. But in doing so, it requires each VM to run its own full guest operating system, and booting that guest OS is what keeps VM spin-up at minutes rather than seconds.

What if, instead, there were a way to use resource separation features already built into the Linux kernel, like cgroups and namespaces, so that this extra layer was no longer necessary? Then maybe units of compute could be made available in seconds instead of minutes.
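To make that concrete, here is a minimal sketch of those two kernel primitives at work (assuming a Linux host with util-linux and cgroup v2; the paths, flags, and the "demo" group name are illustrative, not from any particular product):

    # Run a shell in fresh PID and mount namespaces; inside, it sees
    # itself as PID 1 with its own private view of /proc.
    sudo unshare --pid --fork --mount-proc /bin/sh

    # Cap a workload's memory with a cgroup instead of a whole guest OS
    # (assumes the memory controller is enabled on this host).
    sudo mkdir /sys/fs/cgroup/demo
    echo "256M" | sudo tee /sys/fs/cgroup/demo/memory.max
    echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs

No guest operating system boots here, which is why the whole thing takes fractions of a second rather than minutes.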

Welcome to containers, which have been a part of the Linux kernel for quite a while. What they lacked was a method for injecting software into them and managing their lifecycle in a way that made them useful. That sounds like a good idea for a startup, circa 2010.

Then There Was Docker

While Docker has certainly done a ton of work on the core container engine itself, a much bigger deal has to do with packaging techniques and the socialization of those techniques. A Dockerfile describes the list of software assets to be installed in a particular container. The resulting Docker Image can be shared on Docker Hub, mimicking the model that has made GitHub so successful. Baked into the Dockerfile command set is the ability to reference other images as your starting point. That’s significant because it lets a developer easily build on a rich baseline of work that others have done and focus on what’s unique about the problem being solved.
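As a rough sketch of that share-and-build workflow (the myorg/myapp image name is a placeholder, not from this article):

    # Build an image from the Dockerfile in the current directory.
    docker build -t myorg/myapp:1.0 .

    # Push it to Docker Hub so others can pull it and build on it.
    docker push myorg/myapp:1.0

    # Anyone can now use it as the baseline of their own Dockerfile
    # with a single line:  FROM myorg/myapp:1.0
    docker pull myorg/myapp:1.0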

For example, suppose you have something simple like a LAMP stack application. You can use the official Apache web server Docker Image from Docker Hub as your baseline and add your own binaries to form your own unique Docker Image. No additional commands to install Apache are required on your part other than making the correct reference, unlike the VM days when you had to script the Apache installation yourself. It just works, which is one of the reasons developers love Docker.
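A hedged sketch of what that Dockerfile can look like (the httpd:2.4 tag and the file paths follow the official image’s conventions and are illustrative, not from this article):

    # Start from the official Apache image on Docker Hub -- the web
    # server is already installed, configured, and set to start.
    FROM httpd:2.4

    # Add only what is unique to this application: its own content.
    COPY ./public-html/ /usr/local/apache2/htdocs/

Two lines, and the result is your own unique Docker Image with Apache baked in.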

Docker Compose files can be used to tie together multiple Docker Images to form an application, but what about setting up a container engine that spans multiple hosts? How do you control access to that? What about the networking, which needs to be optimized so that packets between containers on the same host don’t have to travel all the way out to the network interface card?
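On a single host, a Compose file handles that first part; a minimal sketch (service names, images, and the password are illustrative placeholders):

    # docker-compose.yml -- tie two images into one application.
    services:
      web:
        build: .              # the Apache-based image sketched above
        ports:
          - "8080:80"
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example   # demo only; use secrets in practice

The multi-host questions are exactly where Compose stops and something bigger has to take over.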

Which Begat K8S

Welcome to Kubernetes, the container clustering platform that grew out of Google’s years of experience running containers internally before it became an open source project in 2014. Now governed by the Cloud Native Computing Foundation, it ties together multiple physical or virtual container hosts into a cohesive cluster, loosely speaking, in much the same way that OpenStack or VMware ties multiple hypervisors together into a private cloud.

Using K8S, administrators can control access to different parts of the cluster, get metrics on its usage, move containers around among the different hosts, and perform a variety of other tasks. K8S is the foundation of Google’s GKE public cloud offering as well as Amazon’s newly announced EKS service. While K8S currently lacks the ability to span a single cluster between a public cloud and a private data center, that nirvana is within reach and would provide enormous flexibility in how these little container units of compute are managed.
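A few illustrative kubectl commands for those administrative tasks (the node and namespace names are placeholders, and the metrics commands assume the metrics-server add-on is installed):

    # Inspect the hosts that make up the cluster.
    kubectl get nodes

    # Usage metrics for hosts and containers.
    kubectl top nodes
    kubectl top pods --all-namespaces

    # Scope one team's access to its own slice of the cluster with RBAC.
    kubectl create namespace team-a
    kubectl create rolebinding team-a-edit --clusterrole=edit \
      --serviceaccount=team-a:default --namespace=team-a

    # Drain a host so its containers are rescheduled onto other hosts.
    kubectl drain node-1 --ignore-daemonsets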

What does all that mean for the application architectures that sit on top of a K8S cluster?

Microservices and Innovation Through Iteration

As client-server computing matured in the early 1990s, most enterprise-scale applications broke out of a monolithic architecture that had all components locked together in the same memory space, written in the same language, and all tied to the same deployment schedule. In its place were multi-tiered applications that were slightly better, in that they separated business logic from databases and fronted the whole thing with load balancers.

That business logic tier was still pretty monolithic, though, and even as VMs became the de facto target for deployments in the early 2000s, the application architectures being deployed on top of them often didn’t take advantage of the fact that compute was no longer a scarce resource as it was with bare metal deployments.

What evolved in the VM world were things like horizontal auto-scaling, where you could add VMs to a pool at times of high demand, or blue/green deployments where you spun up VMs with new software and turned off ones with old software instead of patching the old ones. Still, because of the monolithic nature of the business logic, if you were doing quarterly releases, that was considered fast.
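For comparison, Kubernetes expresses that same auto-scaling idea declaratively. A hedged sketch using a HorizontalPodAutoscaler (the web deployment name and the thresholds are illustrative):

    # Add or remove container replicas with demand -- the K8S analogue
    # of adding VMs to a pool at times of high load.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70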

Parallel to this evolution in deployment, the Agile Software Development movement took off. Based on the fundamental tenet that it’s better to release smaller pieces of software far more often, with a user feedback loop providing critical direction on what should be done next, “fail quickly” became a mantra that developers started using to guide their daily work. Put another way, if 90 percent of ideas are terrible, it’s better to have 100 product releases a year so you can find 10 good ideas than to release quarterly and find one every two and a half years.

It turns out that if you design an application as a set of microservices, whose components expose functionality to each other using APIs and are composed of a set of containers behind a load balancer, it is easier to iterate more quickly. Components can be released independently as long as the API doesn’t change. They can even be written in different languages so teams with different skill sets can work in parallel instead of being lumped into a serial release train.
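A minimal sketch of one such microservice on K8S (the orders name, image, and ports are illustrative assumptions): a Deployment describes the set of containers, and a Service load-balances in front of them.

    # One microservice: a replicated set of containers...
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: orders
              image: myorg/orders:1.4.2   # shipped on its own schedule
              ports:
                - containerPort: 8080
    ---
    # ...and a stable, load-balanced front door for it. Other services
    # call "orders" by name; the API contract is the only coupling.
    apiVersion: v1
    kind: Service
    metadata:
      name: orders
    spec:
      selector:
        app: orders
      ports:
        - port: 80
          targetPort: 8080

Rolling out orders:1.4.3 touches nothing else in the cluster, which is exactly what makes 100 releases a year plausible.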

In sum, the logic goes something like this:

  • Innovation requires rapid iterations
  • Microservices applications are easier to iterate over
  • Kubernetes provides the ideal platform for microservices applications

That’s why you should care about Kubernetes. If you interact with a development team building software to innovate in a competitive environment, they will want to build microservices applications on top of K8S clusters, and if they can’t do that with you, they will look elsewhere rather than be slowed down.
