
Kubernetes Fundamentals: Facilitating Cloud Deployment and Container Simplicity


Kubernetes (sometimes abbreviated to “kube”) is an open-source system, originally developed by Google, that organizes containers into logical units for deployment and management in the cloud. Containers provide self-contained environments that package an application together with its data and the software it depends on.

Containers are, ultimately, a way to package software and other application components. The result is predictable and repeatable: there are no surprises when a container is moved to a new machine. Kubernetes builds on more than 15 years of Google’s experience running workloads in containers, combined with ideas and best practices from the open-source community.

Kubernetes is the result of an enormous, ongoing realignment of software resources centered on the concept of the “workload”: a job accomplished by one or more applications running across many machines. Kubernetes orchestration runs applications across multiple containers, schedules those containers across a cluster, and scales them as demand changes.

Google Software Engineer Janet Kuo explained, “You can think of Kubernetes as a platform for application patterns. The patterns make your application easy to deploy, easy to run, and easy to keep running.”

The inventors of containers reasoned that since an application needs only a simple, minimal operating system to run, a container could ship as a bare-bones version of one. The ideas behind Kubernetes were developed by engineers at Google, whose internal cluster manager, called Borg, has been the technological base for Google’s services and routinely launches over two billion containers each week.

Google open-sourced Kubernetes in 2014, and the years of experience gained from running Borg became the primary influence on its design. (Amusing aside: the Kubernetes logo has seven spokes, a nod to its original codename, Project Seven, a reference to Seven of Nine from Star Trek: Voyager.)

The Strengths of Kubernetes

Manually coordinating the right containers at just the right time is difficult, and so is setting up communication between containers. Kubernetes handles both. In recent years it has become the de facto standard for container orchestration, and it offers several strengths that other approaches lack. For example:

  • It runs multiple containers simultaneously across multiple machines, which is especially useful when working with microservices.
  • Applications can be scaled horizontally, either manually or automatically (autoscaling), as shown in the sketch after this list.
  • Each container is assigned its own IP address, a single DNS name is given to each set of containers, and traffic is load-balanced across them.
  • A self-healing mechanism restarts failed containers automatically and replaces containers when a node dies.
  • Application updates and configuration changes are rolled out progressively, and if something goes wrong, an automatic rollback is initiated.
  • It automatically chooses which node each container will run on, based on the resources required and other constraints.
  • Kubernetes “Secrets” hide sensitive information, such as SSH keys or passwords. Secrets and configuration can be deployed and updated without rebuilding container images or exposing confidential data.
  • Storage systems can be mounted automatically.
  • It can manage batch and continuous-integration workloads, replacing containers that fail.
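
To make the scaling point concrete, here is a minimal sketch using the official Kubernetes Python client (the “kubernetes” package). The Deployment name “web”, the “default” namespace, and the target of 5 replicas are placeholder values, and the example assumes a cluster reachable through a local kubeconfig file.

    # Minimal sketch: scale a hypothetical Deployment named "web" to 5 replicas.
    from kubernetes import client, config

    config.load_kube_config()  # use the same credentials kubectl reads from ~/.kube/config
    apps = client.AppsV1Api()

    # Patch only the replica count; Kubernetes then reconciles the cluster to match.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )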

Kubernetes vs. Virtual Machines

Kubernetes has made containers so popular that they threaten to make VMs (virtual machines) obsolete. A VM is a software emulation of a physical computer that can run an operating system, applications, and programs as though it were a separate machine. A virtual machine can be unplugged from one computer and plugged into another, bringing its software environment with it.

Both containers and VMs can be customized to the specifications desired, and both provide isolated processes and an environment for experimentation that will not affect the “real” computer. Containers, however, typically do not include a guest operating system: they ship with only the application code and run just the processes the application needs, which keeps them small. (VMs tend to be larger and bulkier because they include a full guest operating system.) Containers manage this by sharing kernel features of the physical host.

A kernel is the core program of a computer operating system and has complete control over the entire system. On most computers it is the first program (after the bootloader) to be loaded at start-up. The kernel then drives the rest of start-up, handles input/output requests, and manages memory and peripherals such as monitors, keyboards, printers, and speakers.

Kubernetes and Docker

Kubernetes allows many containers to work in harmony, reducing operational burden. Notably, this includes Docker containers. Kubernetes integrates with the Docker engine and uses the kubelet on each node to coordinate the scheduling and running of Docker containers. The Docker engine runs the container image, which is created by running docker build; the higher-level concerns (load balancing, service discovery, and network policies) are handled by Kubernetes. Combined, Docker and Kubernetes can form the basis of a modern cloud architecture. It should be remembered, however, that the two systems are, at their core, fundamentally different tools.
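
To make that division of labor concrete, the sketch below (again using the official Kubernetes Python client) takes an image assumed to have been built and pushed with docker build under the placeholder tag “registry.example.com/shop:1.0” and hands it to Kubernetes as a Deployment, which then schedules the containers and keeps three replicas running. All names here are hypothetical.

    # Minimal sketch: run a Docker-built image under Kubernetes control.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="shop"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # Kubernetes keeps three copies of the container running
            selector=client.V1LabelSelector(match_labels={"app": "shop"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "shop"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="shop",
                        image="registry.example.com/shop:1.0",  # image built with `docker build`
                    ),
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)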

Kubernetes Terminology

Kubelet: An agent that runs on each node, reads container manifests, and ensures the defined containers have started and are running.

Node: A worker machine that performs the assigned tasks, controlled by the Kubernetes master.

Master: This controls the Kubernetes nodes and is the source of all task assignments.

Pod: One or more containers deployed together on a single node. Containers in a pod share a host name, an IP address, IPC, and other resources.

Replication Controller: Controls how many identical copies of a pod should be running, and where on the cluster they run.

Service: This decouples work definitions from the pods. Service requests are automatically routed to the right pod, regardless of where it is running in the cluster.

Kubectl: The primary configuration tool for Kubernetes.

Kubernetes Objects: Persistent entities within the Kubernetes system that represent the state of the cluster. They describe which containerized applications are running, the resources available to them, and the policies governing their behavior (restart policies, fault tolerance, upgrades). A Kubernetes object is a “record of intent”: once an object is created, the system works constantly to make sure it exists and adjusts the cluster’s workload to match the desired state the object describes.
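
The “record of intent” idea can be sketched with the official Python client: the object below declares that a single container running the nginx image should exist, and submitting it asks Kubernetes to make that true. The pod name and image tag are illustrative choices, not anything prescribed by Kubernetes.

    # Minimal sketch: a Kubernetes object as a "record of intent".
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Desired state: one pod running a single nginx container.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="intent-demo"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="web", image="nginx:1.25"),
        ]),
    )
    core.create_namespaced_pod(namespace="default", body=pod)

    # The control plane now works to make the actual state match; the object's
    # status can be read back to see how far it has converged.
    print(core.read_namespaced_pod(name="intent-demo", namespace="default").status.phase)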

A Secret Object: Holds small amounts of sensitive data such as passwords, tokens, or keys. Placing sensitive data in a Secret, rather than in an image or pod specification, gives greater control over how it is used and reduces the risk of accidental exposure.
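
As a hedged sketch of that idea, the snippet below creates a Secret with the Python client instead of baking credentials into an image or pod specification; the names and values are made up for illustration.

    # Minimal sketch: store credentials in a Secret rather than in an image.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="db-credentials"),
        type="Opaque",
        # string_data is a write-only convenience field; the API server stores
        # the values base64-encoded in the Secret's data field.
        string_data={"username": "app", "password": "s3cr3t"},
    )
    core.create_namespaced_secret(namespace="default", body=secret)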

Kubernetes Basics

When setting up Kubernetes, API objects are used to describe the cluster’s desired state: which applications and workloads run, which container images they use, what network and disk resources they need, and so on. Objects are normally created through the Kubernetes API using the command-line interface, kubectl; the API can also be called directly to set or modify the desired state.
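
As an illustration of interacting with the cluster through the API, the short sketch below lists the pods the cluster is currently running. It uses the official Python client and assumes the same kubeconfig credentials that kubectl would use; it is a read-only query, so it does not change the desired state.

    # Minimal sketch: read the cluster's current state through the Kubernetes API,
    # the same API that kubectl talks to.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)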

The Kubernetes Control Plane then works to make the cluster’s actual state match the desired state. In doing so, Kubernetes performs a variety of tasks automatically (scaling the number of replicas for a given application, starting or restarting containers, and so on).

The parts of the Kubernetes Control Plane (the Kubernetes Master and the kubelet processes) govern how Kubernetes communicates with the cluster. The Control Plane keeps a record of all Kubernetes Objects in the system and runs continuous control loops to manage their state. These control loops respond to changes in the cluster and drive the actual state toward the desired state.
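
The control loops themselves live inside the control plane, but their observe-and-react pattern can be imitated from outside with the Python client’s watch API. The toy loop below simply prints pod changes in an assumed “default” namespace; a real controller would compare what it observes against the desired state and act to close the gap.

    # Toy illustration of the "observe" step of a control loop: stream pod
    # events and react to each one (here, just print it).
    from kubernetes import client, config, watch

    config.load_kube_config()
    core = client.CoreV1Api()

    w = watch.Watch()
    for event in w.stream(core.list_namespaced_pod, namespace="default"):
        pod = event["object"]
        print(event["type"], pod.metadata.name, pod.status.phase)
        # A real control loop would reconcile observed state with desired state here.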

The Kubernetes Master is made up of three processes running on one node in the cluster (designated the master node). These processes are:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

Individual non-master nodes in a cluster run two processes:

  • The kubelet, which provides communication with the Kubernetes Master.
  • The kube-proxy, which provides the Kubernetes networking services on individual nodes.

The Advantages of Kubernetes

The primary advantage of Kubernetes is its ability to provide a platform designed to schedule and operate containers. More generally, it helps to fully implement a container-based infrastructure. Because Kubernetes focuses on automation, it can offer many of the same tools other application platforms come with, but for containers. Additionally, Kubernetes can:

  • Mount and add storage.
  • Maximize the use of hardware for greater efficiency.
  • Control application deployments and updates.
  • Orchestrate containers across multiple computers.
  • Manage services to guarantee deployed applications are consistently running.
  • Provide health checks on containers and repair applications through autorestart, autoreplication, autoscaling, and autoplacement (see the sketch after this list).
  • Scale applications as needed.
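
As referenced in the health-check item above, here is a hedged sketch of attaching a liveness probe to a container specification with the Python client. The /healthz path, port, image, and timings are hypothetical; when a probe fails repeatedly, Kubernetes restarts the container automatically.

    # Minimal sketch: a container spec with a liveness probe, so Kubernetes can
    # health-check the container and restart it if the check fails.
    from kubernetes import client

    container = client.V1Container(
        name="api",
        image="registry.example.com/api:1.0",  # placeholder image
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=5,   # give the application time to start
            period_seconds=10,         # probe every 10 seconds
        ),
    )
    # This container spec would be used inside a pod template, as in the
    # Deployment sketch earlier in the article.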

Google and Red Hat

Container platforms are increasingly taking on roles once played by the operating system, which helps explain the growing popularity of Kubernetes. Yet even though the open-source project keeps gaining users, oddly, only two organizations, Google and Red Hat, seem to be taking its development and marketing seriously. Given the central role Kubernetes has taken in reshaping enterprise infrastructure, more vendors can be expected to see its potential for profit and begin developing new tools for it.

As competition grows, prices will fall and new tools will appear. Red Hat has become a leading contributor to Kubernetes, designing key features and tools for the open-source project as well as for OpenShift, its Kubernetes-based platform, which delivers fixes for hundreds of security, defect, and performance issues.

