
Role of Containers in the Enterprise Data Center

By Sushil Kumar

Containers should now be seen as something more than a tool to make developers more agile and productive.

Container Hype and Confusion

While this sentiment is inspiring to those of us working in this new technology, the industry has fallen victim to attempts to create buzz around products and companies without any new developments or announcements to back it up. Such hype is one contributor to the growing confusion around containers. The problem is compounded by a vendor community that has done little to help customers understand the role of containers in the enterprise data center, even as everyone tries to sell CIOs and CTOs something for Docker (storage, security, management, etc.).

Clearly, containers have had a phenomenal impact on how new cloud-scale applications are built and deployed. But for most IT ops teams and enterprise data center managers, the hype around containers has been a bit of a nightmare. Are containers a replacement for VMs? Are they a return to the dark ages of bare metal? Or are they just a developer’s toy that needs “adult supervision” so as not to compromise security and governance? These questions persist.

The Real Value of Containers: Application-Centric IT

The core value of containers is that they enable an application-centric computing paradigm by unshackling applications from the underlying infrastructure. Docker in particular deserves kudos for turning geeky concepts such as cgroups, Linux namespaces, and AUFS into a simple but extremely powerful toolset that decouples applications from the infrastructure beneath them.
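To make those geeky concepts concrete: the Docker daemon turns a single API call into namespace isolation plus cgroup resource limits. Below is a minimal sketch using the Python Docker SDK (the docker package), assuming a local Docker daemon; the image name, container name, and limit values are illustrative.

```python
import docker

# Assumes a local Docker daemon and the "docker" Python package.
client = docker.from_env()

# Docker translates these arguments into cgroup limits (memory, CPU)
# and namespace setup (PID, network, mount), so the application gets
# an isolated, resource-capped view of the host.
container = client.containers.run(
    "nginx:alpine",          # illustrative image; any Linux image works
    detach=True,
    name="app-centric-demo", # hypothetical name
    mem_limit="256m",        # memory cgroup cap
    nano_cpus=500_000_000,   # 0.5 CPU via the cpu cgroup
)

print(container.status)
container.stop()
container.remove()
```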

Just as hypervisors abstracted the OS from the underlying hardware and turned what used to be physical machines into software packages, containers abstract applications from the OS and everything below it. Containers can therefore herald an application-defined data center era, extending the gains of the software-defined journey all the way to what really matters: the applications.
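A quick way to see the difference in abstraction level, sketched here assuming a Linux host with Docker and the Python Docker SDK installed: a container reports the host’s kernel (there is no guest OS), yet carries its own filesystem and process tree.

```python
import platform
import docker

client = docker.from_env()

# With detach=False (the default), run() returns the container's output.
kernel_in_container = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

# On a Linux host both lines match: the container shares the host
# kernel, unlike a VM, which boots its own OS on virtual hardware.
print("kernel seen in container:", kernel_in_container)
print("kernel on the host:      ", platform.release())
```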

Realizing that dream, however, requires the reach of containers to extend beyond the niche of web applications to all classes of enterprise applications, including mission-critical databases and business applications. Containers have as much to offer these applications, if not more, as they do web-scale applications. For IO-intensive data applications, containers on bare metal provide a high-performance, lightweight alternative to traditional hypervisor virtualization, with potential performance gains of up to 50 percent. Containers also help consolidate multiple applications per machine, which maximizes hardware utilization. And if you thought hypervisors made poor server utilization a thing of the past, think again: the majority of your servers are probably still running in the 10 to 30 percent utilization range, and this will only get worse as processors become more powerful. (The latest Xeon processors support 22 cores per socket, which with hyper-threading comes to 44 “virtual cores” per socket!)

Most importantly, the simplicity and agility gains in application management are sorely needed for enterprise applications, which account for a significant share of the IT opex budget and can often be roadblocks to rapid business innovation and time to market.
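Some back-of-the-envelope consolidation math helps here. Only the core counts come from the Xeon figures above; the server configuration and per-container footprints below are assumptions for illustration, not benchmarks.

```python
# Host capacity from the figures above: 22 cores/socket, 2 threads/core.
CORES_PER_SOCKET = 22
THREADS_PER_CORE = 2
SOCKETS = 2                                  # assumed two-socket server
HOST_VCPUS = CORES_PER_SOCKET * THREADS_PER_CORE * SOCKETS   # 88
HOST_MEM_GB = 512                            # assumed RAM

# Hypothetical footprint of one containerized service.
VCPUS_PER_CONTAINER = 0.5
MEM_GB_PER_CONTAINER = 2

by_cpu = HOST_VCPUS / VCPUS_PER_CONTAINER    # 176 containers
by_mem = HOST_MEM_GB / MEM_GB_PER_CONTAINER  # 256 containers
print(f"packing limit per host: {min(by_cpu, by_mem):.0f} containers")
```

Even with conservative per-container budgets, a single modern two-socket server can pack well over a hundred contained services, which is exactly the utilization story hypervisors were supposed to have finished.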

But Are Containers Enough?

It is also important to recognize that containers are just one element of the solution required to create an application-defined data center. Just as containers allow application-driven, fine-grained slicing of compute resources, storage subsystems also need to evolve to “understand” applications and manage resources at the application level, whether that application runs in a single container or is composed of many containers spread across multiple servers. In spite of the rampant “container washing” by storage vendors, the fact remains that current storage subsystems are simply not designed to handle containerized workloads: they neither understand applications nor were they built to operate at container scale and speed. This is currently a major roadblock to containers going mainstream.

Can’t We All Just Get Along?

What the industry needs is container-based, software-defined compute and storage infrastructure designed to run all classes of enterprise applications, no exceptions: anything that runs on Linux should run on it as well. The storage layer should be scale-out block storage that operates at the unit of the container, making machine and server boundaries irrelevant. It should be 100 percent application-driven, “invisible” storage that handles everything from initial volume provisioning to ongoing lifecycle management, as well as IOPS control, at the container/application level.
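No shipping storage layer fully matches that description yet, but the direction can be sketched with today’s Docker primitives. The example below uses the Python Docker SDK; the volume name, device path, and IOPS rates are assumptions for illustration, and the blkio cgroup controller is what enforces the per-container caps.

```python
import docker

client = docker.from_env()

# Provision a named volume for one application (local driver here; a
# container-native storage driver would slot into the same call).
client.volumes.create(name="orders-db-data", driver="local")

# Run the application with per-container IOPS caps, enforced through
# the blkio cgroup. /dev/sda and 500 IOPS are illustrative values.
db = client.containers.run(
    "postgres",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},  # demo-only credential
    volumes={"orders-db-data": {"bind": "/var/lib/postgresql/data",
                                "mode": "rw"}},
    device_read_iops=[{"Path": "/dev/sda", "Rate": 500}],
    device_write_iops=[{"Path": "/dev/sda", "Rate": 500}],
)
```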

A shared-services platform that enables consolidation of applications with guaranteed application-to-spindle QoS for each application is key, since consolidation is a much-cherished goal but often a non-starter in the absence of predictable resource and QoS management. The most exciting part of this idea is that the entire infrastructure fabric (compute, storage, network, etc.) is governed from an application perspective, giving the industry the application-aware, application-driven, and application-optimized infrastructure it needs. With this mindset, developers and application administrators can, for the very first time, describe their needs purely in application-centric concepts instead of infrastructure components such as VMs and storage volumes. And it all runs on existing commodity hardware.
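As a purely hypothetical sketch of what such an application-centric description could look like (every field name below is invented for illustration and mirrors no existing product’s API):

```python
from dataclasses import dataclass

# Hypothetical application-level spec: the developer states what the
# application needs, and the platform maps it onto containers, volumes,
# and QoS settings. All field names are invented for illustration.
@dataclass
class AppSpec:
    name: str
    image: str
    replicas: int = 1
    cpu_per_replica: float = 0.5        # vCPUs
    memory_gb_per_replica: float = 1.0
    storage_gb: float = 10.0
    guaranteed_iops: int = 500          # application-to-spindle QoS

orders_db = AppSpec(
    name="orders-db",
    image="postgres",
    replicas=1,
    cpu_per_replica=2.0,
    memory_gb_per_replica=8.0,
    storage_gb=200.0,
    guaranteed_iops=2000,
)
```

Notice what is absent from the spec: no VM sizes, no LUNs, no server names. That is the application-defined data center in miniature.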
