A hyperconverged infrastructure (HCI) uses software and/or hardware to create an IT infrastructure that virtualizes all of the elements needed for an efficient, highly functional computer processing system. An HCI combines compute, networking, and storage in a virtualized environment, and normally runs on readily available, off-the-shelf commercial servers. The primary goal of a hyperconverged infrastructure is to simplify data center management while working across a variety of platforms. It does this by redesigning data centers so that PCs and laptops act as extensions that transport transactions and software.
The data center has become the driving force behind modern business, and as needs change, it evolves to meet those changes. HCI is one of those evolutionary steps. A dramatic difference is how the environments are managed: in HCI, the management layer is handled directly within the cluster itself. While some HCI platforms use a "virtual appliance" approach, running a virtual controller on each node inside the cluster, the more efficient approach is to build the management layer into the underlying HCI operating system itself on bare metal, to ensure resource efficiency, resiliency, uptime, and better failover protection.
While it might not be immediately obvious, a hyperconverged infrastructure can reduce complexity, while minimizing fragmentation. One of the wonderful benefits of virtualization is its integration of the hardware layer. Virtual management appliances that run within the hypervisor promote further integration. These evolutionary changes can improve the data center’s efficiency, simplify IT work, and even improve user experiences.
HCI is considered affordable, and should be investigated by any organization considering updating or "refreshing" its data center (startups should also consider HCI from the very beginning).
Gartner predicts that, “In the next two years hardware vendors will support HCI on ruggedized server platforms to address edge use cases, specifically in the areas of retail, mining, and manufacturing.” It is one of the most valuable and fastest growing technology solutions in the industry today.
Alan Conboy is a thought leader in hyperconvergence and works in the office of the CTO at Scale Computing. He described his experience in the development of hyperconvergence, saying:
“The guys that invented HCI in the first place didn’t just invent HCI. We invented autonomous HCI — a Scale Computing cluster. Every node knows every other node directly. We don’t need things like the center servers or any of the mother-in-law servers that you find out there or any of the supporting cast of characters. With a hyperconverged infrastructure, drives die, no big deal. Entire nodes go up in smoke. Again, no big deal. Everything just keeps going and you don’t have to have technical resources on-site for it.”
In 1998, a startup called VMware offered a new technological arrangement: a platform that could virtualize machines running Linux, Windows, and other operating systems. As server processing capacity increased, basic applications could not take full advantage of the abundance of new resources. Enter the virtual machine (VM), which is designed to run software atop a physical server while emulating a specific hardware platform.
VMs running different operating systems can share the same physical server. For instance, a Linux VM and a UNIX VM can run side by side on a single host. Each VM is designed with its own libraries and applications. Rather than purchasing a new computer system capable of running specific software, an organization can keep its old system and simply add new software. This upgrade is much simpler and much less expensive than replacing the entire system. Commenting on developing and innovating the approach to virtual machines, Conboy said:
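The resource-sharing idea behind virtualization can be illustrated with a short sketch. This is not any vendor's API; it is a minimal, hypothetical Python model of a hypervisor host admitting VMs only while it has capacity for them:

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    os: str        # each VM brings its own OS, libraries, and applications
    vcpus: int
    ram_gb: int

@dataclass
class Host:
    """A physical server whose hypervisor carves its resources into VMs."""
    cpus: int
    ram_gb: int
    vms: list = field(default_factory=list)

    def start(self, vm: VM) -> bool:
        used_cpu = sum(v.vcpus for v in self.vms)
        used_ram = sum(v.ram_gb for v in self.vms)
        # Admit the VM only if the host still has capacity for it.
        if used_cpu + vm.vcpus <= self.cpus and used_ram + vm.ram_gb <= self.ram_gb:
            self.vms.append(vm)
            return True
        return False

host = Host(cpus=16, ram_gb=64)
host.start(VM("web", "Linux", vcpus=4, ram_gb=8))
host.start(VM("legacy", "UNIX", vcpus=4, ram_gb=16))
print([vm.os for vm in host.vms])  # ['Linux', 'UNIX']
```

The point of the sketch is the one the article makes: dissimilar operating systems coexist on one physical server, each packaged with its own stack, instead of each requiring dedicated hardware.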
“Here’s the huge difference between Scale Computing and pretty much everybody else in the space. We did it with a huge eye toward not just ease of use, but efficiency. Essentially, the competition has basically taken the SAN component from the legacy solution and rewritten it as a Java Virtual Machine (JVM). Then, they embedded that JVM in an actual virtual machine running on each one of their nodes. This is called the VSA-based approach. But essentially, it’s a virtual appliance approach. It sucks up a bunch of resources. You’ve heard that old saw about with HCI you lose about 20 percent of your resources out of the gate. It’s because of that approach. We didn’t do that. We started from scratch.”
The Benefits of Hyperconvergence
There are a number of benefits to be gained by upgrading to a hyperconverged infrastructure that Conboy discussed. Lower costs and simplicity are the general theme. Some of the other benefits are:
- Spending Less: Hyperconvergence avoids large up-front costs and infrastructure purchases. This is achieved through the use of low-cost commodity hardware, and through scaling the data center by way of easy-to-manage steps.
- Smaller IT Staff: The management software for hyperconvergence uses virtual machines, while all other resources (backup, storage, replication, load balancing, etc.) are only there to support the VMs. The policies managing the underlying resources are constructed and organized by the software, allowing for a minimal IT staff.
- Automation: A basic part of the hyperconvergence package. Combined data center resources and centralized management tools support streamlined scheduling opportunities and scripting options.
- Improved Performance: Hyperconvergence allows organizations to use many different kinds of applications and programs, without concerns about reduced performance.
- Increased Data Protection: Data protection is built into the infrastructure, with backup, data recovery, and disaster recovery included. Each appliance contains both spinning-disk and solid-state storage, a mix that lets the system handle both sequential and random workloads easily.
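The last benefit rests on matching I/O to the medium that handles it best: spinning disks stream large sequential transfers efficiently, while flash absorbs small random I/O without seek penalties. As a hedged illustration only (a simple heuristic, not Scale Computing's actual placement logic), a hybrid-tier routing policy might look like:

```python
def place(request_size_kb: int, sequential: bool) -> str:
    """Route an I/O request to the storage tier suited to its access pattern.

    Large sequential transfers go to spinning disk (HDD), which streams
    them efficiently; everything else goes to flash (SSD), which handles
    small random I/O without seek penalties. The 256 KB threshold is an
    arbitrary value chosen for illustration.
    """
    if sequential and request_size_kb >= 256:
        return "hdd"
    return "ssd"

print(place(1024, sequential=True))   # large sequential stream -> "hdd"
print(place(4, sequential=False))     # small random write -> "ssd"
```

Real hybrid systems make this decision continuously and per-block, but the principle is the same: the appliance's mixed media let one box serve both workload shapes.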
Describing their development process and the benefits they wanted to build into Scale Computing’s hyperconverged infrastructure, Conboy commented that:
“What we set out to do was literally take a clean sheet of paper approach to how ‘highly available’ virtualization was being done in the first place. Keep the goals — high availability, fault tolerance, live migration, you know — the killer apps. But, with a completely clean sheet of paper on what the infrastructure to support those benefits actually should look like.
But they also wanted to make it “self-aware, self-healing, and self-load-balancing.” The system needed to be autonomous enough that you didn’t need “to be a VCP, or a CCNE, or this SNIA-certified guy,” he said.
“But rather a kid straight out of the local community college still waving around his A-plus cert like he’s the first human to ever get one. He could sit down with this thing, be able to rack it up, stack it up, start it up and go, without really needing any additional training. We did exactly that. We spent the next two or three and change years creating this thing.”
Virtual Desktop Infrastructures
Virtual desktop infrastructure (VDI) is an approach in which a desktop operating system, often Microsoft Windows, runs and is managed in a data center. A desktop image is sent through a network and received at an endpoint device. The user can interact with the operating system and its applications as though they were running locally.
The data center can be connected to a traditional PC, laptop, or other mobile device. (Imagine setting up a Microsoft system with Dell laptops that can operate using a variety of platforms.)
Very little computing actually takes place on the extension device. All data is stored in the data center, with none of it remaining at the endpoint. A thief stealing a laptop using VDI can’t take data from the machine, as there is no data on it. This aspect of VDI creates a remarkably secure data storage system.
There are two basic forms of VDI: persistent and nonpersistent. Persistent VDI gives each user their own desktop image, which can be altered and stored for future use, much like a traditional physical desktop. Nonpersistent VDI offers a pool of uniform desktops that users access as needed; these desktops return to their original state whenever the user logs out.
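The persistent/nonpersistent distinction comes down to what happens to user changes at logout. A minimal sketch, purely illustrative and not tied to any VDI product, models it like this:

```python
import copy

# The "golden image" every desktop is cloned from.
GOLDEN_IMAGE = {"os": "Windows", "apps": ["browser"], "user_files": []}

class Desktop:
    def __init__(self, persistent: bool):
        self.persistent = persistent
        self.state = copy.deepcopy(GOLDEN_IMAGE)

    def work(self, filename: str):
        self.state["user_files"].append(filename)

    def logout(self):
        # Nonpersistent desktops revert to the golden image at logout;
        # persistent desktops keep the user's changes.
        if not self.persistent:
            self.state = copy.deepcopy(GOLDEN_IMAGE)

p = Desktop(persistent=True)
n = Desktop(persistent=False)
for d in (p, n):
    d.work("report.docx")
    d.logout()
print(p.state["user_files"])  # ['report.docx'] — changes survive
print(n.state["user_files"])  # [] — reverted to the golden image
```

This also shows why nonpersistent pools are cheap to operate: every desktop is interchangeable, so the data center only has to maintain the one golden image.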
In the interview, Conboy described some of the successes he has seen with virtual desktop infrastructures at smaller scale:
“One of my case studies was the Paris Community Hospital. It’s a 600-bed hospital in Illinois that had been trying to do Citrix for years, and just kind of stumbling along with it. Now you had a tool that you could literally say, ‘Hey, give me X number of seats. I’ve got this many power users and this many task workers.’ Okay. Poof, here you go.”
Another is the City of Harlingen, which had been struggling for two and a half years to get an IVR payment system going. After implementing Scale Computing VDI, they had everything working flawlessly within about a month, and the entire city moved over.
It’s clear that hyperconvergence and VDI are no longer built only for large organizations. While large enterprises have certainly been reaping the benefits of such technologies for many years, small and medium companies are now seeing the many benefits afforded by these allied technologies, said Conboy in closing:
“So, for many years now various folks on the ‘inside, in the know’ have been saying it’s the year of VDI. Realistically, what? 10 years, 12 years now. You know what it’s never actually been? The year of VDI. The reason is simple. If you look at the legacy vendors in the VDI space, their offerings have been way too expensive, way too complex, with way too much licensing.”
That’s changed now with Scale Computing’s push to bring hyperconvergence and VDI to everyone. Conboy said there has been a “gaping hole in the market” for around 2000 seats and under:
“Of people that have a firm desire to do VDI, cost savings, cost containment, control plane stuff, there just wasn’t a good solution available. I knew this had long been the case, so we sat down and figured out how to make it work for everyone.”