
The Evolution of Smart Availability

By Don Boxley  /  March 19, 2018


Regardless of computing environment, workload, or OS, availability is an essential part of operational success. Outages, system or application failures, and unplanned downtime are simply not acceptable at today’s pace of business, in which even 10 minutes of inoperability can translate to substantial data loss.

What organizations are beginning to realize, however, is that even traditional options for high availability have limits. The continuous operational efficiency required to capitalize on digital transformation should not monopolize an organization’s financial or personnel resources with endless testing and retesting of availability.

What’s needed is a new approach to dynamically transfer workloads in IT environments based on optimizing the particular job at hand. Achieving this objective requires an innate flexibility, lack of downtime, and cost-effective methodology. In essence, what’s required is Smart Availability, which builds upon some of the basic principles of high availability to provide the previously mentioned advantages—and more.

Smart Availability is the future of high availability and a critical component in the blueprint for creating business value through digital transformation.

Conventional High Availability Limits

By definition, high availability is the continuous operation of applications and system components. Traditionally, this goal was achieved in a variety of ways, each with its own drawbacks. One of the most common involves failovers, in which operations are transferred from the components of a primary system to those of a secondary system during scheduled downtime or unplanned failures. Clustering techniques are often used with this approach to make resources between systems, including databases, servers, and processors, available to one another. Clustering is applicable to both VMs and physical servers and can help provide resilience against OS, host, and guest failures. Failovers involve a degree of redundancy: high availability is maintained by keeping backups of system components. Redundant networking and storage options may be leveraged with VMs to encompass system components or data copies.
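The failover pattern described above, in which a secondary takes over once a primary stops responding to health checks, can be sketched in a few lines of Python. The `primary_healthy` and `promote_secondary` hooks here are hypothetical placeholders standing in for whatever a real cluster manager supplies; this is an illustrative sketch, not any vendor's API:

```python
def run_failover_check(primary_healthy, promote_secondary, max_misses=3):
    """Promote the secondary once the primary misses several health checks.

    `primary_healthy` is a zero-argument callable returning True or False;
    `promote_secondary` performs the takeover. Both are hypothetical hooks
    that a real cluster manager would provide.
    """
    misses = 0
    while misses < max_misses:
        if primary_healthy():
            return "primary"      # primary is still serving; no failover
        misses += 1
    promote_secondary()           # redundancy pays off: secondary takes over
    return "secondary"
```

The redundancy trade-off is visible in the sketch: the secondary sits idle (and incurs cost) until the primary misses `max_misses` consecutive checks, at which point it is promoted.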

The most pressing problem with many of these approaches is cost, especially since there are several instances in which high availability is unnecessary. These pertain to the actual use and importance of servers, as well as to which virtualization techniques are used. Low-priority servers that don’t affect end users, such as those used for testing, don’t need high availability, nor do servers whose recovery time objectives are significantly greater than their restore times. Certain high availability solutions, such as some of the more comprehensive hypervisor-based platforms, are indiscriminate in this regard, so users may end up paying for high availability on components that don’t need it. Traditional high availability approaches also involve constant testing that can drain human and financial resources; worse, neglecting this duty can result in unplanned downtime. Finally, arbitrarily implementing redundancy for system components broadens an organization’s data landscape, resulting in more copies and more potential weaknesses for security and data governance.
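The restore-time comparison above amounts to a simple decision rule: a server that serves no end users, or whose recovery time objective (RTO) comfortably exceeds its measured restore time, can usually rely on plain backup-and-restore. The function below is an illustrative sketch of that rule with invented names, not part of any product:

```python
def needs_high_availability(rto_minutes: float, restore_minutes: float,
                            affects_end_users: bool) -> bool:
    """Decide whether a server warrants a high-availability setup.

    A server with no end-user impact (e.g. a test box) never qualifies.
    Otherwise, HA is warranted only when the RTO is too tight to meet
    with an ordinary restore from backup.
    """
    if not affects_end_users:
        return False
    return rto_minutes <= restore_minutes

# A production server that must be back in 5 minutes but takes 30 to
# restore from backup needs HA; a test server, or a server with a
# 4-hour RTO, does not.
```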

Implementing Digital Transformation

Many of these virtualization measures for high availability are losing relevance because of digital transformation. To truly transform the way your company does business with digitization technologies, you must implement them strategically. Traditional high availability approaches simply do not allow for the fine-grained flexibility needed to optimize business value from digitization. Digital transformation means accounting for the varied computing environments of Linux and Windows operating systems alongside containers. It means integrating an assortment of legacy systems with newer ones specifically designed to handle the influx of Big Data and modern transaction systems.

Most of all, it means aligning that infrastructure with business objectives in a way that adapts to evolving domain and customer needs. Such flexibility is critical to optimizing IT processes around the goals of end users. The reality is that most conventional methods of high availability simply add to the infrastructural complexity of digital transformation without addressing the primary need: adapting to changing business requirements. In the wake of digital transformation, organizations need to streamline their various IT systems around domain objectives rather than the reverse, which simply decreases efficiency while increasing cost.

Smart Availability

Smart Availability is ideal for digital transformation because it enables workloads to always run on the best execution environment. It couples this advantage with the continuous operations of high availability, but takes a radically different approach in doing so. Smart Availability takes the central idea of high availability, dedicating resources between systems to prevent downtime, and extends it to moving workloads in order to maximize competitive advantage. It allows organizations to move workloads between operating systems, servers, and physical and virtual environments with minimal downtime. The core of this approach is the capacity of Smart Availability technologies to move workloads independently of one another, something traditional physical and virtualized approaches to workload management cannot do. By decoupling an array of system components (containers, application workloads, services, and shared files) without having to standardize on just one OS or database, these technologies transfer each workload to the environment in which it works best.

It’s important to remember that this judgment call is based on how best to achieve a defined business objective. Furthermore, these technologies provide this flexibility for individual instances to ensure negligible downtime and a smooth transition from one environment to another. The use cases for this instantaneous portability are plentiful. Companies can use these techniques for uninterrupted availability, integration with new or legacy systems, or the incorporation of additional data sources. Most of all, they can do so with the assurance that the intelligent routing of the underlying technologies is selecting the optimal setting in which to execute workloads. Once properly architected, the process takes no longer than a simple stop and start of a container or an application.
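The "stop here, start there" relocation described above can be sketched as a scoring step followed by a stop and a start. The host names, load scores, and stop/start hooks below are purely illustrative and are not the routing logic of any actual Smart Availability product:

```python
def best_host(hosts):
    """Pick the host with the lowest score; a real system might blend CPU
    headroom, data locality, and licensing cost into this score."""
    return min(hosts, key=hosts.get)

def relocate(workload, current, hosts, stop, start):
    """Move `workload` only if a better host exists. The `stop` and `start`
    hooks stand in for stopping and starting a container or application
    instance on a given host."""
    target = best_host(hosts)
    if target != current:
        stop(workload, current)
        start(workload, target)
    return target

# Example: a SQL instance on a loaded physical Windows host moves to a
# less-loaded Linux VM (scores and names are invented for illustration).
hosts = {"linux-vm": 0.2, "win-phys": 0.7}
moves = []
relocate("sql-instance", "win-phys", hosts,
         stop=lambda w, h: moves.append(("stop", h)),
         start=lambda w, h: moves.append(("start", h)))
```

As the article notes, the whole move reduces to the stop on the old host followed by the start on the new one; the intelligence lies in how the target is chosen.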

The Best Choice

Smart Availability is important for a number of reasons. It creates the advantages of high availability at a lower cost with a greater degree of efficiency. Moreover, it provides the agility required to capitalize on digital transformation, enabling organizations to move systems, applications, and workloads where they can create the greatest competitive advantage. Smart Availability provides the flexibility needed to adapt to today’s business climate, which is changing faster than ever.

About the author

Don Boxley is a DH2i co-founder and CEO. Prior to DH2i, Don held senior marketing roles at Hewlett-Packard where he was instrumental in sales and marketing strategies that resulted in significant revenue growth in the scale-out NAS business. Don spent more than 20 years in management positions for leading technology companies, including Hewlett-Packard, CoCreate Software, Iomega, TapeWorks Data Storage Systems and Colorado Memory Systems. Don earned his MBA from the Johnson School of Management, Cornell University. Follow Don and DH2i at: Twitter, LinkedIn, Facebook
