
Don’t Be Held Hostage by Legacy Data Backup

By Jason Lohrey

Today is World Backup Day, our annual reminder of the importance of backing up data to prevent data loss. But the fact is, data is growing far faster than legacy backup systems can handle. The way we have done backup for the past 20 years is broken, especially at scale. As data continues to grow – in both the number of files and the volume generated – backup systems that scan file systems are no longer feasible, particularly as we enter the realm of billions of files and petabytes or more of data.

IDC’s Global DataSphere, which forecasts the amount of data that will be created annually, predicts that data will grow at a compound annual growth rate (CAGR) of 21.2% to reach more than 221,000 exabytes (an exabyte is 1,000 petabytes) by 2026.

Keeping large-scale data sets secure and resilient is a significant challenge for organizations, and traditional backup solutions aren’t equipped to meet it. Furthermore, as data grows, companies become increasingly vulnerable to corruption, malware, accidental file deletion, and more. Losing important data can be devastating, resulting in financial losses, personal and business disruption, or even legal issues.

Other consequences can include reputational damage and the cost of implementing new security measures. Ransomware will cost its victims close to $265 billion annually by 2031, with a fresh attack on a consumer or business every two seconds as ransomware perpetrators aggressively refine their malware payloads and related extortion activities.

Legacy Data Backup Is No Longer Viable

Traditional backup works by scanning a file system to find and create copies of new and changed files. However, scanning takes longer as the number of files grows – so much so that it’s becoming impossible to complete a scan within a reasonable window. Scans are therefore usually scheduled overnight, when systems are less active.
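To make the scaling problem concrete, here is a minimal sketch of scan-based incremental backup. The paths, the 24-hour window, and the function name are illustrative assumptions, not taken from any particular product. Note that the walk touches every file’s metadata on every run, so scan time grows with the total file count rather than with the amount of change:

```python
# Minimal sketch of a scan-based incremental backup (illustrative only).
import os
import shutil
import time

def incremental_backup(source_root: str, backup_root: str, last_run: float) -> int:
    """Walk the entire source tree and copy files modified since last_run."""
    copied = 0
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(src) <= last_run:
                    continue  # unchanged since the last scan; skip
            except OSError:
                continue  # file vanished mid-scan: a classic race in scan-based backup
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(backup_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
            copied += 1
    return copied

# Usage: run nightly; anything written after this scan and before the next
# one is unprotected -- that window is the recovery point objective (RPO).
if __name__ == "__main__":
    changed = incremental_backup("/data", "/backup", last_run=time.time() - 86400)
    print(f"copied {changed} changed files")
```

At a billion files, even a scan that could examine a million files per second would take over 15 minutes before copying a single byte – and real metadata lookups are far slower.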

In addition, backups run at set intervals, which means any change made after the most recent scan is lost if the system fails before the next one – with nightly backups, for example, a midday failure can erase half a day’s work. Traditional backup does not meet the objective of zero data loss, and recovering data from petabyte-sized repositories is time-intensive. And the recovery process is not what it should be – it’s tedious and slow.

Even “incremental forever with synthetic full backups” may not be able to process changed data within the necessary backup window. And although the cloud is now commonly used for data backup, transferring petabytes of data to the cloud and storing it there is neither practical nor cost-effective, reports IDC.
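For readers unfamiliar with the term, here is a minimal sketch of what an “incremental forever” scheme does when it synthesizes a full backup: rather than re-reading the source, the backup server merges the last full with the incrementals taken since. All names and data below are illustrative:

```python
# Minimal sketch of "incremental forever with synthetic full" consolidation.
# Each backup is modeled as {path: contents}; all names and data are illustrative.

def synthesize_full(last_full: dict, incrementals: list) -> dict:
    """Merge the last full backup with later incrementals; later writes win."""
    full = dict(last_full)
    for inc in incrementals:  # apply in chronological order
        full.update(inc)
    return full

# Usage: one full plus two nightly incrementals yields a new synthetic full
# without re-reading a single byte from the production file system.
full_v1 = {"a.txt": b"A1", "b.txt": b"B1"}
inc_mon = {"a.txt": b"A2"}   # a.txt changed Monday night
inc_tue = {"c.txt": b"C1"}   # c.txt created Tuesday night
print(synthesize_full(full_v1, [inc_mon, inc_tue]))
# {'a.txt': b'A2', 'b.txt': b'B1', 'c.txt': b'C1'}
```

Consolidation avoids re-reading the source, but the incrementals themselves still depend on the scans described above – which is exactly where the approach strains at scale.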

Achieving data resilience at scale is increasingly critical in today’s data-driven world. Organizations need to bounce back seamlessly, reduce the risk of data loss, and minimize the impact of downtime, outages, data breaches, and natural disasters. For far too long, business leaders have had to accept some level of data loss, as defined by recovery point objectives (RPO), and some downtime, as defined by recovery time objectives (RTO).
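A back-of-the-envelope calculation makes those objectives concrete. The numbers below are hypothetical – a nightly scan-based backup and a one-petabyte repository restored at an assumed throughput:

```python
# Hypothetical numbers to make RPO and RTO concrete -- none of these figures
# come from the article; they are assumptions for illustration.
backup_interval_hours = 24           # nightly scan-based backup
repository_size_tb = 1000            # a one-petabyte repository
restore_throughput_tb_per_hour = 5   # assumed restore rate from backup media

# Worst case, everything written since the last scan is lost.
worst_case_rpo_hours = backup_interval_hours

# A full restore must stream the whole repository back.
rto_hours = repository_size_tb / restore_throughput_tb_per_hour

print(f"worst-case data loss (RPO): {worst_case_rpo_hours} hours")
print(f"full restore time (RTO): {rto_hours:.0f} hours (~{rto_hours / 24:.1f} days)")
```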

A Radical New Approach: Shifting Focus from Successful Backups to Successful Recoveries

Many of the general concepts and practices for data protection – backup and recovery in particular – have not changed since they were developed in the client/server era. But with significant technology advancements, and the constant surge of cyberattacks acting as a catalyst, the backup horizon is about to change. Traditional backup is designed to be independent of the file system, as a separate entity. A new approach makes the file system and the backup one and the same: the backup resides inline, within the data path. As a result, every change in the file system is recorded as it happens, end users can recover lost data without the assistance of IT, and finding files is easy regardless of when they existed, across the entire time continuum.
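As an illustration of the inline idea – a conceptual sketch only, not the implementation of any particular product, with all class and method names hypothetical – every write is journaled inside the data path at the moment it happens, so there is nothing left for a later scan to discover:

```python
# Hypothetical sketch of an inline write path: journal first, then apply.
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class JournalEntry:
    timestamp: float   # when the write happened
    path: str          # which file changed
    data: bytes        # the new contents

@dataclass
class InlineJournal:
    """Because the record is made inside the data path, every change is
    protected the instant it lands -- no backup window, no scan."""
    entries: List[JournalEntry] = field(default_factory=list)

    def write(self, path: str, data: bytes) -> None:
        self.entries.append(JournalEntry(time.time(), path, data))
        # ...the underlying file-system write would be applied here...

# Usage: ordinary writes, each captured as it occurs.
fs = InlineJournal()
fs.write("report.docx", b"v1")
fs.write("report.docx", b"v2")
print(len(fs.entries))  # 2
```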

This model will redefine enterprise storage by converging storage and data resilience in one system, so that every change in the data path is captured. It increases data resilience and provides a strong first line of defense against ransomware encryption, enabling organizations to recover compromised data easily and swiftly. Users or IT administrators can go back to any point in time to recover the files they need – even after a cyberattack has encrypted them.

Think of how you might protect a house in the mountains against fire. You could take precautions such as removing the trees around the house, maintaining fire breaks, and clearing roofs and gutters of dead leaves. Or you could take no action and simply wait for the house to burn down, hoping the insurance is adequate to cover the loss. The first approach is proactive – avoid the disaster in the first place. The second is reactive – a bad thing happened, and now we will spend a lot of time, effort, and money hoping we can recover to where we were before the event.

This example illustrates the difference between recovering from a state of continuity (continuous data availability) and from a state of discontinuity (a disaster that strikes). A proactive strategy leverages continuous inline data access, eliminates the cost and business impact of lost data, and delivers the following benefits:

  • The ability to roll back ransomware attacks – a first line of defense against corporate loss, and strong protection against criminals holding a business and its data hostage – with recovery in minutes rather than days or weeks.
  • Continuous data protection, which makes it possible to achieve continuity of service at scale by instantly unwinding the file system to appear exactly as it was at a selected point in time before the corruption, hardware failure, or malicious event (see the sketch after this list). By uniting the file system and the data fabric, it delivers data security and resilience at scale, with extremely fast recovery and zero data loss.
  • Expedited data recovery enables users to interactively find and recover what they need – a “do-it-yourself” data search and recovery process that eliminates the need for IT intervention. 
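To show what “unwinding the file system” to a point in time can look like, here is a companion sketch to the journaling example above – again purely hypothetical names, with each journal entry modeled as a (timestamp, path, contents) tuple:

```python
# Hypothetical sketch of point-in-time recovery over a time-ordered change journal.
from typing import Dict, Iterable, Tuple

def state_at(journal: Iterable[Tuple[float, str, bytes]], moment: float) -> Dict[str, bytes]:
    """Return {path: contents} as of `moment` by replaying journaled writes."""
    snapshot: Dict[str, bytes] = {}
    for timestamp, path, data in journal:
        if timestamp > moment:
            break  # entries are appended in time order, so we can stop here
        snapshot[path] = data  # the last write before `moment` wins
    return snapshot

# Usage: unwind to just before a malicious write encrypted the file.
journal = [
    (100.0, "report.docx", b"v1"),
    (200.0, "report.docx", b"v2"),
    (300.0, "report.docx", b"ENCRYPTED"),  # the ransomware write
]
print(state_at(journal, moment=250.0))  # {'report.docx': b'v2'}
```

Because the journal already holds every write in time order, recovery becomes a lookup rather than a restore – which is what makes the self-service, “do-it-yourself” search and recovery described above possible.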

Continuous data availability focuses on recovery rather than backup. It helps organizations harness enormous amounts of data while remaining resilient, overcoming the obstacles of legacy backup approaches that are increasingly obsolete. Don’t be held hostage by a backup approach that is decades old, hoping that disaster won’t strike. Adopt a proactive strategy that relies on continuous inline data access to eliminate the cost and business impact of lost data, as well as the risk of cyber threats.