This is the third article in a series of articles that focus on data storage management. In this article series, we’re discussing tips and insights on making life easier for data center owners. This article is focused on the necessity of tiering.
The first article was about the importance of flexibility in data storage management and the second article focused on data storage management centralization.
For the sake of context, we highly recommend reading the aforementioned articles before reading this one.
Before discussing the necessity of tiering, let’s define what we mean by “tiering”.
What does “Tiering” Mean in Data Storage Management?
In data management, tiering is the assignment of different types of data to performance-based and cost-based storage tiers for storage, processing, and retention.
There are two types of data tiering: hardware-level tiering and software-level tiering.
What’s Hardware-Level Tiering?
Hardware-level tiering is the configuration of different types of storage media, such as SATA drives, SAS drives, SSDs, or cloud tiers, as storage tiers for different types of data.
An example of a hardware tiered storage architecture is a NAS appliance populated with SAS drives and SSDs.
What’s Software-Level Tiering?
Software-level tiering is the classification and assignment of different types of data based on access frequency, age, and importance to mission-critical business processes.
Software-level tiering priorities vary depending on business model and IT environment. Following are examples of the different types of data generally present in the data fabric of a digitally transformed organization:
Tier 0 / Hot Tier (Mission-Critical Data)
This is the data essential for daily processes. If access to this data is lost, critical applications stop and business functions are disrupted, resulting in downtime.
Such data is usually best suited to SSDs and hot-tier cloud repositories such as Azure Hot Blob or AWS S3.
Tier 1 / Cold Tier (Important Data)
This is the type of data that is important but not critical for business operations. In other words, daily operations continue if access to this type of data is temporarily lost. But this data cannot be lost permanently, as that would mean lost business, revenue, customers, and reputation.
Simply put, these are files, folders, videos, etc. that your organization's daily operations can make do without for a day, but that are needed for it to continue to function.
Such data is usually best suited to enterprise SAS or SATA drives and cool-tier cloud repositories such as Azure Cool Blob or AWS S3-IA.
Tier 2 / Archive Tier
This is the type of data that is stored and retained for future reference and compliance reasons. Losing this data can result in compliance violations and fines (where applicable).
Such data is usually best suited to SATA drives, tape drives, and archive-tier cloud repositories such as Azure Archive Blob and AWS Glacier.
Please note, the aforementioned software-level tiers are meant to be generic. They vary based on organization preference and industry.
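The tier assignments described above can be sketched as a simple classification rule. The thresholds and field names below are hypothetical, chosen purely for illustration; a real policy would reflect your organization's own criteria:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real values vary by organization and industry.
HOT_MAX_AGE_DAYS = 30
COLD_MAX_AGE_DAYS = 365
HOT_MIN_ACCESSES_PER_MONTH = 10

@dataclass
class FileRecord:
    name: str
    age_days: int              # days since the file was created
    accesses_per_month: float  # recent access frequency
    mission_critical: bool     # flagged by the business

def assign_tier(f: FileRecord) -> str:
    """Map a file to a software-level tier: hot, cold, or archive."""
    # Mission-critical or young, frequently accessed data goes to the hot tier.
    if f.mission_critical or (
        f.age_days <= HOT_MAX_AGE_DAYS
        and f.accesses_per_month >= HOT_MIN_ACCESSES_PER_MONTH
    ):
        return "hot"      # SSD / hot cloud tier
    # Important but less active data goes to the cold tier.
    if f.age_days <= COLD_MAX_AGE_DAYS:
        return "cold"     # SAS-SATA / cool cloud tier
    # Everything older is retained on the archive tier.
    return "archive"      # tape / archive cloud tier
```

For example, a mission-critical database log would land on the hot tier regardless of age, while a year-old backup file would be classified for archive storage.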
Now that we’ve established the meaning of data tiering, let’s discuss why it’s necessary for data storage management.
What Makes Tiering a Necessary Part of Data Storage Management?
Tiering improves the overall performance and cost-effectiveness of a data storage infrastructure. By storing different types of data on storage tiers built to support their specific requirements, businesses save money and achieve better results than they would by using a single type of storage for all workloads.
For instance, if mission-critical workloads run on SATA or tape drives, overall performance will not keep up. By comparison, if mission-critical workloads are tiered on SSDs, the storage can keep up and deliver the high IOPS required, which improves overall productivity.
Similarly, storing archival data on an all-flash storage appliance costs more than necessary, and the performance capabilities of the SSDs go to waste.
By contrast, in a tiered storage system, mission-critical workloads run on SSDs for maximum performance, while archive data is stored on SAS, SATA, or tape drives, delivering cost-effective data retention.
Automated Tiering – An Important Feature
We’ve established that tiering can potentially deliver maximum utilization of available storage resources. One feature that can really help get the most out of your tiered storage is automated tiering.
Automated tiering enables users to define policies that automatically move data between tiers based on the age of a file, folder, or volume (when it was created) and/or its access frequency (how often a file is accessed and edited). This simplifies data management and introduces efficiency to a tiered storage system.
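A minimal sketch of what such a policy engine might look like, assuming a simple age-based rule set. The policy format and thresholds here are invented for illustration; commercial automated-tiering features typically expose richer criteria, including access frequency:

```python
# Hypothetical policy rules, ordered from coldest to hottest. Each rule names a
# target tier and the minimum data age (in days) that qualifies for it.
POLICIES = [
    {"tier": "archive", "min_age_days": 365},
    {"tier": "cold",    "min_age_days": 30},
    {"tier": "hot",     "min_age_days": 0},
]

def evaluate_policy(age_days: float) -> str:
    """Return the first policy tier whose age threshold the data meets."""
    for rule in POLICIES:
        if age_days >= rule["min_age_days"]:
            return rule["tier"]
    return "hot"  # fallback: newest data stays on the hot tier

def plan_migration(files: dict[str, float]) -> dict[str, str]:
    """Map each file (name -> age in days) to the tier it should move to."""
    return {name: evaluate_policy(age) for name, age in files.items()}
```

Running `plan_migration({"invoice.pdf": 400, "dashboard.db": 2})` would assign the old invoice to the archive tier and the active database to the hot tier, which is the kind of automatic movement the feature is meant to provide.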
Tiered storage infrastructures enable users to effectively improve performance and enhance the cost-effectiveness of their storage system. By using configured tiered storage, users can make the most of their available storage resources.
Choosing a capable vendor for tiering is also important: the storage solution provider should have experience handling demanding enterprise workloads.
Some examples of veteran storage vendors who offer storage tiering as part of their solutions are Dell, HPE, StoneFly and StrongBox.