Enterprise data storage professionals are increasingly finding themselves at the center of business discussions: locating important files, showing departmental usage, recalling files from archives for an audit or new research, moving distinct data sets into the cloud to import into a new analytics environment, and deleting data to satisfy regulations. Meanwhile, these tasks have become more complex. Hybrid, multi-cloud, and edge infrastructure means that data increasingly lives across many different silos and is difficult to find and move when needed. The data storage administrator or architect must navigate many decisions and do the right analysis to ensure that data is protected and available where users need it, within budget.
Of course, users and executives don’t care much about these details. They want cost-efficient storage and fast and secure access to the right data sets to facilitate better decisions and new ideas. Data storage IT teams have an important role in collaborating with departments and end-users to help everyone meet their goals. Rather than focusing solely on data storage technology procurement, configuration, and spend management, storage professionals now must focus on right-placing data and empowering line-of-business users to help themselves.
Consider the following tactics when looking for ways to better serve and collaborate with users, department heads, and executives:
- Showback/chargeback: Reports showing departments their data usage help department heads understand their data consumption and which data/files are in highest demand based on access patterns. When chargeback is in place, these reports are critical. A new trend is self-service, whereby authorized data owners can run their own reports and searches in the data management platform to see trends and billing information.
- Finding duplicates: It’s in everyone’s best interest to get rid of duplicate data. It clogs up storage capacity (wasting money) and makes it hard for users to find the single version of truth. This is a common occurrence in research organizations, for instance, when data must be copied for testing and/or distributed to different departments and locations where cloud storage isn’t authorized for sharing files. It’s a hair-pulling exercise to locate duplicates across distinct data silos and delete them, but automated data tagging is one way to quickly search for similar files and then flag them for deletion.
- Facilitating research projects: End users may want to search for and move specific data sets to analytics tools, such as in the cloud. A hospital finance team might want to export copies of billing data for certain demographics or disease conditions into a tool for analysis and then delete them once the study is completed. Other users may just want to search and tag files to see how much data is available for a potential project. A secure, self-service program could allow users to request these tasks without actually moving, copying, or deleting data themselves – instead creating a workflow in which IT can review and approve the request, then push a button to execute the required action.
- User-driven retention/deletion: When it comes to nuanced retention and deletion policies, departments know best. For instance, a company may have a policy to delete data if it hasn’t been accessed for two years, but departments might determine that certain data sets (such as lab data) aren’t valid after 10 days and can be deleted sooner. Giving authorized users the ability to create specific retention plans for exceptions is a smart idea.
- Data tagging and segmentation: It is difficult for people to search for and find files across a large organization, and too often the frustration results in loads of calls to IT. Most users and departments don’t have great data organization practices – but this is where a smart storage expert can help by introducing best practices on how to tag files with new metadata, such as a project name. Data management tools can also automate this process by creating a plan whereby all files with certain characteristics are tagged with the desired metadata. This is enormously useful when creating data management policies, such as: move all files from project C in the Europe region to cloud archival storage 30 days after the project’s completion.
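The showback idea above can be sketched in a few lines: aggregate storage consumption per department and apply a flat rate. This is a minimal illustration, not any particular platform's reporting API; the file-record fields and the rate are assumptions.

```python
from collections import defaultdict

def showback_report(files, rate_per_gb=0.02):
    """Aggregate bytes per department and apply a flat monthly storage rate.

    `files` is a list of file-metadata records (hypothetical schema):
    {"department": str, "size_bytes": int}.
    """
    usage = defaultdict(int)
    for f in files:
        usage[f["department"]] += f["size_bytes"]
    return {
        dept: {"gb": total / 1e9, "monthly_cost": round(total / 1e9 * rate_per_gb, 2)}
        for dept, total in usage.items()
    }
```

A real showback system would pull these records from the data management platform's index rather than a hand-built list, and would typically break usage down by storage tier as well.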
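The duplicate-finding tactic can be sketched as a content-hash scan: hash every file under a set of storage mounts and group files whose contents match exactly. This is a simplified sketch (production tools also compare sizes first and handle network silos); the function names are ours, not a product API.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def file_digest(path, chunk_size=1 << 20):
    """Hash file contents in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(roots):
    """Group files across storage silos by content hash.

    Returns {digest: [paths]} only for digests seen more than once.
    """
    by_digest = defaultdict(list)
    for root in roots:
        for p in Path(root).rglob("*"):
            if p.is_file():
                by_digest[file_digest(p)].append(p)
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}
```

The output makes a natural worklist for review: each group of paths is one logical file, and all but one copy can be flagged for deletion.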
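The self-service research workflow, in which a user requests an action and IT reviews and executes it, amounts to a small state machine. A minimal sketch, assuming a made-up `DataRequest` record and status flow rather than any real product's workflow engine:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    EXECUTED = "executed"

@dataclass
class DataRequest:
    requester: str
    action: str        # e.g. "export", "copy", "delete"
    file_query: str    # search expression selecting the affected files
    status: Status = Status.PENDING

def approve(req):
    """IT review step: only pending requests can be approved."""
    if req.status is not Status.PENDING:
        raise ValueError("only pending requests can be approved")
    req.status = Status.APPROVED
    return req

def execute(req, executor):
    """IT pushes the button: run the action only after approval."""
    if req.status is not Status.APPROVED:
        raise ValueError("request must be approved before execution")
    executor(req)
    req.status = Status.EXECUTED
    return req
```

The key design point is that the requester never touches the data directly: the request carries a query describing the files, and only the approved, IT-invoked `executor` performs the move, copy, or deletion.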
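The user-driven retention exception described above (a two-year default, with lab data expiring after 10 days) can be expressed as a default window plus a table of department-registered overrides. The class names and windows here are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Default policy: delete data not accessed for two years.
DEFAULT_RETENTION = timedelta(days=730)

# Department-registered exceptions, keyed by a hypothetical data class.
DEPARTMENT_EXCEPTIONS = {
    "lab_data": timedelta(days=10),  # lab results go stale quickly
}

def is_expired(last_accessed, data_class=None, now=None):
    """True if a file has outlived its retention window."""
    now = now or datetime.now()
    retention = DEPARTMENT_EXCEPTIONS.get(data_class, DEFAULT_RETENTION)
    return now - last_accessed > retention
```

Letting authorized users edit the exceptions table, rather than the default policy, keeps the nuanced knowledge with the departments while IT retains control of the baseline.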
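Finally, the tagging-plus-policy tactic, tag files matching certain characteristics, then select tagged files for archival, can be sketched as a pair of functions. The file-record schema and rule format are assumptions for illustration, not a specific data management tool's interface:

```python
from datetime import datetime, timedelta

def auto_tag(files, rules):
    """Apply each rule's tags to every file record matching its predicate.

    `rules` is a list of (predicate, tags) pairs; tags merge into the record.
    """
    for f in files:
        for predicate, tags in rules:
            if predicate(f):
                f.setdefault("tags", {}).update(tags)
    return files

def select_for_archive(files, project, region, idle_days, now=None):
    """Pick files tagged with a project/region that have sat idle long enough."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=idle_days)
    return [
        f for f in files
        if f.get("tags", {}).get("project") == project
        and f.get("tags", {}).get("region") == region
        and f["last_accessed"] < cutoff
    ]
```

With rules in place, the example policy from the text, "move project C files from the Europe region to archival storage after 30 days idle", reduces to feeding the output of `select_for_archive` to a move job.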
By engaging departmental users and providing visibility into their data so they can better use it, data storage teams can build trust and partnership with the business, which can lead to a more cost-efficient and effective data management practice.