
Five Ways Data Engineers Are Leveraging Self-Service Data Prep Solutions

By Farnaz Erfan  /  October 23, 2018


With the growth of Big Data Analytics, data engineers are in high demand. And while most of them have coding and technical skills, ETL knowledge, or can program in MapReduce, many have found that applying a Self-service Data Prep solution can help them for a number of reasons, including these top five:

  1. Ensure Trust in Data and Create Self-service Capabilities for Business Users

At their core, data engineers are tasked with enabling line-of-business users to create the self-service analytics needed to make informed business decisions. However, while Tableau, Power BI, and other data visualization tools are there to support business users, the data these solutions rely upon often inspires little confidence in its accuracy. This is because these tools start and end at visualization, leaving many to wonder how the dashboard data is derived.

Data Prep solutions allow data engineers to create trusted datasets for their business counterparts, including a full lineage of how the data is sourced, what preparation steps have been performed, and whether or not it is certified and has passed the necessary Data Quality checkpoints. With embedded catalogs, these Data Prep solutions also allow data engineers to create certified and traceable data assets for the business teams.
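The exact lineage model is vendor-specific, but the idea can be sketched with a toy dataset object that records its source, its preparation steps, and its certification status (all names here are illustrative, not any product's actual API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PrepDataset:
    """Toy model of a prepared dataset carrying its own lineage metadata."""
    name: str
    source: str
    steps: List[str] = field(default_factory=list)
    certified: bool = False

    def apply_step(self, description: str) -> None:
        # Record each preparation step so consumers can trace how data was derived.
        self.steps.append(description)

    def certify(self) -> None:
        # A real tool would run Data Quality checkpoints before certifying.
        self.certified = True

    def lineage(self) -> str:
        status = "certified" if self.certified else "draft"
        return f"{self.name} <- {self.source} via {len(self.steps)} steps ({status})"

orders = PrepDataset(name="orders_clean", source="crm.orders")
orders.apply_step("drop rows with null order_id")
orders.apply_step("standardize country codes")
orders.certify()
print(orders.lineage())  # orders_clean <- crm.orders via 2 steps (certified)
```

A business user consulting the catalog would see not just the dataset, but the full chain of steps and the certification flag behind it.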

  2. Collaborate with the Analyst or Data Curation Teams

Data engineers often work as liaisons between business and IT teams; therefore, it is important for them to collaborate on a shared version of the data across groups. Desktop-oriented Data Prep tools cannot achieve proper collaboration because they proliferate different copies of the data sitting on everyone's personal computer. Cloud-based Data Prep solutions provide a browser interface similar to Google Sheets, so that when one person makes an update or adds a notation, other team members see it in real time.

This is where preparing data for analysis can start from the grass roots. For example, a business analyst can create a Data Prep project, source their own data using intelligent ingest, and publish a clean dataset. A data engineer then takes the prepared data, verifies its rules and validity, and certifies it as a standard for others to use. Only a Data Prep solution with centralized governance capabilities to serve different roles within the organization is capable of this.

  3. Leverage Automation and Scheduling to Streamline Data Pipelines

Once data transformations are created and activated, a data engineer needs to constantly refresh, update, and keep the information current. This is where automation and large-scale batch processing come into the picture.

While Data Prep solutions have earned their fame for the self-service aspects of their interface, they also have rich automation and scheduling capabilities. For example, one can change data source configurations (such as using bulk load operations to optimize ingestion performance), use REST APIs to configure new data sources, or integrate with external orchestration frameworks. Sophisticated Data Prep solutions also allow for scaling jobs across shared, ephemeral clusters.
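The endpoints and parameters below are hypothetical, not any vendor's actual API; they only sketch how an external orchestrator might assemble a scheduling call against such a REST interface:

```python
import json

# Hypothetical base URL and endpoint -- consult your Data Prep
# vendor's API documentation for the real equivalents.
BASE_URL = "https://dataprep.example.com/api/v1"

def build_schedule_request(dataset_id: str, cron: str, bulk_load: bool = True) -> dict:
    """Assemble the URL and JSON body an orchestrator would POST
    to schedule a recurring refresh job for a dataset."""
    return {
        "url": f"{BASE_URL}/datasets/{dataset_id}/jobs",
        "body": json.dumps({
            "schedule": cron,                    # e.g. refresh nightly at 2 AM
            "ingest": {"bulk_load": bulk_load},  # bulk load for faster ingestion
        }),
    }

# Nightly refresh at 2 AM in five-field cron syntax.
req = build_schedule_request("orders_clean", "0 2 * * *")
print(req["url"])
```

From here, an orchestration framework such as a cron-driven scheduler would POST this payload on the engineer's behalf.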

  4. Establish Processes to Properly Govern Pipelines

Setting up the right policies for data, managing privileges, and ensuring that the right access is provided to the right individual are all aspects of operationalizing and democratizing information: again, part of the data engineering role.

This area might also involve testing and verification of ongoing data transformation jobs. Data Prep solutions provide an interactive experience for data engineers to examine and validate the data produced by their automated jobs, to ensure quality. In this case, data engineers can also use the data profiling scorecards of Data Prep solutions to monitor data deterioration.

They can also set up policies to manage the information life cycle, so they can purge older or dormant datasets, create exception reports, and get alerts on data anomalies in order to refine policies on the data.
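A lifecycle policy of this kind can be sketched as a simple retention check; the dataset names and the 90-day window below are made up for illustration:

```python
from datetime import date, timedelta

def find_dormant(datasets, today, max_idle_days=90):
    """Flag datasets not accessed within the retention window,
    as candidates for purge review or an exception report."""
    cutoff = today - timedelta(days=max_idle_days)
    return [name for name, last_used in datasets.items() if last_used < cutoff]

# Last-access dates per dataset (illustrative).
usage = {
    "orders_clean": date(2018, 10, 1),
    "legacy_export": date(2018, 5, 2),
}
print(find_dormant(usage, today=date(2018, 10, 23)))  # ['legacy_export']
```

In practice the policy engine would run this check on a schedule and feed the results into purge workflows or alerts.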

  5. Prototype New Data Products and Discover Tangible Business Improvements

Last, but not least, is the area of ideation and innovation. Many data engineers are tasked with creating and prototyping new information services or products, or are simply looking at ways in which data can improve existing processes.

An interactive Data Prep solution can help data engineers develop new concepts by exploring and blending data from data lakes, business applications, and legacy sources.

However, this level of introspection requires the solution to interact with data at scale, meaning the entire data volume, so a data engineer can see all data values simultaneously and discover new opportunities. While some Data Prep solutions can profile and interact with all data at scale, others are unable to do this and are limited to small data samples.
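The difference matters in practice: profiling only a sample can hide rare values that full-scale profiling would surface. A minimal illustration, with made-up country codes standing in for a rare data quality problem:

```python
def distinct_values(rows):
    """Return the set of distinct values, as a profiler would report."""
    return set(rows)

# A rare bad code ('XX') hides in the tail of the full dataset.
full = ["US"] * 9990 + ["XX"] * 10
sample = full[:1000]  # a small head sample never sees it

print(distinct_values(sample))  # {'US'}
print(distinct_values(full))    # {'US', 'XX'}
```

Only the full-scale profile reveals the anomalous value, which is exactly the kind of discovery that sample-bound tools miss.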

In their recent research, Ehtisham Zaidi, Roxane Edjlali, Nick Heudecker, and Mark Beyer of Gartner have defined the role of a data engineer, writing that:

“Data and analytics leaders are advised to move quickly in hiring, training and promoting this vital persona as a separate and unique role within their data and analytics charter for success.”[1]

 

[1] Gartner, Toolkit: Job Description for the Role of a Data Engineer, Ehtisham Zaidi, Roxane Edjlali, Nick Heudecker, Mark Beyer, 14 September 2018.

About the author

Farnaz Erfan is the Senior Director, Product Marketing at Paxata. She has over 17 years of experience in both engineering and marketing roles within the data and analytics domain at companies such as IBM, Pentaho, Birst, and Paxata. Follow Farnaz and Paxata at: Farnaz Twitter, Paxata Twitter, Farnaz LinkedIn, Paxata LinkedIn
