Best Practices for Master Data Management


by Jelani Harper

There are a number of Master Data Management (MDM) solutions that cater to specific domains (customers or products), multiple domains or even certain industries.

Deploying these options, however, frequently shifts the burden of MDM onto technology instead of holistically addressing the cross-departmental consensus on governance, Data Modeling, integration, and analysis that is essential for trustworthy MDM across entire business units or the enterprise itself.

In a 2014 Enterprise Data World session entitled “MDM Through Processes Rather Than Tools”, representatives from Fidelity Investments described a series of best practices for MDM focused on the organizational structure, people, and processes essential to creating Master Data Management with the following benefits:

  • Cost Reductions
  • Simplified/Swifter ETL
  • Improved Data Quality
  • Reduced Risk

Application First

The principal focus when creating a holistic MDM system is the various applications it will serve. Fidelity’s Director of Data Analysis Ian Wood recommends an approach in which an organization builds (rather than buys) its own applications, because doing so enables it to support virtually all of those applications from a single OLTP operational data store rather than deploying multiple repositories. It also shifts the focus of the system to its upfront usage and configuration rather than to stitching together different technologies and applications on the back end, where the primary challenge is keeping the data in sync.
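
To make that contrast concrete, here is a minimal sketch of the single-store idea. SQLite stands in for the operational data store, and the table, application, and function names are hypothetical illustrations, not details from the Fidelity session: two internally built applications read and write the same master record instead of each keeping its own copy to synchronize.

```python
import sqlite3

# Stand-in for the single OLTP operational data store shared by all
# internally built applications (hypothetical schema, for illustration only).
store = sqlite3.connect(":memory:")
store.execute("""
    CREATE TABLE master_customer (
        customer_id INTEGER PRIMARY KEY,
        legal_name  TEXT NOT NULL,
        email       TEXT
    )
""")

def onboarding_app_create_customer(conn, name, email):
    """'Onboarding' application writes the master record once."""
    cur = conn.execute(
        "INSERT INTO master_customer (legal_name, email) VALUES (?, ?)",
        (name, email),
    )
    return cur.lastrowid

def billing_app_lookup_customer(conn, customer_id):
    """'Billing' application reads the same record -- no second copy to keep in sync."""
    return conn.execute(
        "SELECT legal_name, email FROM master_customer WHERE customer_id = ?",
        (customer_id,),
    ).fetchone()

cid = onboarding_app_create_customer(store, "Acme Corp", "ap@acme.example")
print(billing_app_lookup_customer(store, cid))  # ('Acme Corp', 'ap@acme.example')
```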

Hidden Costs

In addition to eliminating the propensity for additional data stores, building apps on a single store eliminates most of the “hidden costs” associated with commercially packaged applications; most of those costs pertain to synchronizing data between the various apps. Wood described a situation in which Fidelity created 25 apps utilizing a single data store, an approach that carries fewer hidden costs than attempting to synchronize data among the same number of commercial apps (which would likely require more than one repository).

He observed, “If you are in that situation of trying to justify internal development versus a package system, be aware of all of the costs of going with a package system. It’s not just the costs of the package itself. It’s the costs of all the work you need to do under the covers to keep the data in sync with that package with all the other packages that you’re using.”

Centralization

The deployment of a single data store reinforces the centralization for which Master Data Management is known. At the personnel level, it is vital to take a similarly centralized approach when structuring the organization and its usage of the Master Data system. There are several critical points when integrating numerous apps into a single store, foremost of which is that each application team must agree to pre-ordained rules regarding the usage of shared data. The primary benefit of integrating numerous applications with a single store is that different users can leverage the same data for different purposes through a single, orderly copy of that data.

Those different uses, however, are largely supported by a data engineering team that tends to the centralized technology functions of the Master Data repository. That engineering team is responsible for several key facets of Master Data that enable it to serve any number of applications, including:

  • Data modeling: It is essential for data engineers to create models that are useful across applications. The data engineering group should provide models for all aspects of the data outside of the sandboxes and data lakes reserved for Data Scientists.
  • Taking ownership of the data: Assigning data ownership to the engineering team reinforces the fact that the shared data exists independently of the respective applications, and that it is owned by the engineering team rather than by the developers who build those apps or the employees who use the data.
  • Impact analysis for individual units/applications: This analysis, conducted across the different application development groups, is essential to business users’ ability to derive action from the data processed through the various apps (a minimal sketch of such a dependency check follows this list).
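
As a rough illustration of the impact-analysis point, the sketch below uses an entirely hypothetical dependency catalog (the application and table names are invented, not taken from the session): the engineering team records which applications depend on which shared structures, then consults that catalog before approving a change.

```python
# Hypothetical catalog of which applications depend on which shared
# master data structures (illustrative names only).
DEPENDENCY_CATALOG = {
    "onboarding_app": {"master_customer", "master_address"},
    "billing_app":    {"master_customer", "master_invoice"},
    "reporting_app":  {"master_invoice"},
}

def impacted_applications(changed_structure: str) -> list[str]:
    """Return the application teams that must be consulted before
    the named shared structure is changed."""
    return sorted(
        app for app, used in DEPENDENCY_CATALOG.items()
        if changed_structure in used
    )

# A proposed change to master_customer touches two application teams.
print(impacted_applications("master_customer"))  # ['billing_app', 'onboarding_app']
```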

IT Involvement

IT departments are primarily divided into two components to implement the work of the data engineering team. Developers, of course, are responsible for building the applications and expediting the process of getting them into production; in this respect, they provide the tools, or raw materials, for the engineering team to work with. Wood also recommends a centralized team of Database Administrators (DBAs) who apply code changes (provided by the engineers) to the various applications and support them once they are in production by performing backups, restorations, and security patching:

“It’s important those data structures follow a kind of software development lifecycle process like with…code changes,” Wood said. “In some ways it’s more important because you’re affecting more things. It’s very important the DBA group follows the rules as well. They understand they don’t go in and change the data structures, the development teams don’t go in and change the data structures; it’s only through this centralized group [data engineering] that we do that.”
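
By way of illustration of that division of labor, the following minimal sketch is hypothetical (SQLite stands in for the production store and the migration contents are invented): structure changes are authored only by the central data engineering group as versioned migrations, and the DBA group applies them in order, with no ad hoc changes outside this path.

```python
import sqlite3

# Versioned DDL changes authored only by the central data engineering group
# (hypothetical migrations, for illustration).
MIGRATIONS = [
    (1, "CREATE TABLE master_customer (customer_id INTEGER PRIMARY KEY, legal_name TEXT NOT NULL)"),
    (2, "ALTER TABLE master_customer ADD COLUMN email TEXT"),
]

def dba_apply_migrations(conn):
    """DBA-side routine: apply outstanding engineering-approved migrations
    in order and record the schema version reached."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, ddl in MIGRATIONS:
        if version > current:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()

store = sqlite3.connect(":memory:")
dba_apply_migrations(store)
```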

Data Governance

The role of a Data Governance Council and stewards is twofold in this centralized, technology-agnostic implementation of an internally developed Master Data system. The council (which incorporates representatives from across business units, stewards, and upper-level management) determines the rules, roles, and definition standards to which the data is held accountable, and also provides the final say on any matters pertaining to data. Fidelity utilizes a specialized Data Quality services group comprised of business users tasked with sustaining Data Quality standards and enforcing them, which is related to, yet distinct from, its Data Governance Council.

The council focuses on establishing policy and enforcing it, particularly when there are conflicting issues related to organizational structure or data. “The data engineering group otherwise has to take the responsibility for pulling all of the interested parties together,” Wood stated. “We did that for a number of years, but it was just much easier to have a formal standing body of business and IT people that we could bring these things to.”
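
By way of illustration only (the rules and field names below are hypothetical, not Fidelity’s), the standards such a quality-services group sustains might be captured as executable checks run against records in the shared store, with violations routed back to the stewards.

```python
# Hypothetical data quality rules a quality-services group might enforce
# against records in the shared master data store (illustrative only).
RULES = {
    "legal_name is populated": lambda rec: bool(rec.get("legal_name", "").strip()),
    "email contains '@'":      lambda rec: "@" in rec.get("email", ""),
}

def quality_violations(record: dict) -> list[str]:
    """Return the names of every rule the record fails."""
    return [name for name, check in RULES.items() if not check(record)]

record = {"customer_id": 42, "legal_name": "Acme Corp", "email": "invalid"}
print(quality_violations(record))  # ["email contains '@'"]
```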

Sales Pitch

The cross-collaborative effort required to involve so many people from disparate parts of the organization is an integral part of utilizing a centralized repository for the majority of an organization’s data needs. The data engineering team requires data modelers, data analysts, and architects to work with developers, governance members, and business representatives from all of its various units. Achieving the degree of solidarity needed to make such a Master Data Management system work takes more than a little foresight on the part of those involved, as well as dedicated ‘selling’ of the value that the resulting product, that single, trusted copy of the data informing any number of applications, can bring to the organization.

Culture of Data

The crux of that sell, and of an MDM system based on organizational structure and roles rather than a fleeting piece of technology, is that the data themselves are more valuable than their applications, simply because they can be applied to so many varieties of applications. Such a system requires the efforts of different types of workers who readily acknowledge a culture of data and its benefits across the various business units. This is critical to the long-term success of an internally developed Master Data Management process in which an organization’s entire structure, and the roles and responsibilities of its workers, are based on such a lofty valuation of its data. “Early on we were able to create a culture where the importance of the data, independent of the applications, was recognized rather than the data just being subservient to the needs of the application development teams,” Wood said. “It wasn’t an easy concept to sell—I didn’t have gray hairs when we started this.”
