
How to Not Get Overrun by Your Data Overrides

By Richard Mohrmann


Large-scale enterprises rely heavily on market and consumer data sourced from outside vendors, but that information comes at a cost. In addition to intermittent issues with data quality itself, consumers of vendor data can face timeliness problems, non-standard formats, coverage gaps, and lags in technical support. Despite these and other issues, vendor data can be quite sticky. It’s time-consuming and expensive to set up the feeds in the first place, and there are often only a few vendors to choose from. While many companies sell polished, efficient, highly commoditized information, relatively few trade in the rawer data enterprises need for more complex analytics. High demand and low supply combine to create an inverse relationship between the value of the data and how easy it is to source and maintain.

Enterprises deal with this economic bind by adapting to data feeds that have these problems. They pad their teams with data experts, hiring and training the best people to mitigate issues with incoming data, but sooner or later are faced with the inevitable problem of bad data.

It’s a common scenario. A client calls your company’s technologists and complains that the data is wrong. Not user-error wrong. Not did-you-try-turning-it-off-and-back-on-again wrong. Wrong wrong. You received bad data from a premier data vendor, propagated it through your systems, and passed it on to your clients. This isn’t slap-on-the-wrist bad or even negative-Yelp-review bad. These data errors, if unchecked, can be incredibly expensive both financially and reputationally. Under the wrong circumstances, they can shutter an otherwise solid and promising business.

So, the hypothetical consumers of this bad vendor data set about making sure this never happens again, kicking off a surprisingly common series of steps that companies take as they evolve their Data Management strategy: identify, isolate, eliminate. They go to the people who reported the problem and learn how those individuals knew there was a problem. They develop new Data Management processes that can catch such problems further upstream and then congratulate themselves on how clever they are. And then…

It happens again. The enterprise’s quality systems flag the problem much earlier this time, maybe even on the vendor feed itself. Their crack support staff is on it. Messages get sent to the vendor, the phone is ringing, and the support team knows what corrections need to be made. The right people in the business have been alerted. Clients are given a very carefully worded heads-up. But time passes, and the corrections don’t get made, at least not fast enough. For whatever reason, the vendor has trouble pushing the correction through. Maybe the vendor is in violation of its SLA. Maybe it was given bad data by one of its own sources. It doesn’t matter: the company still can’t operate its business.

And then comes the suggestion. The user on the other end of the call knows what the data is supposed to be. They understand the issue with the vendor. But they need the correct values entered into the system to fix their analytics ASAP.

“Can’t you just override what the vendor sent?”

And so begins the next phase of the Data Management evolution.

Yes, they can override.

And they do. And sure enough, screens get refreshed, analytics are recalculated, clients are given the all-clear, and they are back in business. For the moment.

Problem #1: Overrides Get Overridden

All this brings us to problem number one with overriding data: overrides get overridden. Our hypothetical enterprise has a vendor sending it regular updates. (Let’s assume a daily process, because the more rapidly things get refreshed, the more obvious this particular problem is.) The bad data has been overridden, and everything is again running smoothly until the vendor, not yet having resolved the problem on its end, sends the nightly update.

The update resets the override put in place the day before, and bam, the company is right back in the doghouse. Only this time it’s worse: as far as its end users are concerned, the company has now made the same mistake two days in a row. Not good. The team makes another mad scramble, resets the override, and tells everybody everything is OK, again. This time, nobody is smiling. Clients are scouring the web looking for another provider, and the enterprise’s business experts are polishing up their resumes and updating LinkedIn.
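In practice, this failure mode usually traces back to an ingestion job that upserts every vendor record unconditionally. Here is a minimal sketch of the problem; the record keys and field names are hypothetical:

```python
# A nightly loader that blindly upserts every vendor record will
# silently clobber any manual override.

records = {"ACME-001": {"price": 100.0}}       # system state with the bad vendor value
records["ACME-001"]["price"] = 105.0           # manual override applied by the data team

nightly_feed = {"ACME-001": {"price": 100.0}}  # vendor still sending the bad value

for key, fields in nightly_feed.items():
    records[key] = dict(fields)                # blind upsert: the override is clobbered

print(records["ACME-001"]["price"])            # 100.0 -- back to the bad value
```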

Problem #2: Locked Overrides Stay Locked

So how do businesses avoid that second black eye? Obviously, they need to lock their overrides. It makes sense. Add a flag to show that this particular data point has a correction and that the overnight feed should not be allowed to change it. Phew! Now they can rest easy. The feed goes through, and the next morning: no complaints! Life is good. Clients are feeling better, and business is getting back to normal. This too shall pass.
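One way to implement such a flag, sketched below, is to carry override metadata alongside each record and have the loader skip locked entries. The field names are illustrative assumptions, not a prescribed schema.

```python
# Hedged sketch of a lock flag: the loader skips any record whose
# current entry is marked as a locked override.

records = {
    "ACME-001": {"price": 105.0, "override_locked": True},  # corrected and locked
}

nightly_feed = {"ACME-001": {"price": 100.0}}  # vendor update, still bad

for key, fields in nightly_feed.items():
    current = records.get(key)
    if current is not None and current.get("override_locked"):
        continue                               # locked: the feed may not touch it
    records[key] = {**fields, "override_locked": False}

print(records["ACME-001"]["price"])            # 105.0 -- the override survives
```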

But the problem with locking overrides is that they stay locked. Days or weeks or longer go by, and everything seems OK. The vendor corrects the root cause of its problem and continues operating as before, and then there is an update. The data the company corrected and locked is about to change. The new data comes through correctly from the vendor, but it hits the override lock and never gets entered into the system.

So, once again, the same data is wrong, and now it is all on the enterprise. The vendor provided the correct update, but the company’s overrides kept it from going through. They’re back in the hot seat.

Managing Overrides

Like the other steps in this evolution, this is also solvable. Enterprises learn that they need to manage their overrides by making sure they are still valid and correcting them when they’re not. “Aging” overrides and applying increasing pressure on vendors as the need for a particular override persists help ensure that these corrections aren’t forgotten.

The first step of this aging process is to set context-appropriate flags on all overrides so that the locks are removed when the vendor has corrected the feed. An enterprise managing tens, hundreds, or thousands of overrides cannot supervise each one manually, so database administrators are responsible for grasping the semantics of the data and writing a lock script appropriate to the context.

If the entry in question can only have one value and that value is relatively static (think address or employer or product identifier), the flag can be set to check whether the incoming value matches the corrected value set by the override. If they match, the lock is removed, and the feed continues normally.
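A sketch of that identity check, assuming the same hypothetical record layout as above:

```python
# Identity-style unlock for static values: once the vendor's incoming
# value matches the overridden value, the feed is presumed fixed and
# the lock is removed.

def apply_update(record: dict, incoming_value) -> None:
    """Apply one vendor update to a record, honoring the override lock."""
    if record.get("override_locked"):
        if incoming_value == record["value"]:
            record["override_locked"] = False  # vendor agrees: unlock for future feeds
        return                                 # locked and still wrong: keep the override
    record["value"] = incoming_value           # unlocked: normal update

rec = {"value": "123 Main St", "override_locked": True}
apply_update(rec, "123 Main St")               # incoming now matches the override
print(rec)  # {'value': '123 Main St', 'override_locked': False}
```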

If the value of the problematic entry is volatile, however, other logic must be put in place to determine whether the vendor feed issue has been resolved. The reference check and reasonableness check, described below, are examples of how data stewards might monitor for a dynamic value. (These are just a few common methods for setting flags; circumstances may require any number of others.)
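The sketches below show what those two checks might look like. The reference source, tolerance, and history window are assumptions for illustration; real thresholds depend entirely on the data’s context.

```python
# Hedged sketches of two unlock heuristics for volatile values.

from statistics import mean, stdev

def reference_check(incoming: float, reference: float, tolerance: float = 0.01) -> bool:
    """Compare the vendor's value against an independent second source;
    agreement within a relative tolerance suggests the feed is fixed."""
    return abs(incoming - reference) <= tolerance * abs(reference)

def reasonableness_check(incoming: float, history: list[float], max_sigma: float = 3.0) -> bool:
    """Accept the value if it sits within a few standard deviations of
    its recent history; otherwise treat it as suspect."""
    if len(history) < 2:
        return True                 # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    return sigma == 0 or abs(incoming - mu) <= max_sigma * sigma

# Unlock only when both heuristics agree the feed looks healthy again.
incoming, reference = 101.3, 101.1
history = [99.8, 100.4, 101.0, 100.7]
print(reference_check(incoming, reference) and reasonableness_check(incoming, history))  # True
```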

Next, the enterprise’s data team needs to ensure that stakeholders stay informed if the issue isn’t resolved right away. Establish a grace period, and once it has elapsed, have regular notifications sent to relevant departments and perhaps even the vendor itself. This way, the issue won’t languish in obscurity, never to be fixed.
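A sketch of that escalation logic follows; the grace period, reminder interval, and notification target are hypothetical placeholders:

```python
# Override aging: once an override outlives its grace period, it
# triggers recurring reminders until the underlying issue is resolved.

from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=3)       # how long before escalation begins
REMINDER_INTERVAL = timedelta(days=1)  # how often to re-notify after that

def overrides_needing_escalation(overrides: list[dict], now: datetime) -> list[dict]:
    due = []
    for ov in overrides:
        last = ov.get("last_notified") or ov["created_at"]
        if now - ov["created_at"] > GRACE_PERIOD and now - last >= REMINDER_INTERVAL:
            due.append(ov)
    return due

active = [{"key": "ACME-001", "created_at": datetime(2024, 1, 1), "last_notified": None}]
now = datetime(2024, 1, 10)
for ov in overrides_needing_escalation(active, now):
    print(f"reminder: override on {ov['key']} is still active")  # notify departments/vendor
    ov["last_notified"] = now
```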

Let’s imagine, however, that an override grows to be weeks or even months old when one day the flag conditions are met and the lock is removed. This turns off the override notifications, and it’s days before it surfaces that the value is wrong yet again. After some investigation, the data team determines that the feed was never fixed in the first place; the incoming value met the flag criteria only by chance. Mistakes like these cause extra work, so it’s important that admins place expiration dates on the automatic unlocking of override flags. When an error has persisted that long, a person should confirm that it was indeed fixed before unlocking the override manually. The older the override, the greater the chance that an incoming value will meet the flag criteria by happenstance.
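One way to express that safeguard is to put an age cutoff on the auto-unlock path; the 30-day window below is an assumption for illustration:

```python
# Expiring auto-unlock: past a cutoff age, a matching incoming value no
# longer removes the lock automatically; a person must confirm the fix.

from datetime import datetime, timedelta

AUTO_UNLOCK_WINDOW = timedelta(days=30)  # assumed cutoff for automatic unlocking

def may_auto_unlock(override: dict, incoming_value, now: datetime) -> bool:
    """Return True only if the override is safe to unlock without human review."""
    if incoming_value != override["value"]:
        return False                                  # feed still disagrees
    if now - override["created_at"] > AUTO_UNLOCK_WINDOW:
        return False                                  # too old: require manual sign-off
    return True

ov = {"value": 105.0, "created_at": datetime(2024, 1, 1)}
print(may_auto_unlock(ov, 105.0, datetime(2024, 3, 1)))  # False: flag for human review
```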

The point is, data overrides are not a set-it-and-forget-it mechanism. They need to be managed, or they will cause more harm than good. When overrides are developed, monitored, and reported on as part of a successful data quality program, enterprises can mitigate complications that arise from using vendor data, more effectively manage their data suppliers, and minimize undue cost to the company and its clients.
