
Mitigating the Disruption of Real-World Experiments

By James Kobielus  /  February 26, 2014

You can’t shut down a community and expect it to spring back to life at a later date, picking up exactly where it left off.

Living communities cannot be reinvented from scratch. They must continue to support all normal operations – albeit awkwardly – while key infrastructure is being repaired, improved, or replaced entirely. That’s why people endure endless congestion when a major highway construction project squeezes traffic into narrow makeshift lanes – it’s better than closing the public thoroughfare entirely.

In many ways, a business is a community, both internally and throughout its value chain. Implementing disruptive changes in how that community operates is not something you undertake lightly. Somehow, you must continue with “business as usual” even as you shake things up with a nouveau management philosophy such as the “experimental enterprise,” as discussed by Edd Dumbill in this recent post. Dumbill defines this sort of enterprise as any “company whose infrastructure is designed to make experimentation possible and efficient.”

By that, Dumbill means experimentation with operational business processes, such as customer engagement, order fulfillment, and inventory management. This vision hinges on the concept of “real-world experiments,” which I’ve dissected in this recent article on the IBM Big Data & Analytics Hub and in various LinkedIn posts. The concept of real-world experiments is aligned with such nouveau management notions as agile development, the learning organization, A/B testing, and “failing fast.”

If you’re an old-school business executive, real-world experimentation may seem to introduce excessive risk and confusion into operations. At first glance, it may seem “disruptive” in the old-fashioned pejorative sense, rather than “disruptive” in the trendy new sense of achieving competitive advantage by breaking decisively from past ways of doing things.

What I like about Dumbill’s recent discussion is that he addresses the issue of risk head-on. To that end, he articulates three core principles of real-world experiments:

  • “Experimentation must be cheap in order to de-risk failure…
  • Experimentation must be quick, so a feedback loop can be used to learn from the market and environment, and allow the business to respond rapidly…
  • Experimentation mustn’t break the important production processes of a business.”
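To make the “quick” and “mustn’t break production” principles concrete, here is a minimal sketch of the kind of feedback loop Dumbill describes, using only standard-library Python; the metric names, thresholds, and interim numbers are hypothetical illustrations, not anything from his post. The idea: an interim check rolls out a variant that is clearly winning, keeps collecting data when the evidence is thin, and stops the experiment outright if a guardrail shows it harming the production metric.

# A minimal, self-contained sketch (standard library only) of a quick
# experiment feedback loop with a production guardrail. All names,
# thresholds, and numbers below are illustrative assumptions.
import math


def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return the two-sided p-value for a difference in conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))


def evaluate_experiment(control, variant, guardrail_drop=0.10, alpha=0.05):
    """Decide whether to continue, stop, or roll out, based on interim data."""
    p_control = control["conversions"] / control["visitors"]
    p_variant = variant["conversions"] / variant["visitors"]

    # Guardrail: kill the experiment if the variant degrades the core
    # production metric by more than the allowed relative drop.
    if p_variant < p_control * (1 - guardrail_drop):
        return "stop: variant is harming the production process"

    p_value = z_test_two_proportions(
        control["conversions"], control["visitors"],
        variant["conversions"], variant["visitors"],
    )
    if p_value < alpha and p_variant > p_control:
        return "roll out: variant is a statistically significant improvement"
    return "continue: not enough evidence yet"


# Hypothetical interim numbers for a customer-engagement experiment.
control = {"visitors": 5000, "conversions": 400}
variant = {"visitors": 5000, "conversions": 455}
print(evaluate_experiment(control, variant))

Because the experiment is cheap to evaluate and the guardrail is checked on every pass, the loop can run as often as fresh data arrives without putting the underlying production process at risk.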

In order to mitigate risks, I would add a few other principles to the vision of the experimental enterprise:

  • Experimentation must be replicable, so that lessons learned from one experiment can be disseminated rapidly to other experiments, processes, domains, and applications where they might be relevant. For example, customer experience experiments that pay off in the self-service Web portal channel might be applied, where appropriate, to the mobile and in-store kiosk channels. If you can’t easily replicate a successful experiment, you risk sacrificing full ROI from the effort.
  • Experimentation must be isolable, so that the impacts of a specific experiment can be operationally and statistically isolated from those of other experiments running in the same and adjacent domains (see the sketch after this list). For example, separate sentiment-boosting experiments in the 18-25-year-old customer segment and in the professional-woman segment might cancel each other out among recently graduated women who have landed their first career-track jobs. If you can’t isolate those experiments, you risk never knowing whether either of them was truly effective.
  • Experimentation must be reversible, so that the impacts of a specific experiment can be rolled back if they don’t pay off, or, even if they do, if a different experiment needs to be run from the baseline conditions that prevailed at the start of the first. An example of the latter might be the need to test alternate HR models for boosting employee participation in online training without creating the perception among employees that any particular engagement model being tested is the final “production” approach. For instance, an experiment might involve email notifications from the HR director with an embedded link to a streaming appeal from the CEO on the first few Tuesdays of the calendar year, worded to make clear that the new format, frequency, and channel is tentative and under evaluation. If people think the experiment is “locked in” when it’s not, the end of that trial might truly disrupt operations as confusion and anger grow.
  • Experimentation must be explicable, so that the stakeholders don’t see it all as wild, random, meaningless monkeying around by data scientists with too much time on their hands. Explicability comes down to having clearly defined and documented objectives for each experiment, metrics for how well the experiment performed, and a rough business case for the need to conduct that particular experiment. Lacking all of that, real-world experiments risk becoming disruptive in the worst sense of that word.
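To complement the isolability and reversibility principles above, here is a minimal sketch of how mutually exclusive experiment assignment and baseline rollback might look in practice; the customer IDs, experiment names, and traffic shares are hypothetical illustrations. Each customer hashes deterministically into at most one active experiment, so effects can be attributed cleanly, and rolling an experiment back simply returns its slice of traffic to the baseline without re-bucketing anyone.

# A minimal sketch of isolable, reversible experiment assignment.
# Experiment names and traffic shares are invented for illustration.
import hashlib

# Hypothetical registry: experiment name -> share of traffic it may claim.
EXPERIMENTS = {
    "sentiment_push_18_25": 0.10,
    "sentiment_push_professional_women": 0.10,
}
ROLLED_BACK = set()  # names of experiments reverted to baseline


def bucket(customer_id):
    """Map a customer ID to a stable value between 0 and 1."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF


def assign(customer_id):
    """Assign a customer to exactly one experiment, or to the baseline."""
    point, lower = bucket(customer_id), 0.0
    for name, share in EXPERIMENTS.items():
        upper = lower + share
        # Mutually exclusive ranges keep the experiments isolable; a rolled-
        # back experiment's range simply falls through to the baseline,
        # which makes the change reversible without re-bucketing anyone.
        if lower <= point < upper and name not in ROLLED_BACK:
            return name
        lower = upper
    return "baseline"


print(assign("customer-42"))            # stable assignment for this ID
ROLLED_BACK.add("sentiment_push_18_25")
print(assign("customer-42"))            # unchanged, or reverted to "baseline"

The deterministic hash is the design choice doing the work here: the same customer always lands in the same slice of traffic, so an experiment can be switched off or back on without scrambling who saw what.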

Of course, too keen a focus on incremental process tweaks can be a business risk as well. As Wyatt Jenkins states in his excellent article on Shutterstock’s “experimental culture”:

“[E]xperimentation teams sometimes miss the next big innovation because they’re constantly making incremental improvements that show quickly in test results. Remember, some test results will show a negative outcome in the short term, but be better in the long term due to user change aversion. Also, testing strategy is hard, and there’s still a place for strategic thinking that moves your organization in new directions.”

In other words, the chief risk of incrementalism – per the “experimental enterprise” vision – is that you’ll fail to disrupt the competitive arena before your competitors do.

About the author

James Kobielus is an industry veteran and serves as IBM’s big data evangelist. He spearheads IBM’s thought leadership activities in Big Data, Hadoop, enterprise data warehousing, advanced analytics, business intelligence, data management, and next best action technologies. He works with IBM’s product management and marketing teams in Big Data. He has spoken at such leading industry events as Hadoop Summit, Strata, and Forrester Business Process Forum. He has published several business technology books and is a popular provider of original commentary on blogs and across social media.
