
How Statistics Can Lead to a Successful Data Experiment

By Dave Karow


It’s human nature to want things to go your way; you drop little hints about your birthday presents or avoid certain topics, for example. It’s also well-documented that companies commission surveys tailored to produce the results they want. On a more basic level, we will subconsciously (and sometimes consciously) try to influence variables to get the results we want. That could be something as simple as phrasing a question a certain way or setting up a test to deliver the statistics you want.

So, when it comes to experimentation, how do you safeguard against the perils of wishful thinking and hidden biases? First, it is important to remember that great teams don’t run experiments to prove they are right; they run them to answer questions. Keeping this in mind will help when creating an experiment to test your new feature or code.

It all starts with a handful of core principles for the design, execution, and analysis of experiments, principles proven by teams that run huge numbers of experiments, sometimes thousands, every month. Following these principles increases the chances of learning something truly useful, reduces wasted time, and avoids false signals that lead teams in an unproductive direction.

The Harvard Business Review has noted that between 80% and 90% of shipped features have a neutral or negative impact on the metrics they were designed to improve. The issue is that if you ship these features without experimenting, you may never notice that they are not moving the needle. You may feel accomplished because you’ve released the features, but you haven’t actually accomplished anything.

Another issue to look out for when it comes to metrics is HiPPO syndrome. HiPPO stands for the Highest Paid Person’s Opinion. The acronym, first popularized by Avinash Kaushik, refers to the most senior person in the room imposing their opinion on the company. That opinion can sway decision-making, and their presence can stifle the ideas raised in meetings, which ultimately hurts both design and metrics.

Right now, you may be thinking, “but things like A/B testing replace or diminish design.” This could not be further from the truth. Good design always comes first, and design is always part of the process. In fact, product managers have a team of designers, coders, and testers who all put a lot of effort into setting up an experiment around a new design. A/B testing is an integral tool that tells you whether end-users actually did what the design intended them to do.
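
To make that concrete, here is a minimal sketch of the kind of statistical readout an A/B test produces. The conversion numbers are hypothetical, and the analysis is a standard two-proportion z-test implemented with only the Python standard library:

```python
# A minimal sketch of an A/B test readout: a standard two-proportion
# z-test comparing conversion rates, stdlib only. All numbers below
# are hypothetical.
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 users per arm; the control converts
# at 10.0%, the new design at 11.2%.
z, p = two_proportion_ztest(conv_a=1000, n_a=10_000, conv_b=1120, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p means the lift is unlikely to be noise
```

The point is not the arithmetic but the discipline: the team decides up front which metric the new design must move, and the test tells them whether users actually moved it.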

But if the metrics look better than they did before the feature was released, what’s the problem? The problem is that it’s pretty easy to be fooled. False signals are a very real part of analyzing metrics, and basing decisions on them can cost a lot of money and waste time and energy.
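
One way to see how easily noise fools you is to simulate A/A tests, where both arms get the identical experience, so any “winner” is pure chance. Here is a sketch (hypothetical rates and sample sizes) using the same two-proportion z-test as above. By construction, roughly 5% of these null experiments will look “significant” at the conventional 0.05 threshold:

```python
# A sketch of how noise manufactures false signals: run many A/A tests
# (both arms see the same experience at a 10% conversion rate, so any
# "lift" is pure chance) and count how often p < 0.05. All numbers are
# hypothetical.
import math
import random

def two_sided_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test p-value, same test as the sketch above."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
TRIALS, N, RATE = 1_000, 5_000, 0.10

false_signals = 0
for _ in range(TRIALS):
    conv_a = sum(random.random() < RATE for _ in range(N))  # control arm
    conv_b = sum(random.random() < RATE for _ in range(N))  # "variant" arm
    if two_sided_p(conv_a, N, conv_b, N) < 0.05:
        false_signals += 1

# Roughly 5% of these null experiments will look like wins by chance alone.
print(f"{false_signals / TRIALS:.1%} of A/A tests looked significant")
```

This is one reason disciplined teams fix their metrics and sample sizes up front: peeking at results repeatedly and stopping when they look good inflates this false-positive rate well beyond 5%.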

The best way to avoid these traps is to remember four things. Users are the final arbiters of your design decisions. Experimentation lets you watch users vote with their actions. You need to know what you are testing and invest time in choosing the right metrics. And, most importantly, any well-designed, well-implemented, and well-analyzed experiment is a successful experiment.
