
Where Traditional Business Intelligence Tools Fall Short

by Angela Guess

Sarah Gerweck recently wrote in Information Management, “Traditionally, Business Intelligence leverages only some of the most basic statistical techniques available. BI is still largely using 17th-century statistical techniques: counts, sums, averages and extrema. At most, we might use techniques that were used by Gauss and Galton in the 19th century (e.g., standard deviations and quantiles). In traditional BI, when we’re slicing and dicing, we take data that’s defined over some complex dimensional space and project it down onto a smaller dimensional space that’s easy to understand. Like virtually everything in statistics, we’re doing this mostly through regression and clustering.”
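The "17th-century" toolkit Gerweck describes can be sketched in a few lines. The sales figures below are invented purely for illustration; the point is that every aggregate here predates modern statistics:

```python
import statistics

# Hypothetical daily sales figures, used only to illustrate the
# basic aggregates traditional BI relies on.
sales = [120, 95, 130, 110, 150, 105, 140]

count = len(sales)                    # count
total = sum(sales)                    # sum
mean = statistics.mean(sales)         # average
low, high = min(sales), max(sales)    # extrema
stdev = statistics.stdev(sales)       # 19th-century: standard deviation
median = statistics.median(sales)     # a quantile (the 50th percentile)
```

Slicing and dicing in BI amounts to computing these same aggregates after grouping the data along one or two chosen dimensions.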

Gerweck goes on, “There are now well-known statistical techniques for dimensional reduction (e.g., principal component analysis – PCA) but there is a huge potential for even simpler techniques like automatically pointing out dimensions that correlate well with your KPIs. On the operational side of things, where we’re often looking at individual items more than aggregate behavior, there are also plenty of statistical techniques. One opportunity is using Bayesian techniques to identify maximum-likelihood tops and bottoms. You can put this feature into a broader statistical context of noise reduction and significance testing: we want to know whether something that looks unusual really is. This also includes things like outlier detection, which is immensely useful to users. Operational and row analysis might also make effective use of things like clustering and similarity analysis, but that quickly starts to move from BI to data mining.”
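Of the techniques Gerweck lists, outlier detection is the easiest to sketch. The snippet below is a minimal illustration, not her implementation: it flags rows more than two standard deviations from the mean, a crude stand-in for the significance testing she describes ("we want to know whether something that looks unusual really is"). The order values and the two-sigma threshold are assumptions for the example:

```python
import statistics

# Hypothetical per-order values; one entry is unusually large.
orders = [20, 22, 19, 21, 23, 20, 18, 22, 21, 95]

mean = statistics.mean(orders)
stdev = statistics.stdev(orders)

# Flag values more than 2 standard deviations from the mean.
# (An extreme value inflates the standard deviation itself, which is
# why more robust methods exist; this is only the basic idea.)
outliers = [x for x in orders if abs(x - mean) > 2 * stdev]
```

A Bayesian or robust variant would replace the two-sigma rule with a likelihood-based test, but the shape of the feature is the same: surface the rows that deviate from expectation.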

Read more here.

photo credit: Flickr/ European Southern Observatory
