by Angela Guess
Forbes contributor Michael Kanellos recently wrote, “What is Big Data? Everyone seems to have their own definition. To purists, it refers to software for data sets that exceed the capabilities of traditional databases. For a growing number of people, it’s shorthand for predictive analytics. To others, it just means a really staggering amount of 1s and 0s. The problem is that the term is too general. I count five different types of Big Data. It’s a work in progress so your feedback would be greatly appreciated.”
His list begins with “Big Data. These are the classic predictive analytics problems where you want to unearth trends or push the boundaries of scientific knowledge by mining mind-boggling amounts of data. A typical human genome scan generates about 200GB of data, and the number of human genomes scanned is doubling every seven months, according to a study conducted by the University of Illinois. (And we’re not even counting the data from higher-level analyses or the genome scans from the 2.5 million plant and animal species that will be sequenced by then.)”
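As a back-of-envelope illustration of the growth rate quoted above, here is a minimal Python sketch that projects total scan data from the article’s two figures (roughly 200GB per genome, with scan counts doubling every seven months). The starting genome count and projection horizons are hypothetical placeholders, not numbers from the article.

```python
# Back-of-envelope projection of genome-scan data volume, using the
# figures quoted above: ~200 GB per scan, scan counts doubling every
# seven months. The starting count below is an illustrative assumption.

GB_PER_GENOME = 200
DOUBLING_MONTHS = 7
INITIAL_GENOMES = 1_000_000  # hypothetical starting point

def projected_data_tb(months: int) -> float:
    """Total accumulated data (in terabytes) after `months` of steady doubling."""
    genomes = INITIAL_GENOMES * 2 ** (months / DOUBLING_MONTHS)
    return genomes * GB_PER_GENOME / 1_000  # GB -> TB

for months in (0, 7, 14, 28):
    print(f"after {months:>2} months: ~{projected_data_tb(months):,.0f} TB")
```

Under these assumptions the volume quadruples in a little over a year, which is the point of Kanellos’s “mind-boggling” framing: storage and analysis capacity have to grow on the same curve just to keep pace.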
Kanellos continues with “Fast Data. Can you quickly analyze a consumer’s personal preferences as they pause by a store kiosk and dynamically generate a 10% off coupon? Fast Data sets are still large, but the value revolves around being able to deliver a good enough answer now: a somewhat accurate traffic forecast in near real-time is better than a perfect analysis an hour from now. The West Japan Railway system recently installed cameras to detect telltale signs of intoxication to keep people from falling onto the tracks. IBM and Cisco are building their companies around these types of systems.”
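That “good enough answer now” idea can be sketched as a sliding-window computation over a live stream: rather than waiting to analyze a complete dataset, keep only the most recent readings and emit an estimate on every tick. The following minimal Python example simulates a traffic-sensor feed and prints a rolling speed estimate; the feed, window size, and units are illustrative assumptions, not details of the systems Kanellos mentions.

```python
from collections import deque
import random

# A minimal "Fast Data" sketch: maintain a small sliding window over a
# live stream and emit a good-enough estimate immediately, instead of
# batch-processing the full history. All values here are simulated.

WINDOW = 20  # number of most recent readings to keep (arbitrary choice)

def rolling_average(stream):
    """Yield a near-real-time average over the last WINDOW readings."""
    window = deque(maxlen=WINDOW)
    for reading in stream:
        window.append(reading)
        yield sum(window) / len(window)

def simulated_traffic_speeds():
    """Fake sensor feed: vehicle speeds in km/h, one reading per tick."""
    while True:
        yield random.gauss(60, 15)

for tick, estimate in enumerate(rolling_average(simulated_traffic_speeds())):
    print(f"tick {tick}: ~{estimate:.1f} km/h")
    if tick >= 5:  # stop the demo after a few readings
        break
```

The trade-off is exactly the one in the quote: the window never sees the whole dataset, so the estimate is only approximate, but it is available the moment each reading arrives.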
photo credit: Flickr