by Angela Guess
A recent post from myNoSQL seeks an answer to the question, “Does Big Data need big budgets?” The author begins, “If you’d ask me this question, I’m sure my initial answer would be: ‘absolutely’. And I guess I would not be alone. But is that the right answer? While watching GigaOm’s Structure Big Data event, there were two talks that gave me a different perspective on this question.”
The first talk is an interview with Kevin Krim, the Global Head of Bloomberg Digital, which “told the story of adopting, mining, and materializing Big Data inside a corporation that didn’t believe in it, nor did it allocate large budgets to it. The result: collecting more than a terabyte of data every day from 100 data points for every pageview and running 15 different parallel algorithms to make recommendations that led sometimes to 10x clickthrough rates.”
The second talk, from Pete Warden, founder of OpenHeatMap, “is even more exciting. Pete has used a combination of the right tools deployed on the cloud to mine Facebook data: 500 million pages for $100 — that was the cost before being sued by Facebook. Pete Warden distilled his experience with these tools and has made available at datasciencetoolkit.org a collection of data tools and open APIs in both an Amazon AMI format to be run on the cloud and as a VMWare image to run locally.”
From both talks, the article concludes that getting started with Big Data requires imagination and the right tools, but not necessarily a big budget. See the original post to view both videos.
You can learn more about Big Data and other data topics at NoSQL Now! in San Jose this August.