
Library of Congress’s Big Data Challenge

By Angela Guess  /  January 15, 2013

Brandon Butler of Network World reports, “The Library of Congress has received a 133TB file containing 170 billion tweets — every single post that’s been shared on the social networking site — and now it has to figure out how to index it for researchers. In a report outlining the library’s work thus far on the project, officials note their frustration regarding tools available on the market for managing such big data dumps. ‘It is clear that technology to allow for scholarship access to large data sets is not nearly as advanced as the technology for creating and distributing that data,’ the library says. ‘Even the private sector has not yet implemented cost-effective commercial solutions because of the complexity and resource requirements of such a task’.”

Butler goes on, “If private organizations are having trouble managing big data, how is a budget-strapped, publicly funded institution — even if it is the largest library in the world — supposed to create a practical, affordable and easily accessible system to index 170 billion, and counting, tweets? Twitter signed an agreement allowing the nation’s library access to the full trove of updates posted on the social media site. Library officials say creating a system to allow researchers to access the data is critical since social media interactions are supplanting traditional forms of communication, such as journals and publications.”

Read more at Network World.

photo credit: Library of Congress
