
Don’t Put the Cart Before the Horse When It Comes to Big Data

By Dave Brunswick  /  December 21, 2016


The sheer volume of information – and the variety of its formats – is what makes data “Big Data.” Attempting to manage Big Data across business units, continents, and data centers with traditional, non-scalable tools badly underestimates modern needs.

So something very important must happen well before all of that data ever hits the Analytics or Business Intelligence tools: It has to be integrated.

Ultimately, Big Data integration means ingesting, preparing, and delivering data, no matter the source. That includes leveraging every type of data in the enterprise, including the complex, often unstructured, machine-generated kind, and it requires a more converged enterprise infrastructure that ties the Data Architecture together.
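As a rough illustration of those three stages, here is a minimal sketch in plain Python – no particular vendor platform assumed, and the file names, field names, and schema mapping are all hypothetical – that ingests records from two differently formatted sources, prepares them into one schema, and delivers them to a single target file.

```python
import csv
import json
from pathlib import Path


def ingest(source: Path):
    """Read raw records from a CSV or JSON source (hypothetical file layouts)."""
    if source.suffix == ".csv":
        with source.open(newline="") as f:
            yield from csv.DictReader(f)
    elif source.suffix == ".json":
        with source.open() as f:
            yield from json.load(f)  # assumes the file holds a JSON array of objects


def prepare(record: dict) -> dict:
    """Normalize field names and types so downstream tools see one schema."""
    return {
        "customer_id": str(record.get("customer_id") or record.get("CustomerID", "")),
        "amount": float(record.get("amount") or record.get("Amount", 0)),
    }


def deliver(records, target: Path) -> None:
    """Write prepared records to a single newline-delimited JSON file."""
    with target.open("w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")


if __name__ == "__main__":
    sources = [Path("erp_orders.csv"), Path("web_events.json")]  # hypothetical sources
    prepared = (prepare(rec) for src in sources if src.exists() for rec in ingest(src))
    deliver(prepared, Path("integrated_orders.jsonl"))
```

A real integration platform adds the pieces this sketch leaves out – connectors, scheduling, monitoring, retries, and security – but the ingest/prepare/deliver shape is the same.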

Even today’s smallest companies have dozens of applications, systems of record, ERPs, and other technologies from a variety of vendors – deployed in the Cloud and on-premises – all producing data, and it all must be connected for a comprehensive, accurate, real-time view of the business. Without a proper managed file transfer and integration platform, your IT teams face massive amounts of labor-intensive, manual coding just to get these systems communicating with each other.

So one of the initial steps – and arguably the most important one – for IT teams is to deploy a platform that pipes all of these data sources into your Data Lakes. Prioritizing efficient integration architecture may not have quite the “wow” factor of the outcomes advanced Big Data Analytics promise, but it is the strategic component that lets raw data flow securely through your business, Data Lakes, and Analytics applications.
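To make that “pipe everything into the lake” step concrete, here is a minimal sketch – again plain Python with hypothetical paths and source-system names; a real deployment would use the platform’s connectors and HDFS or object storage rather than local directories – that lands each incoming file in a raw zone partitioned by source system and arrival date, so downstream Analytics jobs can find data without point-to-point coupling.

```python
import shutil
from datetime import date
from pathlib import Path

LAKE_ROOT = Path("datalake/raw")  # hypothetical landing zone for the Data Lake


def land_file(source_system: str, incoming: Path) -> Path:
    """Copy an incoming file into a partitioned raw zone: raw/<system>/<yyyy-mm-dd>/."""
    partition = LAKE_ROOT / source_system / date.today().isoformat()
    partition.mkdir(parents=True, exist_ok=True)
    destination = partition / incoming.name
    shutil.copy2(incoming, destination)  # a real platform would also checksum, log, and retry
    return destination


if __name__ == "__main__":
    # Hypothetical feeds from two source systems.
    for system, path in [("erp", Path("erp_orders.csv")), ("web", Path("web_events.json"))]:
        if path.exists():
            print("landed:", land_file(system, path))
```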

Successful Big Data integration projects, then, prioritize:

  • Support for any type of data across any endpoint, integrating with any Big Data application
  • Deep protocol support to ensure integration with every source
  • Consolidation of disparate point solutions onto a single platform
  • Certified connectors for high-speed Hadoop ingestion and other Big Data connectivity
  • Rapid and secure data extraction, ingestion, and integration
  • Carrier-grade scalability to meet the volume, variety, and velocity of even the most demanding Big Data initiative

An agile integration platform focused on securely mobilizing the flow of data into and out of the Enterprise Data Lake ensures reliable information exchange across increasingly complex workplace ecosystems. And if that integration infrastructure can’t deliver quality data on demand, the potential riches of Advanced Analytics will be lost before the Big Data project ever begins.

About the author

Dave Brunswick is Vice President of Solutions at Cleo, where he leads pre-sales and solution support for North America. He brings more than 25 years of experience in technical sales, pre-sales, technology strategy, engineering, product management, and product development. In previous positions, Dave held senior consulting and architecture roles across the managed file transfer software market, serving as a senior technology leader at Axway and Tumbleweed Communications. He has also led systems research and development teams for a range of government, manufacturing, and transportation customers. He holds an M.A. in mathematics from Oxford University.
