
Cloud Build or Buy? Answers for the Enterprise

By John Morrell


In today’s fast-paced world, organizations must constantly adapt and innovate to maintain their competitive advantage. Cloud Computing has revolutionized the technological landscape, allowing companies to “rent” time on online services without keeping rapidly obsolescing hardware on-site.

Nowhere is this clearer than in the realm of Data Analytics, where a wealth of Cloud solutions awaits, catering to every type of organization from the smallest small business to the largest corporate enterprise. But wait, there’s a catch: how do you choose the Cloud Analytic Data Management approach that will be most effective for your needs?

Cloud Build vs. Cloud Buy

You’ve decided to take your analytics to the Cloud. That decision is the easy part. The real question is this: do you build your own solution with “building blocks” from AWS and other third-party vendors, or do you buy a turnkey data pipeline platform? There are pros and cons to each, and the right choice depends on your organization’s priorities, goals, and resources. Let’s look at the options below.

Building Your Cloud with AWS

A Public Cloud Analytics stack can have few or many components, all working together to store and analyze your data. Many Cloud providers offer their own versions of these services, but for the sake of simplicity we will focus on Amazon’s offerings. Amazon provides a variety of services called “primitives” for these tasks, making it easy to select only the ones your organization needs.

While the services themselves can be incredibly valuable, the real power comes from combining them into an Analytics workflow to seamlessly process your data.

  • The Analytics Stack

These are the most common components of a Public Cloud Analytics stack; in parentheses, we list the Amazon offering for each. A short sketch of wiring two of these services together follows the list.

  • Object storage (S3)
  • SQL over object storage (Athena)
  • NoSQL Database Management (DynamoDB)
  • Relational database services (RDS)
  • Data Warehouse services (Redshift)
  • Big Data processing services (Elastic MapReduce)
  • Extract, Transform, and Load (ETL) systems (Glue and Data Pipeline)
  • Streaming data processing services (Kinesis Firehose and Kinesis Analytics)
  • Business Intelligence and Data Visualization services (QuickSight) – also works with Tableau, Qlik, Looker, and MicroStrategy
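
To make that concrete, here is a minimal sketch, in Python with boto3, of one common pairing from the list: using Athena to run plain SQL over files sitting in S3. The region, database, table, and results bucket below are placeholders invented for the example, not real resources.

```python
import time
import boto3

# Minimal sketch: run SQL over files in S3 using Athena.
# The database, table, and bucket names are hypothetical placeholders.
athena = boto3.client("athena", region_name="us-east-1")

query = athena.start_query_execution(
    QueryString="SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id",
    QueryExecutionContext={"Database": "analytics_demo"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
query_id = query["QueryExecutionId"]

# Athena runs queries asynchronously, so poll until this one finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```

Even this simple pairing shows the pattern: each primitive does one job well, and the glue between them is yours to write.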


  • Integration Pairs on AWS

Amazon provides bilateral interfaces that let its services communicate with one another, allowing teams to connect commonly used services into an “integration pair.” Most services interact natively with S3, so customers can use Amazon as their “Data Lake.” But what about the rest of its services?

Depending on which services you want to use, Amazon offers varying levels of integration support. For example, Kinesis Firehose can load streaming data directly into Redshift, and Redshift’s COPY command can load data directly from DynamoDB. That data can then be visualized in QuickSight. (A sketch of the COPY path appears after this paragraph.)
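
As an illustration of one of these native pairs, here is a hedged sketch of issuing Redshift’s COPY command against a DynamoDB table through the boto3 Redshift Data API. The cluster, database, table, user, and IAM role names are all invented for the example.

```python
import boto3

# Sketch of Redshift's native DynamoDB integration: COPY pulls rows
# straight from a DynamoDB table into a Redshift table. All identifiers
# below (cluster, database, tables, user, IAM role) are hypothetical.
redshift_data = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
    COPY events
    FROM 'dynamodb://ProductEvents'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    READRATIO 50;
"""

# Submit the statement; the Data API runs it asynchronously.
response = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("Statement submitted:", response["Id"])
```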

While these popular combinations work pretty well out of the box, other permutations are not so simple. What if you wanted to replicate data from DynamoDB into Aurora in real time? No pre-packaged integration pair exists, so you’d need to roll your own, for example by wiring DynamoDB Streams into Lambda, the serverless compute service that Amazon provides. It’s doable, but not for the faint of heart; a sketch of such a handler follows.
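
To give a sense of what “rolling your own” involves, here is a minimal, hypothetical sketch of a Lambda handler that consumes DynamoDB Streams records and mirrors them into an Aurora MySQL table via pymysql. The table, its columns, and the connection details are assumptions for illustration; a production version would also need batching, retries, and proper secrets management.

```python
import os

import pymysql
from boto3.dynamodb.types import TypeDeserializer

# Hypothetical sketch: replicate DynamoDB changes into Aurora MySQL.
# Connection details come from environment variables; the "items" table
# and its columns are invented for the example.
deserializer = TypeDeserializer()

# Opened once per Lambda container and reused across invocations.
conn = pymysql.connect(
    host=os.environ["AURORA_HOST"],
    user=os.environ["AURORA_USER"],
    password=os.environ["AURORA_PASSWORD"],
    database=os.environ["AURORA_DB"],
    autocommit=True,
)

def handler(event, context):
    with conn.cursor() as cur:
        for record in event["Records"]:
            if record["eventName"] in ("INSERT", "MODIFY"):
                # Convert DynamoDB's typed JSON into plain Python values.
                image = {
                    k: deserializer.deserialize(v)
                    for k, v in record["dynamodb"]["NewImage"].items()
                }
                cur.execute(
                    "REPLACE INTO items (id, payload) VALUES (%s, %s)",
                    (image["id"], str(image)),
                )
            elif record["eventName"] == "REMOVE":
                key = deserializer.deserialize(record["dynamodb"]["Keys"]["id"])
                cur.execute("DELETE FROM items WHERE id = %s", (key,))
```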

  • A Laundry List of Skills

Piecing together and maintaining an Analytics Architecture in the Cloud isn’t as easy as one might think. Cloud providers focus their expertise on providing best-of-breed tools for developers in a self-contained fashion. However, they don’t always concern themselves with the more holistic task of a seamless, integrated architecture that solves a specific problem.

Therefore, teams need a number of skills in order to successfully build and maintain a Cloud analytics solution. Here are just a few of the competencies a team would need to have for a project of this magnitude:

  • Database and SQL skills
  • ETL skills
  • Spark and/or Hadoop knowledge, as well as scripting and job management expertise
  • In-house development and project management resources to see the project through

Organizations can engage a systems integration partner to help set these kinds of systems up, but this comes at a non-trivial cost. In addition, you will need to either keep the partner on retainer or facilitate knowledge transfer to your own team so that it can continue to upgrade and maintain the system as needed.

Of course, this is not to say that a bespoke solution is always the wrong choice. There are cases and companies that could benefit from in-house development. Technology companies with complex workflows and the engineering resources to back them up often take this path; many of today’s Open Source Analytics solutions were once developed in-house. For enterprise customers who use tech to further their business ends, however, there’s a better way.

Buy, Don’t DIY

A well-engineered third-party software product that sits on top of these Cloud primitives could be the answer to your build-vs.-buy question. Instead of cobbling together multiple tools and maintaining the architecture yourself, an end-to-end Analytic Data Management platform allows teams to create and manage their own personalized data pipelines through a higher-level visual interface, with no coding, speeding the time to insight. These platforms are designed to abstract the lower-level services behind an easy-to-use interface so that teams can hit the ground running.

This can cut down on your risk, maintenance time, and time-to-market significantly. Esoteric open source and Cloud technologies become more friendly and usable when integrated into these packages, further reducing risk. Analytic Data Management platforms provide a complete, end-to-end platform that delivers the right information to the front lines of business without complex coding or job management.

Analytic Data Management platforms provide higher-level data management and preparation features while leveraging underlying AWS services to deliver that rich, powerful functionality in the Cloud. They utilize key AWS services including Elastic Compute Cloud (EC2), Simple Storage Service (S3), Elastic MapReduce (EMR), Identity and Access Management (IAM), and Key Management Service (KMS). They can consume data from S3, Redshift, Aurora, and other Cloud data sources, and then deliver results into Redshift and other Cloud Data Warehouses, or make them available to Athena and QuickSight.

The end result is a streamlined platform for self-service creation and management of analytic data pipelines, with no need to code or to build and maintain complex tool integrations. Not only does this speed the time to insight, but it also removes the cost of building and maintaining a complex Data Architecture in the Cloud.

Conclusion

The wealth of Public Cloud technologies available today is incredibly powerful and innovative, but a pile of disparate packages does not make a cohesive data strategy. That’s why unified data pipeline platforms are more important than ever: they can mean the difference between project success and failure, between truly becoming data-driven and simply paying lip service to the idea.

By exploiting the power of the Public Cloud while hiding its complexity, third-party software offerings provide a low-risk, high-reward solution for enterprise customers. For all but the most esoteric use cases, third-party platforms that cover the complex Analytic Data Management lifecycle are the recommended solution.

Remember: Cloud providers can make you most successful if you combine their products with those from their partners. First party + third party = customer solution success. Follow that equation, and watch the results unfold.
