Understanding DataOps

DataOps (data operations) has its roots in the Agile philosophy. It relies heavily on automation and focuses on improving the speed and accuracy of computer processing, including analytics, data access, integration, and quality control. DataOps started as a system of best practices, but has gradually matured into a fully functional approach for handling data analytics. Additionally, it relies on, and promotes, good communication between the analytics team and the information technology operations team.

In essence, DataOps is about streamlining the way data is managed and the way products are created, and aligning these improvements with the goals of the business. If, for example, the business has a goal of reducing customer churn, customer data can be used to build a recommendation engine that suggests products to specific customers based on their interests, giving those customers more reasons to stay.

However, implementing a DataOps program does require some labor and organization (and some financing). The data science team must be able to access the data needed to build the recommendation engine, as well as the tools to deploy it, before it can be integrated with the website. Implementing a DataOps program requires careful consideration of the organization’s goals and budget constraints.

Eliminating Confusion on Agile, DevOps, and DataOps

The Agile Manifesto of 2001 expressed the thoughts of a few visionary software developers who decided “developing software” needed a complete rethinking, including the reversal of some basic assumptions. These out-of-the-box thinkers valued individuals and interactions over processes and tools. They also emphasized working software over comprehensive documentation, responding to change over getting bogged down in a plan, and customer collaboration over contract negotiation. Agile refers to a philosophy that focuses on customer feedback, collaboration, and small, rapid releases. DevOps was born from the Agile philosophy.

DevOps refers to the practice of bringing the development team (the code creators) and the operations team (those who run and support the code) together. DevOps is a software development practice that focuses on communication, integration, and collaboration between these two teams, with the goal of rapidly deploying products.

The idea of DevOps came about in 2008, when Andrew Clay Shafer and Patrick Debois were discussing the concept of an agile infrastructure. The idea began to spread in 2009 with the first DevOpsDays event, held in Belgium. A conversation about wanting more efficiency in software development gradually evolved into a feedback system designed to change every aspect of traditional software development, from coding through communication with various stakeholders to deployment of the software.

DataOps was born from the DevOps philosophy. It extends the Agile and DevOps philosophies, but focuses on data analytics. It is not anchored to a particular architecture, tool, technology, or language; it is deliberately flexible. Tools supporting DataOps promote collaboration, security, quality, access, ease of use, and orchestration.

DataOps was first introduced by Lenny Liebmann, a contributing editor for InformationWeek, in an article titled “3 Reasons Why DataOps Is Essential for Big Data Success.” The year 2017 saw a surge of growth for DataOps, with significant analyst coverage, surveys, publications, and open source projects. In 2018, Gartner featured DataOps on the Hype Cycle (predictions on the life cycle of new tech) for Data Management.

DataOps comes with its own manifesto and a focus on reducing the time needed to complete a data analytics project, from the original idea to the finished graphs, models, and charts used to communicate results. It often uses SPC (statistical process control) to monitor and control the data analytics process. With SPC, the data flow is constantly monitored; should an anomaly occur, the data analytics team is notified by an automated alert.
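
To make the SPC idea concrete, here is a minimal Python sketch, assuming the pipeline reports a simple metric (rows loaded per run). The metric, the history, and the alert handling are illustrative and not tied to any specific DataOps tool.

    # Illustrative SPC-style check for a data pipeline metric.
    # Assumes each pipeline run reports how many rows it loaded; in a real
    # deployment the alert would be routed to the analytics team's paging
    # or chat tool rather than printed.
    import statistics

    def control_limits(history, sigmas=3.0):
        """Upper and lower control limits from historical metric values."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return mean - sigmas * stdev, mean + sigmas * stdev

    def check_run(history, latest):
        """Return an alert message if the latest value falls outside the limits."""
        lower, upper = control_limits(history)
        if not (lower <= latest <= upper):
            return f"ALERT: metric {latest} outside control limits ({lower:.0f}, {upper:.0f})"
        return None

    # Row counts from the last ten runs, then a suspiciously small run.
    row_counts = [10120, 9980, 10045, 10210, 9895, 10075, 10150, 9930, 10060, 10005]
    print(check_run(row_counts, latest=4200))

In practice the monitored metrics might be row counts, null rates, schema checks, or latency, but the pattern is the same: compare each run against control limits derived from history and alert on anomalies.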

The Benefits of DataOps

A goal of DataOps is to promote collaboration between data scientists, IT staff, and technologists, with each team working in sync to leverage data more quickly and intelligently. The better the Data Management, the better and more available the data. Larger amounts of data, and better data, lead to better analysis. This, in turn, translates into better insights, better business strategies, and greater profits. Listed below are four benefits to be gained from developing a DataOps program:

  • Data Problem-Solving Capabilities: It has been stated that the amount of data being created doubles every 12 to 18 months. DataOps helps turn raw data into valuable information, quickly and efficiently.
  • Enhanced Data Analytics: DataOps promotes the use of multifaceted analytics techniques. New machine learning algorithms designed to guide data through all stages of analysis are gaining popularity. These algorithms help data specialists collect, process, and classify data before delivering it to the customer. DataOps also shortens the feedback loop with the customer and promotes fast reactions to quickly changing market demands.
  • Finding New Opportunities: DataOps opens the door to flexibility and changes the entire work process within an organization. Priorities shift, and new opportunities present themselves as part of the paradigm shift. It helps build a new ecosystem with no hard boundaries between offices and departments. Staff such as developers, operators, data engineers, analysts, and marketing advisors can collaborate in real time, planning and organizing ways to achieve corporate goals. The synergy of bringing different specialists together accelerates response time and provides better customer service, in turn increasing the business’s profits.
  • Providing Long-term Guidance: DataOps promotes the continuous practice of strategic Data Management. It uses multi-tenant cooperation to help balance the needs of different clients. Data specialists can organize data, evaluate data sources, and study feedback from customers. Applying machine learning within DataOps can automate these processes (and more), making the business more efficient.

DataOps should be considered a two-way street, supporting full-scale interoperability (exchanging and using information) between data sources and data users. Data analytics and Data Management become streamlined through automated processes. These steps ensure fast and seamless improvements in product delivery and deployment.

Continuous Analytics

Continuous analytics is a relatively recent development. It drops complex batch data pipelines and ETL jobs, replacing them with cloud services and microservices. Continuous data processing supports real-time interactions and provides immediate insights while using fewer resources.

The continuous approach is designed to run multiple stateless engines (engines that do not retain data between requests) simultaneously, which enrich, analyze, and act on the data. The resulting “continuous analytics” approach provides faster answers, while also making the work of IT simpler and less expensive.
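
As a rough illustration of what “stateless” means here, the Python sketch below enriches and scores one event per call without keeping anything between calls, so many copies of it could run in parallel behind a queue or an API gateway. The field names and scoring rule are invented for this example.

    # Static reference data (illustrative exchange rates).
    RATES = {"EUR": 1.1, "GBP": 1.3, "USD": 1.0}

    def process_event(event):
        """Enrich and score a single event; no state is kept between calls."""
        enriched = dict(event)
        enriched["amount_usd"] = event["amount"] * RATES.get(event["currency"], 1.0)
        enriched["risk_flag"] = enriched["amount_usd"] > 10000
        return enriched

    # Each event is handled independently, so these calls could run on
    # separate engine instances without coordinating with each other.
    print(process_event({"amount": 12000, "currency": "EUR"}))
    print(process_event({"amount": 150, "currency": "USD"}))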

Traditionally, data scientists have been separated from IT development teams. Their skills (math, statistics, and data science) set them apart from IT. However, the continuous delivery approach lets big data teams release their software in shortened cycles. In this situation, data scientists write their code using the same code repository as the application developers. The data scientists save their code in Git, as do the programmers writing the APIs that connect to data sources. The big data and DevOps engineers write playbooks and scripts for Ansible and Docker. Testing is normally an automated part of the process, as the example below illustrates.
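
As a small, hypothetical example of what that automated testing can look like, the snippet below defines an analytics function and pytest-style tests that a CI job, triggered from the shared Git repository, could run on every commit. The function and expected values are made up for illustration.

    import pytest

    def churn_rate(customers_start, customers_lost):
        """Fraction of customers lost over a period (hypothetical analytics code)."""
        if customers_start <= 0:
            raise ValueError("customers_start must be positive")
        return customers_lost / customers_start

    def test_churn_rate():
        # 10 customers lost out of 200 is a 5% churn rate.
        assert churn_rate(200, 10) == 0.05

    def test_churn_rate_rejects_empty_base():
        # A zero or negative customer base should be rejected, not divided by.
        with pytest.raises(ValueError):
            churn_rate(0, 5)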

Continuous analytics is, in essence, an extension of the continuous delivery software development model. The goal in using this model is to discover new ways to blend writing analytics code with installing big data software, preferably in a system that automatically tests the software.

Implementing DataOps

Organizations challenged by inflexible systems and poor-quality data have discovered DataOps as a solution. DataOps includes tools and processes that promote faster and more reliable data analytics. While there is no single approach to implementing a DataOps program, some basic steps are:

  • Democratize the Data: A lack of access to data is a barrier to better decision-making. Business stakeholders, CEOs, data scientists, IT, and general management should all have access to the organization’s data. A self-service data access program, and the infrastructure supporting it, are essential. Deep learning and machine learning applications need a constant flow of new data to learn and improve.
  • Apply Platforms and Open Source Tools: A Data Science platform must be included in a DataOps program, along with support for frameworks and languages. Platforms for data movement, integration, orchestration, and performance are also important. There is no need to reinvent the wheel when open source tools are available.
  • Automate, Automate, Automate: To gain faster time to completion on data-intensive projects, automation is an absolute necessity. It eliminates time-consuming manual efforts, such as data analytics pipeline monitoring and quality assurance testing. Microservices promote self-sufficiency, giving data scientists the freedom to build and deploy models as APIs (see the sketch after this list). This in turn allows engineers to integrate code as needed, without refactoring. Overall, this results in productivity improvements.
  • Govern with Care: A word of caution: until a blueprint for success has been established (addressing the tools, processes, priorities, infrastructure, and the key performance indicators your data science teams need), be cautious about making decisions that affect the business in the long term.
  • Smash Silos: Collaboration is essential to a successful DataOps program. Data silos, which make data inaccessible to all but a few, should be eliminated. The platforms and tools used in implementing a DataOps program should support the larger goal of bringing people together to use data more effectively.
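
As an example of the “models as APIs” point above, here is a minimal sketch using FastAPI (one common choice, not the only one). The model is a stand-in formula, and the endpoint and field names are invented for illustration.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Customer(BaseModel):
        months_active: int
        support_tickets: int

    @app.post("/churn-score")
    def churn_score(customer: Customer):
        # Stand-in for a trained model: more tickets and fewer active months
        # push the score toward 1.0 (more likely to churn).
        score = min(1.0, 0.1 * customer.support_tickets + 1.0 / max(customer.months_active, 1))
        return {"churn_score": round(score, 2)}

    # Engineers can then call POST /churn-score with customer fields,
    # without touching the data science code (run with: uvicorn <module>:app).

Because the model sits behind an HTTP endpoint, it can be integrated into a website or another service without refactoring the data scientist’s code.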
