When meeting customers who are setting out to build new data warehouses and data platforms, I have increasingly heard the requirement that “We want our solution to follow DataOps principles” or “to be DataOps compatible”. At the same time, Gartner [1] recognises DataOps on its latest Data Management Hype Cycle as the most nascent concept on the chart. While DataOps is still in the Innovation Trigger phase of the chart, Gartner sees its adoption as likely to remain limited in the next 12 to 18 months.
So DataOps is clearly not yet mainstream in building data solutions, but it has attracted a lot of interest among people who actively follow the evolution of the market. As we at Solita have been applying DataOps practices in our solutions for several years already, it is easy to forget that the rest of the world is not there yet. Gartner [2] estimates the current adoption rate of DataOps at less than 1% of the addressable market, so if DataOps really is viable, there is a lot of ground to cover before it becomes the common way of working.
Several software vendors provide solutions for DataOps (Composable, DataKitchen and Nexla, for instance), but are they the real deal or are they selling snake oil? It’s hard to tell. Then again, should DataOps even be something to go after? We evidently need to understand what forces could drive DataOps from an emergent concept into the mainstream.
From DevOps to DataOps
Before going any further into DataOps itself, let’s first look at where it comes from and why it is more relevant now than before. We’ll start with DevOps. DevOps has become the prevalent methodology in software development in recent years. It has changed how teams think about delivering new features and fixes to production more frequently while ensuring high quality. DevOps is nowadays the default way of working when developing and operating new software. But how does this relate to DataOps, and what do we actually know about it?
To better understand the concept of DataOps we need to go through how building data solutions has changed in the recent past.
A few years ago, the predominant way of developing data solutions was to pick an ETL tool and a database, install and configure them on your own or leased (IaaS) hardware, and start bringing in those source databases and CSV files. In practice this meant a lot of manual work: creating tables and making hundreds or thousands of field mappings in your beloved ETL tool.
Reach for the clouds
Cloud platforms such as Microsoft Azure and Amazon Web Services have changed the way data solutions are developed. In addition, the Big Data trend brought new types of data storage (Hadoop and NoSQL) to the table. When speaking with customers, I have noticed that there has even been a terminological shift from “data warehousing” to “building data platforms”. Why is this? Traditionally the scope of data warehouses has been to serve finance, HR and other functions in their mandatory reporting obligations. However, both the possibilities and the ambition level have moved forward, and nowadays these solutions have much more diverse usage. Reporting has not gone anywhere, but the same data is now used at the strategic, tactical and operational levels alike. Enriching an organisation’s data assets with machine learning can mean better and more efficient data-driven processes and improvements in value chains that lead to actual competitive advantage, as we have seen in several customer cases.
There is also more volume, velocity and variety in the source data (I hate the term Big Data, but its definition is fine).
In addition to internal operational systems, data can come from IoT devices, from different SaaS services in divergent semi-structured formats, and whatnot. It is also common that the same architecture supports use cases that combine hot and cold data, and some parts must update in near real time.
You can build a “cloud data warehouse” by lifting and shifting your on-premises solution. This can mean, for example, installing SQL Server on an Azure Virtual Machine or running an RDS database on AWS, using the same data model you have used for years, and loading the database with your go-to ETL tool just as before. Unfortunately, replicating your on-premises architecture in the cloud will not bring you the benefits of a cloud-native solution (performance, scalability, global availability, new development models) and might even introduce new challenges in managing the environment.
To boldly go into the unknown
Building data solutions for cloud platforms can feel like a daunting task. Instead of your familiar tools, you will face a legion of new services and components that require more coding than clicking through a graphical user interface. It is true that if you start from scratch and try to build your data platform all by yourself, there is a lot of coding to be done before you start creating value, even if you use all the services available on your chosen platform.
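To give a taste of what that coding looks like, here is a minimal sketch of infrastructure-as-code on AWS: provisioning a landing bucket for raw source files with the boto3 library instead of clicking through a console. The bucket name and region are hypothetical, and a real project would parameterise them per environment.

```python
import boto3

# Hypothetical names; a real project would template these per
# environment (dev/test/prod) and run this from a pipeline, not by hand.
REGION = "eu-west-1"
RAW_BUCKET = "my-data-platform-raw-dev"

s3 = boto3.client("s3", region_name=REGION)

# Create the landing zone for raw source files (CSV dumps, IoT events, ...).
s3.create_bucket(
    Bucket=RAW_BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Keep every version of every raw file so a bad load can be replayed.
s3.put_bucket_versioning(
    Bucket=RAW_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
```

Multiply this by every bucket, database, pipeline and access policy your platform needs, and the scale of the do-it-yourself effort becomes clear.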
This is where DataOps kicks in!
Like DevOps, DataOps sets out a framework and practices (and possibly even tools) so that you can concentrate on creating business value instead of spending time on non-profitable tasks. DataOps covers infrastructure management, development practices, orchestration, testing, deployment and monitoring. If truly embraced, DataOps can transform your data warehouse’s accustomed release schedule. Instead of involving several testing managers in a burdensome testing process, you may be able to move to a fully automated continuous release pipeline and make several releases to production each day. This is something that is hard to believe in the data warehousing context before you see it in action.
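As an illustration of what that automated testing can look like, here is a minimal sketch of data tests a continuous release pipeline could run before every deployment. It uses an in-memory SQLite database as a stand-in for a real warehouse, and the orders table and its checks are illustrative assumptions, not a prescribed DataOps toolset.

```python
import sqlite3

# Stand-in for a real warehouse connection; in a pipeline this would
# point at the freshly loaded target database instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(10.0,), (25.5,)])

def test_orders_were_loaded():
    # Fail the release if the load produced no rows at all.
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert count > 0, "orders table is empty after load"

def test_order_amounts_are_present():
    # Fail the release if any order is missing its amount.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
    ).fetchone()[0]
    assert nulls == 0, f"{nulls} orders have a NULL amount"
```

Run by a test runner such as pytest inside the deployment pipeline, checks like these replace manual sign-off: a green run promotes the release to production, a red one stops it.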
Innovator’s dilemma
Competition forces companies to constantly innovate. As data has become a central resource for innovation, it is crucial that data architectures support experimentation and, through it, innovation. Unfortunately, legacy data warehouse architectures seldom spur innovative solutions: they can be clunky to develop, and most of your limited budget goes to maintaining the solution. You also have to cope with the burden of tradition, as many processes and procedures have been in place for ages and are hard to change. Moreover, the skill set of your personnel is focused on the old data architecture, and it takes time to teach them new ways of working. Still, I believe that cloud data platforms are the centrepiece that enables an organisation to make data-driven innovations. By watching from the sidelines for too long, you risk letting your competitors get too far ahead of you.
If you want to increase innovation, you need to cut the cost of failure. If failure (or learning) actually improves your standing, then you will take risks. Without risk, there will be no reward.
In the next parts of this blog series, we will take a closer look at the different parts of DataOps, think about how actual implementations can be made, and consider what implications DataOps has for the required competences, organisational structures and processes. Stay tuned!
References
[1] Donald Feinberg, Adam Ronthal. Hype Cycle for Data Management. 2018. https://www.gartner.com/document/3884077
[2] Nick Heudecker, Ted Friedman, Alan Dayley. Innovation Insight for DataOps. 2018. https://www.gartner.com/document/3896766