Running out of budget. Economic calamity. Global pandemic. The risks to our AI initiatives are all around us. Some we cannot control, but some we can. In this piece, we’ll review some of the most common mistakes practitioners make that lead to suboptimal or significantly delayed transformations. Each of them is ultimately a reflection of the success or failure of the cultural transformation an organization must go through to take advantage of AI. Without a strong focus on cultural acceptance and adoption, even the most accurate AI will struggle to deliver real business value.
1. Lack of buy-in for AI
Getting buy-in from all levels of the organization is crucial to success. Without executive support and sponsorship, the analytics community will never get the investment in tools, technology, and people required to move forward. An equally important part of the journey toward AI at scale is winning the hearts and minds of business leaders by solving their problems with AI. Organizations need to trust the algorithms and see them as more than a “black box.” In other words, the community should focus on building “teams with no name”: partnering with cross-functional allies who believe in the vision and have real problems to solve using AI.
2. Not training the business
Business-wide data and analytics literacy is another central component of creating this “grassroots excitement” about the potential of advanced analytics and AI. Without the right “pull” from the wider business for data-driven solutions, embedding AI and ML will be an uphill battle. Additionally, if the organization is relying on a small group of people to think and innovate around advanced analytics, chances are that many of these use cases will go unnoticed.
To avoid this trap, focus on two elements. First, provide formal training on data, analytics, and AI across all levels of the organization. Second, make employees outside the analytics function feel part of the team. This ensures that use cases and final solutions stay strongly connected to the everyday reality of the end consumers of AI outputs.
3. Thinking the analytics team is a consultancy
Many organizations dream big but fail to set their AI delivery teams up for success. The common scenario involves starting out by hiring data scientists and setting up technical infrastructures. However, before long, data scientists are working on a plethora of urgent and important projects that benefit from the highly sought-after technical skill set of these experts. As a result, the analytics team ends up working like an internal consultancy, jumping from project to project rather than building on the long-term vision for advanced analytics and AI.
The solution is to shield data scientists from these ad hoc requests so they can focus on the most important strategic initiatives.
4. Getting stuck in the pilot stage
Delivering impactful AI projects can be distilled into three steps: identifying an appropriate business problem, testing a solution, and, if the solution works, industrializing it.
Industrialization of the final solution often requires some sort of tech change and ongoing management of the process by an expert team. For example, a bank might pilot a retention tool that identifies, with a high level of accuracy, who is likely to attrite and when. The pilot proves successful, and the bank reduces customer attrition by several percentage points. Now comes the final challenge: industrializing the tool in a process that requires minimal maintenance for maximum impact. The team that delivered the pilot might have been a mix of internal and external experts across data science, data engineering, technology, experience design, and project management. These people usually need to move on to new projects. So, the bank must have not only a team of cross-functional builders but also a team focused on the ongoing management and continuous improvement of existing solutions. Otherwise, the initiative gets stuck in the pilot stage.
5. Giving up after false starts
If at first you don’t succeed … this trap is a little more nuanced than that. It tends to arise when a model’s prediction or recommendation has not been linked to the right intervention. Designing and implementing an effective intervention is at least as important as the prediction itself.
For example, the data science team might have identified, with a high degree of accuracy, customers who are likely to refinance their loan with a competitor. However, if the chosen intervention is ineffective (customers didn’t want it) or unprofitable, it is likely to cast doubt on the whole project for a while, including whether the prediction was accurate to begin with. Too often, high-quality algorithms go unused because end users don’t trust them or don’t know how to apply their outputs effectively.
The key to winning the hearts and minds of everyone involved – even when results are not as desired – is creating a set of shared goals that link up the cross-functional team.
The themes of this article are explored in Demystifying AI for the Enterprise, a book written by Prashant Natarajan, Vice President, Strategy and Products at H2O.ai, along with Bob Rogers, Edward Dixon, Jonas Christensen, Kirk Borne, Leland Wilkinson, and Shantha Mohan.