13.11.2020 | Blog

Wonder why your AI investments fail? Here’s why.


The vast majority of AI projects are doomed to fail – market estimates put the failure rate as high as 85%. The public domain celebrates massive investments into AI, but the return on investment remains elusive, notwithstanding companies like Google and Netflix whose entire business is based on data and AI. Why does it seem like the rest of us are throwing money into the AI pit?

I’ve been working with AI for some years now and, through various projects and conversations with experienced colleagues, I’ve noticed common reasons why AI initiatives fail. By failure I mean that the money and effort invested into AI never pays for itself: either the initiatives don’t deliver anything at all, or they deliver solutions that are never used.

This is an issue we need to tackle head-on; AI is much too valuable a tool to keep missing the mark. In this blog, I will list eight things to look out for – and give you my thoughts on what to do about those (and what not to do!).

1 Lack of vision

There are two root causes to this problem. The first is inadequate understanding of AI at all levels of the organisation. It is understandably difficult to envision AI use cases if we don’t really know what the technology is and isn’t capable of. While we don’t all need deep data science knowledge, a basic understanding of AI and Machine Learning belongs to everyone. It will help us to see how AI can be used to solve strategic business problems or to unlock new opportunities.

The other root cause is ambitions that are focused on technology rather than business. What good is a chatbot if it can’t solve customers’ problems? Why predict your customers’ creditworthiness if none of them are missing payments? While arguably technology is exciting, the vision should always stem from a business need rather than technical ambition.

  • DO focus on identifying problems that are strategically important to your business, and only then think about whether you need AI to solve the problem. If not, great – you’re winning if there’s an easier and cheaper solution!
  • DO NOT hire Data Scientists to tell you what to do with AI. It’s not for them to identify your strategic business problems.

2 Unrealistic expectations

When asking the doers in organisations about their biggest obstacles, they often mention leadership’s unrealistic expectations. As an analogy, if the leaders expect a rocketship when you could deliver a scooter, there’s bound to be disappointment. (Even if it could be a really good scooter that would get you there much faster than walking!)

When discussing use cases analogous to the humble scooter, a defiant comment I often get is “but is it even AI?!”. Does it really matter? Isn’t it more important that we solve the right problems, even if it doesn’t require a state-of-the-art deep neural network? Hyped-up expectations only cause an inability to make the most of today’s technology.

  • DO educate the leaders of the organisation on AI and Machine Learning so that they can calibrate their expectations and support the right AI initiatives.
  • DO NOT oversell the benefits of the AI solution you have in mind. It might help you get funding this time, but it won’t the next.

3 Issues with data

When talking about Machine Learning, we must talk about data. The vast majority of today’s ML solutions are supervised learning where we train a model with historical data to predict the future. The amount of data required to train an ML model depends on many things, but suffice to say we need a lot.
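To make the idea of supervised learning concrete, here is a deliberately minimal sketch: a one-nearest-neighbour classifier “trained” on historical examples and used to label new, unseen data. All the data, feature names, and labels below are invented for illustration – real ML work uses far richer data and proper libraries.

```python
# Minimal sketch of supervised learning: label a new data point with
# the label of the closest historical example (1-nearest-neighbour).
# All data here is made up for illustration only.

def predict(history, new_point):
    """Return the label of the historical example closest to new_point."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(history, key=lambda example: distance(example[0], new_point))
    return closest[1]

# Historical data: (features, label) pairs,
# e.g. features = (usage_hours, error_count) for a machine.
history = [
    ((2.0, 0.0), "healthy"),
    ((3.0, 1.0), "healthy"),
    ((9.0, 7.0), "faulty"),
    ((8.0, 6.0), "faulty"),
]

print(predict(history, (8.5, 5.0)))  # -> faulty
print(predict(history, (2.5, 0.5)))  # -> healthy
```

The point of the sketch is the workflow, not the algorithm: the model can only ever be as good as the historical examples it learns from – which is why the data issues below matter so much.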

The good news is that, as everything becomes digital, we have more and more data to work with – but with that comes the fallacy that we can do anything as long as we have data. Yet nearly all ML projects run into issues with data. The usual suspects you already know: there’s not enough data, or it’s messy, unreliable, or inaccessible.

The biggest need when it comes to data, however, is understanding it in the business context. Without contextual understanding, we have very limited possibilities of using our data to solve business-critical problems. It is this contextual understanding that also allows us to think critically about our data, to identify what kind of biases it contains (data always contains biases!), and to assess whether historical data can represent the future.

  • DO allow sufficient time and resources to investigate your data and its suitability to the use case in question. Engage a diverse group of people with different backgrounds and skills to understand the data in context and challenge it.
  • DO NOT rush to development before sufficient data discovery has been done. This will save you a great deal of time and money in the long run.

4 Ways of working

One of the crucial success factors is combining domain expertise with data science expertise. As business people learn about data and AI, and data people learn about the business, they are better equipped to solve wicked problems together. The collaboration needs to be continuous and natural – this box is not ticked by a workshop at the start of the project. Data science work entails a lot of experimentation, and without an ongoing dialogue with domain experts it’s all but impossible to verify that the solution being built actually addresses the problem at hand.

  • DO build AI solutions in cross-disciplinary teams containing business, technology, data, and design expertise.
  • DO NOT expect success with the old “order-delivery” model where business hands over a specification to the data science team and expects to get a working solution some months later.

5 Proof-of-concept trap

Anyone can build a proof-of-concept in a strictly controlled environment with clean data. The hard part is putting the solution into production. If we work with the proof-of-concept mindset, we inevitably make choices that will not carry through to production. We will make assumptions about data or users that won’t hold in the messy real world, we may choose technologies that cannot be integrated into our production environment, or we may even set out to solve a problem that doesn’t make sense in the real world (like building a trading algorithm for an organisation without a license to trade, or predicting the price of a commodity based on weather data in a market that is heavily controlled by bilateral agreements).

While there might sometimes be a point in doing a proof-of-concept, I argue that it always makes sense to aim for production from day one – what’s to lose?

  • DO design the solution for the real world, taking into account the realities of your customers, context, data, and technology.
  • DO NOT buy marketing hype from salespeople who assure you that a quick proof-of-concept is the way to go. If the solution doesn’t adapt to your data, technology, or the problem you’re out to solve, a proof-of-concept proves nothing at all.

6 Lack of human insight

We humans are not as rational decision makers as we’d like to think. Desk drawers are full of AI initiatives that made perfect sense on paper, yet something went awry. Maybe there wasn’t a sufficient understanding of the intended users’ motivations in the situation where they’re meant to use the solution.

For example, maybe a dynamic pricing solution is meant to help the salespeople, but the customers want an explanation of the price – how can a salesperson justify a machine’s decision? Or maybe customers are greeted by a cheerful chatbot in a channel that they typically use to make a complaint – a bot won’t be able to meet the customers’ emotional needs in such circumstances.

  • DO take the time to understand the intended users’ needs and context. An anthropological approach helps to gain deep insights into the complex motivations of end users. Co-creating the solution will help to ensure a continued match between users’ needs and what the solution will deliver.
  • DO NOT proceed to development before making sure that the intended users want the solution. This will weed out initiatives that nobody wants before any significant spend. Don’t be afraid to pivot (or even discard!) the idea if it doesn’t fulfill users’ needs.

7 Lack of business design

Organisations often put lots of effort into planning the production of AI but forget to plan the consumption of it, i.e. how the results of the AI solution will be used to effect the desired change. The value is only realised once something changes as a result of the solution being used.

For example, if our solution detects anomalies in the functioning of a piece of machinery, what do we do with that information to reap the benefits? Maybe we feed the numbers into a production automation system that will restart the faulty component. Or maybe we’ll alert a human user who will decide whether to send out a repair team. Without the consumption, the solution is worthless.

  • DO design the business processes that will use the solution to realise the value. If you’re building an enabler for many uses, prioritise and select one use case, then implement it through to value creation.
  • DO NOT overlook the realities of the business processes that will need to change. Take a moment to assess if the organisation is ready and willing to embrace the solution, then plan and manage the change.

8 Neglected AI

AI solutions sit between traditional IT and humanity, making decisions or suggestions that perhaps people would otherwise make. They receive input from their environment and act on it based on how they’ve been taught. As the world is constantly changing, so are the inputs that the AI system receives.

The change may be slow and gradual, but at some point it becomes meaningful. At that point, any AI system that hasn’t been looked after will become useless at best and harmful at worst. A product recommendation system that doesn’t learn changing user preferences will keep recommending a juicy steak to a recently-turned-vegan. A drug recommendation system that doesn’t keep up with medical developments might recommend a harmful combination of drugs. A self-driving car that isn’t kept up-to-date with changes in the physical environment and legislation will put lives at risk.
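Even very simple monitoring catches this kind of drift before it does damage. Here is a minimal sketch that compares the average of a feature in live data against what the model saw at training time; the threshold and all numbers are invented for illustration, and real monitoring would use proper statistical tests across many features.

```python
# Minimal sketch of input-drift monitoring: flag when live data has
# drifted too far from the data the model was trained on.
# Threshold and data are invented for illustration only.

def drift_alert(training_values, live_values, threshold=0.25):
    """Return True if the live mean differs from the training mean
    by more than `threshold` as a fraction of the training mean."""
    train_mean = sum(training_values) / len(training_values)
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - train_mean) > threshold * abs(train_mean)

training = [10.0, 11.0, 9.5, 10.5]  # feature values seen at training time
stable = [10.2, 9.8, 10.4]          # live data that still looks familiar
drifted = [15.0, 16.5, 14.8]        # live data the model never saw

print(drift_alert(training, stable))   # -> False
print(drift_alert(training, drifted))  # -> True
```

When the alert fires, that’s the cue to investigate, revalidate, and retrain – exactly the ongoing care the next two points call for.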

  • DO plan for the continuous monitoring, validation, and retraining of the AI system after it goes live. Include the cost of these activities in the business case calculation.
  • DO NOT put AI systems into production and leave them there without supervision. AI systems, much like people, require on-going support, validation, and continuous learning.

In conclusion, the success rate of your AI initiatives can be increased through pragmatic actions. Ensuring focus on strategic value, strengthening collaboration across business, technology, and design, and investing in AI education are smart moves to increase the organisation’s ability to benefit from intelligent automation.

If these topics resonate and you’d like to discuss how to boost your organisation’s AI capabilities, give us a shout – we’d love to help you on your journey.