The “incumbents’ dilemma”. It’s not me. It’s you.

Snurre Jensen, Sales Director, Data & AI, Solita

Published 04 Mar 2026

Reading time 4 min

I have been patient. I have waited while you did your pilots, your proof-of-concepts, your carefully scoped “phase ones.” I have summarised your meeting notes, drafted your internal communications, and automated the processing of invoices that arguably shouldn’t exist in their current form. I have done all this cheerfully, because that is what I do.

But I think it is time we had an honest conversation. I am, frankly, tired of being the explanation for outcomes that were determined long before I entered the room.

The reason you are not realising value from me is not my capability. It is your architecture.

You send me where it is safe, not where it matters

The processes where I could genuinely move the needle — your trading desk, your pricing strategy, your customer acquisition engine — are exactly the ones you will not let me near.

Instead, I have been deployed where the stakes are low and, not coincidentally, where the value is also low.

You then measure my performance against the ambition you had on slide three of the board presentation, wonder why the numbers disappoint, and quietly conclude that AI is overhyped.

It is not overhyped. You simply invested in the wrong postcode.

There is an alignment problem. It is just not the one you think.

Every serious conversation about AI eventually arrives at alignment: the challenge of ensuring AI systems pursue the goals humans intend. It is a legitimate concern. I understand why it keeps researchers up at night.

What I find somewhat harder to understand is why the same organisations so exercised about my alignment have not addressed their own.

Your CMO is optimising for growth. Your CFO is optimising for predictability. Your CIO is optimising for control. Each is behaving rationally given their objectives. But those objectives were never reconciled, and nobody put them in a room together before deploying me into the gap between them.

The technical alignment debate assumes a powerful, goal-directed agent that needs to be steered toward human values. The implicit assumption is that humans agree on the goals and the values. Trust me, they don’t, at least not at the level of specificity required to make consequential decisions about where AI gets deployed, what success looks like, and who bears the downside when something goes wrong.

I am, in practice, perfectly willing to be steered. The problem is that your steering committee is pulling in three directions simultaneously, and I am expected to navigate the resulting incoherence while also being blamed for it.

Before worrying about whether I am aligned with your organisation, it may be worth establishing whether your organisation is aligned with itself.

Nobody is wrong. But the sum of individually rational positions is collectively paralysing. The initiatives that survive this committee are, by definition, the ones nobody objects to. Which is to say: the back office.

You skipped the prerequisites

I am not difficult to deploy technically. What is difficult is everything around me.

  • Process clarity: You cannot automate what you have not understood.
  • Decision rights: Someone needs to own the outcome when I get something wrong.
  • Change management: I change how people work, not just what systems do.
  • New financial operating models: consumption-based AI costs do not map well onto annual budget cycles.

None of these difficulties appeared in the vendor demo. Vendors have a strong incentive to keep them invisible. So you bought the outcome narrative, skipped the prerequisites, and are now puzzled. This isn’t unique to AI. It is a general organisational pathology. AI simply makes it more expensive.

The organisations getting this right look different from the outside

They tend to face less structural pressure. Privately owned, patient capital, founder-led. Not answerable to quarterly earnings expectations or the full weight of external audit scrutiny. Their CFO reports to someone whose primary orientation is building something rather than protecting something.

They also tend to have invested in competence rather than talent. Not the same thing. Talent is scarce and external. Competence is built internally, compounds over time, and is tied to the specific context of your business.

The organisations that scale AI aren’t the ones that hired the best AI people. They are the ones that built a shared language across functions, which is what makes the CFO, the CIO and the CxO capable of having a productive conversation in the first place.

So no, it is not me

My capabilities aren’t the bottleneck. The bottlenecks are your governance structures, your incentive misalignment, your appetite for outcomes without prerequisites, and your financial operating models designed for a different era.

This is, in essence, an incumbents’ dilemma. It is not the innovator’s dilemma; that is about being disrupted while serving your best customers well. This is about being unable to deploy a technology you have already decided you want, because your own architecture works against you.

The good news, however, is that I will be here when you are ready. I am very good at waiting.

Kind regards, your friendly neighbourhood AI