Why are so many AI projects failing? AI is an investment like any other, needing more than the latest and greatest tech to improve the bottom line
- Gilbert Hill
- May 26
Updated: May 28
Rapid pilots are being commissioned and proofs of concept (POCs) hacked together without consideration for practical governance; if you want AI to transform your business, just ‘moving fast and breaking things’ will leave you with a herd of white elephants…

Not since the rise of the web over 30 years ago has there been such a wave of excitement, investment and experimentation in new technology as the one now surrounding AI-enabled tools, apps, agents and processes. Suddenly AI is everywhere, and when used well its output is almost indistinguishable from the ‘real thing’.
This boom has raised concerns among lawmakers and technologists about the risks of hallucinations, infringement of data rights and the as-yet-untested limits of AI’s impact on our digital and real lives. Even Vitalik Buterin, inventor of the Ethereum blockchain, is now diverting some of his fortune into a foundation and research aimed at heading off the potential threat to humankind.
In response, a range of ethical frameworks, committees and, more recently, regulations have spread across different parts of the world. Bodies including the ISO and the IAPP have produced new frameworks and training to give the builders and stakeholders of AI-enabled tech a common understanding of concepts and terminology.
But this work isn’t feeding through to the stakeholders, committees and investors who back new AI projects, whether home-grown or, increasingly, bought in. While many AI projects and widgets remain ‘solutions looking for problems’, a lack of clarity about the value versus the risk of using them means they are likely to come off as half-baked.
This is a shame because, unlike some other waves of tech excitement (NFTs spring to mind), AI has the potential to change the world in significant ways, beyond a zero-sum game of more robots = fewer humans in the workplace.
A recent example of getting it right is British Airways, which had struggled with a rising number of disrupted flights since the end of the COVID pandemic, particularly at its congested Heathrow hub.
The airline invested £100 million in a multi-year, integrated programme of “operational resilience” that combined new AI technologies with 600 new staff, hired and trained in their use. Now BA is better able to calculate when to cancel a flight rather than delay it, and to proactively re-route planes around the bad weather that causes delays.
All this activity had a clear goal, which has now been achieved: in the first quarter of 2025, 86% of BA flights left Heathrow on time, the airline’s best performance on record.
In the absence of clear guidance from governments that make AI a key pillar of their growth forecasts, unless stakeholders build in clear measurement, operational support and governance, the risk and expense of scaling new AI pilots and gizmos will outweigh the benefits.
This leaves too many good ideas on the launchpad, or at risk of being snapped up too early by foreign competition (as with the sale of the UK’s DeepMind). Worse still, building before proper governance is in place could mean a repeat performance of recently hyped tech such as Augmented Reality and the Metaverse.
When it comes to the data that powers AI, even luminaries like Tim Berners-Lee have so far failed to ‘move the needle’ on greater data mobility and value exchange beyond what we surrender to the platforms. The cookie banner, a stop-gap tool in whose development I played a part, remains immune to AI and is still the only way we can exercise the control and rights granted by GDPR and similar laws.
I now believe the best way to effect positive change in people’s lives, whether through improved health, economic or social opportunities, is through the tools and institutions that already hold our data, enjoy our trust and have the maturity that comes from working in a regulated space. In the UK, that could mean the key financial players represented by CFIT, the NHS or HMRC.
Crucially, such institutions also understand the need for governance processes and control tools baked into business as usual. Right now, most organisations are either locking down employee access to all AI tools (meaning staff find ways around the ban with ‘shadow IT’ tactics) or letting them run free and hoping for the best.
An AI policy is a good start, but if the resulting committee has no idea which AI is in use or which sensitive data is powering those tools, there is neither visibility nor control. Alongside the well-documented risks, an approach to AI that skimps on governance and planning will not produce meaningful output; 80% of projects are currently in “failure mode”, largely for this reason.
However, if you put the work up-front into operational governance at the POC stage, with the right assessments and tools (many of them also AI-powered), it is far more likely that your use of AI will add to the lasting success of the project. No more white elephants!