Why Most AI Pilots Stall and How to Break the Cycle
Technology • Jul 1, 2025


If you work in tech or business, you’ve heard it before: “We’re launching an AI pilot.” The excitement is real, the potential is massive, and the sense of being part of something big is hard to resist. But what happens next is far too common: months later, the project is still a pilot, the promised value is nowhere to be seen, and everyone is wondering where things went wrong. According to Forbes, up to 90% of AI pilots never make it into production. This statistic isn’t just a number—it’s the reality for most companies, from financial services to healthcare to manufacturing.
Why do so many promising AI projects stall out before they ever deliver business value? The answer, surprisingly, isn’t about technology at all. It’s about how organisations approach AI from the start. The Wall Street Journal reported recently that companies are struggling to drive a return on their AI investments, with many initiatives “stuck in perpetual test mode” because they focus too much on the tools and too little on the business outcomes. In fact, research from IDC found that out of 33 AI prototypes tracked in a recent survey, only four reached production. The rest never moved past the pilot phase, leaving teams burned out and leadership frustrated.
What’s at the heart of this problem? First, most organisations treat AI like a magic wand—something you “try out” in the hopes that it will produce instant results. In reality, successful AI projects always start with a business problem, not a technology push. Take, for example, the case of a global bank featured in Forbes. The bank developed an impressive AI model to catch fraudulent transactions, and it worked—at least in the lab. But as soon as they tried to deploy it across their business, they ran into a wall of data silos and privacy issues. The pilot never made it into everyday use, not because the model failed, but because the organisation wasn’t ready for it.
This is a classic pattern. Gartner and McKinsey both highlight that over 60% of AI failures stem from data problems: poor quality, scattered ownership, or simply not being able to access what you need. If your data isn’t clean, connected, and available, even the smartest AI won’t help. Add to this the phenomenon of “pilot fatigue”—when teams bounce from one experiment to another, getting little real-world feedback or business traction, eventually losing momentum.
But the good news is this cycle can be broken. The companies that succeed with AI—the ones whose projects actually deliver impact—do things differently. Instead of jumping straight to technology, they start with a pressing business question. It might be as specific as, “How do we cut the time it takes to onboard new customers?” or “How can we reduce claims errors in our insurance workflow?” The technology comes later, after the business case is clear.
Another key lesson is that these teams invest early in building a strong data foundation. That means not just collecting data, but making sure it’s usable, secure, and organised around real business needs. They measure success in terms that matter to the business, not just the IT team—focusing on time saved, happier customers, or fewer manual steps, rather than just accuracy scores or model performance.
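In practice, "usable data" can start with something very simple: automated checks that flag gaps before a pilot is built on them. A minimal sketch of such a pre-pilot readiness check might look like the following (the field names and the 5% threshold are illustrative assumptions, not from any specific project):

```python
# Illustrative pre-pilot data readiness check: flag fields whose
# missing-value ratio exceeds a threshold. Field names and the
# threshold are hypothetical examples.

def readiness_report(records, required_fields, max_missing_ratio=0.05):
    """Return {field: missing_ratio} for fields that fail the threshold."""
    counts = {f: 0 for f in required_fields}
    for row in records:
        for f in required_fields:
            if row.get(f) in (None, ""):
                counts[f] += 1
    total = len(records) or 1  # avoid division by zero on empty input
    return {f: n / total for f, n in counts.items() if n / total > max_missing_ratio}

# Toy sample: two of three records are missing a required field.
customers = [
    {"id": 1, "email": "a@example.com", "signup_date": "2025-01-02"},
    {"id": 2, "email": "", "signup_date": "2025-01-03"},
    {"id": 3, "email": "c@example.com", "signup_date": None},
]
print(readiness_report(customers, ["email", "signup_date"]))
```

A report like this, run before any model is trained, turns "is our data ready?" from a debate into a number the business and IT teams can act on together.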
Scaling is where many pilots stumble, so successful teams plan for scale from the very beginning. They involve IT, compliance, and business leaders right from the pilot stage, making sure that what works in the test environment will also work when thousands of users are involved. This often means building in privacy, security, and governance from day one—not treating them as afterthoughts.
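"Privacy from day one" can also be concrete rather than aspirational: even a pilot dataset can have obvious personal identifiers masked before it leaves the source system. A minimal sketch, with hypothetical field names, assuming simple salted hashing is an acceptable control for the pilot stage:

```python
# Illustrative sketch: mask obvious PII fields with a salted hash before
# data enters a pilot environment. The field list and salt handling are
# assumptions for illustration, not a production-grade control.
import hashlib

PII_FIELDS = {"email", "phone", "name"}

def mask(record, salt="pilot-salt"):
    """Replace PII field values with a short salted-hash token."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS and value:
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

print(mask({"id": 7, "email": "a@example.com", "region": "EU"}))
```

Because the same record always hashes to the same token, joins and aggregations still work in the pilot, while the raw identifiers never leave the source system.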
A real-world case illustrates this approach. McKinsey described how a leading European retailer started small by using AI to optimise pricing in one region. Instead of treating it as a side project, the company’s business and IT leaders worked together from the start, aligning on metrics, data access, and end goals. Because they built with scale in mind, once the initial pilot worked, they rolled it out across dozens of regions, seeing measurable revenue increases and faster pricing decisions across the business.
So what can you do if your AI pilots keep stalling? First, reframe how you think about AI: it’s not about the technology, it’s about the business value. Second, invest in your data and in cross-functional collaboration—AI projects work best when business, IT, and compliance pull together. Third, set realistic metrics and be ready to celebrate small wins on the road to bigger outcomes. And above all, plan for the long term. AI isn’t a magic wand, but with the right approach, it can absolutely deliver on its promise.
If you’re stuck in pilot purgatory, you’re not alone—but you don’t have to stay there. Start with the problem, not the tool. Make your data work for you. Plan for scale, and measure what matters. The companies that break this cycle aren’t just building smarter tech—they’re building stronger businesses.