Why Your AI Spend Isn’t Paying Back
- Gee Virdi
- May 4
- 6 min read
Most organisations do not lose money on AI because of weak models. They lose money because no one identifies the specific decision the investment is meant to improve.
As a result, many pilots stall, are quietly paused, or are ultimately cancelled. The challenge is increasingly pressing for leaders. Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. Likewise, McKinsey’s research highlights a gap between activity and impact: while AI adoption is widespread, only 39% of respondents report enterprise-level impact on EBIT. For boards, the priority is not more AI activity but AI initiatives that drive measurable outcomes. To do this effectively, organisations must ask targeted questions at the board level.
Decision clarity: What is the single decision this investment will improve (approve/decline, price/reprice, stock/transfer, hire/not hire, renew/not renew)?
Decision gap: What is broken today—too slow, too late, inconsistent, or dependent on a handful of 'hero' individuals?
Business impact: Where is that gap hurting performance right now (margin leakage, working capital, churn, downtime, compliance exposure)?
Competitive edge: If the approach works, why won’t competitors replicate it within 6–12 months? (Data advantage, process advantage, scale advantage, switching costs.)
Evidence: What proof, not hope, shows that better decisions will improve a business metric? (Benchmarks, controlled tests, prior rollouts, audited case evidence.)
Value metrics: Which measures will the CFO recognise, and how will we track them (before/after, control group, payback period, risk-adjusted benefits)?
Accountability: Who is the named executive owner with the authority to change the process, not just run the tool?
Cost of inaction: If we do nothing for 12 months, what gets worse, and what is the quantified cost?
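For readers who want the arithmetic behind the value-metrics question (before/after, control group, payback period, risk-adjusted benefits), it reduces to a few lines. This is a minimal sketch with hypothetical numbers and an assumed 30% risk haircut; none of the figures or parameters come from the cited research.

```python
# Illustrative only: hypothetical figures, not drawn from the article's sources.

def incremental_saving(treated_before, treated_after, control_before, control_after):
    """Monthly saving attributable to the rollout, netting out the background
    trend captured by a comparable control group (difference-in-differences)."""
    treated_saving = treated_before - treated_after
    control_saving = control_before - control_after
    return treated_saving - control_saving

def payback_months(total_investment, monthly_benefit, risk_haircut=0.3):
    """Months to recover the investment after a conservative haircut on
    claimed benefits. The 30% haircut is an assumption, not a standard."""
    risk_adjusted = monthly_benefit * (1 - risk_haircut)
    if risk_adjusted <= 0:
        return float("inf")  # the programme never pays back
    return total_investment / risk_adjusted

# Hypothetical example: monthly write-offs (GBP) before and after rollout.
saving = incremental_saving(
    treated_before=500_000, treated_after=420_000,   # units using the model
    control_before=500_000, control_after=480_000,   # comparable units without it
)
print(saving)                                        # 60_000 per month attributable
print(payback_months(total_investment=900_000, monthly_benefit=saving))
```

A CFO will recognise every line of this: a baseline, a control, a haircut, and a payback period. If a proposal cannot populate those four inputs, it fails the checklist.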
The CxO Decision-Value Checklist: Fund Decisions, Not ‘AI Projects’
The rule is clear: if the proposal team cannot link the investment to a specific decision, a measurable value lever, and a responsible owner, the project should not be funded.
Leaders should apply this checklist to ensure each automation, data, analytics, or AI investment aligns with strategic transformation goals. If a proposal cannot address these questions clearly, it is a technology experiment rather than a true transformation initiative. What separates successful organisations is how they apply this disciplined approach.
They start with a decision that matters: a repeatable choice made daily or weekly, with material financial consequences. They tie the decision to value: not “better dashboards”, but fewer stockouts, lower write-offs, faster collections, lower fraud losses, and an improved win rate.
They redesign the workflow: who decides, when they decide, what changes in policy, and how exceptions are handled.
They make ownership non-negotiable: one accountable senior owner who can change the process end-to-end.
They avoid ‘pilot theatre’: they plan for production—data, controls, adoption, and measurement—before the first demo.
A real-world illustration shows the checklist in action. Gartner notes that many agentic AI initiatives are cancelled because value is unclear or controls are weak, so leaders increasingly begin with narrow, high-value decisions and scale only when outcomes are proven. In supply chain and inventory optimisation, published case evidence suggests that double-digit improvements are achievable when the decision is explicit and the process changes: for example, one case study reports an 8.5% reduction in overstock and an 11% improvement in on-time delivery accuracy after applying AI to inventory optimisation decisions.
With these principles in mind, what should change in practice as soon as Monday morning?
High-performing organisations do not run vague AI programmes; they focus on decision-led business change. Boston Consulting Group found that 74% of companies struggle to achieve and scale value from AI, while a minority outperform in revenue growth, shareholder returns, and returns on invested capital. The key difference is clarity on the decisions that drive the P&L and the operational changes needed to act on these improved insights.
How Leaders Reverse High-Cost AI Programmes (Without Losing Face)
In boardrooms, stopping an AI initiative can look like failure. The best leaders instead convert costly programmes into decision-led transformations: keeping what works, eliminating what does not, and refocusing effort on measurable outcomes.
Sol Rashidi, who has led 200+ enterprise AI deployments, puts hard numbers on what many CxOs suspect: “proof-of-concept purgatory” is real. In a published recap of her talk at the 2025 Data + AI Integration Summit, Rashidi is quoted as saying: “Of the 200 AI deployments I’ve led, only 48 made it into production and are still in use today.” The same recap reports (paraphrased) that training, change management, and adoption planning often receive only 10–15% of total budgets, even though they account for over 60% of success. Source: Nexla blog recap, “From POC to Production: Breaking the AI Stalemate” (nexla.com/blog/from-poc-to-production-breaking-the-ai-stalemate).
The executive takeaway is paraphrased from Rashidi’s framework: don’t “do more AI”—choose what you can execute. High-complexity/low-criticality cases should be deprioritised, and not everything needs AI—sometimes simpler automation is faster and more reliable. Sources: Sol Rashidi's LinkedIn post and CDO Magazine article.
Bill Schmarzo makes the same point from a different angle: if you can’t tie the work to outcomes, it becomes an expensive technology hobby. In an interview episode published by ThoughtSpot’s The Data Chief, Schmarzo says: “data by itself has zero value. In fact, it’s a huge cost”. He then argues (paraphrased) that organisations should pivot from being “data-driven” to being value-driven—because the business does not care about data, it cares about outcomes. Source: YouTube episode “Bill Schmarzo on Why Companies Need to Pivot from Data-Driven to Value-Driven” (youtube.com/watch?v=6W2muoQrs6Q).
Schmarzo argues (paraphrased) that many AI initiatives fail not because of technology but because leaders fail to define and validate outcomes, choose rigorous metrics, and achieve alignment before building. AI, in his framing, is a magnifying glass, not a corrective lens (paraphrased). Source: LinkedIn post.
By combining these perspectives, organisations can implement a straightforward turnaround approach when an AI programme becomes costly:
Call the decision: write down the specific decision the programme must improve (and who owns it). If you can’t name the decision, pause the spend.
Rebuild the business case with friction: insist on baseline performance, a control group where possible, and a finance-approved definition of benefits.
Cut the glamour use cases first (paraphrased from Rashidi’s matrix): deprioritise high-complexity, low-criticality ideas that look good in decks but won’t ship.
Shift budget to adoption (paraphrased from Rashidi’s emphasis): ring fence funding for training, process changes, governance, and controls because “production” is mostly an operating model problem.
Move from ‘data-driven’ to ‘value-driven’ (paraphrased from Schmarzo): treat data/AI as economic assets only when they demonstrably improve outcomes; otherwise, they are cost centres.
To illustrate this approach in practice, consider a 90-day reset vignette: A FTSE‑sized business had spent months funding an “AI transformation” across multiple functions. Costs were rising, benefits were vague, and the programme had become a portfolio of disconnected pilots. The CFO and COO paused new spending for two weeks and demanded one thing: a decision map.
Days 1–30: They reduced the scope from 27 use cases to 4 decisions with clear economic levers (pricing approvals, credit exceptions, inventory replenishment, and churn retention offers). Each decision had a named executive owner, a baseline, and a finance-approved benefits model.
Days 31–60: They redesigned the workflow—what changes, who signs off, what happens when the model is unsure—and ring-fenced budget for training and adoption rather than more data science headcount.
Days 61–90: They took just two decisions into production with controls, measured performance against the baseline, and killed the rest. The result wasn’t “more AI”. It was fewer decisions made late, fewer exceptions handled manually, and a programme the board could govern because value was defined and tracked.
This focused approach, based on ownership and measurable decisions, separates an AI cost centre from a true business transformation. This is what reversing an AI project looks like in practice: rather than retreating from innovation, demand commercial discipline, narrow the scope to critical decisions, and scale only when outcomes are demonstrated.
Sources for the Rashidi & Schmarzo Quotes (For Reference)
Direct quotes are indicated with quotation marks; all other statements are paraphrased interpretations of the cited sources.
Sol Rashidi quotes and statistics (200 deployments / 48 in production; adoption budget shares): Nexla recap “From POC to Production: Breaking the AI Stalemate” (nexla.com/blog/from-poc-to-production-breaking-the-ai-stalemate).
Sol Rashidi “Complexity vs Criticality Matrix”: LinkedIn post “Escaping POC Purgatory with the Complexity vs Criticality Matrix” (linkedin.com/posts/sol-rashidi-mba-a672291_ai-aistrategy-leadership-activity-7423694096730460160-X2c8) and CDO Magazine “Looking into Integrating AI? Think Beyond Business Value Alone” (cdomagazine.tech/opinion-analysis/looking-into-integrating-ai-think-beyond-business-value-alone).
Bill Schmarzo “data has zero value… it’s a cost” and value-driven framing: ThoughtSpot The Data Chief episode on YouTube (youtube.com/watch?v=6W2muoQrs6Q).
Bill Schmarzo “AI design realities” (outcomes first, metrics rigour, fix misalignment): LinkedIn post “AI Design Realities and Leadership Failures” (linkedin.com/posts/schmarzo_why-85-of-ai-projects-fail-and-how-to-fix-activity-7452372271190417409-Ya3V).
No decision impact means no business value. If the decision does not change, neither the outcome nor the return will change.
Call to action: Before your next steering committee or investment review, select one proposed AI initiative and apply the checklist above on a single page, including quantitative measures. If you wish, share your draft business case, and I will help you revise it into a decision-led investment narrative (decision, value, evidence, metrics, owner) that a board can confidently approve.
