01 Why board timeline expectations are miscalibrated
Three sources of timeline miscalibration are consistent across UK organisations:
Vendor case studies are best-case scenarios. The Microsoft, Google, and specialist AI vendor case studies that appear in board AI briefings are selected for exceptional outcomes and exceptional implementation quality. They describe what is possible under optimal conditions, not what a typical organisation should expect. Using them as planning benchmarks produces systematically optimistic timelines.
Pilot-to-enterprise extrapolation is linear when transformation is not. A pilot that takes three months to show significant results is not evidence that enterprise deployment will take three months. Enterprise deployment adds data governance complexity, integration work, change management at scale, and organisational resistance that pilots do not encounter. Timelines extrapolated from pilots to enterprise almost always underestimate the real duration by a factor of two to three.
Boards underestimate the foundation investment. The data governance, infrastructure, and organisational capability work that precedes or accompanies AI deployment is invisible in outcome-focused board briefings. When this foundation work takes longer than expected (which it almost always does), boards interpret it as programme delay rather than recognising it as the work that determines whether the downstream AI investment will deliver value.
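The two-to-three-times correction above can be made concrete. The sketch below is illustrative only (the function name, parameters, and default factors are assumptions for this example, not from the source); it shows how a pilot-derived estimate might be adjusted before it reaches a board paper:

```python
# Illustrative sketch (assumed names and defaults): applying the
# two-to-three-times enterprise correction to a pilot-derived estimate.

def enterprise_timeline_range(pilot_months: float,
                              low_factor: float = 2.0,
                              high_factor: float = 3.0) -> tuple[float, float]:
    """Return a (low, high) range in months for enterprise deployment,
    scaling the naive pilot extrapolation by the observed 2-3x factor."""
    return (pilot_months * low_factor, pilot_months * high_factor)

# A three-month pilot implies roughly six to nine months of enterprise
# deployment work, before foundation investment is even counted.
low, high = enterprise_timeline_range(3)
print(f"Plan for {low:.0f}-{high:.0f} months, not 3")
```

The point of the range, rather than a single number, is that it forces the board conversation onto scenarios instead of a false-precision date.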
02 Realistic AI transformation timelines
Based on consistent evidence from UK AI transformation programmes, realistic timelines look like this:
Months one to six: strategy, governance, foundations. Strategy finalised, governance framework established, data audit completed, infrastructure prepared, leadership AI literacy developed. No significant user-facing AI deployment; primarily behind-the-scenes investment.
Months six to 18: targeted deployment and learning. First wave AI deployments to champions and early adopters. Adoption rates of 20-40% of the licensed population. First business outcome evidence emerging. Significant adoption support still required.
Months 18 to 36: scaled deployment and value realisation. Adoption reaching 50-70% of the target population. Business outcome metrics showing meaningful results against the pre-deployment baseline. Governance maturing and becoming more efficient. First genuine ROI evidence.
Month 36 onwards: embedding and optimisation. AI becoming part of the operating model rather than a programme. New use cases incorporating AI by design rather than by transformation initiative. ROI evidence sufficient for investment expansion decisions.
These timelines are conservative by vendor standards and realistic by transformation experience. Organisations that plan to these timelines will likely outperform them; organisations that plan to vendor-standard timelines will likely disappoint their boards.
03 Setting credible board expectations
The timeline conversation with the board requires honesty that many programme leaders are reluctant to provide, because shorter timelines are more likely to receive investment approval.
The problem with overoptimistic timeline commitments is that they do not disappear when they are missed; they become the measure against which the programme is judged. A programme that committed to 12-month AI value at enterprise scale will be in a credibility crisis at month 18, consuming the leadership energy and board confidence needed to complete the work.
More effective: commit to what can be delivered with high confidence in the near term (months one to six), give realistic ranges for the medium term, and frame the longer-term timeline as dependent on the learnings from each phase rather than as a fixed plan.
'By month six, we will have governance established, infrastructure ready, and first wave deployments underway. By month 18, we expect 20-40% adoption in target functions with first business outcome evidence. By month 36, we expect material ROI against committed metrics. Each of these commitments is based on evidence from comparable organisations and is achievable without assumptions about optimal conditions.'
04 Managing timeline credibility through the programme
Timeline credibility is built or destroyed throughout the programme, not just at the start. The habits that maintain it:
Update expectations proactively when timeline changes are visible. Boards that learn about timeline changes from the programme sponsor before they are obvious from the data trust the programme more than boards that learn about them when the data makes them undeniable.
Distinguish between timeline changes caused by external factors and those caused by programme execution. A regulatory development that requires governance work to be redone before deployment is a different type of timeline change from underestimating data migration complexity. Both need to be communicated, but they should not be presented as the same type of issue.
Provide monthly lead indicators alongside the lagging outcome metrics. Lead indicators (adoption rates, change management activity, data readiness status) predict future outcome achievement and give the board visibility of whether the programme is on track before the outcomes themselves are measurable.
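One way to make the lead-indicator habit operational is a simple monthly gate. The sketch below is a minimal illustration under assumed thresholds and field names (none of these figures come from the source); the floors would need to be calibrated to the programme's own phase plan:

```python
# Illustrative sketch (assumed thresholds and field names): a monthly
# lead-indicator check that flags risk before outcome metrics move.

from dataclasses import dataclass

@dataclass
class MonthlyLeadIndicators:
    adoption_rate: float        # share of target population active, 0-1
    change_sessions_run: int    # change management activity this month
    data_sources_ready: int     # data readiness status
    data_sources_total: int

def on_track(m: MonthlyLeadIndicators,
             adoption_floor: float = 0.2,
             sessions_floor: int = 4,
             readiness_floor: float = 0.5) -> bool:
    """Crude gate: all lead indicators at or above their (assumed) floors
    suggests outcomes will follow; any miss is an early warning to the board."""
    readiness = m.data_sources_ready / m.data_sources_total
    return (m.adoption_rate >= adoption_floor
            and m.change_sessions_run >= sessions_floor
            and readiness >= readiness_floor)

print(on_track(MonthlyLeadIndicators(0.25, 6, 5, 8)))  # True
```

The value of a gate like this is not the arithmetic but the discipline: it surfaces a timeline problem months before the lagging outcome metrics can.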
Key Takeaways
1. Board timeline expectations are miscalibrated by vendor case study selection bias, linear pilot-to-enterprise extrapolation (underestimated by two to three times), and the invisibility of foundation investment.
2. Realistic UK AI transformation timelines: months one to six for foundations, months six to 18 for targeted deployment with first evidence, months 18 to 36 for scale and first genuine ROI, month 36 onwards for operating model embedding.
3. Overoptimistic timeline commitments do not disappear when missed; they become the measure against which the programme is judged and the source of the credibility crisis that constrains the programme's ability to complete its work.
4. Commit to high-confidence near-term deliverables, realistic ranges for the medium term, and phase-dependent framing for longer-term timelines; this is more sustainable than fixed optimistic commitments.
5. Maintain credibility through proactive timeline change communication, distinguishing external from execution causes, and providing monthly lead indicators that predict future outcome achievement.