01. The consistency-relevance tension
The fundamental tension in multi-stakeholder AI communication is between consistency and relevance. Consistency requires that all stakeholders receive the same core factual claims about the AI programme. Relevance requires that each audience receives emphasis on the aspects most important to them, in language and format suited to their context.
Inconsistency in facts creates credibility problems: when an employee hears something from their manager that contradicts the board communication, trust in leadership erodes. Inconsistency in emphasis or detail is legitimate: the board needs governance detail that is not relevant to a frontline employee; the employee needs role impact specifics that are not in the board communication.
The communication infrastructure required to manage this well includes: a single source of truth document (the core factual narrative about the AI programme), an audience matrix (what each audience needs to know, in what format, at what frequency), and a coordination mechanism to ensure communications to different audiences are reviewed for consistency before release.
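The infrastructure described above can be sketched as a simple data model. This is a hypothetical illustration, not a prescribed tool: the fact IDs, `AudienceView` fields, and `check_consistency` helper are invented names showing how an audience matrix can be validated against a single source of truth before release.

```python
from dataclasses import dataclass

# Hypothetical single source of truth: every core factual claim about the
# AI programme gets a stable ID that audience communications must reference.
SOURCE_OF_TRUTH = {
    "F1": "The AI programme covers three pilot use cases this year.",
    "F2": "All AI tools are reviewed under the model risk framework.",
}

@dataclass
class AudienceView:
    """One row of the audience matrix: who gets what, how, and how often."""
    audience: str        # e.g. "board", "managers", "frontline"
    fact_ids: list       # which core facts this communication draws on
    format: str          # e.g. "quarterly paper", "team briefing"
    frequency: str       # e.g. "quarterly", "monthly"

def check_consistency(view: AudienceView) -> list:
    """Coordination step: return any claims in a draft that are not
    backed by the source of truth, so they can be fixed before release."""
    return [f for f in view.fact_ids if f not in SOURCE_OF_TRUTH]

board = AudienceView("board", ["F1", "F2"], "quarterly paper", "quarterly")
draft = AudienceView("frontline", ["F1", "F3"], "team briefing", "monthly")

print(check_consistency(board))  # no unbacked claims
print(check_consistency(draft))  # "F3" has no approved core fact behind it
```

The point of the sketch is the review gate: emphasis and format vary per audience row, but any claim without a source-of-truth ID is flagged before it reaches a stakeholder.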
02. Board and investor communication
Board AI communication should cover: strategic rationale, investment envelope, governance framework, risk posture, progress against committed outcomes, and regulatory compliance status. The board needs enough information to exercise meaningful oversight; it does not need operational deployment detail.
For UK listed organisations, investor communication on AI is evolving rapidly. Large investors, particularly those with ESG mandates, are increasingly asking about AI governance, workforce impact, and responsible AI commitments. Getting ahead of these questions with a coherent investor narrative is preferable to responding reactively.
Key principle for board and investor communication: be specific and be honest. Generic statements about AI strategy ('we are committed to responsible AI') have low credibility. Specific, evidenced statements ('our AI programme has delivered [specific outcome] and is governed by [specific framework], with [specific oversight mechanism]') build the credibility that generic statements do not.
03. Manager and employee communication
Manager communication must precede employee communication. Managers who are not adequately briefed before the employee communication arrives are unable to answer the questions they will face, damaging their authority and the credibility of the entire communication.
Manager communication should include: the content of the employee communication (before it is sent), the specific questions managers are likely to be asked and the approved answers, guidance on what managers are authorised to discuss locally and what should be escalated, and a channel for managers to get answers to questions they cannot answer themselves.
Employee communication should be role-segmented: employees in functions significantly affected by AI need different communication from those in functions minimally affected. The 'what does this mean for my role?' question needs a specific answer for each major role family, not a generic 'AI will support your work' statement. Where specific answers are not yet available, the communication should say so honestly and commit to a timeline for providing them.
04. Customer and regulator communication
Customer communication about AI is becoming a regulatory requirement in many UK sectors, not just a trust-building option. The ICO expects transparency about automated decision-making that affects individuals. In financial services, FCA guidance on AI use requires clear communication about where AI influences customer outcomes.
For customer-facing AI communication: be specific about what AI is and is not doing in the customer interaction. 'This response was prepared with AI assistance' is more credible and more compliant than either saying nothing or burying a vague AI disclosure in terms and conditions.
Regulator communication about AI should be proactive rather than reactive. Engaging the relevant regulator (FCA, PRA, ICO, or CMA, depending on sector) with the organisation's AI governance approach before any regulatory inquiry creates a collaborative rather than adversarial dynamic. UK regulators are actively seeking organisations to engage with on AI governance, and proactive engagement is typically received more positively than many organisations expect.
Key Takeaways
1. The consistency-relevance tension requires a single source of truth for core AI facts and an audience matrix that tailors emphasis and detail without creating factual inconsistencies between audiences.
2. Manager communication must precede employee communication, with the employee content, likely questions and approved answers, and a channel for escalating unanswerable questions.
3. Employee communication must be role-segmented; the 'what does this mean for my role?' question requires a specific answer per role family, not a generic AI augmentation statement.
4. Board and investor communication should be specific and evidenced; generic responsible AI statements have low credibility with boards exercising oversight and investors applying ESG scrutiny.
5. Proactive regulator engagement on AI governance (ICO, FCA, PRA) creates a collaborative dynamic that reactive engagement after regulatory inquiry does not; UK regulators are actively seeking organisations to engage with on AI governance.
References & Further Reading
- [1] ICO: AI Transparency Guidance, Information Commissioner's Office