
How to Set Up an AI Ethics and Governance Committee: A Practical Guide for UK Organisations

An AI ethics and governance committee is the organisational mechanism through which boards maintain meaningful oversight of AI without micromanaging its deployment. When designed well, it enables the organisation to move with confidence, providing assurance that AI use is governed consistently with the organisation's values, legal obligations, and risk appetite. When designed badly, it adds governance cost without governance substance. This article covers how to design one that works.

01 The case for a dedicated committee

Some organisations attempt to incorporate AI governance into existing governance structures: the audit committee, the risk committee, or the technology committee. For organisations with modest AI programmes, this may be sufficient. For organisations with material AI deployments, or in sectors where AI governance is becoming a regulatory expectation, a dedicated AI ethics and governance committee provides clearer accountability and more focused attention.

The case for a dedicated committee rests on three arguments. First, AI governance requires specialist knowledge (of AI capabilities, risks, and the rapidly evolving regulatory landscape) that existing committees may not have. Second, AI governance questions cross the traditional boundaries between audit, risk, technology, and legal, requiring a forum that can address all dimensions simultaneously. Third, board members and senior management signal the importance they attach to an issue through the governance structures they create around it. A dedicated AI ethics committee signals that AI governance is a board-level priority, not a technology sub-committee matter.

02 Membership design

The most common design error in AI ethics committees is homogeneous membership: all technology people, or all legal and compliance people, or all business people. Effective AI governance requires diverse expertise.

Core membership should include: a non-executive director with relevant experience (technology, regulatory, or risk) to chair or co-chair the committee; the CIO or CTO; the General Counsel or Chief Compliance Officer; the CHRO (given AI's workforce implications); a senior business leader from a major AI-affected function; and an independent AI ethics specialist (an external member, either a standing appointment or a rotating expert adviser).

For regulated sectors, the committee should include someone with direct knowledge of the relevant regulatory framework: in financial services, someone with FCA and PRA experience; in healthcare, someone familiar with NHS Digital and MHRA requirements.

The committee should be small enough to be functional (five to eight members) but broad enough to cover the expertise required. A 12-person committee that meets quarterly to discuss papers will not provide genuine governance; a five-person committee that meets monthly to make real decisions will.

03 Mandate and decision rights

The committee's mandate should be agreed and documented before its first meeting. A clear mandate covers three areas:

Approval authority: what AI deployments require committee approval before proceeding? A materiality threshold is required. Common criteria: any AI deployment using personal data; any AI application that makes or significantly influences decisions affecting individuals; any AI deployment in a regulated function; and any AI investment above a defined financial threshold.
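The materiality screen above can be expressed as simple decision logic, which some organisations codify in their intake tooling so that no deployment bypasses committee review. This is an illustrative sketch only: the field names and the financial threshold are assumptions, not prescriptions, and each organisation should set its own criteria.

```python
from dataclasses import dataclass

@dataclass
class AIDeployment:
    """Illustrative intake record for a proposed AI deployment.

    Field names are hypothetical; map them to your own intake form.
    """
    uses_personal_data: bool
    influences_individual_decisions: bool
    in_regulated_function: bool
    investment_gbp: float

# Hypothetical materiality threshold; each organisation defines its own.
INVESTMENT_THRESHOLD_GBP = 250_000

def requires_committee_approval(d: AIDeployment) -> bool:
    """Any single criterion being met triggers committee review."""
    return (
        d.uses_personal_data
        or d.influences_individual_decisions
        or d.in_regulated_function
        or d.investment_gbp >= INVESTMENT_THRESHOLD_GBP
    )
```

The "any one criterion" design is deliberate: a disjunctive test is easy to audit and errs towards escalation, which is usually the right default for a governance gate.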

Policy authority: the committee owns the organisation's AI ethics framework, AI acceptable use policy, and AI risk appetite statement. These documents require committee approval and the committee is responsible for keeping them current.

Oversight function: regular review of the AI portfolio's risk posture, incident reports from AI deployments, regulatory developments relevant to the organisation's AI use, and the results of any AI audits or assessments.

Without clearly defined decision rights, the committee becomes an advisory body whose recommendations can be overridden by operational decisions. A committee with genuine authority over AI deployment approvals has the influence needed to make governance substantive.

04 Operating rhythm and reporting

An effective AI ethics and governance committee typically meets monthly or bi-monthly, with quarterly board reporting.

Monthly meeting agenda structure: standing items (AI incident review, regulatory update, new deployment approvals) and a rotating deep dive (one aspect of the AI governance agenda each meeting: data ethics, workforce impact, specific regulatory question, third-party AI vendor review).

The committee should produce a quarterly board report covering: AI portfolio governance status, material AI risks and their management, regulatory developments and their implications, and any AI ethics incidents or near-misses.

For the annual report, the committee's work should be summarised in a way that can be included in the organisation's governance statement. UK investors, regulators, and civil society are increasingly attentive to AI governance disclosures. A credible, substantive description of the committee's work and its outcomes provides assurance to external stakeholders while creating internal accountability.

Key Takeaways

  1. A dedicated AI ethics and governance committee is appropriate when AI deployments are material; incorporating AI governance into existing committees lacks the specialist attention and cross-boundary authority required.
  2. Effective membership is diverse: NED chair, CIO, General Counsel, CHRO, a major business function leader, and an independent AI ethics specialist, kept to five to eight members for functionality.
  3. The committee needs genuine decision rights: approval authority over material AI deployments, policy ownership, and an oversight function covering risk posture, incidents, and regulatory developments.
  4. Monthly or bi-monthly meetings with quarterly board reporting; the board report should cover portfolio governance status, material risks, regulatory developments, and AI ethics incidents.
  5. Annual report AI governance disclosure is increasingly scrutinised by UK investors and regulators; a substantive committee narrative provides both external assurance and internal accountability.
