01 Awareness: do people know why AI matters?
The first ADKAR building block, Awareness, asks whether employees understand why the change is necessary and the risks of not changing. For AI adoption, Awareness has a specific meaning: do employees understand why AI is being adopted, what it is expected to achieve, and what it means for their work?
Awareness failures for AI are common and take specific forms. 'I know the organisation is deploying AI' is not the same as 'I understand why AI matters to my specific role and what the organisation expects of me in relation to it.' The first statement describes information received; the second describes actionable understanding.
Diagnostic signal for Awareness barriers: employees can describe AI broadly but cannot answer 'why does this matter for my specific work?' or 'what is expected of me?'
Intervention: role-specific communication that directly answers the 'why does this matter for me?' question. Generic AI announcements create the first type of awareness; segmented role-specific communication creates the second. The investment in segmentation pays back in reduced adoption barriers throughout the ADKAR sequence.
02 Desire: do people want to adopt AI?
Desire is the most underestimated ADKAR building block in AI transformation. Organisations assume that once people are aware of AI's benefits, desire follows naturally. For technology changes with low personal impact, this may be true. For AI, which touches professional identity, job security concerns, and the value placed on hard-won expertise, Desire requires active cultivation.
The sources of Desire for AI adoption are: personal benefit (the individual perceives genuine benefit to themselves, not just the organisation), peer influence (trusted colleagues are adopting and finding value), leadership role modelling (respected leaders are visibly using AI), and organisational culture (AI use is recognised and valued).
Diagnostic signal for Desire barriers: employees understand the business case for AI but make comments like 'I don't really need it' or 'I can already do this myself' or 'I'm worried about what this means for my job.'
Intervention: peer-sourced evidence of personal benefit from colleagues in similar roles, visible leadership AI use, and direct conversation about AI's impact on roles rather than evasive communication that allows worst-case interpretations to fill the information vacuum.
03 Knowledge and Ability: can people use AI effectively?
Knowledge covers what people know about how to use AI; Ability covers whether they can actually perform the required behaviours. Both are necessary; neither alone is sufficient.
Knowledge failures in AI adoption are common and take a recognisable form: employees know AI tools exist and can describe them, but do not know which specific prompts or use cases are most relevant to their role. Generic AI training produces generic Knowledge; role-specific training produces actionable Knowledge.
Ability failures are almost ubiquitous in early AI adoption. Even employees with adequate Knowledge of AI capabilities often find, in practice, that their prompts produce mediocre results, that they struggle to identify where AI fits into their workflow, and that the gap between theoretical Knowledge and practical Ability is wider than they expected. Ability requires practice, not just training.
Diagnostic signal for Knowledge barriers: employees can describe AI broadly but cannot produce useful AI output for their specific work tasks.
Diagnostic signal for Ability barriers: employees can produce AI output in training exercises but do not use AI tools in their actual daily work.
Intervention for both: role-specific, practice-dominant training with immediate application to real tasks, followed by structured 30-day practice periods with peer support from an AI champion.
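The diagnostic logic running through these sections is sequential: ADKAR elements are assessed in order, and the first element below an acceptable level is the barrier point, because later elements cannot compensate for an earlier gap. A minimal sketch of that logic, assuming a hypothetical 1-to-5 self-assessment survey and an illustrative threshold (this is not a Prosci instrument; the intervention labels are paraphrased from this chapter):

```python
# Sequential ADKAR diagnostic sketch. Assumption: each employee rates the
# five elements on a 1-5 scale; the first element below threshold is the
# barrier point to address before anything later in the sequence.
ELEMENTS = ["Awareness", "Desire", "Knowledge", "Ability", "Reinforcement"]

# Hypothetical intervention mapping, paraphrased from this chapter's sections.
INTERVENTIONS = {
    "Awareness": "role-specific 'why does this matter for me?' communication",
    "Desire": "peer evidence, leadership role modelling, honest role-impact conversation",
    "Knowledge": "role-specific, practice-dominant training on real tasks",
    "Ability": "structured 30-day practice period with champion support",
    "Reinforcement": "performance integration, measurement feedback, peer community",
}

def barrier_point(scores: dict[str, int], threshold: int = 3):
    """Return (element, intervention) for the first ADKAR element scoring
    below threshold, or None if no barrier is detected."""
    for element in ELEMENTS:
        if scores.get(element, 0) < threshold:
            return element, INTERVENTIONS[element]
    return None

# Example: Awareness and Desire are in place, but Knowledge is the barrier,
# so the Ability gap is not yet the thing to address.
result = barrier_point(
    {"Awareness": 4, "Desire": 4, "Knowledge": 2, "Ability": 2, "Reinforcement": 3}
)
```

In this example the function returns the Knowledge element even though Ability also scores low, which is the point of the sequence: training interventions come before practice interventions when both appear deficient.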
04 Reinforcement: what sustains the change?
Reinforcement is the ADKAR building block that AI adoption programmes most consistently underinvest in. Training produces Knowledge and initial Ability; it does not, by itself, sustain the behaviour change required for genuine workflow integration.
Effective Reinforcement mechanisms for AI adoption:
Performance integration: including AI adoption expectations in performance objectives, and recognising AI-enabled outcomes in performance reviews and talent discussions. Behaviour that is recognised is repeated; behaviour that is ignored fades.
Measurement feedback: providing individuals and teams with visibility of their AI usage and its outcomes. Microsoft Viva Insights personal dashboards, team adoption rate sharing, and regular 'what has AI done for your week?' conversations provide the feedback that sustains motivation.
Community and peer learning: regular team AI conversations, a shared prompt library that grows as people contribute to it, and a champion network that keeps AI adoption visible and active over the months required for genuine habit formation.
Consequences for non-adoption: in the mature phase of AI transformation, where AI competence has become a reasonable performance expectation, sustained non-adoption should be addressed as a performance matter rather than treated as an acceptable individual choice. This is not about punishing individuals; it is about being clear that AI competence is a professional requirement, in the same way that email competence was a reasonable expectation 20 years ago.
Key Takeaways
1. ADKAR provides both a diagnostic framework (identifying exactly which building block is causing adoption failure) and a design framework (selecting the right intervention for the specific barrier).
2. Awareness for AI requires role-specific understanding of 'why does this matter for my work?', not just general awareness that AI is being deployed; segmented communication is the investment that creates actionable Awareness.
3. Desire is the most underestimated ADKAR building block for AI; professional identity concerns and job security anxiety require active cultivation through peer evidence, leadership role modelling, and honest role impact communication.
4. Ability failures are ubiquitous in early AI adoption; the gap between Knowledge of AI and Ability to use it effectively requires practice-dominant training with immediate real task application and a structured follow-through period.
5. Reinforcement is the most consistently underinvested ADKAR element; performance integration, measurement feedback, peer community, and eventual performance expectations for AI competence are the mechanisms that sustain adoption.
References & Further Reading
- [1] Prosci: ADKAR Model