The AI Adoption Gap in Public Affairs: Why Most Teams Should Start Smaller Than They Think
- May 7

By Paul Shotton, Advocacy Strategy
A recent working group I joined with senior public affairs professionals included a discussion about late adopters. The conversation made me reflect on: how do you bring the rest of the team along?
The participants were, almost by definition, early adopters. They had strong personal practice, several configured tools, and a clear sense of what AI is now capable of. The teams behind them looked very different. Some colleagues were not using AI at all. Others were using it occasionally and informally. Most had no shared method to plug into. The gap between those leading the conversation and the rest of the bell curve was wide.
That gap is becoming a management problem in its own right.
Two conversations, one team
If you listen carefully, there are now two AI conversations happening at the same time inside many public affairs organisations.
The first is among early adopters. They are experimenting daily with Claude Projects, custom GPTs, structured prompting, retrieval setups, and increasingly with agents. Their reference point shifts every few months because the technology shifts every few months. What sounded ambitious a year ago is part of their working week now.
The second is happening one rung down. Colleagues are still working out whether AI can reliably help them summarise a meeting, draft a stakeholder note, structure a briefing, or compare two consultation papers. In some organisations, public AI tools are not even permitted; access is restricted to a corporate Copilot environment with limited integrations and tighter data handling.
The gap matters because it is easy to mistake licences for capability. Buying tools, deploying a corporate assistant, or running a single training does not move a team along the maturity curve. As I argued in AI Adoption Is Real. Implementation Is Not, the data show very high adoption across businesses and very low integration. The same pattern is now visible inside public affairs teams: a few advanced users, too little shared method, and not enough institutional knowledge being built.
Access is not capability
This distinction lines up with the wider change management literature. McKinsey’s recent work on change in the age of generative AI argues that organisations will not generate value by simply bolting AI onto existing processes. Workflows have to be rethought, work has to be redesigned, and people have to be supported through structured capability-building.
That argument is not new. Whether you look at Rogers’ diffusion of innovations, the Technology Acceptance Model, or ADKAR-style change frameworks, the underlying principle is the same: people do not adopt new systems simply because the systems are available. They need clarity, practical relevance, visible value, confidence, time, and support.
AI does not change that pattern. If anything, it amplifies it, because the technology touches not only systems and tools but the way people read, write, analyse, and decide.
Start where the work actually sits
In public affairs, the gap between conversation and practice is particularly visible because senior conversations tend to leap to advanced use cases — predictive analytics, sentiment monitoring, AI-assisted legislative forecasting, automated stakeholder dashboards. These are interesting, but they are rarely the right place to begin for a team that is not yet using AI on a daily basis.
For most teams, the first stage of adoption is exactly what Stage 1 of the maturity curve I have used in earlier articles describes: personal productivity. Drafting, summarising, structuring notes, comparing documents, organising information before a meeting. This sounds modest, but it is where confidence is built and where users start to develop a working sense of how the tool actually behaves.
A useful early example is a policy officer preparing a briefing on a new consultation. AI can summarise the consultation, surface stakeholder positions, identify recurring themes, and suggest a first draft structure. The officer reviews, validates, and edits. That single workflow does two things at once. It produces a usable output, and it teaches the user where the tool is strong, where it is weak, how much context it needs, and where their own judgement remains essential. Stage 2 — task augmentation — emerges from that practice rather than being designed in advance.
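That briefing workflow can be made repeatable by capturing it as a reusable prompt template rather than an ad hoc request. A minimal sketch in Python follows; every name, heading, and instruction in it is illustrative, not a prescribed format, and the output is simply a prompt to paste into whatever assistant your organisation permits.

```python
# Illustrative sketch only: one way to turn the consultation-briefing workflow
# into a reusable, structured prompt. All wording and field names here are
# hypothetical examples, not a standard.

def build_briefing_prompt(consultation_title, consultation_text, known_stakeholders):
    """Assemble a prompt asking for a summary, stakeholder positions,
    recurring themes, and a draft briefing structure, with an explicit
    instruction to flag uncertainty so the officer can validate the output."""
    stakeholder_list = ", ".join(known_stakeholders) if known_stakeholders else "none supplied"
    return (
        f"You are assisting a policy officer preparing a briefing on the "
        f"consultation '{consultation_title}'.\n\n"
        "Tasks:\n"
        "1. Summarise the consultation in no more than 300 words.\n"
        "2. List the positions of the stakeholders named below, citing the "
        "passage each position is drawn from.\n"
        "3. Identify recurring themes across the document.\n"
        "4. Propose a first-draft briefing structure (headings only).\n\n"
        f"Known stakeholders: {stakeholder_list}\n\n"
        "Rules: ground every claim in the source text; if the text does not "
        "support an answer, say so rather than inferring.\n\n"
        "Consultation text:\n"
        f"{consultation_text}"
    )

# Hypothetical usage: the officer pastes the full consultation text in place
# of the placeholder and reviews, validates, and edits whatever comes back.
prompt = build_briefing_prompt(
    "Example Digital Services Review",
    "(full consultation text pasted here)",
    ["Industry association A", "Consumer group B"],
)
```

The point of the template is not the code itself but the discipline it encodes: a fixed task list, named stakeholders, and a standing rule that unsupported claims must be flagged, which is exactly where the user's own validation habit gets built.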
Skipping these stages tends to look like innovation but functions as a capability gap. Asking a team that has never used AI seriously to engage immediately with predictive modelling or autonomous workflows almost always produces sophisticated outputs that no one in the team can properly evaluate.
Prompting is still part of AI literacy
There is a fashionable view that prompt engineering is becoming irrelevant because models are getting better at inferring intent. There is some truth in this. Reasoning is improving, multi-step tasks are easier, and the burden of careful instruction is genuinely lower than it was even a year ago.
But for an organisation, prompting still matters. For most users, prompting is the first stage of learning how AI actually behaves. Through repeated interaction, people start to see how context shapes output, how ambiguity weakens results, where hallucinations creep in, and where validation is non-negotiable. That is AI literacy in practice, not a technical specialism.
This matters in public affairs because the work is judgement-heavy. Legislative analysis, stakeholder engagement, briefing preparation, and message development are interpretive tasks. Before teams move toward integrated retrieval, configured workflows, or agent-supported analysis, most users need a period of structured prompt-based practice to build a feel for what these systems can and cannot reliably do.
Sequence matters more than ambition
A more useful way to think about adoption is therefore as a maturity journey rather than an immediate transformation. The five-stage curve I have used in earlier pieces still holds: personal productivity, task augmentation, workflow integration, agentic systems, and organisational intelligence systems.
The mistake most teams make is not ambition. It is sequencing. Advanced use cases depend on more basic ones: workflow literacy, information discipline, confidence in the tools, quality control, and clear human accountability. As I argued in Why Workflows and SOPs Make or Break AI in Public Affairs, the workflow is the map and the SOP is the turn-by-turn instruction. Neither layer can be skipped if AI is going to become reliable rather than performative.
Training plays a critical role here, but it should not be understood mainly as technical instruction. The objective is not to teach people which buttons to press. It is to help them understand how AI fits into the work they already do. McKinsey’s research on change in the age of AI is consistent with the wider adult learning literature: training works when it is practical, immediately relevant, embedded in real tasks, and iterative.
The most effective format is therefore applied. Explain the principle, demonstrate the use case, let participants try it on their own work, review the outputs together, and discuss limitations, validation, confidentiality, and quality control. Done well, this kind of training does more than build AI skills. It forces the team to look closely at how its own work is actually done — and that is often where the real value of adoption begins to emerge.
Not every team member needs the same AI capability
A mature adoption strategy should also recognise that not every public affairs professional needs the same AI competence.
Some practices should become baseline competencies for the whole team: drafting briefings, summarising information, preparing meetings, reviewing stakeholder arguments, developing messages, organising knowledge. These are core public affairs activities, and most people should understand how AI can support them without replacing the judgement they require.
Others are role-specific. A policy officer may need AI to compare consultation documents or prepare issue briefings. A public affairs manager may use it to synthesise stakeholder positions or review campaign progress. A senior leader may need enough literacy to interpret AI-supported analysis and decide what to do with it, without necessarily producing it personally.
A smaller group of advanced users will lead on more specialist capabilities — data integration, workflow automation, custom configurations, monitoring infrastructure. They tend to act as translators between what the team needs and what the technology can plausibly do.
The more useful question is therefore not “how do we train the team in AI?” but “which AI capabilities should be universal, which should be role-specific, and which should be specialist?”
Better tools will not solve work design
There is a growing assumption that, as systems become more intuitive, the need for training will fall away. In some respects that will be true: interfaces will improve, models will reason more, and automation will absorb some of the technical complexity.
But many workplace users are not actually using frontier systems in their full form. They are using AI inside constrained enterprise environments — restricted public access, limited uploads, reduced browsing, fewer integrations. Paradoxically, those constrained tools often demand more skill from the user, not less. When the system cannot do everything, prompting, context management, and disciplined workflow practice matter more, not less.
The implication is straightforward. Even if the tools become dramatically easier, employees will still need to understand when to use AI, how to validate outputs, how to handle sensitive information, and how AI-generated work fits within professional and organisational standards. The long-term capability is not prompt engineering as a niche specialism. It is the ability to integrate machine assistance into professional work in a structured, responsible, and effective way.
The real objective is AI-enabled public affairs practice
For most teams, the journey does not begin with autonomous agents or advanced analytics. It begins with helping people use AI well on the work already on their desks.
That sounds modest, but it is the foundation for everything else. Practitioners who learn to use AI on familiar tasks build a working sense of its strengths and limits. Teams that apply AI inside real workflows start to see where their own processes need clarifying. Leaders who understand both the potential and the limits of AI are better placed to make sensible decisions about investment, governance, and training.
The objective is not AI use as such. It is AI-enabled professional practice: work that is better structured, better supported, better informed, and more capable because machine assistance has been integrated into it deliberately.
That is why most teams should start smaller than they think. Not because ambition is wrong, but because adoption depends on capability. And capability is built through practical use, structured learning, fit-for-purpose tools, and a clear understanding of how the work is actually done.
So the question for public affairs leaders is not “how advanced is our AI?” It is “how confidently and consistently can the team use AI on the work we already do well?”