AI Adoption Is Real. Implementation Is Not.

By Paul Shotton, Advocacy Strategy
The central question for public affairs teams is no longer whether to use AI. It is whether anyone has actually changed how the work gets done — and whether that change is happening by design or by accident.
That distinction matters more than it might seem.
What the data actually show
Across business, the direction of travel is clear. McKinsey's 2025 global survey reports that 88% of organisations are using AI in at least one business function, up from 78% a year earlier. Stanford HAI's 2025 AI Index puts it similarly: 78% of organisations used AI in 2024, up from 55% in 2023. Different studies use different samples and definitions, but the broad picture is consistent. AI adoption is now mainstream rather than marginal.
What the same data also show, with equal consistency, is that maturity is far lower than adoption.
McKinsey finds that most organisations remain in experimentation or pilot mode, with only around a third saying they have begun to scale AI programmes. Separately, 92% of companies plan to increase AI investment over the next three years — yet only 1% believe they are operating at maturity. IBM adds a further indicator: only 25% of AI initiatives have delivered expected ROI, and only 16% have scaled enterprise-wide.
That is the gap worth paying attention to. High adoption, weak integration, uneven returns.
Where public affairs sits
Public affairs teams operate inside organisations that are clearly moving on AI. Large companies may have enterprise copilots, internal LLM environments, or targeted GPT initiatives already in place. But organisational investment in AI does not automatically reach public affairs. A team can sit inside a company spending heavily on AI transformation and still be running largely on individual experimentation — with no strategy of its own.
In practice, that is probably where many teams are: a small number of early adopters working in their own way, with no shared model, no common method, and no institutional knowledge being built.
The wider environment is shifting too. Gallup data released in early 2026 showed that 43% of public-sector employees were using AI at least a few times a year by late 2025, up from 17% in 2023. That is not a measure of public affairs teams specifically, but it tells us something important: the institutions, officials, and stakeholders that public affairs professionals work with and around are themselves changing how they operate.
What adjacent professions suggest
The closest available proxy is public relations. CIPR research found that up to 40% of PR tasks were being assisted by AI tools, and its 2024 State of the Profession report identified AI as both the major challenge facing the industry and the biggest skills shortage. More than half of PR professionals report using AI often or sometimes.
PR is not public affairs. But the workflow overlap is significant: monitoring, drafting, summarising, analysis, message development, internal communication. If AI is reshaping that work in PR, the same pressure is bearing down on public affairs.
Thomson Reuters offers a further data point from professional services more broadly: more than half of professionals across legal, tax, risk and government-related work have used generative AI in some form. At the same time, more than half say their organisations are not measuring ROI for those tools. Adoption is real. Institutional integration is not keeping pace.
Where the evidence holds and where it does not
Across McKinsey, Stanford HAI, Gallup, CIPR, and Thomson Reuters, the adoption story is consistent. The percentages differ, but not in a way that changes the picture. On maturity, the same pattern keeps appearing across sources: widespread activity, much weaker integration.
On ROI the picture is more mixed, though perhaps less contradictory than it first appears. Deloitte reports that 74% say their most advanced generative AI initiative is meeting or exceeding expectations. IBM reports that only 25% of AI initiatives overall have delivered expected ROI. Those findings are likely measuring different things: the best-developed initiative in one case, a broader portfolio in the other. The more plausible reading is that well-run, focused initiatives can create real value — while the average organisational portfolio of AI activity remains highly uneven.
One limit is worth naming directly. The evidence base on AI adoption across business generally is now fairly robust. Evidence on adjacent professions is increasingly useful. What we still lack is strong benchmarking data focused specifically on public affairs as a profession. We can infer a great deal. But inference is not direct measurement, and that is partly why the conversation in the profession can still feel slightly abstract.
The question that actually matters
My conclusion is this: AI adoption has moved faster than organisational implementation, and public affairs sits inside that gap rather than outside it.
The real issue is no longer access to tools. It is whether teams have built the operating conditions for those tools to matter — and whether anybody has taken responsibility for that at a functional level. In most public affairs teams, that question has not yet been asked directly: what is our AI strategy, and who owns it?
In public affairs terms, that means asking harder questions than "Are we using AI?"
Has it changed how you monitor issues? How you produce internal reporting? How you map stakeholders or draft briefings? How you manage institutional knowledge across a team? And how far does any of that extend beyond a few motivated individuals?
Those are not technical questions. They are management questions. And the data suggest most organisations — in public affairs as elsewhere — have not yet answered them seriously.
The market has moved. Adoption is real. But a strategy for the public affairs function specifically — one that goes beyond access to tools and into how the work is designed, measured, and shared — is still the exception, not the rule.