
The Real Reason AI Isn’t Taking Off

By Paul Shotton, Advocacy Strategy
Nov 27, 2025


I promised myself I wasn’t going to write another piece about AI this month. I really did. And then I came across The Economist’s latest article, “Investors expect AI use to soar. That’s not happening,” which shows that AI adoption inside organisations isn’t accelerating—it’s actually slowing.


What struck me most was how strongly the numbers reinforced something I’ve been saying repeatedly: the biggest barrier to meaningful AI adoption is not the tools themselves, but the absence of clear workflows, defined use cases and structured training. The Economist article puts this into sharp relief. According to their analysis of U.S. Census Bureau data, only 11% of workers reported using AI in producing goods or services in the preceding two weeks. Even more surprisingly, that number has actually declined over recent months. Adoption has fallen fastest in large organisations—the ones you’d expect to be best positioned to scale AI use. Other studies referenced in the article show similarly stagnant trends. One found that daily AI use rose only marginally, from 12.1% to 12.6% over an entire year.


And then there’s the adoption gap that tells its own story: 87% of executives say they use AI at work, but only 27% of employees do.


This is one of those statistics that look simple but speak volumes. And it's worth pausing on, because it reveals far more about organisations than it does about the technology itself.

Executives tend to use AI earlier and more confidently for a few reasons. First, their work is unusually suited to AI’s current strengths: summarising, synthesising, scanning large volumes of information, shaping messages and generating options. Second, executives have far more autonomy to experiment. They can change how they work without asking for permission, training or workflow alignment. Third, executives experience immediate, visible wins: faster briefings, clearer talking points, quicker strategic scans. This naturally fuels enthusiasm.


But employees operate under very different conditions. Their workflows are structured. Their responsibilities are interconnected. Their output is audited, edited and passed across teams. Tools that disrupt this flow—even if they promise efficiency—can feel risky. Employees also bear the burden of accuracy, while executives often sit at the synthesising rather than the producing end of the process. And without clear SOPs, consistency expectations or training, employees are right to hesitate. The cost of getting something wrong is not evenly distributed within an organisation.


For anyone working in Public Affairs, this dynamic should sound familiar. Almost every team I speak with tells me a version of the same story: “We know AI matters. We’ve tried a few tools. But we’re not sure how it fits into how we actually work.” And that sentence—the part about how they actually work—is the quiet centre of the whole problem.


The reality is that most Public Affairs teams haven’t fully mapped their workflows. They haven’t unpacked the dozens of micro-steps that make up the real job: the coordination across functions, the triage of issues, the constant prioritisation, the interpretation of political signals, the synthesis of inputs from across the organisation and the translation of complexity into clarity. Drafting is only a tiny fraction of what PA professionals do, yet it’s often the only place where AI experiments begin. When teams limit their exploration to generating text, they quickly conclude that AI is interesting but not transformative. And they’re right—if it’s only layered on top of a workflow that hasn’t been examined, clarified or standardised.


This is why so many teams feel stuck. They experiment, they dabble, they test a few features, but nothing fundamentally changes. It’s not because AI doesn’t have potential. It’s because the foundations for using it aren’t in place. Without clear workflows, there is no clarity about where AI adds value. Without defined SOPs, there’s no consistency in how tools are used. Without specific use cases, there’s no way to distinguish between novelty and impact. And without a structured training plan that reflects real Public Affairs roles and tasks, adoption naturally gravitates up the hierarchy and stalls everywhere else.


The Economist statistics simply make visible what many Public Affairs teams are experiencing privately. Organisations aren’t struggling to access AI. They’re struggling to integrate it meaningfully into the way work actually gets done. And this gap—between enthusiasm at the top and uncertainty in the middle—is exactly what slows adoption and undermines returns. It’s also why many early AI investments fall short of expectations. The tools arrive before the organisation is ready to use them.


The encouraging part is that teams that do make progress share a common pattern. They start by mapping their workflows honestly, not according to job titles or aspiration but according to how decisions, information and responsibilities really move through the team. Once those flows become visible, the bottlenecks stand out. Low-value, repetitive tasks suddenly reveal themselves as opportunities for automation or augmentation. High-value analytical or relational work becomes easier to protect and enhance. And most importantly, the team begins to see the difference between tasks where AI is merely helpful and tasks where it can fundamentally reshape how work is done.


From there, the next steps become far more practical. Teams define use cases anchored in their real work rather than general ambition. They develop SOPs that codify what happens at each stage of a workflow, ensuring consistency even as roles evolve. They build training plans that focus on capability rather than mere exposure, teaching people to use AI in the context of their daily responsibilities rather than in abstract environments detached from reality. And they experiment in small, contained ways that let insight build steadily rather than in aspirational leaps. Over time, those small improvements compound, and the organisation begins to feel genuinely more capable.


In this sense, the current slowdown in AI adoption shouldn’t be interpreted as a failure of the technology. It’s more likely a reflection of organisations coming to terms with what meaningful integration actually requires. Public Affairs teams are not an exception; if anything, they feel these dynamics more intensely because their work sits at the intersection of information flow, interpretation, influence and decision-making. AI can enhance all of that—but only when the underlying structure is clear.


I see the Economist numbers as an invitation. They confirm that even organisations with deep pockets and strong intentions are facing the same challenges Public Affairs teams encounter: unclear workflows, uneven capability, inconsistent experimentation and a gap between executive ambition and operational reality. None of that is fatal. In fact, it’s entirely addressable.


The path forward for Public Affairs isn’t about adopting AI faster. It’s about adopting it smarter. It means starting with one workflow that consistently drains time and energy, mapping it fully, identifying where AI can offer genuine relief or enhancement and building from there. A team that does this not only adopts AI more effectively—it lifts its entire strategic capability.


In a field where competitive advantage is often invisible, the teams that invest in structure, clarity and skill-building will quietly outpace the rest. And ironically, that’s exactly what the latest data on AI adoption is really telling us.
