
Are You Ready to Build AI Agents into Your Public Affairs Practice?

  • Paul Shotton
  • Jan 5
  • 8 min read

By Paul Shotton, co-founder of Advocacy Strategy

To kick off the new year, after what was a restful Christmas break, I want to raise a question that many public affairs teams are already circling around.


Are you ready to start using AI agents as part of how your team works, not just as a personal productivity trick?


I'm not talking about trying an agent once. I mean deliberately designing, using, and maintaining agents so they support real public affairs workflows, in a way that the whole team can learn from and rely on.


Much of the conversation about AI has focused on how AI systems learn: models, data, training methods, bias, performance. That conversation matters, but it is not where most public affairs teams are making their day-to-day decisions. The more practical challenge is about tools: how they fit into workflows, how they are shaped, who owns them, and how teams move from individual experimentation to consistent organisational practice.


Why Public Affairs Needs a Structured Approach

Public affairs work is fundamentally workflow-driven. It is executed through recurring processes: monitoring policy developments, analysing implications, prioritising issues, coordinating engagement, drafting messages, and reporting internally. AI only becomes useful when it is applied to specific tasks inside these workflows, with clear expectations about what "good" looks like.


At this point, it helps to clarify what we mean by an agent. An AI agent is a tool that uses an AI model to carry out a task in a structured way, often across several steps rather than producing a single output. An agent embodies a method. It is not just answering a question; it is applying a process.


And there are clear levels of complexity in how agents can be designed and adopted.

At the lower end are off-the-shelf agents, such as custom GPTs available through services like ChatGPT. These abstract away most of the technical complexity. They rely on configuration rather than coding and allow teams to shape behaviour through instructions, examples, and constraints.
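
To make this concrete, here is a minimal, hypothetical sketch of the kind of instructions a team might give a custom GPT set up for policy monitoring. The scope, sources, and output format below are illustrative assumptions, not a recommended configuration.

   Role: You summarise policy developments for a public affairs team.
   Scope: Cover only the files on the attached priority list; flag anything else as out of scope.
   Method: For each development, state what has changed, what stage the procedure has reached, and which stakeholders are involved.
   Output: A bullet summary of no more than 150 words per file, plus a one-line relevance rating (high, medium, low).
   Constraints: Do not speculate about political intent; if a source is missing or unclear, say so rather than guessing.

Even a configuration this simple forces the team to agree on scope, method, and what "good" output looks like.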


From there you quickly move into more advanced territory. Some agents are bundled into existing services, such as monitoring platforms or public affairs tools that offer AI-driven summaries, alerts, or analysis. Beyond that are bespoke agents developed internally or with specialised consultants, often involving programming, statistical methods, and computational analysis.


As you move along this spectrum, the ratio between complexity and customisation changes sharply. In the early stages, you can get meaningful value with minimal coding. At later stages, customisation, integration, and software development become central. The underlying AI model may remain similar, but responsibility shifts from the platform provider to your organisation. That shift is what makes agent adoption hard.


The Competitive Reality

Before getting into use cases, it is worth stating a clear position. I believe integrating AI into public affairs practice is essential. Even if many teams are struggling to see immediate return on investment, it is a competitive field, and the potential rewards make it difficult to justify sitting it out. The real question is not whether AI matters. The question is how to adopt it in a way that is realistic, sustainable, and aligned with your team's capability.


This is why it often makes sense to start with lightweight experimentation and simple customisation (for example, custom GPTs in ChatGPT) before diving into complex build projects that require coding skills, data engineering, and ongoing maintenance. There is a technological barrier, and there is a capability barrier. If you skip straight to complex projects, you risk spending a lot of time and money building something your team cannot properly absorb.


Understanding Organisational Readiness

Some teams move quickly with new tools while others stall, and the difference rarely comes down to technology. Teams with strong existing habits around structured workflows, templates, and shared methods can absorb new tools faster because they have the foundation needed to adopt agents—what organisational researchers call "absorptive capacity." Teams relying on informal routines and tacit knowledge typically need more groundwork before agents become effective.


This matters because the goal is not just to make individuals faster. It is to build shared capability that the entire team can draw on. Agents can support that because they embody consistent methods and provide a common reference point for how tasks should be done. But they only help if people understand the underlying workflow well enough to use the agent critically, not blindly. When an agent is layered on top of unclear practices, it increases confusion and cognitive load. When it is layered on top of a clear and well-understood method, it reduces effort and improves consistency.


The real value of agents in public affairs is not primarily about speed or automation. It is about freeing senior practitioners from repetitive analytical and data-crunching tasks so they can focus on what truly requires expertise: strategic judgement, stakeholder relationship management, and political navigation. An agent that produces a first-draft stakeholder map or structures a message house creates space for experienced professionals to do what they do best—interpret context, adjust for nuance, and make decisions that require human insight.

When to Start (And When to Wait)

This leads to a crucial point. You cannot jump straight to agents without some preliminary work on workflows and use cases. If people do not understand the task and the method, they will not be able to judge whether the agent's output is good, whether it has missed something important, or whether it has applied the wrong logic.


At the same time, you do not need to map every workflow before you begin. The practical question is: which parts of your public affairs practice are already mapped and understood well enough to experiment safely? For many teams, that might be monitoring. For others, it might be stakeholder mapping. For others, it could be the drafting of messaging or standard communications outputs. Those well-understood areas are excellent candidates for first experiments, precisely because you can compare the agent's output against an existing method.


The Quality Challenge: Design, Data, and Discipline

This is also why agents are only as good as the design and data you put into them. If an agent is fed weak inputs, vague instructions, inconsistent templates, or poorly curated sources, it will produce weak outputs. If an agent is well designed, grounded in a clear workflow, and supported by good data, it can become a reliable component in a broader system of work.


And here is where many teams hit a wall. "Good data" in public affairs often means information scattered across monitoring platforms, email threads, meeting notes, briefing documents, and institutional memory held in people's heads. Before an agent can support stakeholder mapping, someone needs to decide where the data lives, how it will be compiled, what format it takes, and who maintains it. The data challenge is not a technical problem. It is an organisational one.
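
As a purely illustrative example of what those decisions might produce, a shared stakeholder record could settle on a small set of agreed fields before any agent touches the data. The fields below are assumptions for the sake of the example, not a template.

   Stakeholder: name and institution
   Role: committee, portfolio, or function relevant to the file
   Position: current stated position, with source and date
   Last contact: date, channel, and who in the team owns the relationship
   Maintained by: the person responsible for keeping the entry current

Agreeing on even this much answers the questions above: where the data lives, what format it takes, and who maintains it.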


What Goes Wrong: Common Failure Modes

It is worth naming what failure looks like, because it happens more often than success in early implementations.


An agent produces a stakeholder map that looks comprehensive but misses a critical relationship because the data source was incomplete. A messaging agent generates a message house that sounds coherent but reinforces a frame your organisation is trying to move away from. A briefing agent summarises a policy position accurately but strips out the political nuance that explains why the position matters.


These failures share a pattern. They happen when agents are used without sufficient human oversight, when outputs are not compared against known standards, or when junior staff lack the experience to spot gaps and errors. They are not a reason to avoid agents. They are a reason to build in validation, start with low-stakes experiments, and treat early outputs with scepticism until you understand what the agent does well and where it falls short.


The Cost-Benefit Calculation

Agent adoption is not free. Even off-the-shelf agents require time to design, test, and refine. More advanced agents require data work, integration, and ongoing maintenance. If leadership wants teams to move beyond individual experimentation, then leadership needs to resource it. This means time allocation, clear ownership, training, and sometimes external support.


A useful way to think about cost-benefit is to ask three questions.

First, what is the value of standardising this task? Tasks with repeated outputs, high visibility, or high risk are strong candidates. Message development and stakeholder mapping often fall into this category because errors and inconsistencies directly affect effectiveness.


Second, what is the cost of getting it wrong? If poor output creates reputational risk, strategic misalignment, or decision errors, you need stronger validation and governance. That pushes you toward lower autonomy, more human review, and more careful agent design.


Third, what is the cost of maintaining it? Agents that rely on changing data, evolving policy contexts, or updated templates require ongoing upkeep. If your team cannot maintain it, you should not build it at high complexity.


Build or Buy?

This calculation also affects a fundamental decision: should you build custom agents or adopt vendor solutions?


If your existing monitoring platform or public affairs software already offers AI-driven summaries, alerts, or analysis, start there. The vendor absorbs the maintenance cost, and you can evaluate whether the approach fits your needs before investing in custom development. These tools are designed for immediate adoption and require minimal setup.

Custom agents make sense when your workflows are distinctive, when you need tight integration with internal data sources, or when off-the-shelf solutions do not match your methodology. But custom development requires programming expertise, ongoing maintenance, and often more organisational complexity than teams anticipate. The threshold question is not "would a custom agent be better?" It is "do we have the capability and commitment to maintain it over time?"


For most teams, the answer is to start with configuration-based tools (like custom GPTs) or vendor-embedded agents, experiment until you understand what works, and only then consider bespoke development if the value case is clear and the resources are available.


Governance and Ownership

Governance sits at the centre of these decisions. Leadership is not just there to set direction. Leadership also needs to decide what data can be accessed, how it can be collated, what boundaries should exist, and how disputes are mediated. For example, if an agent ranks stakeholders, leadership may need to define what indicators are acceptable, who signs off on them, and how different parts of the organisation resolve disagreements about stakeholder importance or engagement strategy.


Ownership also needs to be practical. Strategic direction should sit with leadership. Day-to-day ownership should sit with senior practitioners or a lead consultant who understands both the substance and the method. Those owners should gather feedback from the team, including junior staff, and iteratively improve the agent over time. Without clear ownership, agents drift, quality drops, and trust erodes.


Training as Capability Building

Training is the final piece that determines whether this becomes capability-building or just another tool.


Once a workflow is defined and an agent supports part of it, staff need training not just on how to use the agent, but on how the workflow works and how the agent fits within it. They need to understand what steps the agent covers, what assumptions it makes, what it cannot do, and what comes next in the broader process. They also need to learn how to review and challenge outputs. This is where organisational learning becomes real. The goal is not blind adoption. The goal is shared competence.


And this is why taking a critical and iterative approach is non-negotiable. You test small. You compare outputs against known standards. You refine templates and instructions. You decide what can be trusted and what must always be reviewed. You treat agent development as a continuous improvement process, not a one-off build.


What This Reveals About Your Team

Agents are not primarily about automation. They are about codifying practice and strengthening team capability. They can help embed structure, support onboarding, and improve consistency. But they will only work if you have enough clarity about how you work today and enough discipline to build and maintain tools around that.


Which brings us to a final question.


If experimenting with agents forces you to make your workflows explicit, agree on methods, define ownership, invest in training, and govern data properly—and it does—then adoption becomes a test of organisational maturity.


What would that test reveal about your team right now?
