
Why Is Everyone Suddenly Talking About AI Agents in Public Affairs?

  • Mar 9
  • 5 min read

By Paul Shotton, Advocacy Strategy


What’s the Difference Between an LLM and an Agent?

Over the past year, the conversation around artificial intelligence has shifted noticeably. Not long ago, most discussions focused on tools like ChatGPT and how they could help draft emails, summarise documents, or generate text.


Now the buzzword seems to be something else entirely: agents.


Everyone is suddenly talking about AI agents. Companies are announcing them. Developers are building them. Consultants are proposing them to clients. Entire product roadmaps seem to revolve around them.


But if you listen carefully, there is still a lot of confusion behind the excitement. Many people are not entirely sure what an AI agent actually is, or how it differs from the large language models (LLMs) that have already become part of everyday work.


Understanding that distinction helps explain both the excitement and the limits of what agents can actually do.


To understand agents, it helps to start with the technology that made them possible.


What an LLM Actually Does

Tools like ChatGPT or Claude are based on what are called large language models. At a very basic level, these models are extremely sophisticated text prediction systems.

They are trained on enormous datasets of written material and learn statistical patterns in language. When you ask a question or provide a prompt, the model generates a response by predicting the most likely sequence of words based on what it has learned.


The interaction is typically simple: you provide a prompt, and the model generates a response.


For many tasks, this is already extremely powerful. LLMs can draft documents, summarise reports, analyse policy texts, or explain complex concepts. They can help with brainstorming, writing, and research.


But there is an important practical limitation that experienced users quickly discover. If you ask the model to perform a very complex task in a single step, the quality of the output often declines. The instruction becomes too broad or too vague, or it demands too many analytical steps at once.


In practice, the more complex the task you assign in a single prompt, the more likely the answer becomes uneven or superficial.


Breaking Complex Work Into Steps

Because of this limitation, many advanced users discovered that better results come from breaking complex tasks into smaller steps.


Instead of asking the model to perform an entire analysis in one prompt, you divide the work into stages. Each stage performs a specific function and produces an intermediate output that feeds into the next step.


For example, analysing a policy document might involve a sequence like this:

  • summarising the document

  • identifying the key policy issues

  • extracting the positions of different actors

  • analysing the strategic implications

  • translating that analysis into a briefing or report


Each step is handled with a separate prompt. The outputs from one step become the inputs for the next.
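The chaining described above can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: `call_llm` is a hypothetical placeholder for whichever model API you use, and the stage prompts are simplified versions of the list above.

```python
# A minimal sketch of prompt chaining: each stage is a separate LLM call,
# and each stage's output becomes the input to the next stage.
# `call_llm` is a hypothetical stand-in for an actual model API.

STAGES = [
    "Summarise the following policy document:\n\n{text}",
    "Identify the key policy issues in this summary:\n\n{text}",
    "Extract the positions of the different actors:\n\n{text}",
    "Analyse the strategic implications:\n\n{text}",
    "Turn this analysis into a short briefing:\n\n{text}",
]

def run_pipeline(document: str, call_llm) -> str:
    """Pass the document through each stage in sequence."""
    text = document
    for prompt_template in STAGES:
        # Fill the template with the previous stage's output and call the model.
        text = call_llm(prompt_template.format(text=text))
    return text
```

In this pattern the user's standard operating procedure lives in the `STAGES` list, which is exactly the orchestration role the article describes.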


Over time, this approach often evolves into a structured workflow or a standard operating procedure. The workflow defines the analytical process required to produce a reliable output.


In other words, the user becomes the orchestrator of the process. You guide the model through each stage of the analysis.


In my own work in public affairs, this kind of structured prompting quickly became essential. Monitoring policy developments, analysing legislative texts, developing message houses, or defining campaign KPIs all involve multi-step analytical processes. The LLM can support each step, but the user typically has to guide the sequence.


This is where the idea of agents begins to emerge.


What an AI Agent Actually Is

An AI agent is not simply a more powerful language model. Instead, it is a system that allows a language model to perform multiple actions in order to complete a task.

Rather than responding to a single prompt, an agent can break a task into steps and execute them in sequence. Instead of manually orchestrating each stage of a workflow, the system coordinates the process on your behalf.


You can think of it in simple terms like this: An LLM answers questions. An agent completes tasks.


The language model still performs the reasoning and analysis, but the agent provides the structure that allows the model to carry out a sequence of actions.


For example, imagine a system designed to produce a monitoring report on a legislative development. Rather than asking the model to summarise a document once, an agent might:

  • retrieve relevant documents or transcripts

  • analyse the content of those documents

  • identify key actors and policy positions

  • extract the most relevant developments

  • generate a structured monitoring report using a predefined template


In this case, the language model is still doing the analytical work. But it is embedded inside a system that allows it to perform a series of actions rather than a single response.
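As a rough sketch, that orchestration might look like the following. Every name here (`fetch_documents`, `call_llm`, the report template) is an illustrative assumption, not a real system: the point is only that the sequencing logic sits outside the model.

```python
# A minimal sketch of an agent as an orchestrator: the language model does the
# analytical work at each step, while the surrounding system fixes the sequence
# and wires in tools such as document retrieval. All names are hypothetical.

REPORT_TEMPLATE = (
    "MONITORING REPORT\n\n"
    "Key actors and positions:\n{actors}\n\n"
    "Relevant developments:\n{developments}"
)

def monitoring_agent(query: str, fetch_documents, call_llm) -> str:
    # 1. Retrieve relevant documents or transcripts (a tool call, not the LLM).
    docs = fetch_documents(query)
    # 2. Analyse the content of each document.
    analyses = [call_llm(f"Analyse this document:\n\n{d}") for d in docs]
    combined = "\n".join(analyses)
    # 3. Identify key actors and policy positions.
    actors = call_llm(f"List the key actors and their positions:\n\n{combined}")
    # 4. Extract the most relevant developments.
    developments = call_llm(f"Extract the most relevant developments:\n\n{combined}")
    # 5. Generate the structured report from a predefined template.
    return REPORT_TEMPLATE.format(actors=actors, developments=developments)
```

Note that steps 1 and 5 are not language-model calls at all; the agent's value is in combining model reasoning with retrieval and templating.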


The agent is the system that orchestrates the workflow.


Why Agents Are Generating So Much Excitement

Once you start thinking in these terms, it becomes easy to understand why agents have generated so much interest.


Many professional tasks are not single-step questions. They involve sequences of actions: collecting information, analysing it, structuring insights, and producing outputs. Agents offer the possibility of automating parts of those workflows.


In public affairs, this potential is immediately visible. When I first started thinking about agents, it was easy to imagine dozens of possible use cases: agents that help draft monitoring reports, analyse policy documents and formulate positions, build message houses, support the definition of KPIs for advocacy campaigns, or structure stakeholder intelligence and produce internal briefings.


Public affairs work involves a constant movement between gathering information, interpreting it, structuring it, and translating it into strategy.


Much of this work is too complex to be solved with a single prompt, but structured enough that you can imagine AI supporting the process through a sequence of analytical steps.

That is exactly why agents feel so exciting. They appear to offer a way of moving beyond isolated prompts and toward systems that support real workflows.


But this excitement also hides an important complexity.


Not All Agents Are the Same

The word agent is now used to describe a wide range of systems, from relatively simple tools to much more sophisticated platforms.


At the simplest level, an agent might be a structured prompt embedded in a tool like ChatGPT or Claude. These systems guide the model through a specific task using predefined instructions.


A step further are workflow agents that coordinate multiple analytical steps. These systems may retrieve documents, process information, and generate outputs based on structured processes.


At the most advanced level are custom-coded agents built by developers. These systems can integrate multiple data sources, interact with external tools, access internal knowledge bases, and perform complex workflows automatically.
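What distinguishes this most advanced tier is that the model's own output can decide which tool runs next, in a loop. The sketch below shows that control flow only; the `decide` convention and tool names are illustrative assumptions and do not correspond to any specific framework's API.

```python
# A rough sketch of the control loop in a custom-coded agent: the system asks
# a decision function (in practice, a model call) what to do next, executes
# the chosen tool, feeds the result back, and repeats until a final answer.
# The ("tool", name, arg) / ("final", answer) convention is invented here
# purely for illustration.

def run_agent(task: str, decide, tools: dict, max_steps: int = 5) -> str:
    """decide(task, history) returns ('tool', name, arg) or ('final', answer)."""
    history = []
    for _ in range(max_steps):
        action = decide(task, history)
        if action[0] == "final":
            return action[1]
        _, name, arg = action
        # Run the requested tool and record its result for the next decision.
        history.append((name, tools[name](arg)))
    return "stopped: step limit reached"
```

The `max_steps` guard matters in practice: without it, an agent that never decides it is finished would loop indefinitely.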


All of these systems fall under the umbrella of “agents,” but their capabilities can vary significantly.


Understanding these differences is important because the current excitement around agents sometimes treats them as a single technological breakthrough. In reality, they represent a spectrum of systems built around language models.


Where Things Are Heading

Another reason the conversation about agents has accelerated recently is that the language models themselves are evolving.


Models like Claude and the latest versions of ChatGPT are increasingly capable of analysing large document sets, performing multi-step reasoning, and refining their answers before producing a result.


In some situations, they already behave in ways that resemble simple agents.


This creates an interesting dynamic. Developers and organisations are building agent systems around language models, while at the same time the models themselves are becoming more capable of performing complex reasoning internally.


It is not yet clear how that balance will evolve.


What is clear is that the next phase of AI adoption will not simply involve better prompts. It will involve designing systems and workflows that allow language models to operate more effectively.


And that brings us back to the core distinction.


A large language model generates responses. An agent is a system that allows that model to perform work.


Understanding that difference is the first step toward understanding why everyone is suddenly so excited about agents — and what they might actually change.
