Are You Behind on AI in Public Affairs? A Practical Framework to Assess Your Team’s Maturity
- Mar 9
- 5 min read

By Paul Shotton, Advocacy Strategy
Almost every public affairs professional I speak to today is experimenting with artificial intelligence. Some are using tools like ChatGPT or Claude to draft emails, summarise documents, or brainstorm ideas. Others are beginning to integrate AI into more analytical tasks such as policy monitoring, document analysis, or briefing preparation.
But one question keeps emerging in conversations with colleagues and clients: are we ahead of the curve, or already behind it?
Artificial intelligence is evolving extremely quickly. New tools appear almost weekly, and capabilities that seemed remarkable only a few months ago quickly become standard features of the next generation of models. This creates a sense of uncertainty for many organisations. It can feel difficult to know whether you are experimenting productively, or whether others are moving much faster.
One useful way to navigate this uncertainty is through the concept of maturity models.
Where Maturity Models Come From
Maturity models have been used for decades to help organisations understand how they develop capabilities over time. One of the most influential examples is the Capability Maturity Model (CMM), developed in the late 1980s by the Software Engineering Institute at Carnegie Mellon University. The model was originally designed to evaluate how organisations develop reliable software processes.
The principle behind it was straightforward. Organisations rarely master complex practices all at once. Instead, they evolve through stages. They begin with informal experimentation, gradually introduce structured processes, and eventually build systems that are optimised and integrated across the organisation.
Since then, maturity models have been widely used in many fields, including cybersecurity, digital transformation, data governance, and enterprise artificial intelligence.
Most maturity models follow a similar logic. Capabilities evolve over time. Organisations move through identifiable stages of development. Maturity depends not only on technology but also on processes, skills, governance, and organisational culture.
Perhaps most importantly, maturity models are diagnostic tools. They help organisations understand where they are today and what capabilities they need to develop next.
Applying Maturity Thinking to Public Affairs
Public affairs work is heavily structured around knowledge workflows. Monitoring legislation, analysing policy proposals, preparing briefings, developing advocacy positions, building message houses, or defining campaign KPIs all involve multi-step analytical processes.
Artificial intelligence can support many of these activities. But the way organisations adopt AI varies widely.
Some professionals use AI occasionally as a personal assistant. Others are integrating it into more analytical workflows. A smaller number of organisations are exploring more advanced approaches such as workflow automation or AI agents.
Drawing on these existing frameworks, it is possible to sketch a simplified maturity curve for AI adoption in public affairs.
The goal is not a rigid classification system, but a practical way of reflecting on where your team currently stands and how your capabilities might evolve.
Stage 1: Personal Productivity
At the first stage, AI is used primarily as an individual productivity tool.
Professionals use tools such as ChatGPT or Claude to draft emails, rewrite documents, summarise texts, brainstorm ideas, or clarify concepts. Use is typically informal and individual rather than organisational. There are no shared processes or structured workflows across the team.
Many professionals in public affairs are currently operating at this stage.
Stage 2: Task Augmentation
In the second stage, AI begins supporting specific professional tasks.
Examples might include summarising policy documents, analysing consultation responses, extracting insights from legislative debates, or drafting briefing notes.
Users often begin introducing more structure at this stage. They develop prompts, templates, or repeatable instructions that guide the model toward consistent outputs.
AI becomes embedded in how certain tasks are performed, but the use remains largely task-specific.
Stage 3: Workflow Integration
At the third stage, organisations begin thinking in terms of workflows rather than isolated prompts.
Public affairs work rarely consists of single tasks. Most outputs are the result of multi-step processes. For example, producing a monitoring report may involve collecting source materials, structuring the information, analysing actors and positions, and generating a structured report.
At this stage, teams begin mapping those processes and integrating AI into multiple stages of the workflow. Standard operating procedures often emerge, guiding how AI is used across the analytical process.
The human user still orchestrates the workflow, but the use of AI becomes more systematic.
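To make the difference from Stage 2 concrete, the monitoring example above can be sketched as a small pipeline in which the analyst still sequences the steps but each analytical step is delegated to a model. This is an illustrative sketch only: the `llm` function is a hypothetical stand-in for a call to any model API, and the step names are assumptions, not a specific product or method.

```python
# A human-orchestrated workflow: the analyst runs each step explicitly,
# but every analytical step is handed to a language model.

def llm(instruction: str, text: str) -> str:
    """Placeholder for a real model API call; stubbed so the pipeline runs."""
    return f"[{instruction}] {text[:40]}..."

def collect_sources(topic: str) -> list[str]:
    # In practice: pull documents from monitoring feeds or legislative databases.
    return [f"Draft directive on {topic}", f"Committee debate on {topic}"]

def monitoring_report(topic: str) -> str:
    """Each stage of the workflow is explicit, reviewable, and repeatable."""
    sources = collect_sources(topic)
    summaries = [llm("Summarise the key points", doc) for doc in sources]
    positions = llm("Identify actors and their positions", "\n".join(summaries))
    return llm("Draft a structured monitoring report", positions)

report = monitoring_report("data governance")
print(report)
```

The point of the sketch is the structure, not the stubs: once the steps are mapped like this, they can be written down as a standard operating procedure and reused across the team.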
Stage 4: Agentic Systems
The fourth stage introduces agents.
Agents are systems that allow language models to execute sequences of tasks automatically. Instead of manually running each step of a workflow, the system coordinates the process. For example, an agent might retrieve policy documents, analyse their content, identify key developments, and generate monitoring reports.
Some organisations are already experimenting with agents for policy monitoring, document analysis, or strategic reporting. But this stage also introduces new challenges. Agents require clearer workflows, more structured data, and stronger governance.
Stage 5: Organisational Intelligence Systems
The most advanced stage moves beyond individual tools or agents.
Artificial intelligence becomes embedded in the organisation’s broader information architecture. Systems connect policy monitoring, stakeholder intelligence, campaign planning, and internal reporting.
AI helps support the continuous flow of information and analysis across the organisation.
Very few public affairs teams are operating at this stage today, but the direction of travel is becoming visible.
A Moment of Reflection: Where Are You on the Curve?
Frameworks like this are useful because they encourage reflection.
It can be helpful to pause for a moment and ask two simple questions. Where are you personally on this maturity curve? And where is your organisation or team?
In many cases the answers will not be the same. Individuals often experiment with AI tools long before organisations integrate them into their processes. Some team members may already be developing sophisticated workflows while others are still discovering the basic capabilities of the tools.
The speed of technological progress can also create anxiety. Many professionals feel they are already behind the curve.
In reality, most organisations are still in relatively early stages of adoption.
It is also important to recognise that maturity evolves under different conditions. Smaller organisations with fewer resources may find it harder to experiment extensively or invest in structured systems.
Maturity models are not meant to create pressure. Their purpose is to create awareness.
Maturing Toward a Moving Target
The idea of maturity models remains useful, but artificial intelligence introduces a unique challenge.
Organisations can control many internal factors that influence their maturity: their willingness to experiment, the clarity of their workflows, the skills of their teams, and the learning culture they create.
But they have very little control over the external environment.
Artificial intelligence is evolving at extraordinary speed. Capabilities that once required complex workflows are increasingly becoming native features of the models themselves.
This means organisations are maturing into a landscape that is changing as quickly as they are adapting to it.
In that environment, maturity may not mean catching up with technology.
It may mean building organisations that are capable of evolving alongside it.