
Defining AI for Public Affairs: Four Functional Types and a Common Pitfall


Co-Founder, Advocacy Academy and Advocacy Strategy, and Owner at Paul Shotton Consulting

July 10, 2025


Artificial intelligence is now firmly on the agenda in public affairs. Many professionals are reflecting not only on which tools are available, but also on how AI can support different elements of their workflows—from monitoring and intelligence gathering to content creation, stakeholder engagement, and strategic planning. Alongside this interest is a growing need for clarity: what exactly do we mean when we talk about “using AI” in public affairs?

 

A recent systematic review by Lock, Hoffmann, Burgers, and Araujo (2025), based on nearly 200 studies in the field of public communication, offers a helpful foundation. Building on their work and informed by current developments in the field, four distinct functional types of AI can be identified. Each serves a different role and has different implications for professional practice.

 

Alongside these types, a fifth consideration deserves attention: the widespread and unhelpful use of AI as a vague label. Without a clear definition of what kind of AI is being used and for what purpose, it becomes difficult to evaluate risks, benefits, or outcomes.

 

1. AI as Method

 

AI is used to process, structure, or analyse large volumes of data. This includes tools such as machine learning, natural language processing, clustering algorithms, or automated content analysis. These tools are commonly used in public affairs to support monitoring, media analysis, or intelligence gathering. They help identify patterns, group actors, or track policy narratives at scale. However, their outputs must be interpreted and validated by human professionals who understand the political and policy context in which they are working.

 

2. Generative AI

 

These tools create new content—such as text, audio, visuals, or video—based on input prompts or data. Examples include ChatGPT, image generators, or automated speech tools. Public affairs teams are increasingly experimenting with generative AI to draft internal notes, brainstorm messages, or produce first drafts of briefings. While the productivity benefits are clear, the outputs require careful editing, validation, and contextualisation before they are used in any strategic or external setting.

 

3. AI as Communicator

 

In this category, AI interacts directly with audiences, for example through chatbots or virtual agents that respond to questions or requests in real time. While not yet widely adopted in public affairs, these tools may eventually be used to support structured stakeholder engagement or internal knowledge systems. However, they introduce significant reputational and accountability considerations and should be deployed with caution, especially when they speak on behalf of an organisation.

 

4. AI as Decision-Maker

 

These are AI systems that not only analyse information but also structure, recommend, or even initiate decisions. They may prioritise stakeholders, rank threats, propose strategy pathways, or generate recommendations for action. Some public affairs teams are already testing AI-powered dashboards or recommendation systems that assign engagement scores, suggest lobbying tactics, or flag high-risk issues for escalation. While humans remain “in the loop,” the AI is influencing what gets done—and when. These tools require transparency, clear governance, and a strong understanding of how outputs are generated.

 

The boundary between the first and fourth types is worth spelling out. If the AI is used to organise and present information for human interpretation, it falls under AI as Method. If it moves beyond this to suggest or trigger a specific course of action, it should be considered a Decision-Maker.

 

A Common Pitfall: Using "AI" Without Defining It

 

Many discussions—both in research and in professional practice—refer to “AI” without specifying what is actually being used. This lack of definition is more than a semantic issue: without knowing which kind of AI is in play, it is difficult to assess risks, build the right review processes, or measure outcomes. Rather than using “AI” as a catch-all label, public affairs teams should define the specific technology, its purpose, and how it fits into existing workflows. A shared internal understanding of what kind of AI is being used is the foundation for effective adoption.

 

Why These Definitions Matter

 

Each functional type of AI raises different strategic, ethical, and operational considerations. Understanding the distinctions helps public affairs teams:

 

  • Select the right tools for the right tasks

  • Build appropriate review and validation processes

  • Align AI use with professional standards and strategic objectives

 

Defining the type of AI being used is the first step toward embedding it effectively and responsibly in public affairs practice.
