Before you build your AI strategy, build your AI foundation
- marta2253
- May 27
- 5 min read

Co-Founder of Advocacy Academy, Advocacy Strategist and Owner at Paul Shotton Consulting
We are starting to see real momentum. Across the EU public affairs community, and well beyond, there are growing signs of experimentation and early adoption of AI. On LinkedIn, you’ll find some incredible examples: custom GPTs, stakeholder analysis tools, and creative uses of summarisation or translation functions in day-to-day policy work.
These early adopters — the innovators, the explorers — are out in front. They’re testing new tools and sharing lessons. It’s exciting to watch.
But for many organisations, these examples feel distant. They haven’t yet built the internal confidence, clarity, or capacity to take the first step. And this isn’t about attitude. Most teams I work with want to explore AI. But they’re held back by a more fundamental problem: they haven’t laid the groundwork for adoption.
In public affairs, that groundwork matters: ours is a profession that works best when things are structured. So we need to start with the basics.
Step one: do you have a model and a toolkit?
Much of my work — whether in consultancy, training, or workshops — focuses on helping teams build a structured, stepwise approach to public affairs. That begins with a model that maps out the core phases of your work: from scanning and prioritisation to engagement and evaluation.
At each step, there are outputs — deliverables that define whether you’re progressing: a stakeholder map, a monitoring report, a message house, a project tracker, an evaluation grid. These deliverables require methods. And methods require tools — Excel templates, dashboards, shared folders, guidelines, structured formats.
That’s your public affairs toolkit. And it should align with your model.
If you don’t have that structure in place, then any AI application risks becoming a one-off experiment, rather than a tool that supports how your team actually works.
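To make that alignment concrete, here is a minimal sketch of what a model-to-toolkit mapping might look like as structured data. The phase names, deliverables, and tools are illustrative placeholders rather than a prescribed standard:

```python
# A minimal sketch of a public affairs model mapped to its toolkit.
# Phase names, deliverables, and tools are illustrative examples only.
PA_MODEL = {
    "scanning": {
        "deliverable": "monitoring report",
        "tools": ["RSS feeds", "Excel monitoring template"],
    },
    "prioritisation": {
        "deliverable": "issue priority matrix",
        "tools": ["scoring grid", "shared dashboard"],
    },
    "engagement": {
        "deliverable": "stakeholder map and message house",
        "tools": ["Airtable base", "messaging guidelines"],
    },
    "evaluation": {
        "deliverable": "evaluation grid",
        "tools": ["KPI tracker", "project tracker"],
    },
}

def toolkit_gaps(model: dict) -> list[str]:
    """List phases that have a deliverable defined but no supporting tools."""
    return [phase for phase, spec in model.items() if not spec["tools"]]

if __name__ == "__main__":
    # Any phase returned here is a gap to close before layering AI on top.
    print(toolkit_gaps(PA_MODEL))
```

Even a simple mapping like this forces the useful question: for each phase, do we actually know what the deliverable is and which tool produces it?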
Step two: upgrade the toolkit — carefully
Many of us are already in the process of evolving our toolkits. We’ve moved from static Word and Excel documents to more dynamic interfaces: Airtable, Power BI, Smartsheet, Notion, and so on. These tools help structure information, allow collaboration, and create dashboards to track progress and performance.
Some of them also promise AI functionality. And while some features — Canva’s smart design tools, for example — deliver real value, others are less helpful. I’ve found Airtable’s AI offering, for instance, difficult to use. It often produces overcomplicated results when a simple structure would do. That may be down to me — feeding in the wrong prompts or the wrong data — but it’s a useful reminder: not all AI tools are created equal, and usefulness depends on context, clarity, and capability.
Step three: acknowledge the limits — and the potential
Much of the buzz around AI in public affairs is focused on the high end of the spectrum: big data, predictive analytics, algorithmic targeting, automated stakeholder profiling.
These are powerful, but highly specialised tools. They require significant investment and (in-house) data science expertise. Most public affairs teams — especially small to mid-sized ones — don’t have that capacity. And many political scientists, myself included, don’t intend to become statisticians or machine learning engineers.
But what we do need to do is develop the ability to work alongside experts — to interpret results, ask the right questions, and review the quality and relevance of outputs. That means developing foundational literacy, not deep technical mastery.
Step four: experiment with what’s already available
The good news is that large language models like ChatGPT — and its many domain-specific variants — offer a much more accessible entry point. You don’t need a data engineering team. You need curiosity, structure, and a willingness to learn.
And this is where most teams should start.
Not with a grand AI strategy. Not with a custom-built tool. But with everyday workflows.
Ask:
- How do we currently write emails, translate briefings, analyse documents, or define KPIs?
- Can LLMs help us summarise more efficiently, test alternative framings, or generate message drafts? (See the sketch after this list.)
- How could these tools support the way we prioritise issues or structure internal reporting?
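A summarisation experiment can be as small as a few lines of code. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and briefing file are illustrative assumptions, and the same pattern works in the ChatGPT interface with no code at all:

```python
# A minimal sketch: summarising a policy briefing with an LLM.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and file path are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

with open("briefing.txt", encoding="utf-8") as f:
    briefing = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a public affairs analyst. Summarise briefings "
                    "in five bullet points for a non-specialist audience."},
        {"role": "user", "content": briefing},
    ],
)

# The output still needs human review before it goes anywhere near a client.
print(response.choices[0].message.content)
```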
And then — crucially — run these experiments together.
Don’t isolate innovation — build shared practice
One of the real risks at this stage is the “lone innovator” trap. That’s when a single team member (usually a tech enthusiast) builds a custom tool, tests an LLM-based workflow, or creates a dashboard — but does so in isolation.
It may be impressive, but it doesn’t build capability across the team.
That’s why, when we work with clients, we run collective workshops. Well-prepared, structured sessions with diverse team members — not just early adopters. We build use cases together, based on real workflows. We reflect on what works, what doesn’t, and what needs to change. And we embed that learning through repetition, iteration, and shared experience.
This is how you build not just knowledge, but practice.
Step five: identify quick wins — and build momentum
Change takes time. You won’t get it perfect on the first try. But you don’t need to.
The goal is to identify relatively simple, low-risk processes that allow your team to experiment and see the benefit. That’s where the mindset shift starts. That’s where the enthusiasm builds. And that’s how you develop the internal confidence to move toward more complex applications over time.
And along the way, you’ll identify the gaps:
- The wrong user licenses.
- The absence of tools.
- A lack of experience in prompting (see the template sketch after this list).
- Unfamiliarity with workflow logic or process mapping.
These are real and solvable problems — but you can only solve them once they’re visible.
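The prompting gap, in particular, lends itself to a quick fix: a shared template that makes the structure of a good prompt explicit. The sketch below is an illustrative starting point; the field names and wording are assumptions, not a house standard:

```python
# An illustrative reusable prompt template for team experiments.
# The fields and example values are assumptions to adapt, not a standard.
PROMPT_TEMPLATE = """\
Role: You are a {role}.
Task: {task}
Context: {context}
Format: {output_format}
Constraints: {constraints}
"""

prompt = PROMPT_TEMPLATE.format(
    role="public affairs analyst",
    task="Summarise the attached committee report for our weekly update.",
    context="Our organisation follows EU digital policy files.",
    output_format="Five bullet points, plain language, under 120 words.",
    constraints="Flag any claims that need verification against the source.",
)
print(prompt)
```

A shared template also makes experiments comparable across the team, which feeds directly into the collective workshop approach described above.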
Step six: define the policy — and protect the future
Now, of course, we need to talk about data privacy, GDPR, security, and responsible use. These are serious and legitimate concerns. But they cannot become an excuse for inaction.
Because other organisations are already gaining a competitive edge by moving forward. Not necessarily by working faster — but by working smarter. And that matters. In public affairs, quality is a competitive advantage.
So you need to find a solution. Define your AI policy. Set the parameters. Create space for controlled experimentation and internal learning. If your team is blocked from using these tools, they will fall behind. Their skills, outputs, and thinking will remain rooted in outdated practices. And your organisation will feel the consequences — in agility, in quality, and ultimately in impact.
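In practice, "setting the parameters" can mean a short, explicit policy that anyone on the team can check an experiment against. The sketch below captures such parameters as a simple configuration; every category and rule in it is an illustrative assumption to adapt to your own organisation, not legal guidance:

```python
# An illustrative sketch of AI-use policy parameters, not legal guidance.
# Tools, rules, and the review cycle are assumptions to adapt locally.
AI_USE_POLICY = {
    "approved_tools": ["ChatGPT (team workspace)", "Canva"],
    "prohibited_inputs": [
        "personal data covered by GDPR",
        "client-confidential documents",
        "non-public political intelligence",
    ],
    "required_practices": [
        "human review of every AI-assisted output",
        "label AI-assisted drafts in internal documents",
        "log experiments in the shared learning tracker",
    ],
    "review_cycle_months": 6,
}
```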
Final thought: foundations before futures
Wherever you’re starting from — whether you already have a model or still need to build one — remember: you can’t skip the foundational step.
Build your structure. Define your workflows. Review your toolkit. Create shared understanding. Then experiment. Iterate. Learn together.
You’re not just updating your practice. You’re building a capability that should outlive individuals. Early adopters will come and go. But the practice you embed now — the way your team uses tools, understands process, and embraces change — that’s what will shape your organisation’s future.
And yes, it can be fun. These tools can foster creativity. They free up time to think strategically, to analyse more deeply, to summarise more effectively, and to challenge the quality of our own work.
But only if we take that first step — and start building.