When AI Does the Junior Work: How Public Affairs Leaders Should Rethink Competence Development

By Paul Shotton, Advocacy Strategy
This weekend, while working through how AI is reshaping public affairs practice, I found myself returning to a chapter I co-authored with Arco Timmermans of Leiden University for the Research Handbook on Public Affairs. We argued that public affairs competence is multidimensional — it combines knowledge, skills, attributes, professional seniority and the ability to operate across different political, institutional and organisational contexts. Re-reading it now, I was struck by how quickly that framework is being stress-tested. Not because it was wrong, but because the pace of AI development is accelerating exactly the tensions it identified.
That is what this piece is about. Not which tools to use, or how to build a monitoring workflow, but what AI means for how public affairs professionals develop the judgement that makes any of that useful.
The apprenticeship profession meets the automation moment
Public affairs has always been, to a significant degree, an apprenticeship profession. Formal education programmes exist, but they are few. Most practitioners learn on the job — monitoring policy files, drafting briefing notes, preparing meetings, mapping stakeholders, observing senior colleagues, gradually building an understanding of how political systems actually work. The profession has relied on repeated exposure to the raw material of public affairs as the primary mechanism for developing competence.
That model is under pressure. AI can now summarise policy documents, compare amendments, produce monitoring reports, generate stakeholder lists, draft briefing notes, test messages and structure consultation responses. These are precisely the tasks through which junior professionals have traditionally learned the profession. And the tools available today are still early versions of what is coming.
What those tasks were actually teaching
In the chapter with Arco, we drew on Kahneman's distinction between System 1 and System 2 knowledge. System 1 is tacit, pattern-based and built through repetition — the kind of knowledge that allows an experienced practitioner to read a political signal before it becomes obvious, or to sense that a stakeholder's stated position is not their real one. System 2 is explicit, analytical and evidence-based — the kind of knowledge that can be taught, documented and applied consciously.
For professional development in public affairs, building explicit System 2 knowledge alongside the more implicit System 1 is crucial. Both matter. And this is where AI creates a genuine developmental risk.
Monitoring may look like routine administration, but it builds System 1 knowledge: how institutions evolve, how language shifts during negotiations, how political signals accumulate before they become visible. Drafting a briefing note looks like a writing exercise, but it teaches prioritisation, synthesis and relevance — what to include and, critically, what to leave out. Stakeholder mapping looks like a technical task, but it develops the feel for the difference between formal authority and informal influence, between an actor who is credible with an institution and one who merely claims to be.
If AI produces the output before a junior professional has worked through the process, System 1 development is interrupted. They receive the summary before learning to read the original. They see the stakeholder map before understanding how relationships are built. They edit a draft message before grasping the political trade-offs behind the wording.
Generic and specific competences: where AI hits hardest
Another distinction in the chapter is worth applying here. We drew a line between generic competences — knowledge and skills transferable across jurisdictions and contexts — and specific competences tied to particular political systems, regulatory cultures or institutional settings. The generic layer covers things like stakeholder analysis methodology, drafting techniques, monitoring protocols and message structuring. The specific layer covers things like reading informal dynamics in a particular parliament, understanding the unwritten norms of a specific regulatory body, or knowing which voices carry weight with a given institution.
AI's substitution effect is not uniform. It is heaviest at the generic level — the tasks that are most codifiable, most replicable across contexts, most amenable to a good prompt and a well-structured workflow. It is lightest at the specific level, where political judgement is deeply contextual and where experience in a particular environment is what matters.
This is significant for how we think about junior development. The generic layer is not just administrative scaffolding. It is where the foundations are built. Practitioners who skip it — or who have it automated away before they have internalised it — may reach mid-level roles with the appearance of competence but without the underlying System 1 pattern recognition that genuine judgement requires. They may be able to use the tools without being able to tell when the tools are wrong.
The role question
Not every professional needs the same relationship with AI, and this is where a clearer sense of competence levels matters. A junior professional needs to become highly capable in using AI to support research, monitoring, drafting and preparation — but also in interrogating outputs, checking sources and identifying gaps that a model cannot see. That critical layer is not automatic. It requires deliberate development.
A mid-level professional needs to integrate AI into workflows, manage quality control across a team, compare outputs against source material and turn analysis into recommendations that reflect political reality. A senior professional needs to make decisions about where AI should and should not be used, what standards are required, what risks are acceptable, and — perhaps most importantly — how the team develops the judgement needed to work with AI responsibly over time.
What public affairs leaders need to do differently
The practical implication is not to slow AI adoption. The tools are useful and will become more so. The implication is to manage the developmental consequences deliberately.
That means mapping not only the tasks where AI saves time, but also the tasks where human learning takes place. It means identifying which activities should remain developmental even when they could be automated. It means designing exercises where junior colleagues critique AI outputs, compare them with original source documents, and explain what is missing and why. It means bringing younger professionals into interpretation and trade-off discussions earlier than the traditional model required. And it means senior professionals making their own reasoning more explicit — because tacit learning through gradual exposure is a less reliable mechanism when the exposure itself is being compressed or bypassed.
AI can be a powerful learning tool if it is used to support reflection, comparison and critique. It becomes a developmental risk when it is used only to produce faster outputs.
Lifelong learning
What made me pause, re-reading the chapter, was a sentence we wrote about competence development continuing across a career — that the profession requires lifelong acquisition and refinement, not just entry-level formation. That is still true. But the conditions under which it happens are changing more quickly than most public affairs teams are yet recognising.
AI can produce outputs. It cannot yet tell you whether those outputs are politically accurate, strategically relevant, ethically sound or institutionally appropriate. That remains the work of a professional with developed judgement. The question is how we ensure that judgement is still being built.