Why Templates Matter Even More with AI

  • Paul Shotton
  • Sep 30, 2025
  • 3 min read

When working with large language models, outputs can quickly become vague, verbose, or inconsistent. A template counters that by forcing both the human and the AI to work within agreed parameters. It provides structure, limits, and clarity — all of which are essential if you want outputs that can be trusted and used.


Take a monitoring email template I recently drafted. It includes clear sections: title and summary, key developments, background, recommended actions, stakeholder perspectives, and upcoming events. On the surface, this seems like a good structure. But a closer look shows why the criteria inside a template are just as important as the structure itself.
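
For illustration, here is how that skeleton might look if you encode it as a reusable template in code. This is a minimal sketch: the section names follow the email template above, while the bracketed placeholders (and the use of Python itself) are just my way of making the structure explicit.

```python
# A minimal sketch of the monitoring email template, encoded as a
# reusable skeleton. Section names follow the template described above;
# the bracketed placeholder wording is illustrative, not prescriptive.
MONITORING_TEMPLATE = """\
# {title}

## Summary
{summary}

## Key developments
{developments}

## Background
{background}

## Recommended actions
{actions}

## Stakeholder perspectives
{stakeholders}

## Upcoming events
{events}
"""

draft = MONITORING_TEMPLATE.format(
    title="[Issue Name]: [one-line headline]",
    summary="[3-4 bullets, <=100 words]",
    developments="[3-5 bullets, max 2 lines each]",
    background="[<=100 words, cite the official source]",
    actions="[Action] [Owner] [Deadline]",
    stakeholders="[supporters / opponents / allies]",
    events="[date: event, relevance]",
)
```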


Where Templates Meet AI’s Limits

Before we look at what makes a good template, it’s important to acknowledge the limits of what an AI can do with a template.


Large language models are excellent at certain tasks — they can summarise, structure, and condense information with remarkable speed. This means sections like the executive summary or the context and background of a monitoring report are often handled well by an AI when guided by a clear template.

But other sections demand more than summarisation. Take the recommended actions: here, it’s not enough to restate developments or identify implications. Action points depend on an organisation’s strategy, its relationships, its appetite for risk, and its positioning within a policy debate. These are insights that an AI cannot generate reliably without rich context and a conversation with the people inside the organisation.


So while templates provide the scaffolding, we need to remember that AI cannot fill every gap equally. For descriptive and structural elements, it’s a strong partner. For prescriptive or strategic recommendations, human expertise remains indispensable.


What makes a good template?

A good template isn’t just a container. It’s a framework that shapes how you think, write, and act. From my work with monitoring alerts and policy papers, I see seven essential criteria; a short code sketch after the list shows how they can be baked into a prompt:

  1. Defines the scope. Word counts, line limits, and bullet caps keep text concise. Example (good): “Executive summary ≤100 words, 3–4 bullets” → prevents rambling.

  2. Structures the sequence. Sections guide the reader logically from urgency → context → action. Example (good): “Title → Summary → Developments → Background → Recommended Actions → Events → Stakeholders” → ensures information flows, not fragments.

  3. Specifies the lens. Each section needs a perspective: comparative, evidence-based, or action-oriented. Example (good): “Stakeholders: split into supporters, opponents, allies” → forces relational analysis instead of listing names.

  4. Sets tone and style. Clear, simple, persuasive language avoids jargon and keeps policymakers engaged. Example (good): “The goal is to convince, not educate” → ensures advocacy papers stay focused on influence.

  5. Filters the content. Defines what belongs (actors, implications, sources) and excludes what doesn’t (transcripts, generic theory). Example (good): “Background ≤100 words, must cite official legislative text or source” → enforces evidence and relevance.

  6. Embeds accountability. Actions, owners, and deadlines ensure follow-through, not just analysis. Example (good): “Immediate actions: [Action] [Owner] [Deadline]” → assigns responsibility clearly.

  7. Builds traceability and validation. Contacts, sources, and review dates make outputs trustworthy and reusable. Example (good): “Reviewed by [Reviewer], Next review [Date]” → prevents outdated or unchecked drafts from circulating.
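
Here is one way these criteria might move from checklist to practice: per-section rules encoded in Python. The wording and limits mirror the examples above; the dictionary layout and the build_prompt helper are my own illustrative choices, to be tuned to your organisation.

```python
# A sketch of the seven criteria written into per-section instructions.
# The limits mirror the examples in the list above; treat them as
# starting points, not fixed rules.
SECTION_RULES = {
    "summary": (
        "Executive summary: <=100 words, 3-4 bullets. "       # 1. scope
        "Plain, persuasive language; no jargon."               # 4. tone
    ),
    "developments": (
        "3-5 bullets, max 2 lines each: who acted, what "      # 1. scope
        "they did, why it matters."                            # 3. lens
    ),
    "background": (
        "<=100 words. Cite the official legislative text "     # 5. filter
        "or another primary source."                           # 7. traceability
    ),
    "actions": (
        "Immediate actions (max 3), each formatted as "
        "[Action] [Owner] [Deadline]."                         # 6. accountability
    ),
    "footer": (
        "End with: Reviewed by [Reviewer], next review [Date]."  # 7. traceability
    ),
}

# Sections are assembled in a fixed order, so urgency comes before
# context and context before action.                           # 2. sequence
SECTION_ORDER = ["summary", "developments", "background", "actions", "footer"]

def build_prompt(section: str, material: str) -> str:
    """Combine one section's rules with the raw material to summarise."""
    return f"{SECTION_RULES[section]}\n\nSource material:\n{material}"
```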


Three examples where vagueness hurts most

1. Scope (length & focus)

  • Instruction: “Describe the latest development related to [Issue Name].”

  • Problem: Without limits, you get sprawling narratives or even transcripts.

  • Fix: “3–5 bullets, max 2 lines each, focusing only on who/what/why and implications.”

2. Accountability (action orientation)

  • Instruction: “Describe the action steps.”

  • Problem: Leads to vague lines like “continue monitoring.” No one knows who is responsible.

  • Fix: “Immediate actions (max 3): [Action] [Owner] [Deadline].”

3. Traceability (sources & validation)

  • Instruction: “Links to key sources and resources.”

  • Problem: Template doesn’t require validation, so outdated or unchecked updates could circulate.

  • Fix: “Include at least one official source + a ‘Reviewed by [Reviewer], [Date]’ field.”
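
These three fixes share a pattern: each turns a vague instruction into a rule a machine (or a colleague) can check. As a minimal sketch, assuming the bracketed field layout from the fixes above (“Owner”, “Deadline”, “Reviewed by”), a few lines of Python can flag drafts that break the rules before they circulate.

```python
import re

# A minimal validator sketch for the three fixes above. The field
# layout it looks for (owner, deadline, "Reviewed by") is assumed to
# match the template; adjust the patterns to your own house format.
def check_draft(draft: str) -> list[str]:
    problems: list[str] = []
    lines = draft.splitlines()

    # 1. Scope: 3-5 bullets, each short enough to fit ~2 lines.
    bullets = [l for l in lines if l.lstrip().startswith(("-", "•"))]
    if not 3 <= len(bullets) <= 5:
        problems.append(f"expected 3-5 bullets, found {len(bullets)}")
    for b in bullets:
        if len(b) > 200:  # rough proxy for "max 2 lines"
            problems.append(f"bullet too long: {b.strip()[:40]}...")

    # 2. Accountability: every action line names an owner and a deadline.
    for l in lines:
        if l.lstrip().lower().startswith("action"):
            if "owner" not in l.lower() or "deadline" not in l.lower():
                problems.append(f"action missing owner/deadline: {l.strip()}")

    # 3. Traceability: at least one source link plus a review stamp.
    if not re.search(r"https?://", draft):
        problems.append("no source link found")
    if "reviewed by" not in draft.lower():
        problems.append("missing 'Reviewed by [Reviewer], [Date]' field")

    return problems
```

None of this replaces human review; it simply stops the most common template violations from slipping through unnoticed.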


Putting it all together

Here’s how one section looks before and after tightening criteria:

Vague version (original template): “The latest development related to [Issue Name] took place on [Date], when [describe what happened]. This development is significant because [explain why it's important].”

Improved version: “Provide 3–5 bullets, max 2 lines each. Each bullet must include:

  • Who acted (institution, committee, stakeholder)

  • What they did (decision, publication, statement)

  • Why it matters (specific policy or procedural implication).

Do not exceed 10 lines in total. Avoid full transcripts.”
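
To close the loop, here is how the tightened instruction might be wired into a workflow. This is a sketch, not a recipe: call_llm() is a hypothetical stand-in for whichever model you use, and check_draft() is the validator sketched in the previous section.

```python
IMPROVED_INSTRUCTION = (
    "Provide 3-5 bullets, max 2 lines each. Each bullet must include: "
    "who acted (institution, committee, stakeholder); "
    "what they did (decision, publication, statement); "
    "why it matters (specific policy or procedural implication). "
    "Do not exceed 10 lines in total. Avoid full transcripts."
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this up to your model of choice."""
    raise NotImplementedError

def summarise_development(source_text: str) -> str:
    """Draft one 'key developments' section and check it before release."""
    prompt = f"{IMPROVED_INSTRUCTION}\n\nSource material:\n{source_text}"
    draft = call_llm(prompt)
    issues = check_draft(draft)  # validator from the earlier sketch
    if issues:
        raise ValueError(f"draft failed template checks: {issues}")
    return draft
```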


Why this matters

AI is only as strong as the system around it. Without templates, outputs lack structure. Without sources, they lack credibility. Without validation, they lack trust.


For public affairs professionals, that scaffolding is the difference between scattershot experimentation and dependable integration.


The temptation when experimenting with AI is to focus on the tool. But the real magic lies in the scaffolding:

  • Templates that set clear criteria,

  • Source repositories that ensure evidence,

  • Validation systems that create trust.


The question to ask isn’t just “What can AI do for me?” but “Do I have the scaffolding in place to make its outputs reliable?”
