
Most Policy Dashboards Are Built to Impress Leadership, Not to Guide Them.

Feb 27

By Paul Shotton, Advocacy Strategy


That's not an accusation. It's just what happens when you start with the wrong question. Teams spend weeks debating which metrics to use, how to weight them, which tool to build in. They produce something that looks rigorous. Leadership nods. It goes into a folder.

Nothing changes.


Call it a dashboard, a portfolio review, or a one-page executive summary — the format matters less than most people think. What matters is the logic underneath. And the question that should come first is not "how do we score our issues?" It is "what exactly changes when something becomes a priority?"


If the answer is unclear — if moving an issue from medium to high doesn't trigger a different reporting cadence, a different budget conversation, a different level of senior engagement — then the scoring is decorative. You've built a label, not a system.

There's also something most guides on this topic miss entirely. Before you design anything, walk down the corridor. Your risk, legal, or finance function almost certainly has a risk framework that leadership already trusts — with defined impact thresholds, escalation logic, and language the board actually speaks. If your prioritisation approach connects to that, you stop producing a separate public affairs report that competes for attention. You become part of the governance infrastructure that already exists.


That changes how seriously the work is taken.


What follows is a practical framework for building something that does exactly that — whatever format you choose to put it in.


Start with consequences, not criteria

Before you score a single issue, define what priority actually means in your organisation.

When something becomes top priority, what changes? Does it trigger dedicated strategy? Senior engagement? Weekly reporting? Budget reallocation? Formal escalation?

And when something is medium priority? Targeted engagement at key moments only? Monthly updates? No external spend?


If you cannot describe what changes by tier in concrete, operational terms, your prioritisation system is cosmetic. A tier without behavioural consequences is just a label. A tier with defined reporting cadence, engagement intensity, and resource implications is governance.
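One way to force that discipline is to write the tier-to-consequence mapping down as data rather than leave it implicit. A minimal sketch in Python; the tier names, cadences, and consequences below are illustrative assumptions, not a recommended standard:

```python
# Illustrative tier definitions. Each tier carries explicit operational
# consequences, so moving an issue between tiers changes real behaviour.
# All names and cadences here are assumptions for the sketch.
TIER_CONSEQUENCES = {
    "top": {
        "reporting": "weekly",
        "engagement": "dedicated strategy, senior sponsor",
        "budget": "reallocation permitted",
        "escalation": "formal, to executive committee",
    },
    "medium": {
        "reporting": "monthly",
        "engagement": "targeted, at key moments only",
        "budget": "no external spend",
        "escalation": "none by default",
    },
    "watch": {
        "reporting": "quarterly",
        "engagement": "monitoring only",
        "budget": "none",
        "escalation": "none",
    },
}

def consequences(tier: str) -> dict:
    """Return the operational consequences attached to a tier.

    Raises KeyError for an undefined tier: a tier with no defined
    consequences is, per the argument above, just a label.
    """
    return TIER_CONSEQUENCES[tier]
```

If you cannot fill in a table like this for your own organisation, that is the diagnosis: the tiers exist on paper but not in practice.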


Most teams define the scoring before defining the stakes. That is the wrong order.


Name the decision, then choose the metrics

Every prioritisation exercise must support a specific decision. Not a general conversation. A decision.


In annual planning: Are we allocating resources according to strategic exposure? In campaign management: Are we on track — or do we need to intervene? In portfolio review: Are we over-invested in low-impact files?


If you cannot write the decision in one sentence, stop. Clarify that first. Metrics come after.

Once the decision is clear, a robust prioritisation structure for public affairs usually needs four to six metrics, not more. For most teams: impact on business, likelihood of the political outcome, timing of that impact, and realistic ability to influence.


The test for every metric is simple: if this metric were removed, would the decision change? If not, cut it.


Define each metric precisely

Without definitions, you do not have data. You have opinions.


Each metric needs a short definition, a clear 1–5 scale, anchored descriptions at each score level, agreed interpretation across the team, and a defined update rhythm. Structured judgment is not the same as guesswork. Public affairs is not a laboratory — precision has limits — but discipline does not.
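To see what "anchored descriptions at each score level" looks like in practice, here is a sketch for one metric. The wording of each anchor is a hypothetical example for "impact on business", not an endorsed scale; the point is that two scorers reading the same file should land on the same number:

```python
# Hypothetical anchored 1-5 scale for one metric ("impact on business").
# Every level gets a concrete description; a bare number is an opinion,
# a number with an agreed anchor is data.
IMPACT_ANCHORS = {
    1: "No measurable effect on revenue, cost, or licence to operate",
    2: "Minor operational adjustment in one market",
    3: "Material compliance cost or process change in several markets",
    4: "Significant revenue exposure or constraint on a core product line",
    5: "Threat to the business model or licence to operate",
}

def validate_score(metric_anchors: dict, score: int) -> int:
    """Accept only scores that map to an agreed anchor description."""
    if score not in metric_anchors:
        raise ValueError(f"Score {score} has no agreed anchor; re-calibrate.")
    return score
```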


This is also where you get business ownership into the process. Impact scores should not be set by the public affairs team alone. Pull in the people who understand what regulatory exposure actually costs.


Before you build anything — look inside your own organisation

Here is something I learned the hard way: before finalising your metrics and scoring language, walk down the corridor first.


Most organisations already have a structured approach to risk assessment somewhere — in legal, finance, compliance, or internal audit. These teams have spent years building frameworks that leadership already understands and trusts. They have defined what "high impact" means in business terms. They have escalation logic, thresholds, and risk language that the board already speaks.


Your job is not to reinvent that. It is to find it and connect to it.


If the risk department uses a 3×3 likelihood-impact matrix, see whether your framework can align with it. If finance classifies risk by revenue exposure bands, use the same ranges for your impact scale. If legal defines "critical" as anything that goes to board level, adopt that threshold.
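Where finance already classifies exposure by revenue bands, the translation can be mechanical. A sketch under one loud assumption: the band boundaries below are placeholders, and the real thresholds should be copied from the finance framework, not invented here:

```python
def impact_score_from_exposure(exposure_m_eur: float) -> int:
    """Translate revenue exposure (EUR millions) into a 1-5 impact
    score by re-using finance's existing bands, so a "4" here means
    the same thing as a "4" on the enterprise risk register.

    Band boundaries below are hypothetical placeholders.
    """
    bands = [(1.0, 1), (10.0, 2), (50.0, 3), (250.0, 4)]
    for upper_bound, score in bands:
        if exposure_m_eur < upper_bound:
            return score
    return 5  # anything above the top band is critical
```

The payoff of borrowing the bands rather than inventing new ones is calibration for free: finance has already argued about where the thresholds sit.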


It saves you a lot of calibration work — the definitions are already done. And it makes your output immediately readable to leadership. When a senior leader sees a Tier 1 classification in your policy portfolio and it maps to what they already recognise as critical in the enterprise risk register, the conversation changes. You are not asking them to learn a new system. You are showing them that regulatory exposure belongs in the same frame as the risks they already manage and take seriously.


It also opens a practical door. If the risk function runs quarterly reviews, your public affairs prioritisation can feed directly into that process. You stop producing a separate report that competes for attention and start contributing to the governance infrastructure that already exists.


The translation takes some work. "Likelihood of legislative adoption" becomes "probability of material regulatory change." "Ability to influence" becomes "mitigation capacity." But it is worth doing. It is the difference between a prioritisation exercise that sits in a public affairs folder and one that ends up on the risk committee agenda.


Move from issue scores to portfolio signals

Scoring individual issues is necessary. But governance requires portfolio-level visibility.

Leadership does not only need to know how one file scores. They need to understand where exposure is concentrated, how many issues sit in each tier, whether short-term risk is increasing across the portfolio, and whether resource concentration reflects exposure concentration.


Whatever format you use — a slide, a one-pager, a live tracker — it should answer four questions within sixty seconds: Where are we most exposed? What needs attention now? Where are we investing? What decision do you need from us?


If it takes longer than that, simplify.
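The roll-up from issue scores to those portfolio signals is simple aggregation. A sketch assuming each issue is a record with a tier, a short-term-risk flag, and a region; the field names are assumptions for illustration, not a schema:

```python
from collections import Counter

def portfolio_signals(issues: list[dict]) -> dict:
    """Aggregate issue-level records into the portfolio view:
    tier distribution, what needs attention now, and where top-tier
    exposure is concentrated. Field names ("tier", "short_term_risk",
    "region", "name") are assumptions for this sketch.
    """
    tier_distribution = Counter(i["tier"] for i in issues)
    needs_attention = [i["name"] for i in issues if i.get("short_term_risk")]
    top_by_region = Counter(i["region"] for i in issues if i["tier"] == "top")
    return {
        "tier_distribution": dict(tier_distribution),
        "needs_attention_now": needs_attention,
        "top_tier_concentration_by_region": dict(top_by_region),
    }
```

Whatever the presentation layer, this is the shape of the data behind it: a handful of counts and flags, not a new scoring exercise.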


Track status and progress, not just priority

Exposure tells you where to focus. Progress tells you whether the strategy is working.

Beyond issue scores, leadership needs visibility on which files are moving, which are stalled, and where the team is making a difference.


That means tracking status explicitly — not just political probability, but your own engagement progress against it. Are the right conversations happening? Are positions being adopted? Is momentum building or slipping?


It also means breaking things down by dimension. Which region is performing? Which business unit is most exposed — and is that reflected in how they are engaged? Where is the public affairs function adding value, and where is it thin on the ground?


This is what turns a prioritisation exercise into a management tool. Leadership can then ask sharper questions: Why is this file stalled in the Caucasus but moving in Brussels? Why is this business unit consistently underrepresented in our engagement data?


Without that layer, you are showing leadership a map with no route markers.


Make the governance chain visible

Good prioritisation doesn't just show scores. It shows consequences.


The chain should be explicit: scoring leads to tier classification, tier classification determines reporting frequency and engagement intensity, engagement intensity drives resource allocation. That logic needs to be visible — not buried in a methodology note.

If Tier 1 and Tier 3 look similar in practice, governance is weak.


A useful move is to separate the strategic and operational views entirely. One dataset, two formats. The strategic view shows exposure concentration, tier distribution, portfolio movement, and escalation triggers. The operational view shows issue ownership, policy stage, engagement plans, and milestones. Mixing both creates overload. Separating them creates clarity.


Build rhythm, not just structure

Political landscapes shift. Likelihood changes. Timing accelerates. New risks emerge.

Any prioritisation system without review rhythm becomes decorative within a quarter. Define update frequency, ownership, calibration cycles, and escalation rules from the start. Governance requires repetition and renewal — not just a well-built tool.


The final test

When an issue moves from medium priority to top priority, what exactly changes? Who becomes involved? What reporting frequency shifts? What budget moves? What do you stop doing?


If the answer is unclear, the system is incomplete.


A structured approach to policy prioritisation is an argument about where your organisation should focus its time, energy, and resources. It translates complexity into prioritisation, prioritisation into resource allocation, and resource allocation into action.

If it does not change behaviour, it is not governance.