
Use ChatGPT to Improve Your Prompt Methodology


By Paul Shotton, Advocacy Strategy


Most public affairs practitioners who use LLMs regularly have developed habits, some effective and some wasteful. The problem is that these habits are largely invisible in a single exchange; they emerge only as patterns across a long-form project.


Your prompt history is not just a record of what you asked; it is a data set of how you think, structure work, and refine strategy.


This post provides a four-step exercise—including the exact prompts—to analyze your AI workflow, identify where your method is robust, and pinpoint where it is costing you time or quality.


The Shift: From Prompting to Methodology

As we move into more complex work—drafting policy positions or synthesizing consultation data—the question changes. It is no longer: “Is this output okay?” It is: “What does my interaction with this tool reveal about my strategic rigor?”


Instead of searching for a "magic prompt," look back at one substantial conversation—a research task, a framework development, or a workshop plan. This thread contains the evidence of your prompt methodology.


The 5-Minute Methodology Audit

Step 1: Select Your Data

Choose one long, interactive thread from your history where you worked through a substantial task from start to finish.


Step 2: The Analysis Phase

Ask the model to analyze the conversation. Paste the following prompt into that same thread:

Prompt: "Review this conversation as evidence of how I work with an LLM. Do not give me a score. Assess it against these criteria:

- Task Framing: Did I define the objective clearly and reset it when it shifted?
- Context Setting: Did I provide audience and constraint context early enough?
- Stepwise Progression: Did I move through stages, or did research and drafting blur?
- Iterative Refinement: Did I give specific, directional feedback or vague requests?
- Use of Critique: Did I ask for an evaluation of assumptions?
- Decision Support: Did I use the model to resolve structural decisions early?

For each criterion: characterize the pattern, identify a genuine strength, and identify the most significant weakness (be direct)."

Step 3: The Diagnosis

Once the analysis is complete, ask for a judgment on your "maturity" as a user:

Prompt: "Using the analysis above, diagnose my methodology in three parts:

- Three strongest features (30-40 words each): why they strengthen the work.
- Three weakest features (30-40 words each): be direct about the cost in rework or quality.
- Summary judgment (max 100 words): characterize the overall maturity and repeatability of this approach, and identify the single most important change that would improve it."

Step 4: The Experiments

Finally, turn the diagnosis into a tactical plan for your next project:

Prompt: "Suggest 3 experiments I could run in my next task to address my weakest features. Prioritize by impact. For each: Name the behavior change, describe what to do differently, and explain what it is designed to test. Keep each to 60 words."
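If you keep transcripts of your working threads, the same three-prompt sequence can also be run programmatically. Below is a minimal sketch using the official OpenAI Python SDK; the model name (`gpt-4o`), helper names, and transcript format are illustrative assumptions, not part of this post, and the prompts are abbreviated (use the full versions from the steps above):

```python
# Sketch: sequencing the analysis, diagnosis, and experiments prompts over an
# exported conversation transcript. Helper names and model are assumptions.

AUDIT_PROMPTS = [
    # Abbreviated here; paste the full prompts from Steps 2-4 in practice.
    "Review this conversation as evidence of how I work with an LLM. ...",
    "Using the analysis above, diagnose my methodology in three parts. ...",
    "Suggest 3 experiments I could run in my next task to address my weakest features. ...",
]

def build_audit_turns(transcript: str) -> list[dict]:
    """Build the three user turns for the audit thread.

    The transcript is attached to the first (analysis) prompt; the diagnosis
    and experiments prompts follow. Each turn should be sent only after the
    model's reply to the previous one, so the chain of reasoning accumulates.
    """
    first = ("Here is the conversation to audit:\n\n" + transcript
             + "\n\n" + AUDIT_PROMPTS[0])
    turns = [{"role": "user", "content": first}]
    turns += [{"role": "user", "content": p} for p in AUDIT_PROMPTS[1:]]
    return turns

def run_audit(transcript: str, model: str = "gpt-4o") -> list[str]:
    """Send the turns one at a time, keeping the growing thread as context."""
    from openai import OpenAI  # requires `pip install openai` and OPENAI_API_KEY
    client = OpenAI()
    history: list[dict] = []
    replies: list[str] = []
    for turn in build_audit_turns(transcript):
        history.append(turn)
        resp = client.chat.completions.create(model=model, messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        replies.append(answer)
    return replies  # [analysis, diagnosis, experiments]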

Why This Matters for Public Affairs

In a field where "showing your work" and maintaining intellectual rigor are paramount, AI should not be a black box. If your methodology is disorganized, your strategic output will be too.


Once you move beyond simple prompting, improvement depends less on "clever instructions" and more on your professional discipline. If you try this audit, I would be curious to hear what it surfaces about your workflow.
