Free AI Adoption Strategy Questionnaire
By Paul Shotton, Co-Founder, Advocacy Strategy
Many organizations want to train their teams to use AI tools—but quickly discover they can’t design meaningful training without first understanding where they stand and where they want to go.
When developing a training program on AI adoption for a client’s public affairs team, we were ready to dive into practical skills: prompting, mapping workflows, building SOPs, and even developing AI agents. As in many such engagements, it became clear that we needed a vital preliminary step. Before a team can learn how to use AI, the organization needs to decide why it wants AI, where it is starting from, and what it hopes to achieve.
In short, we opted for a light AI adoption strategy: a foundation that would later make the training meaningful, relevant, and sustainable.
This questionnaire helps organizations reflect on that first phase. It’s designed for leadership teams who want to clarify their current position, define their vision, and identify where AI can bring the most value.
The process has two parts:
Phase A – Strategy Definition (Phases 1 to 5): Understand your current position, clarify your ambition, define use cases, map workflows, and align enabling tools.
Phase B – Training and Implementation (Phases 6 to 8): Build capacity, embed governance, and roll out adoption.
What follows is Phase A—a structured reflection process supported by guiding questions you can use directly with your team.
Phase 1 — Diagnostic: Where are we now?
Before describing any destination, it’s essential to understand the starting point. This means defining your organization’s current position, capacity, and capability.
Position describes how AI and digital tools fit within your current strategy, operations, and client or stakeholder expectations. Capacity concerns resources—who could champion AI work, what time and budget exist, and what systems or data you already use. Capability focuses on confidence and competence: how teams work with digital tools, how comfortable they are experimenting, and whether process documentation is already part of your culture.
The goal is a short readiness snapshot that leadership recognizes as true: concise, sober, and usable.
Phase 1 — Leadership Questionnaire
What AI or adjacent tools are in use today, and where are they adding value?
Which teams or roles have the time, appetite, and authority to lead small experiments?
How would you describe the team’s confidence with AI today—curious, cautious, or skeptical?
Where are peers or competitors demonstrably ahead (or behind) on AI-enabled practices?
Phase 2 — Vision: Where do we want to go (relative to others)?
Vision isn’t a slogan—it’s a directional choice grounded in the diagnostic and informed by what’s happening outside the organization.
For consultancies, associations, or any organization operating in a competitive market, “where we want to go” depends partly on where others are going. A short landscape scan—three to five peers or comparators—is often enough to identify emerging baselines. From there, the organization can make an explicit choice: do we intend to lead, keep pace, or adopt selectively?
The outcome should be a one-paragraph vision statement tying AI adoption to your mission, defining the first scope of work, and stating your relative ambition.
Phase 2 — Leadership Questionnaire
Why adopt AI now—efficiency, better insight, differentiation, or client/member/funder expectations?
Which practice areas should feel the first effects of AI (monitoring, analysis, engagement, reporting)?
What would “good” look like in six months that’s meaningfully better than today?
Phase 3 — Use Cases: What problems are we solving—and what could we start doing?
Once the vision is defined, the next step is to make it tangible through use cases. We separate these into two categories.
Augmentation covers activities the team already does but could do better with AI: faster monitoring summaries, first drafts of position papers, or smarter categorization of existing data.
Expansion covers activities the team doesn’t yet do but could if AI made them possible: automated policy trend comparisons, stakeholder network visualizations, or internal copilots that can retrieve evidence or summarize issues on demand.
The point is to balance quick wins that build confidence with new capabilities that expand value creation. Alongside this, consider a light “tool fit” assessment—whether each use case relies on general-purpose language models, specialist or custom AI, or traditional platforms with embedded AI functions—without committing to vendors too early.
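To make that balance concrete, here is a minimal sketch, in Python, of how a team might score candidate use cases on impact and complexity and sort them into a simple two-by-two. The use cases, scores, and cut-off are illustrative assumptions, not recommendations.

```python
# Hypothetical impact/complexity triage for candidate AI use cases.
# Scores (1-5) and the cut-off are illustrative assumptions; in practice
# they would come from the leadership questionnaire in this phase.

CANDIDATES = {
    # use case:                (impact, complexity), both on a 1-5 scale
    "Monitoring summaries":    (4, 2),  # augmentation: existing task, done faster
    "Position-paper drafts":   (3, 2),  # augmentation
    "Policy trend comparison": (5, 4),  # expansion: new capability
    "Stakeholder network map": (4, 5),  # expansion
}

def quadrant(impact: int, complexity: int, cut: int = 3) -> str:
    """Place a use case in a simple impact-vs-complexity two-by-two."""
    if impact >= cut and complexity < cut:
        return "quick win"
    if impact >= cut and complexity >= cut:
        return "strategic bet"
    if impact < cut and complexity < cut:
        return "fill-in"
    return "deprioritize"

# Sort by impact (high first), then complexity (low first), and label each.
for name, (impact, complexity) in sorted(
    CANDIDATES.items(), key=lambda kv: (-kv[1][0], kv[1][1])
):
    print(f"{name:25s} impact={impact} complexity={complexity} -> "
          f"{quadrant(impact, complexity)}")
```

Quick wins build early confidence; expansion use cases tend to land among the strategic bets, which is exactly the balance this phase is after.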
Phase 3 — Leadership Questionnaire
Which recurring tasks consume time without adding distinct value?
Which new analyses or services would be valuable but are currently infeasible without AI?
For each candidate use case, what is the expected impact and complexity?
Phase 4 — Workflows and SOPs: How do we work, and how should we?
Use cases only become operational when you map the actual work. Start by describing the “as-is” workflow for each priority use case: inputs, decisions, outputs, and handoffs. Then identify where AI could assist, accelerate, or automate steps—always preserving human oversight for judgment calls.
Next, sketch the “to-be” version and translate it into simple Standard Operating Procedures. These SOPs form the bridge between strategy and training. Once they exist, you can design exercises, examples, and learning modules that target specific steps and quality checks. This stage ensures that AI adoption is not theoretical but embedded in daily routines.
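Where it helps, the “to-be” workflow can be captured as a simple structured record before it becomes a written SOP. The sketch below, in Python, is one hypothetical way to do this; the steps, owners, and AI roles are illustrative, not prescriptive.

```python
# A minimal, hypothetical way to capture a "to-be" workflow as a structured
# record before writing it up as an SOP. Step names and roles are illustrative.

from dataclasses import dataclass

@dataclass
class Step:
    name: str          # what happens at this step
    owner: str         # who is accountable
    ai_role: str       # "none", "assist", or "automate"
    human_check: bool  # does a person review the output before it moves on?

# Example: a weekly policy-monitoring workflow, marked up for AI assistance.
# Note that every judgment call keeps a human review before the work moves on.
monitoring_workflow = [
    Step("Collect source documents",   "analyst",     "automate", human_check=False),
    Step("Summarize key developments", "analyst",     "assist",   human_check=True),
    Step("Flag items for clients",     "senior lead", "none",     human_check=True),
    Step("Send weekly briefing",       "analyst",     "assist",   human_check=True),
]

for step in monitoring_workflow:
    review = "human review" if step.human_check else "no review"
    print(f"{step.name:30s} owner={step.owner:12s} AI={step.ai_role:9s} ({review})")
```

A record like this maps directly onto training: each step with an AI role becomes an exercise, and each human check becomes a quality gate in the SOP.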
Phase 4 — Leadership Questionnaire
For each priority use case, what are the exact inputs, decision points, and outputs today?
Where does work pile up or depend on a single person’s tacit knowledge?
Which steps could AI assist or automate, and why?
Phase 5 — Tools and Data Alignment: What enables it?
Finally, we match each workflow to the enabling technology and data it requires. This often involves pairing general-purpose AI models for reasoning and drafting with specialist tools that sit closer to organizational data, alongside existing platforms that increasingly embed AI functionality.
The goal is not to build a complex stack but to identify the smallest, most practical combination of tools that fits your governance and security requirements. The result is a clear plan that links use cases and workflows to real systems and data—establishing the technical foundation for the training that follows.
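As a final illustration, the alignment plan can be as simple as a small table linking each use case to a tool category, data source, owner, and pilot. The Python sketch below is hypothetical; every entry is a placeholder for your own systems and data.

```python
# A hypothetical tool-and-data alignment map for prioritized use cases.
# Tool categories follow Phase 5; the specific entries are illustrative only.

ALIGNMENT = [
    {
        "use_case":  "Monitoring summaries",
        "tool_type": "general-purpose LLM",
        "data":      "public policy feeds",
        "owner":     "research lead",
        "pilot":     "two-week trial on one policy area",
    },
    {
        "use_case":  "Stakeholder network map",
        "tool_type": "specialist platform with embedded AI",
        "data":      "internal CRM records",
        "owner":     "engagement lead",
        "pilot":     "single-coalition prototype",
    },
]

for row in ALIGNMENT:
    print(f"{row['use_case']}: {row['tool_type']} on {row['data']} "
          f"(owner: {row['owner']}; pilot: {row['pilot']})")
```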
Phase 5 — Leadership Questionnaire
Which systems will anchor each workflow (LLM, specialist/custom tool, or existing platform with AI features)?
What data sources and permissions are required, and who is accountable for their quality?
What small pilots can validate tool–data fit before scaling?
Once Phases 1 to 5 are complete, you have the clarity needed to design a training program that actually fits your organization’s maturity and goals.
From there, the next phase—covering capacity building, governance, and implementation—translates strategy into practice. But for now, this AI Adoption Strategy Questionnaire gives you a structured, reflective starting point to ensure your training efforts have real strategic traction.
