AI Capability Tools
Reflective diagnostics and practical reference applications grounded in the AI Capability Framework.
Practical resources for reflecting on, exploring, and learning how AI capability is enacted in real professional contexts.
CloudPedagogy AI Capability Tools help professionals, educators, and institutions engage with AI thoughtfully, responsibly, and defensibly.
This section brings together two complementary types of resources, both grounded in the CloudPedagogy AI Capability Framework and designed to be used alongside professional judgement and local context:
AI Capability Diagnostic Tools — stable, reflective instruments for sense-making and discussion
AI Capability Labs — practical, framework-anchored applications that explore how AI capability can be enacted using contemporary technologies
Reflective, non-prescriptive tools for understanding current AI capability.
These browser-based tools help individuals and teams make patterns, assumptions, gaps, and tensions visible across the six domains of the AI Capability Framework.
All tools run locally in the browser; no data is stored or transmitted.
AI Capability Self-Assessment
Reflective baseline across the six domains of the framework.
[Launch tool] · [View source]
AI Capability Programme Mapping
Visualises where AI capability appears across programmes and curricula.
[Launch tool] · [View source]
AI Capability Gaps & Risk Diagnostic
Surfaces potential blind spots and areas of exposure.
[Launch tool] · [View source]
AI Capability Scenario Stress-Test
Explores resilience under plausible future change scenarios.
[Launch tool] · [View source]
AI Capability Dashboard (Aggregate View)
Supports system-level pattern awareness over time.
[Launch tool] · [View source]
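The diagnostic tools above share a simple pattern: responses are collected per framework domain, aggregated locally in the browser, and presented back for reflection rather than judgement. A minimal sketch of that aggregation, with placeholder domain names standing in for the framework's actual six domains (the real labels are defined by the framework itself):

```typescript
// Sketch only: aggregate self-assessment responses per framework domain.
// Domain names below are illustrative placeholders, not the framework's
// actual six domains. Nothing leaves the browser; this is all in memory.

type Response = { domain: string; rating: number }; // rating scale is assumed, e.g. 1-5

function summarise(responses: Response[]): Map<string, number> {
  // Average the ratings per domain; the result is a talking point, not a score.
  const totals = new Map<string, { sum: number; count: number }>();
  for (const r of responses) {
    const t = totals.get(r.domain) ?? { sum: 0, count: 0 };
    t.sum += r.rating;
    t.count += 1;
    totals.set(r.domain, t);
  }
  const averages = new Map<string, number>();
  totals.forEach((t, domain) => {
    averages.set(domain, t.sum / t.count);
  });
  return averages;
}

const demo = summarise([
  { domain: "Domain A", rating: 3 },
  { domain: "Domain A", rating: 5 },
  { domain: "Domain B", rating: 2 },
]);
```

In the real tools the aggregate view (as in the dashboard) would be rendered for discussion; the point of the sketch is that the computation is local and transparent, never a recommendation engine.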
These tools support reflection and discussion. They do not make decisions or recommendations.
Practical, reference implementations exploring AI capability in action.
The AI Capability Labs are a collection of working applications and workflows developed to examine how the AI Capability Framework can be operationalised using current and emerging AI technologies.
Labs may be fully runnable and practically useful, but they are shared as reference implementations, not finished products or institutional solutions.
Each Lab is developed using a capability-driven development approach: human capability requirements, governance constraints, and ethical considerations are defined before tools, architectures, or automation choices are made.
→ View the Capability-Driven Development reference on GitHub.
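As an illustration of what "defined before tools" can mean in practice, here is a hedged sketch in which capability requirements and governance constraints are captured as data first, and a proposed automation is then checked against them. All field names and values are hypothetical, not part of the published reference:

```typescript
// Hypothetical capability-driven development artefact: the spec exists
// before any technology choice, and proposed automation is checked
// against it. Field names and example values are illustrative assumptions.

interface CapabilitySpec {
  capabilityGoal: string;        // what humans must remain able to do
  humanDecisionPoints: string[]; // steps that must stay with a person
  governanceConstraints: string[];
  ethicalConsiderations: string[];
}

const labSpec: CapabilitySpec = {
  capabilityGoal: "Assist, but never replace, human judgement in feedback drafting",
  humanDecisionPoints: ["final grade", "publication of feedback"],
  governanceConstraints: ["no learner data leaves the browser"],
  ethicalConsiderations: ["feedback tone reviewed by a human"],
};

// Only after the spec exists are automation choices evaluated against it:
// any step the spec reserves for a human is flagged as a violation.
function violates(spec: CapabilitySpec, automatedSteps: string[]): string[] {
  return automatedSteps.filter((step) => spec.humanDecisionPoints.includes(step));
}
```

The design point is the ordering: the spec constrains the architecture, not the other way around, which is the essence of the capability-driven approach described above.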
The AI Capability Framework remains stable; the technologies used in Labs are expected to change.
To support growth and clarity, Labs are grouped by capability challenge, not by tool or platform.
Example thematic groupings include:
Decision Support & Sense-Making
Reference systems exploring how AI can assist (but not replace) human judgement.
Workflow Orchestration & Co-Agency
Agentic and semi-agentic systems examining delegation, oversight, and control.
Curriculum, Research, and Knowledge Work
Applications focused on academic and professional practice.
Futures & Emerging Paradigms
Speculative or exploratory Labs used to stress-test capability assumptions, including quantum-related concepts.
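The Decision Support and Workflow Orchestration themes both centre on one recurring pattern: an automated component may propose an action, but a human retains the control point. A minimal, hypothetical sketch of that approval gate (names and messages are invented for illustration):

```typescript
// Illustrative delegation-with-oversight pattern: the system proposes,
// the human decides, and only an explicit approval leads to execution.
// All identifiers here are hypothetical.

type Proposal = { action: string; rationale: string };
type Decision = "approved" | "rejected";

function execute(proposal: Proposal, humanDecision: Decision): string {
  // The human decision is the control point; the system never acts alone.
  if (humanDecision === "approved") {
    return `executing: ${proposal.action}`;
  }
  // Rejected proposals are recorded for review, not silently retried.
  return `logged and dropped: ${proposal.action}`;
}
```

Real agentic Labs add logging, scoped permissions, and richer review steps, but the invariant is the same one stated above: AI assists, and humans remain accountable for the decision.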
Detailed implementation notes, architecture, and version history are provided in the associated GitHub repositories.
CloudPedagogy tools are most effective when used as part of a broader capability development journey. These resources are intentionally limited in scope so that accountability and decision-making remain with humans.
CloudPedagogy tools and labs are provided for reflective, educational, and exploratory purposes only. They are not decision systems, compliance instruments, or institutional governance tools.
Responsibility for interpretation, adaptation, and any subsequent decisions remains with users and their institutions.