AI Capability Tools

Practical resources for reflecting on, exploring, and learning how AI capability is enacted in real professional contexts.

Introduction

CloudPedagogy AI Capability Tools support professionals, educators, and institutions to engage with AI thoughtfully, responsibly, and defensibly.

This section brings together two complementary types of resources, both grounded in the CloudPedagogy AI Capability Framework and designed to be used alongside professional judgement and local context:

  • AI Capability Diagnostic Tools — stable, reflective instruments for sense-making and discussion

  • AI Capability Labs — practical, framework-anchored applications that explore how AI capability can be enacted using contemporary technologies



1. AI Capability Diagnostic Tools

Reflective, non-prescriptive tools for understanding current AI capability.

These browser-based tools help individuals and teams make patterns, assumptions, gaps, and tensions visible across the six domains of the AI Capability Framework.

They are intentionally:

  • diagnostic rather than directive
  • descriptive rather than evaluative
  • safe for discussion, governance, and strategic reflection


No data is stored or transmitted. All tools run locally in the browser.

Core Diagnostic Tools

AI Capability Self-Assessment
Reflective baseline across the six domains of the framework.
[Launch tool] · [View source]

AI Capability Programme Mapping
Visualises where AI capability appears across programmes and curricula.
[Launch tool] · [View source]

AI Capability Gaps & Risk Diagnostic
Surfaces potential blind spots and areas of exposure.
[Launch tool] · [View source]

AI Capability Scenario Stress-Test
Explores resilience under plausible future change scenarios.
[Launch tool] · [View source]

AI Capability Dashboard (Aggregate View)
Supports system-level pattern awareness over time.
[Launch tool] · [View source]

These tools support reflection and discussion. They do not make decisions or recommendations.



2. AI Capability Labs

Practical reference implementations exploring AI capability in action.

The AI Capability Labs are a collection of working applications and workflows developed to examine how the AI Capability Framework can be operationalised using current and emerging AI technologies.

Labs may be fully runnable and practically useful, but they are shared as reference implementations, not finished products or institutional solutions.

Each Lab is designed to surface:

  • human–AI role boundaries
  • judgement and accountability points
  • governance and ethical implications
  • transferable design patterns


All Labs are developed using a capability-driven development approach, in which human capability requirements, governance constraints, and ethical considerations are defined before tools, architectures, or automation choices are made.
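The ordering this approach enforces can be sketched as a data structure (a hypothetical illustration, not part of any Lab's actual codebase): the human-side fields are required, while technology choices are deliberately optional and decided last.

```typescript
// Hypothetical sketch of a capability-driven Lab specification.
interface LabSpec {
  capabilityRequirements: string[]; // what humans must be able to judge and do
  governanceConstraints: string[];  // oversight and accountability rules
  ethicalConsiderations: string[];  // risks and mitigations to address
  technologyChoices?: string[];     // optional: decided only after the above
}

// A Lab is ready for implementation decisions only once the
// human capability, governance, and ethical fields are populated.
export function readyForImplementation(spec: LabSpec): boolean {
  return (
    spec.capabilityRequirements.length > 0 &&
    spec.governanceConstraints.length > 0 &&
    spec.ethicalConsiderations.length > 0
  );
}
```

The design point is simply that `technologyChoices` cannot be the starting condition: the check passes or fails on the human-side fields alone.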

View the Capability-Driven Development reference on GitHub.

The AI Capability Framework remains stable; the technologies used in Labs are expected to change.



How Labs Are Organised

To support growth and clarity, Labs are grouped by capability challenge, not by tool or platform.

Example thematic groupings include:

Decision Support & Sense-Making
Reference systems exploring how AI can assist (but not replace) human judgement.

Workflow Orchestration & Co-Agency
Agentic and semi-agentic systems examining delegation, oversight, and control.

Curriculum, Research, and Knowledge Work
Applications focused on academic and professional practice.

Futures & Emerging Paradigms
Speculative or exploratory Labs used to stress-test capability assumptions, including quantum-related concepts.

Detailed implementation notes, architecture, and version history are provided in the associated GitHub repositories.



How to Use These Resources

CloudPedagogy tools are most effective when used as part of a capability development journey:

  • Diagnose — use Diagnostic Tools to understand current capability
  • Explore — examine Labs to see how capability can be enacted in practice
  • Learn — use courses and guides to build transferable judgement
  • Adapt — modify approaches to suit local context and responsibility


These resources are intentionally limited in scope to ensure accountability and decision-making remain human.



Tools & Labs Disclaimer

CloudPedagogy tools and labs are provided for reflective, educational, and exploratory purposes only. They are not decision systems, compliance instruments, or institutional governance tools.

Responsibility for interpretation, adaptation, and any subsequent decisions remains with users and their institutions.