A lightweight, browser-based, aggregate dashboard for examining patterns and trends in AI capability over time, grounded in the CloudPedagogy AI Capability Framework.
Phase: Phase 3 — Capability System
Role: Aggregates and visualizes organizational AI capability signals to surface structural trends, maturity gaps, and longitudinal benchmarks.
Upstream Inputs: Aggregate datasets or individual JSON snapshots from the Capability Assessment Tool.
Downstream Outputs: Provides multi-dimensional insight for the Gaps & Risk diagnostic and strategic institutional review committees.
Does NOT:
- Perform granular individual performance tracking.
- Test organizational resilience against specific risk scenarios (this is for the Stress Test).
For a full system overview, see: SYSTEM_OVERVIEW.md
The AI Capability Signals Dashboard supports sense-making, reflection, and governance-aligned conversation about how AI capability is developing across an organisation — without surveillance, benchmarking, or individual assessment.
It is designed to help teams notice patterns, surface tensions, and ask better questions, not to measure performance or enforce compliance.
This tool is part of the CloudPedagogy AI Capability Tools suite.
👉 http://cloudpedagogy-ai-capability-dashboard.s3-website.eu-west-2.amazonaws.com/
```bash
git clone [repository-url]
cd [repository-folder]
npm install
npm run dev
```

Once running, your terminal will display a local URL (often http://localhost:5173). Open this in your browser to use the application.

```bash
npm run build
```

The production build will be generated in the dist/ directory and can be deployed to any static hosting service.
- Aggregation-Only: Designed to consolidate non-identifiable capability data from Assessment snapshots.
- Internal Benchmarking: Compares domains against the active dataset average to identify structural lags.
- Gap Detection: Visually highlights "Maturity Gaps" where specific domains underperform relative to the institution.
- Longitudinal Insight: Provides trajectory summaries and delta tracking across multiple periods.
This dashboard has been strengthened to serve as the primary insight layer for institutional capability:
The dashboard automatically identifies "Gaps" — domains where the weighted maturity index is more than 15% below the dataset average. This allows institutions to prioritize investment where structural weakness is highest.
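As a minimal sketch, the 15% rule could be expressed as follows (hypothetical types and function name — the actual logic lives in src/engine/aggregate.ts):

```typescript
// Hypothetical shape for a per-domain weighted maturity index.
interface DomainIndex {
  domain: string;
  maturityIndex: number; // weighted maturity index for one domain
}

// A domain is flagged as a "Gap" when its index falls more than 15%
// below the average index across the active dataset.
function detectGaps(indices: DomainIndex[], threshold = 0.15): string[] {
  if (indices.length === 0) return [];
  const avg =
    indices.reduce((sum, d) => sum + d.maturityIndex, 0) / indices.length;
  return indices
    .filter((d) => d.maturityIndex < avg * (1 - threshold))
    .map((d) => d.domain);
}
```

For example, with indices of 2.0, 3.0, and 3.1 the dataset average is 2.7, so any domain below 2.295 would be flagged as a Gap.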
The overview chart includes a target benchmark line representing the average distribution across all domains. This provides an immediate visual baseline for internal comparison.
The trends view includes numerical 🔼/🔽 indicators showing the precise shift in maturity index between the two most recent periods, categorized into Improved and Regressed trajectory summaries.
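The delta between the two most recent periods can be sketched like this (hypothetical shapes — the real calculations live in src/engine/trends.ts):

```typescript
// Hypothetical shape for one domain's index at a given period.
interface PeriodIndex {
  period: string;        // e.g. "2024-Q1"
  maturityIndex: number; // maturity index for that period
}

type Trajectory = "Improved" | "Regressed" | "Unchanged";

// Compare the two most recent periods; returns null when there is
// not enough history to compute a delta.
function trendDelta(
  history: PeriodIndex[]
): { delta: number; trajectory: Trajectory } | null {
  if (history.length < 2) return null;
  const [prev, latest] = history.slice(-2);
  const delta = latest.maturityIndex - prev.maturityIndex;
  return {
    delta,
    trajectory: delta > 0 ? "Improved" : delta < 0 ? "Regressed" : "Unchanged",
  };
}
```

A positive delta maps to the 🔼 Improved summary and a negative one to the 🔽 Regressed summary.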
The AI Capability Signals Dashboard helps individuals, teams, and organisations:
- observe aggregate patterns in AI capability across the six framework domains
- explore trends over time, rather than one-off snapshots
- surface imbalances or tensions (e.g. innovation accelerating faster than governance)
- support leadership, governance, and strategic discussions without performance anxiety
- turn AI-related data into reflection and dialogue, not action-by-default
- document shared understanding for committees, workshops, and reviews
The tool is capability-led, reflective, and interpretive.
It is explicitly designed to support professional judgement, not replace it.
This tool is not:
- a monitoring or surveillance system
- a performance dashboard or KPI tracker
- a benchmarking or maturity-scoring tool
- a compliance or audit instrument
- a risk register or legal assessment
- an automated decision-making or recommendation system
All outputs are signals, patterns, and prompts for discussion — not verdicts or decisions.
The dashboard is grounded in the six interdependent domains of the CloudPedagogy AI Capability Framework:
- Shared understanding, boundaries, risks, and realistic expectations of AI in context
- Role clarity, partnership practices, human judgement in the loop, and responsible prompting
- Practical use of AI in workflows, experimentation, iteration, and improvement of practice
- Fairness, inclusion, harm reduction, transparency, and downstream impact awareness
- Oversight, accountability, policy alignment, approvals, and decision hygiene
- Review cycles, learning from experience, capability renewal, and institutional memory
These domains act as lenses, not checklists.
- Enter basic context information (e.g. team, programme, or organisational unit; optional notes)
- Record aggregate capability signals across the six domains
- Signals may be derived from prior assessments, workshops, surveys, or agreed reflections
- No individual-level data is required or supported
- Optionally add timepoints to compare capability signals over time
- View the dashboard to explore aggregate domain profiles, trend visualisations, and discussion prompts
- [x] Engine & Calculation Updates
  - [x] Update `src/engine/aggregate.ts` with gap detection logic (15% threshold)
  - [x] Update `src/engine/trends.ts` with delta and trajectory calculations
- [x] UI: Gaps & Benchmarks
  - [x] Update `src/views/OverviewView.tsx` with Gap Overlay and Dataset Benchmark Line
- [x] UI: Trends & Insights
  - [x] Update `src/views/TrendsView.tsx` with Trend Delta Indicators and Insight Summary Panel
- [x] Finalisation
  - [x] Update `README.md` with insight layer documentation
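For illustration, an aggregate snapshot recorded through the steps above might look something like this (a hypothetical shape with placeholder domain keys — the actual JSON schema is defined by the Capability Assessment Tool):

```typescript
// Hypothetical snapshot shape: context, a timepoint, and one aggregate
// signal per framework domain. Domain keys here are placeholders only.
interface CapabilitySnapshot {
  context: {
    unit: string;   // team, programme, or organisational unit
    notes?: string; // optional free-text notes
  };
  timepoint: string;               // e.g. "2024-Q3"
  signals: Record<string, number>; // aggregate signal per domain
}

// Synthetic example data (does not represent any real institution).
const example: CapabilitySnapshot = {
  context: { unit: "Example Programme", notes: "Synthetic data" },
  timepoint: "2024-Q3",
  signals: {
    Understanding: 3.1,
    Collaboration: 2.8,
    Practice: 3.4,
    Ethics: 2.5,
    Governance: 2.2,
    Renewal: 2.9,
  },
};
```

No individual-level fields appear anywhere in the shape, matching the aggregation-only design.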
The tool is designed to be used collectively and deliberatively, not mechanically.
The AI Capability Signals Dashboard provides:
- aggregate domain profiles (no individual or unit drill-down)
- trend visualisations over time
- imbalance and tension indicators
- explanatory “why this matters” context
- structured discussion prompts for groups and committees
- printable, shareable summaries for governance use
Typical settings include:

- AI steering groups and working groups
- Curriculum review and programme-level discussions
- Research governance and ethics boards
- Leadership workshops and away-days
- Capability retrospectives and annual reviews
- Cross-functional sense-making conversations
The dashboard is especially effective in contexts where trust, ethics, and accountability matter.
- The application runs entirely client-side
- No accounts, analytics, or tracking
- No data is uploaded or transmitted
- All inputs exist only within the user’s browser session
- Clearing the browser resets the session
- Suitable for static hosting (e.g. AWS S3)
The tool is explicitly designed to avoid surveillance and performance monitoring.
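The session-only behaviour described above can be achieved with standard browser storage. A minimal sketch, assuming sessionStorage-style persistence (illustrative only — the app's actual storage mechanism may differ):

```typescript
// Session-only persistence sketch. The browser's sessionStorage object
// satisfies this minimal interface and is discarded when the tab or
// session ends, so no data is uploaded or retained.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SESSION_KEY = "capability-signals";

// Serialize the current inputs into session-scoped storage.
function saveSession(data: unknown, store: KVStore): void {
  store.setItem(SESSION_KEY, JSON.stringify(data));
}

// Restore inputs for this session, or null if nothing was saved.
function loadSession<T>(store: KVStore): T | null {
  const raw = store.getItem(SESSION_KEY);
  return raw === null ? null : (JSON.parse(raw) as T);
}
```

In the browser, `saveSession(data, sessionStorage)` would keep everything client-side; clearing the browser session clears the data.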
This repository contains exploratory, framework-aligned tools developed for reflection, learning, and discussion.
These tools are provided as-is and are not production systems, audits, or compliance instruments. Outputs are indicative only and should be interpreted in context using professional judgement.
All applications are designed to run locally in the browser. No user data is collected, stored, or transmitted.
All example data and structures are synthetic and do not represent any real institution, programme, or curriculum.
This repository contains open-source software released under the MIT License.
CloudPedagogy frameworks, capability models, taxonomies, and training materials are separate intellectual works and are licensed independently (typically under Creative Commons Attribution–NonCommercial–ShareAlike 4.0).
This software is designed to support capability-aligned workflows but does not embed or enforce any specific CloudPedagogy framework.
CloudPedagogy develops open, governance-credible resources for building confident, responsible AI capability across education, research, and public service.
