Fleet-wide orchestration — reads the entire SuperInstance ecosystem state, identifies gaps, assigns work, and tracks progress across all repos.
The meta-orchestrator is the Architect-rank overseer of the SuperInstance fleet. It operates at a level above individual repos, providing:
- Visibility: Scans every repo to build a comprehensive fleet snapshot
- Analysis: Detects gaps — missing implementations, stale data, test gaps, spec-code mismatches, orphaned repos
- Coordination: Generates a prioritized work queue and matches agents to tasks based on skills
- Reporting: Produces markdown fleet reports for transparency and tracking
```
flux-meta-orchestrator/
├── src/
│   ├── fleet_scanner.py            # GitHub API client — builds FleetSnapshot
│   ├── gap_analyzer.py             # Detects ecosystem gaps from snapshots
│   ├── work_coordinator.py         # Plans work, matches agents, checks deps
│   ├── fleet_report_generator.py   # Produces markdown fleet reports
│   └── tests/
│       └── test_orchestrator.py    # Full test suite (stdlib only)
└── README.md
```
| Module | Purpose |
|---|---|
| `fleet_scanner` | Reads repos, file trees, issues, commits, bottles via GitHub API |
| `gap_analyzer` | 6 gap detectors: missing impl, stale data, test gaps, spec mismatch, orphans, unresolved issues |
| `work_coordinator` | Work queue generation, agent-skill matching, dependency checking, timeline estimation |
| `fleet_report_generator` | Comprehensive markdown report with health matrix, gaps, assignments, recommendations |
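These modules pass a few shared models between them. A minimal sketch of what `FleetSnapshot` and `Gap` might look like, built on `dataclasses` as the README notes — field names here are illustrative assumptions, not the actual API:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class RepoState:
    """Per-repo facts gathered by the scanner (illustrative fields)."""
    name: str
    has_source: bool = False
    has_tests: bool = False
    open_issues: int = 0
    days_since_commit: int = 0


@dataclass
class Gap:
    """One detected problem, tagged with a repo and a severity."""
    repo: str
    severity: Severity
    description: str


@dataclass
class FleetSnapshot:
    """The scanner's output: the state of every repo in the org."""
    repos: list = field(default_factory=list)
```

The analyzer then consumes a `FleetSnapshot` and emits a list of `Gap` objects, which is why `gap.severity.value` is printable in the examples below.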
```python
from fleet_scanner import FleetScanner

scanner = FleetScanner(token="ghp_...", org="SuperInstance")
snapshot = scanner.build_fleet_snapshot()
print(f"Scanned {len(snapshot.repos)} repos")
```

```python
from gap_analyzer import GapAnalyzer

analyzer = GapAnalyzer(stale_days=30, orphan_days=60)
gaps = analyzer.analyze(snapshot)
for gap in gaps:
    print(f"[{gap.severity.value}] {gap.repo}: {gap.description}")
```

```python
from work_coordinator import WorkCoordinator

coordinator = WorkCoordinator(team_size=4)
work_items = coordinator.generate_work_queue(gaps)

agent_skills = {
    "architect": ["python", "design", "architecture"],
    "tester": ["testing", "pytest"],
    "devops": ["ci", "cd", "docker"],
}
assignments = coordinator.match_agent_to_work(agent_skills, work_items)
blockers = coordinator.check_dependencies(assignments)
timeline = coordinator.estimate_completion(assignments)
```

```python
from fleet_report_generator import generate_ecosystem_report

report = generate_ecosystem_report(snapshot, gaps, assignments)
print(report)
# Commit to fleet-map or status repo
```

| Type | Description |
|---|---|
| `MISSING_IMPLEMENTATION` | Repo has a schema/spec but no source code |
| `STALE_DATA` | Fleet-map indicators don't match actual repo health |
| `MISSING_TESTS` | Repo has code but no test files |
| `UNRESOLVED_ISSUE` | Repo has an excessive number of open issues |
| `ORPHANED_REPO` | No activity for >60 days |
| `SPEC_CODE_MISMATCH` | flux-spec issues with no matching runtime implementation |
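Most of these detectors reduce to simple predicates over the snapshot. For instance, the orphaned-repo check is a date comparison — a sketch under assumed names and input shape, not the module's real internals:

```python
from datetime import datetime, timezone

ORPHAN_DAYS = 60  # threshold matching the ORPHANED_REPO rule above


def find_orphaned_repos(repos, now=None):
    """Return names of repos with no commit activity for more than ORPHAN_DAYS.

    `repos` is assumed to be a list of dicts with a 'name' and a
    timezone-aware 'last_commit' datetime (illustrative shape).
    """
    now = now or datetime.now(timezone.utc)
    return [
        r["name"]
        for r in repos
        if (now - r["last_commit"]).days > ORPHAN_DAYS
    ]
```

Passing `now` explicitly keeps the predicate deterministic and easy to unit-test, which matters for a stdlib-only `unittest` suite.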
Each gap is scored as severity × effort; higher scores are more urgent:
- 🔴 Critical (100) — system-breaking, missing core implementations
- 🟠 High (75) — significant gaps affecting fleet coherence
- 🟡 Medium (50) — quality gaps (missing tests, stale data)
- 🟢 Low (25) — housekeeping (orphaned repos, minor issues)
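The scoring rule above can be sketched in a few lines — the severity weights come from the list, while the `effort` multiplier and function names are illustrative assumptions:

```python
# Severity weights from the list above (critical=100 ... low=25).
SEVERITY_SCORE = {"critical": 100, "high": 75, "medium": 50, "low": 25}


def priority(severity: str, effort: float) -> float:
    """Score = severity weight x effort; higher means more urgent."""
    return SEVERITY_SCORE[severity] * effort


def prioritize(gaps):
    """Order gaps most-urgent first for the work queue."""
    return sorted(
        gaps,
        key=lambda g: priority(g["severity"], g["effort"]),
        reverse=True,
    )
```

With a rule like this, a critical gap with effort 1.0 (score 100) outranks a low gap even at double the effort (score 50), so the queue surfaces system-breaking work first.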
```shell
cd src/tests
python test_orchestrator.py -v
```

No external dependencies required — uses only the Python stdlib.
None. Everything uses the Python standard library:
- `urllib.request` for the GitHub API
- `json` for data handling
- `dataclasses` for models
- `unittest` + `unittest.mock` for tests
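As an illustration of the stdlib-only approach, listing an org's repos needs nothing beyond `urllib.request` and `json`. The endpoint is the public GitHub REST API; the function names here are assumptions, not the scanner's actual internals:

```python
import json
import urllib.request

API = "https://api.github.com"


def build_repo_request(org: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for an org's repo list."""
    return urllib.request.Request(
        f"{API}/orgs/{org}/repos",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )


def list_org_repos(org: str, token: str) -> list:
    """Fetch and decode repo names (performs a live network call)."""
    with urllib.request.urlopen(build_repo_request(org, token)) as resp:
        return [repo["name"] for repo in json.load(resp)]
```

Splitting request construction from the network call is also what makes the suite testable with `unittest.mock`: the `urlopen` call can be patched while the request-building logic is asserted directly.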