🏃 Sovereign Agent Evaluation Framework - Zero cloud dependencies. Local-only, cryptographically signed benchmarks for AI agents. npm run demo = instant evals.
Updated Mar 23, 2026 - TypeScript
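The "cryptographically signed benchmarks" idea above could be sketched as follows using Node's built-in `crypto` module. The `BenchmarkResult` shape, the function names, and the choice of Ed25519 are illustrative assumptions, not the framework's actual API:

```typescript
// Sketch: locally signed benchmark results, no cloud dependency.
// All names here (BenchmarkResult, signBenchmark, verifyBenchmark) are hypothetical.
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

interface BenchmarkResult {
  agent: string;
  task: string;
  score: number;
}

// Ed25519 keypair generated and kept on the local machine.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Sign the JSON encoding of a result with the local private key.
function signBenchmark(result: BenchmarkResult, key: KeyObject): Buffer {
  return sign(null, Buffer.from(JSON.stringify(result)), key);
}

// Verify a result against its detached signature.
function verifyBenchmark(
  result: BenchmarkResult,
  signature: Buffer,
  key: KeyObject
): boolean {
  return verify(null, Buffer.from(JSON.stringify(result)), key, signature);
}

const result: BenchmarkResult = { agent: "demo-agent", task: "web-nav", score: 0.87 };
const signature = signBenchmark(result, privateKey);
console.log(verifyBenchmark(result, signature, publicKey)); // true
```

Any tampering with the result object after signing makes verification fail, which is what lets benchmark outputs be trusted without a central server.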
An end-to-end Machine Learning project featuring a modular pipeline, configuration-driven workflows, MLflow experiment tracking, DagsHub integration, and a Flask web interface, following industry-standard MLOps practices.
Adaptive multi-star candidate ranking system using deterministic scoring + relevance feedback with bias-aware filtering.
Deterministic job decision engine that scores opportunities using a transparent, testable formula and logs every decision with full traceability. Hybrid 5-signal scoring with a bounded LLM reasoning layer. Reproducible outputs across local, CI, and production environments.
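A transparent, deterministic multi-signal score like the one described above is typically a fixed weighted sum. A minimal sketch, assuming five normalized signals and illustrative weights (the actual signal names and weights in the repo are not specified here):

```typescript
// Sketch of a hybrid 5-signal score. Signal names and weights are
// illustrative assumptions, not the engine's real formula.
interface Signals {
  salaryFit: number;      // each signal is normalized to [0, 1]
  skillMatch: number;
  locationFit: number;
  companyScore: number;
  roleAlignment: number;
}

// Fixed weights (summing to 1.0) keep the formula transparent,
// testable, and deterministic across environments.
const WEIGHTS: Record<keyof Signals, number> = {
  salaryFit: 0.25,
  skillMatch: 0.3,
  locationFit: 0.15,
  companyScore: 0.15,
  roleAlignment: 0.15,
};

function scoreOpportunity(s: Signals): number {
  let total = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof Signals)[]) {
    total += WEIGHTS[key] * s[key];
  }
  // Round to a fixed precision so the same inputs reproduce the
  // same logged score in local, CI, and production runs.
  return Math.round(total * 1000) / 1000;
}
```

Because the formula is a pure function of its inputs, every decision can be logged alongside its signal values and replayed exactly, which is what makes the outputs auditable.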