FastAPI + Chart.js evaluation dashboard for calibration, rolling accuracy, and CLV-style reporting.
Most public ML repos stop at a headline metric. This one focuses on communicating whether a model is actually trustworthy:
- calibration instead of just raw accuracy
- rolling performance instead of one static score
- CLV-style reporting to talk about market-aware outcomes
- clean API + frontend surface for presenting evaluation to non-ML stakeholders
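Calibration here means checking whether predicted win probabilities match observed frequencies. A minimal stdlib sketch of building binned reliability data for a calibration scatter (bin count and field names are illustrative, not this repo's API):

```python
def calibration_bins(probs, outcomes, n_bins=10):
    """Bucket predictions by probability and compare each bucket's mean
    predicted probability to its observed win rate."""
    bins = [{"preds": [], "wins": []} for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[i]["preds"].append(p)
        bins[i]["wins"].append(y)
    points = []
    for b in bins:
        if b["preds"]:
            points.append({
                "predicted": sum(b["preds"]) / len(b["preds"]),
                "observed": sum(b["wins"]) / len(b["wins"]),
                "count": len(b["preds"]),
            })
    return points

# Perfectly calibrated toy data: 70% predictions that win 7 times out of 10.
pts = calibration_bins([0.7] * 10, [1] * 7 + [0] * 3)
```

A well-calibrated model produces points near the diagonal `predicted == observed`; the per-bin `count` lets the frontend size or fade sparse bins.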
One-pager for recruiters: Sports ML evaluation case study
FastAPI serves /api/metrics and the frontend renders:
- calibration scatter
- rolling accuracy chart
- KPI/summary block for CLV-style metrics

The result is a lightweight, portable demo for model evaluation storytelling.
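The rolling accuracy chart is just a windowed mean over recent predictions. A minimal stdlib sketch (the window size is illustrative):

```python
from collections import deque

def rolling_accuracy(correct_flags, window=20):
    """Fraction of correct predictions over a sliding window of recent
    games, emitted once the window is full."""
    recent = deque(maxlen=window)  # old flags fall off automatically
    series = []
    for flag in correct_flags:
        recent.append(flag)
        if len(recent) == window:
            series.append(sum(recent) / window)
    return series

# A 5-game window over 6 results yields two overlapping windows.
acc = rolling_accuracy([1, 1, 0, 1, 1, 0], window=5)
# windows: [1,1,0,1,1] -> 0.8, then [1,0,1,1,0] -> 0.6
```

Unlike one static score, this series makes drift visible: a model that was sharp in October and stale by March shows up as a downward slope.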
```shell
pip install -e .
uvicorn nba_clv_dashboard.app:app --reload --port 8765
```

Then open http://127.0.0.1:8765.
Replace demo_metrics.demo_payload() with a loader backed by your parquet files or evaluation pipeline. Keep the same JSON shape, or adjust static/index.html to match.
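If you don't have a parquet pipeline handy, the adapter can be as small as a function mapping per-game evaluation records to the payload dict. Everything below (record fields, summary keys) is an assumed shape for illustration, not the repo's actual contract; match whatever static/index.html reads:

```python
def payload_from_records(records):
    """Map per-game evaluation records into a dashboard payload.
    Assumed record shape: {"prob": float, "won": 0 or 1, "clv": float}."""
    n = len(records)
    wins = sum(r["won"] for r in records)
    return {
        # one scatter point per game; a real loader might pre-bin these
        "calibration": [
            {"predicted": r["prob"], "observed": r["won"]} for r in records
        ],
        "summary": {
            "n_games": n,
            "accuracy": wins / n if n else None,
            "avg_clv": sum(r["clv"] for r in records) / n if n else None,
        },
    }

demo = payload_from_records([
    {"prob": 0.65, "won": 1, "clv": 0.02},
    {"prob": 0.40, "won": 0, "clv": -0.01},
])
```

Swapping the data source while keeping the payload builder separate means the FastAPI route and the frontend never need to change.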
- Not a sportsbook integration
- Not a multi-tenant analytics product
- Not a prediction engine by itself
pytest hits /api/metrics with httpx.
- nba-ratings: reusable Elo / logistic / Kelly primitives
- sports-betting-ml: modeling demo and value-bet pipeline
- backtest-report-gen: static report generation from evaluation JSON
MIT