Quantify how much AI actually contributes to your codebase.
CodeStat is a local metrics tool that analyzes how you use AI coding assistants:
how many lines are generated by AI, how many are kept, and how this evolves over time.
Chinese documentation: README.zh-CN.md
- Global dashboard for all data
  - AI-generated lines, adopted lines, adoption & generation rates (the rate definitions are sketched just after this list)
  - File count, session count, quick bar-chart overview
- Multi-dimension queries
  - By file: see how much of a file comes from AI and how much you kept
  - By session: analyze a single coding session with detailed diff lines
  - By project: aggregate metrics for an entire repository
- Agent / model comparison
  - Compare multiple sessions (agents / models / settings) side by side
  - See which one actually produces more adopted code, not just more tokens
- Local-first & privacy-friendly
  - All metrics are computed locally from your own diffs
  - No source code or prompts are sent to any remote service
- Nice CLI UX
  - Rich-based tables & colors, arrow-key navigation
  - Minimal but informative header (MCP status + repo info)
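
The adoption and generation rates above are simple ratios over diff line counts. A minimal sketch of the idea, assuming per-session counters (`SessionStats`, `generation_rate`, and `adoption_rate` are illustrative names, not CodeStat's actual data model or API):

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    """Illustrative per-session counters, not CodeStat's real schema."""
    total_lines_changed: int  # all lines added or modified in the session diff
    ai_generated_lines: int   # lines produced by the AI assistant
    ai_adopted_lines: int     # AI-generated lines still present after your edits

def generation_rate(s: SessionStats) -> float:
    """Share of changed lines that came from the AI assistant."""
    return s.ai_generated_lines / s.total_lines_changed if s.total_lines_changed else 0.0

def adoption_rate(s: SessionStats) -> float:
    """Share of AI-generated lines that you actually kept."""
    return s.ai_adopted_lines / s.ai_generated_lines if s.ai_generated_lines else 0.0

stats = SessionStats(total_lines_changed=400, ai_generated_lines=260, ai_adopted_lines=182)
print(f"generation rate: {generation_rate(stats):.0%}")  # 65%
print(f"adoption rate:   {adoption_rate(stats):.0%}")    # 70%
```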
TODO: add real screenshots / GIFs from your terminal
From PyPI (recommended):

```bash
pip install aicodestat
```

From source:

```bash
git clone https://github.com/2hangchen/CodeStat.git
cd CodeStat
pip install -r requirements.txt
```

Run the CLI:

```bash
python .\cli\main.py
```

Use ↑/↓ to move and Enter to confirm. Choose “📈 Global Dashboard (All Data)” to see an overview of your local metrics.
- Measure your own AI usage
  - Record one or more coding sessions with your IDE + MCP server
  - Run CodeStat and inspect:
    - AI-generated vs adopted lines
    - Which files receive the most AI help
- Compare agents / models / prompts
  - Map different sessions to different agents / models
  - Use Compare Agents to get a per-session comparison table (a table-building sketch follows this list)
- Project-level health check (a per-file roll-up sketch also follows this list)
  - For a given repo, run project metrics to see:
    - Where AI contributes the most
    - Whether AI-generated code is actually being kept
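
The Compare Agents view renders this comparison for you inside the CLI. Purely to illustrate what such a table contains, here is how a per-session table can be assembled with Rich (the session data and field names below are placeholders, not CodeStat's output):

```python
from rich.console import Console
from rich.table import Table

# Placeholder per-session results; in CodeStat these come from your recorded sessions.
sessions = [
    {"agent": "agent-a", "generated": 320, "adopted": 240},
    {"agent": "agent-b", "generated": 510, "adopted": 255},
]

table = Table(title="Per-session comparison (illustrative data)")
table.add_column("Agent / model")
table.add_column("AI-generated lines", justify="right")
table.add_column("Adopted lines", justify="right")
table.add_column("Adoption rate", justify="right")

for s in sessions:
    rate = s["adopted"] / s["generated"] if s["generated"] else 0.0
    table.add_row(s["agent"], str(s["generated"]), str(s["adopted"]), f"{rate:.0%}")

Console().print(table)
```

Ranking by adopted lines rather than generated lines is the point: a verbose agent can produce more code while contributing less that you actually keep.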
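
Conceptually, project metrics are per-file counters rolled up and ranked. A rough sketch of that roll-up, assuming made-up file paths and field names rather than CodeStat's data model:

```python
# Placeholder per-file records; CodeStat computes the real ones locally from your diffs.
file_records = [
    {"path": "src/app.py", "ai_generated": 120, "ai_adopted": 95},
    {"path": "src/utils.py", "ai_generated": 80, "ai_adopted": 24},
]

# Rank files by how much AI-generated code actually survives your edits.
for rec in sorted(file_records, key=lambda r: r["ai_adopted"], reverse=True):
    kept = rec["ai_adopted"] / rec["ai_generated"] if rec["ai_generated"] else 0.0
    print(f"{rec['path']}: {rec['ai_adopted']} adopted lines ({kept:.0%} of AI-generated)")

total_generated = sum(r["ai_generated"] for r in file_records)
total_adopted = sum(r["ai_adopted"] for r in file_records)
print(f"Project total: {total_adopted}/{total_generated} AI-generated lines kept")
```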

