CodeStat · AI Code Metrics

Quantify how much AI actually contributes to your codebase.

CodeStat is a local metrics tool that analyzes how you use AI coding assistants: how many lines are generated by AI, how many are kept, and how this evolves over time.

Chinese documentation: README.zh-CN.md


Features

  • Global dashboard for all data

    • AI generated lines, adopted lines, adoption & generation rates (see the sketch after this list)
    • File count, session count, quick bar chart overview
  • Multi‑dimensional queries

    • By file: see how much of a file comes from AI and how much you kept
    • By session: analyze one coding session with detailed diff lines
    • By project: aggregate metrics for an entire repository
  • Agent / model comparison

    • Compare multiple sessions (agents / models / settings) side‑by‑side
    • See which one actually produces more adopted code instead of just more tokens
  • Local‑first & privacy‑friendly

    • All metrics are computed locally from your own diffs
    • No source code or prompts are sent to any remote service
  • Nice CLI UX

    • Rich‑based tables & colors, arrow‑key navigation
    • Minimal but informative header (MCP status + repo info)
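
As a rough illustration of how the generation and adoption rates relate, here is a minimal sketch; the field names and numbers are hypothetical, not CodeStat's actual schema or implementation:

# Hypothetical per-session record; CodeStat's real schema may differ.
session = {
    "total_lines_changed": 240,  # all lines touched in the session
    "ai_generated_lines": 180,   # lines proposed by the AI assistant
    "ai_adopted_lines": 150,     # AI lines still present after your edits
}

# Generation rate: share of changed lines that came from AI.
generation_rate = session["ai_generated_lines"] / session["total_lines_changed"]

# Adoption rate: share of AI-generated lines you actually kept.
adoption_rate = session["ai_adopted_lines"] / session["ai_generated_lines"]

print(f"generation: {generation_rate:.0%}, adoption: {adoption_rate:.0%}")
# -> generation: 75%, adoption: 83%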

Demo

TODO: add real screenshots / GIFs from your terminal

  • Global dashboard
  • Session metrics with diff lines


Quickstart

Install

From PyPI (recommended):

pip install aicodestat

From source:

git clone https://github.com/2hangchen/CodeStat.git
cd CodeStat
pip install -r requirements.txt

Start the CLI

From the repository root:

python cli/main.py

Use ↑/↓ to move, Enter to confirm.
Choose “📈 Global Dashboard (All Data)” to see an overview of your local metrics.


Typical Workflows

  • Measure your own AI usage

    • Record one or more coding sessions with your IDE + MCP server
    • Run CodeStat and inspect:
      • AI generated vs adopted lines
      • Which files receive the most AI help
  • Compare agents / models / prompts

    • Map different sessions to different agents / models
    • Use Compare Agents to get a per‑session comparison table (sketched below)
  • Project‑level health check

    • For a given repo, run project metrics to see:
      • Where AI contributes the most
      • Whether AI‑generated code is actually being kept
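
To get a feel for the agent comparison output, here is a minimal sketch built on Rich (the library the CLI uses for its tables); the agents, numbers, and column choices are illustrative, not CodeStat's actual table:

from rich.console import Console
from rich.table import Table

# Hypothetical per-agent session results; real numbers come from your recorded sessions.
sessions = [
    ("agent-a", 180, 150),
    ("agent-b", 260, 140),
]

table = Table(title="Compare Agents (illustrative)")
table.add_column("Agent")
table.add_column("AI lines", justify="right")
table.add_column("Adopted", justify="right")
table.add_column("Adoption rate", justify="right")

for name, generated, adopted in sessions:
    table.add_row(name, str(generated), str(adopted), f"{adopted / generated:.0%}")

# agent-a keeps a larger share of its generated code, even though
# agent-b produces more raw lines.
Console().print(table)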
