Commit bb17847

Initial commit: opproplot - operating profile plots for binary classifiers

File tree

17 files changed: +639 -0 lines changed

.github/workflows/ci.yml

Lines changed: 37 additions & 0 deletions
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -e .
          python -m pip install ruff pytest

      - name: Lint with ruff
        run: ruff check .

      - name: Run tests
        env:
          MPLBACKEND: Agg
        run: pytest
```

.gitignore

Lines changed: 30 additions & 0 deletions
```text
# Byte-compiled / cache
__pycache__/
*.py[cod]
*$py.class
.pytest_cache/
.mypy_cache/
.ruff_cache/

# Virtual environments
.venv/
venv/
env/

# Packaging
build/
dist/
*.egg-info/

# Coverage / test reports
.coverage
coverage.xml

# Editors / OS
.DS_Store
.idea/
.vscode/
*.swp

# Jupyter
.ipynb_checkpoints/
```

CONTRIBUTING.md

Lines changed: 21 additions & 0 deletions
````markdown
# Contributing

Thanks for improving Opproplot!

## Setup

```bash
pip install -e .
pip install ruff pytest
```

## Checks

- Lint: `ruff check .`
- Tests: `pytest`

## Pull requests

- Keep changes small and focused.
- Add or update tests when behavior changes.
- Update docs/examples if the API or visuals change.
````

LICENSE

Lines changed: 21 additions & 0 deletions
```text
MIT License

Copyright (c) 2024 Mike Roth

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

README.md

Lines changed: 59 additions & 0 deletions
````markdown
# opproplot

Opproplot is an operating profile plot for binary classifiers: a single figure that shows score distributions by class plus TPR/FPR/Accuracy as you move the decision threshold. It makes threshold selection, ROC/PR intuition, and calibration discussion concrete in one view.

**What it is:** Opproplot visualizes the family of decision rules h_t(x) = 1{f(x) >= t} and their induced operating characteristics (TPR, FPR, Accuracy), alongside the empirical score distributions p(s | Y=1) and p(s | Y=0).

**Why it matters:** You see where positives and negatives sit in score space, how recall and false positives trade off at every cutoff, and where accuracy peaks—no context-switching between ROC curves, confusion matrices, and histograms.

**When to use it:** model validation, stakeholder reviews, threshold tuning for production alerting, class-imbalance checks, and calibration audits.

## Installation

From the repo root:

```bash
pip install -e .
```

## Quickstart

```python
import numpy as np
from opproplot import operating_profile_plot

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5000)
scores = rng.random(size=5000)

fig, ax_hist, ax_metric = operating_profile_plot(y_true, scores, bins=30)
```

The resulting operating profile lets you see where positives and negatives concentrate in score space, how recall and false positive rate trade off as you move the threshold, and where accuracy peaks. It is a single, interpretable view of all possible thresholds for a scoring model.

![Opproplot example](docs/assets/opproplot_example.png)

## Project layout

- Package code lives in `src/opproplot`.
- Tests live in `tests/`.
- Documentation for GitHub Pages lives in `docs/` (see below).

## Documentation site

Enable GitHub Pages with the `docs/` folder as the root. The scaffold includes:

- `docs/index.md`: landing page with value proposition and a hero plot.
- `docs/getting_started.md`: install, notebook walkthrough, common models.
- `docs/theory.md`: decision rules, distributions, and metric integrals.
- `docs/examples.md`: real datasets and comparisons.
- `docs/api.md`: core functions and parameters.
- `docs/roadmap.md`: features and status.

Fill in each page as you iterate; the structure is ready to publish.

## Testing

```bash
pytest
```
````
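The TPR/FPR/accuracy curves described in the README can be sanity-checked with plain NumPy. Below is a minimal sketch of the decision-rule family h_t(x) = 1{f(x) >= t}; `profile_at_thresholds` is an illustrative helper written for this note, not part of the opproplot API:

```python
import numpy as np

def profile_at_thresholds(y_true, scores, thresholds):
    """Compute TPR, FPR, and accuracy for each threshold t in h_t(x) = 1{f(x) >= t}."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_pos = y_true.sum()
    n_neg = (~y_true).sum()
    tpr, fpr, acc = [], [], []
    for t in thresholds:
        pred = scores >= t                # predict positive at or above the cutoff
        tp = np.sum(pred & y_true)
        fp = np.sum(pred & ~y_true)
        tn = n_neg - fp
        tpr.append(tp / n_pos)
        fpr.append(fp / n_neg)
        acc.append((tp + tn) / y_true.size)
    return np.array(tpr), np.array(fpr), np.array(acc)

# Perfectly separated toy scores make the profile easy to verify by hand:
y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.2, 0.8, 0.9])
tpr, fpr, acc = profile_at_thresholds(y, s, thresholds=[0.0, 0.5, 1.0])
# At t=0.0 everything is flagged (tpr=1, fpr=1, acc=0.5); at t=0.5 the split
# is perfect (tpr=1, fpr=0, acc=1); at t=1.0 nothing is flagged.
```

The same three arrays are what the package plots against the bin-midpoint thresholds.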

docs/api.md

Lines changed: 28 additions & 0 deletions
````markdown
# API Reference

## compute_operating_profile

```python
from opproplot import compute_operating_profile
profile = compute_operating_profile(y_true, y_score, bins=40, score_range=(0, 1))
```

- `y_true`: array-like of shape (n_samples,), binary labels.
- `y_score`: array-like of shape (n_samples,), predicted scores or probabilities.
- `bins`: number of score bins (default 40).
- `score_range`: tuple or None. If None, uses min/max of scores.

Returns an `OperatingProfile` dataclass with:
- `edges`, `mids`, `pos_hist`, `neg_hist`, `tpr`, `fpr`, `accuracy`.

## operating_profile_plot

```python
from opproplot import operating_profile_plot
fig, ax_hist, ax_metric = operating_profile_plot(y_true, y_score, bins=30, show_accuracy=True)
```

- `show_accuracy`: include the dashed accuracy curve (default True).
- `ax`: optional Matplotlib axis to draw on; otherwise creates a new figure.

Returns `(fig, ax_hist, ax_metric)` for further styling or saving.
````
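One common use of the returned profile is picking the accuracy-maximizing threshold. The sketch below uses a hypothetical stand-in dataclass that mirrors the documented `OperatingProfile` fields (so it runs without the package installed); the field values are made up for illustration:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class OperatingProfile:
    """Stand-in mirroring the fields documented above (illustrative only)."""
    edges: np.ndarray
    mids: np.ndarray
    pos_hist: np.ndarray
    neg_hist: np.ndarray
    tpr: np.ndarray
    fpr: np.ndarray
    accuracy: np.ndarray

def best_threshold(profile: OperatingProfile) -> float:
    """Return the bin-midpoint threshold with the highest accuracy."""
    return float(profile.mids[np.argmax(profile.accuracy)])

edges = np.linspace(0.0, 1.0, 5)
profile = OperatingProfile(
    edges=edges,
    mids=(edges[:-1] + edges[1:]) / 2,       # [0.125, 0.375, 0.625, 0.875]
    pos_hist=np.array([1, 2, 10, 12]),       # invented counts for the example
    neg_hist=np.array([12, 10, 2, 1]),
    tpr=np.array([1.0, 0.96, 0.88, 0.48]),
    fpr=np.array([1.0, 0.52, 0.12, 0.04]),
    accuracy=np.array([0.50, 0.72, 0.88, 0.72]),
)
print(best_threshold(profile))  # 0.625 (the midpoint where accuracy peaks)
```

With the real package, the same `best_threshold` pattern applies directly to the object returned by `compute_operating_profile`.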

docs/assets/opproplot_example.png

107 KB (binary file)

docs/examples.md

Lines changed: 26 additions & 0 deletions
```markdown
# Examples

Use these patterns to compare models and datasets.

## Breast cancer (scikit-learn)

- Load `sklearn.datasets.load_breast_cancer`.
- Train a logistic regression or gradient boosting model.
- Plot the operating profile on the test split to inspect separability.

## Fraud-like imbalance

- Simulate or load an imbalanced dataset.
- Compare a calibrated model vs an overconfident one.
- Observe how class imbalance alters histogram heights and accuracy peaks.

## Good vs bad model

- Train two models on the same data.
- Plot both operating profiles side by side.
- Look for:
  - Separation of score distributions.
  - Lower FPR for the same TPR.
  - Stability of accuracy across thresholds.

Swap in your own datasets; the plotting API stays the same.
```
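The fraud-like imbalance setup above can be simulated in a few lines of NumPy. This is a sketch with invented distribution parameters (Beta draws chosen so the classes overlap), showing why the negative histogram dominates under heavy imbalance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.random(n) < 0.02                       # ~2% positive class, fraud-like
scores = np.where(y,
                  rng.beta(5, 2, size=n),      # positives concentrate near 1
                  rng.beta(2, 5, size=n))      # negatives concentrate near 0

bins = np.linspace(0, 1, 31)
pos_hist, _ = np.histogram(scores[y], bins=bins)
neg_hist, _ = np.histogram(scores[~y], bins=bins)

# Under heavy imbalance the negative histogram dwarfs the positive one,
# which is what pushes the accuracy peak toward high thresholds.
print(neg_hist.sum(), pos_hist.sum())
```

Feeding `y` and `scores` into `operating_profile_plot` shows the same effect visually.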

docs/getting_started.md

Lines changed: 62 additions & 0 deletions
````markdown
# Getting Started

This page shows how to generate an operating profile in a notebook and how to interpret it for common binary classifiers.

## Setup

```bash
pip install -e .
```

```python
import numpy as np
from opproplot import operating_profile_plot
```

## Basic example

```python
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5000)
scores = rng.random(size=5000)

fig, ax_hist, ax_metric = operating_profile_plot(y_true, scores, bins=30)
```

- Left axis: stacked histogram of scores by class.
- Right axis: TPR, FPR, and Accuracy evaluated at each bin midpoint threshold.
- Choose thresholds where TPR/FPR trade-offs make sense for your application.

## With scikit-learn (real example)

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0, stratify=data.target
)

clf = LogisticRegression(max_iter=500)
clf.fit(X_train, y_train)

y_score = clf.predict_proba(X_test)[:, 1]

fig, ax_hist, ax_metric = operating_profile_plot(y_test, y_score, bins=30)
ax_hist.set_title("Breast cancer classifier operating profile")
```

The same pattern applies to other models:

- Random forest / gradient boosting: use `model.predict_proba(X)[:, 1]`.
- XGBoost / LightGBM: use `predict` outputs as scores.

## Interpreting the plot

- Separability: a wider gap between the class histograms indicates better discrimination.
- Threshold effects: steep TPR drops highlight sensitive regions.
- Accuracy peak: the dashed accuracy curve shows the maximizer without trial-and-error.

For deeper theory and metric formulas, see [Theory](theory.md).
````

docs/index.md

Lines changed: 36 additions & 0 deletions
````markdown
# Opproplot

A compact operating profile plot for binary classifiers: stacked score histograms by class plus TPR/FPR/Accuracy curves at bin-midpoint thresholds. One view to understand every possible cutoff.

## Why Opproplot

- See score separation between classes directly.
- Trace how recall and false positives move as you slide the threshold.
- Spot the accuracy peak without losing visibility into the distribution.

## Install

```bash
pip install -e .
```

## Quickstart

```python
import numpy as np
from opproplot import operating_profile_plot

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5000)
scores = rng.random(size=5000)

operating_profile_plot(y_true, scores, bins=30)
```

## Learn more

- [Getting started](getting_started.md): notebook-friendly walkthroughs.
- [Theory](theory.md): decision rules, distributions, and threshold geometry.
- [Examples](examples.md): real datasets and comparisons.
- [API](api.md): function reference and parameters.
- [Roadmap](roadmap.md): upcoming features.
````
