Lightweight system monitoring dashboard for Raspberry Pi. Runs as a single Python process — no Docker, no databases to manage.
Prerequisite: Python 3.11+
```bash
pip install -r requirements.txt
python3 main.py
# Open http://<pi-ip>:8889
```

Note: Temperature reading requires `vcgencmd` (available on Raspberry Pi OS). On other Linux systems, the fallback reads from `/sys/class/thermal/`.
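The actual reader lives in `pistat/collector.py`; as a rough sketch of the vcgencmd-then-sysfs approach (the function name here is hypothetical, not PiStat's API):

```python
import subprocess
from pathlib import Path
from typing import Optional

def read_cpu_temp() -> Optional[float]:
    """Read the CPU temperature in °C: vcgencmd first, sysfs fallback."""
    # Raspberry Pi OS: `vcgencmd measure_temp` prints e.g. "temp=52.3'C"
    try:
        out = subprocess.run(
            ["vcgencmd", "measure_temp"],
            capture_output=True, text=True, check=True,
        ).stdout
        return float(out.split("=")[1].split("'")[0])
    except (FileNotFoundError, subprocess.CalledProcessError, ValueError, IndexError):
        pass
    # Generic Linux: sysfs exposes millidegrees Celsius
    zone = Path("/sys/class/thermal/thermal_zone0/temp")
    try:
        return int(zone.read_text().strip()) / 1000.0
    except (OSError, ValueError):
        return None  # no temperature sensor available
```

Returning `None` rather than raising keeps a missing sensor from dropping the rest of a sample.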
Edit `config.toml` before starting:
| Section | Key | Default | Description |
|---|---|---|---|
| `[collection]` | `interval_seconds` | `10` | Seconds between metric samples |
| `[collection]` | `retention_days` | `7` | Days of history to keep |
| `[server]` | `host` | `"0.0.0.0"` | Bind address |
| `[server]` | `port` | `8889` | HTTP port |
| `[metrics]` | `network_interface` | `"eth0"` | Network interface to monitor |
| `[metrics]` | `disk_path` | `"/"` | Disk partition to monitor |
| `[database]` | `path` | `"pistat.db"` | SQLite file path |
`GET /` serves the dashboard UI (`static/index.html`).

`GET /api/current` returns the latest snapshot of all metrics, or `204 No Content` if no data has been collected yet.
Response 200 OK:
```json
{
  "id": 1,
  "timestamp": 1708747200.0,
  "cpu_percent": 12.5,
  "cpu_per_core": [10.0, 15.0, 8.0, 16.0],
  "mem_used": 512000000,
  "mem_total": 4000000000,
  "mem_percent": 12.8,
  "cpu_temp": 52.3,
  "disk_used": 10000000000,
  "disk_total": 32000000000,
  "disk_percent": 31.2,
  "load_1": 0.45,
  "load_5": 0.38,
  "load_15": 0.32,
  "net_bytes_sent": 1234567,
  "net_bytes_recv": 7654321,
  "uptime_seconds": 86400
}
```

`GET /api/history` returns time-series data for a single metric. `hours` is clamped to a maximum of 168 (7 days); the default is the last 1 hour.
Parameters:
- `metric`: metric name from the allowlist (default: `cpu_percent`)
- `hours`: how many hours of history to return (default: `1`, max: `168`)
Response 200 OK:
```json
[
  {"ts": 1708747200.0, "value": 12.5},
  {"ts": 1708747210.0, "value": 14.2}
]
```

Response 400 Bad Request (metric name not in allowlist):

```json
{"error": "Invalid metric: 'bad_metric'"}
```

The export endpoint downloads all stored data as a CSV file. `hours` is clamped to 168.
Parameters:
- `hours`: how many hours of data to export (default: `24`, max: `168`)
Response 200 OK with `Content-Type: text/csv` and `Content-Disposition: attachment; filename=pistat_export.csv`:

```csv
id,timestamp,cpu_percent,cpu_per_core,...
1,1708747200.0,12.5,"[10.0, 15.0]",...
```
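The CSV body can be produced with the standard library alone; a sketch of the row-serialization step (names here are illustrative, not PiStat's actual code):

```python
import csv
import io

def rows_to_csv(fieldnames, rows):
    """Serialize database rows to CSV text. List-valued cells such as
    cpu_per_core are rendered via str() and quoted automatically by the
    csv module, matching the sample output above."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(fieldnames)
    for row in rows:
        writer.writerow([str(v) if isinstance(v, list) else v for v in row])
    return buf.getvalue()
```

For example, a row containing `[10.0, 15.0]` comes out as the quoted field `"[10.0, 15.0]"`, since the rendered list contains the delimiter.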
Current configuration values as JSON.
Response 200 OK:
{
"interval_seconds": 10,
"retention_days": 7,
"network_interface": "eth0",
"disk_path": "/",
"host": "0.0.0.0",
"port": 8889
}| Metric | Unit | Source |
|---|---|---|
cpu_percent |
% | psutil |
cpu_per_core |
JSON array of % | psutil |
mem_used |
bytes | psutil |
mem_total |
bytes | psutil |
mem_percent |
% | psutil |
cpu_temp |
°C | vcgencmd / sysfs |
disk_used |
bytes | psutil |
disk_total |
bytes | psutil |
disk_percent |
% | psutil |
load_1 |
— | psutil |
load_5 |
— | psutil |
load_15 |
— | psutil |
net_bytes_sent |
bytes cumulative | psutil |
net_bytes_recv |
bytes cumulative | psutil |
uptime_seconds |
seconds | psutil |
Note: `cpu_per_core` is stored and returned by `/api/current` but is not available via `/api/history` (it is an array, not a scalar value).
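The allowlist check behind the 400 response, together with the `hours` clamping, can be sketched like this (an illustration under assumed names; the exact identifiers in `pistat/db.py` may differ):

```python
# Scalar metrics that /api/history may query; cpu_per_core is deliberately
# excluded because it is an array, not a scalar.
METRIC_ALLOWLIST = frozenset({
    "cpu_percent", "cpu_temp",
    "mem_used", "mem_total", "mem_percent",
    "disk_used", "disk_total", "disk_percent",
    "load_1", "load_5", "load_15",
    "net_bytes_sent", "net_bytes_recv", "uptime_seconds",
})

def validate_history_params(metric="cpu_percent", hours="1"):
    """Reject unknown metric names; clamp hours to [0, 168]."""
    if metric not in METRIC_ALLOWLIST:
        raise ValueError(f"Invalid metric: '{metric}'")
    try:
        h = float(hours)
    except (TypeError, ValueError):
        h = 1.0  # fall back to the documented default
    return metric, min(max(h, 0.0), 168.0)
```

Validating against a fixed allowlist also keeps the metric name safe to interpolate into a SQL column reference.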
```bash
./install.sh              # install and start on boot
./install.sh --uninstall  # stop and remove the service
```

The script auto-detects the install directory and current user; no path editing required.

To check status or view logs:

```bash
sudo systemctl status pistat
sudo journalctl -u pistat -f
```

PiStat is a single Python process: a Flask HTTP server runs alongside a background daemon thread that collects metrics on a configurable interval (default 10 seconds) and writes them to SQLite. The database uses an indexed timestamp column and a prune job that removes rows older than the configured retention window. Metrics are collected via psutil, with temperature read from vcgencmd and a sysfs fallback. Each metric collector is wrapped in try/except so a single sensor failure never drops an entire sample. The frontend polls /api/current every 5 seconds for live stats and /api/history every 60 seconds to refresh charts, so no WebSockets are required.
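The per-collector try/except and the background daemon thread described above can be sketched as follows (illustrative names, not the actual code in `pistat/collector.py`):

```python
import threading

def collect_all(collectors):
    """Run every collector; a failing sensor yields None instead of
    dropping the whole sample."""
    sample = {}
    for name, fn in collectors.items():
        try:
            sample[name] = fn()
        except Exception:
            sample[name] = None  # isolate the failure to this one metric
    return sample

def start_collector(collectors, write_sample, interval=10.0):
    """Start a daemon thread that samples every `interval` seconds.
    Returns an Event; set it to stop the loop."""
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            write_sample(collect_all(collectors))
            stop.wait(interval)  # also wakes early once stop is set

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

Using `Event.wait` for the sleep means shutdown is immediate rather than delayed by up to one full interval, and the daemon flag lets the process exit without joining the thread.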
| File | Role |
|---|---|
| `main.py` | Entry point: wires everything together |
| `pistat/config.py` | TOML config loader using dataclasses |
| `pistat/db.py` | SQLite schema, inserts, queries, pruning, metric allowlist |
| `pistat/collector.py` | psutil metrics; background daemon thread; temperature via vcgencmd or sysfs fallback |
| `pistat/server.py` | Flask app factory with all routes |
| `static/index.html` | Single-page UI |
| `static/app.js` | Polling logic, Chart.js charts, color thresholds |
| `static/style.css` | Dark theme, responsive grid |
| `config.toml` | Runtime config (interval, retention, host/port, interface, disk path) |
| `pistat.service` | systemd unit file |
To add a new metric:

- Add a collector function in `pistat/collector.py` wrapped in try/except, and call it from `collect_all()`.
- Add a column to the `SCHEMA` string in `pistat/db.py`, and add the metric name to `METRIC_ALLOWLIST` if it is a scalar value suitable for time-series queries.
- Update `static/app.js` and `static/index.html` to display the new metric card if desired.
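The schema-plus-pruning pattern the steps above refer to can be illustrated with a cut-down sqlite sketch (hypothetical column set; the real `SCHEMA` in `pistat/db.py` has one column per metric):

```python
import sqlite3
import time

SCHEMA = """
CREATE TABLE IF NOT EXISTS samples (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp REAL NOT NULL,
    cpu_percent REAL,
    mem_percent REAL
);
CREATE INDEX IF NOT EXISTS idx_samples_timestamp ON samples (timestamp);
"""

def open_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)  # idempotent thanks to IF NOT EXISTS
    return conn

def prune(conn, retention_days):
    """Delete samples older than the retention window; returns rows removed.
    The timestamp index keeps both this DELETE and history queries cheap."""
    cutoff = time.time() - retention_days * 86400
    cur = conn.execute("DELETE FROM samples WHERE timestamp < ?", (cutoff,))
    conn.commit()
    return cur.rowcount
```

A new metric would then be one more `REAL` column in `SCHEMA` plus its name in the allowlist.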
```bash
pip install pytest
pytest tests/ -v

# With coverage
pip install pytest-cov
pytest tests/ --cov=pistat --cov-report=term-missing
```