pyCluster is a modern DX cluster core written in Python.
It keeps the familiar telnet-style operator experience, adds a public web UI and a System Operator web console, and remains compatible with legacy cluster ecosystems such as DXSpider-family node links.
- public web UI: https://pycluster.ai3i.net
- public telnet listeners:
  - pycluster.ai3i.net:7300
  - pycluster.ai3i.net:7373
  - pycluster.ai3i.net:8000
- Telnet-first DX cluster workflow with modernized operator output
- Public web UI for users and a dedicated web console for system operators
- SQLite persistence, CTY refresh tooling, and fail2ban integration
- Validated deploy path across modern Debian, Ubuntu, Fedora, and Red Hat-family Linux
- serves DX-style telnet access for users and operators
- provides a public web UI for viewing and posting cluster traffic
- provides a System Operator web console for runtime, protocol, user, and peer management
- stores spots, messages, and user preferences in SQLite
- supports node linking with profile-aware behavior for legacy cluster families
- ships with deployment tooling for systemd-based Linux hosts
- integrates with fail2ban for login-abuse protection
- supports age-based cleanup for spots, messages, and bulletins
- maintains local CTY data with optional automatic refresh from Country Files
pyCluster is not just trying to mimic old command names. It is trying to keep the parts of legacy cluster software that matter while improving the parts that usually feel neglected.
Key improvements:
- cleaner telnet output and more human-readable replies
- explicit operator command namespace with `sysop/*`
- public web UI for normal users
- System Operator web console for runtime and policy management
- clearer link and protocol visibility
- per-user access matrix for telnet and web
- integrated audit and security visibility
- structured auth-failure logging with fail2ban support
- age-based retention controls with daily cleanup
- bundled and refreshable CTY data instead of relying on stale host copies
- Linux-first deployment with systemd tooling
pyCluster is usable today as a single-node cluster with web and telnet access, persistent storage, peer linking, and operator controls. The codebase is still evolving, but it is no longer just a prototype.
Primary human and compatibility interface.
- user prompt: `N0CALL-1>`
- sysop prompt: `N0CALL-1#`
- DX-style command surface with `show/*`, `set/*`, `unset/*`, aliases, and `sysop/*`
User-facing browser interface.
- spot list and filters
- cluster view
- watch lists and recent matches
- operate tab for login and posting
- profile editing for normal users
Operator-facing browser console.
- node presentation and MOTD
- user and access management
- peer and link management
- protocol health and policy drops
- audit and security views
Get the code with SSH:

```sh
git clone git@github.com:AI3I/pyCluster.git
cd pyCluster
```

Or with HTTPS:

```sh
git clone https://github.com/AI3I/pyCluster.git
cd pyCluster
```

Update an existing checkout:

```sh
git pull --ff-only
```

Run locally for development:

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
pycluster --config ./config/pycluster.toml serve
```

Deploy on a supported Linux host:

```sh
sudo ./deploy/install.sh
sudo ./deploy/doctor.sh
```

Upgrade an existing deployment:

```sh
git pull --ff-only
sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh
```

Default listeners:
- telnet: 0.0.0.0:7300
- sysop web: 127.0.0.1:8080
- public web: 127.0.0.1:8081
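These defaults typically come from the TOML config file passed with `--config`. The fragment below only illustrates the general shape of such a file; the key names are hypothetical, so consult the shipped `config/pycluster.toml` for the real schema:

```toml
# Hypothetical layout -- see the shipped config/pycluster.toml for real keys.
[telnet]
bind = "0.0.0.0"
port = 7300

[web.sysop]
bind = "127.0.0.1"
port = 8080

[web.public]
bind = "127.0.0.1"
port = 8081
```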
Production deployment is handled through the checked-in deploy/ scripts and systemd units.
Validated deployment targets:
- Debian 12 and 13
- Ubuntu 24.04 LTS and 25.10
- Fedora 42 and 43 with SELinux enforcing
- CentOS Stream 9 and 10 with SELinux enforcing
- AlmaLinux 8, 9, and 10 with SELinux enforcing
- Rocky Linux 8, 9, and 10 with SELinux enforcing
Deployment notes:
- `install.sh`, `upgrade.sh`, `repair.sh`, and `uninstall.sh` have been validated on the distributions above
- Fedora, CentOS Stream, AlmaLinux, and Rocky Linux installs on very small 1 GB hosts may require temporary swap during package installation; the deploy scripts now handle that automatically
- RHEL support is expected to track the validated Fedora, CentOS Stream, AlmaLinux, and Rocky Linux path, but has not yet been tested on a subscription-backed Red Hat host
- Oracle Linux is likely to work as a Red Hat-family target, but has not yet been directly validated
- Raspberry Pi OS / Raspbian is not yet validated, though 64-bit Debian- or Ubuntu-style images are the most likely to work cleanly
- Older baselines should not be attempted:
  - Debian 11
  - Ubuntu 22.04 LTS
  - CentOS 7 / RHEL 7 / Oracle Linux 7 and below
- pyCluster requires Python 3.11+, so older distro baselines without a current Python runtime are out of scope for the supported deployment path
Typical install:

```sh
sudo ./deploy/install.sh
sudo ./deploy/doctor.sh
```

Initial System Operator web access uses the SYSOP account. The generated 16-character bootstrap password is written to `/root/pycluster-initial-sysop.txt`.
Typical upgrade:

```sh
sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh
```

Installed services:

- `pycluster.service`
- `pyclusterweb.service`
- `pycluster-cty-refresh.timer`
- `pycluster-retention.timer`
Minimum practical deployment:
- 1 vCPU
- 1 GB RAM
- 10 GB storage
- persistent network connectivity
Recommended small production node:
- 2 vCPU
- 2 GB RAM
- 20 GB SSD-backed storage
Notes:
- SQLite works well at this scale
- reverse proxy, fail2ban, and package upgrades are more comfortable with 2 GB RAM
- very small Fedora or Red Hat-family hosts may temporarily need swap during package operations
pyCluster supports:
- local callsign blocking
- per-user access controls for telnet and web
- structured auth-failure logging
- shipped `fail2ban` filters and jails
- imported exact-IP blocks from DXSpider `badip.local`
- sysop visibility for recent auth failures and current bans
Auth-failure log retention:
- shipped logrotate policy for `/var/log/pycluster/authfail.log`
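For reference, a typical logrotate stanza for this file might look like the following; the shipped policy's actual rotation frequency and retention counts may differ:

```
# Illustrative logrotate stanza; the shipped policy may use different values.
/var/log/pycluster/authfail.log {
    weekly
    rotate 8
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}
```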
pyCluster ships with a bundled cty.dat, and install/upgrade perform a best-effort refresh from Country Files.
Manual refresh:

```sh
python3 ./scripts/update_cty.py --config ./config/pycluster.toml
```

Automatic refresh:

- `pycluster-cty-refresh.timer`
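Country Files' `cty.dat` uses colon-delimited records: a header line carrying country metadata, followed by a comma-separated prefix list terminated by `;`. A minimal parsing sketch of that record format (illustrative only, not pyCluster's actual loader):

```python
def parse_cty_record(record: str) -> dict:
    """Parse one cty.dat record: a header line plus a prefix list ending in ';'."""
    head, _, tail = record.partition("\n")
    fields = [f.strip() for f in head.split(":")]
    name, cq, itu, cont, lat, lon, tz, prefix = fields[:8]
    # Continuation lines list alias prefixes, comma-separated, ';'-terminated.
    aliases = [p.strip() for p in tail.replace("\n", "").rstrip(";").split(",")]
    return {
        "country": name,
        "cq_zone": int(cq),
        "itu_zone": int(itu),
        "continent": cont,
        "lat": float(lat),
        "lon": float(lon),
        "utc_offset": float(tz),
        "primary_prefix": prefix,
        "prefixes": aliases,
    }
```

A full loader would also handle the `=`-prefixed exact-callsign entries and embedded zone overrides that real `cty.dat` files contain.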
pyCluster can automatically prune older operational data.
- spots, messages, and bulletins can be retained for configurable day counts
- the System Operator web UI exposes:
  - `Enable age-based cleanup`
  - per-category day values
  - `Run Cleanup Now`
- scheduled cleanup runs daily through `pycluster-retention.timer`
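Age-based cleanup of this kind reduces to timestamped `DELETE` statements against SQLite. A minimal sketch, assuming illustrative table and column names rather than pyCluster's real schema:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Illustrative only: table and column names are assumptions, not pyCluster's schema.
def cleanup(conn: sqlite3.Connection, retention_days: dict[str, int]) -> dict[str, int]:
    """Delete rows older than each table's retention window; return deleted counts."""
    now = datetime.now(timezone.utc)
    deleted = {}
    for table, days in retention_days.items():
        # ISO-8601 timestamps in a uniform format sort chronologically as text,
        # so a plain string comparison selects the expired rows.
        cutoff = (now - timedelta(days=days)).isoformat()
        cur = conn.execute(f"DELETE FROM {table} WHERE created_at < ?", (cutoff,))
        deleted[table] = cur.rowcount
    conn.commit()
    return deleted
```

Running this once a day from a systemd timer gives the same shape of behavior as the per-category day values described above.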
- User Manual
- Administration Manual
- Installation
- Migration
- Configuration
- Feature Highlights
- Telnet Commands
- Telnet Command Reference
- System Operator Web
- Public Web UI
- Node Linking
- Security
- Operations
- Architecture
- Roadmap
- Project History
pyCluster is created and led by John D. Lewis, AI3I, with help from ChatGPT, OpenAI Codex, and Anthropic Claude.
Special thanks for advice, assistance, consideration, and testing:
- Eric Tichansky, NO3M
- Howard Leadmon, WB3FFV
- Joe Reed, N9JR
See CONTRIBUTING.md.
See CHANGELOG.md.