- Linux
- Python 3.11+
- systemd for the supported deployment path
Recommended:
- reverse proxy for public exposure
- fast local storage for SQLite
- fail2ban
The deploy scripts have been validated on:
- Debian 12 and 13
- Ubuntu 24.04 LTS and 25.10
- Fedora 42 and 43 with SELinux enforcing
- CentOS Stream 9 and 10 with SELinux enforcing
- AlmaLinux 8, 9, and 10 with SELinux enforcing
- Rocky Linux 8, 9, and 10 with SELinux enforcing
Not yet directly validated:
- Raspberry Pi OS / Raspbian
- Red Hat Enterprise Linux
- Oracle Linux
Support guidance:
- RHEL-family support is strongly indicated by the validated CentOS Stream, AlmaLinux, and Rocky Linux paths
- Red Hat Enterprise Linux should be described as expected to work on 9/10-class systems, but not yet directly tested
- Oracle Linux is likely to work as an EL-family target, but it should stay in the unvalidated bucket until it is tested directly
- Raspberry Pi OS / Raspbian is plausible on 64-bit Debian-family images, but should not be claimed as tested yet
Do not target older distro baselines for the supported deployment path:
- Debian 11
- Ubuntu 22.04 LTS
- CentOS 7
- Red Hat Enterprise Linux 7 and below
- Oracle Linux 7 and below
Reason:
- pyCluster requires Python 3.11+
- older distro baselines are too old for the current dependency/runtime requirements
Minimum practical node:
- 1 vCPU
- 1 GB RAM
- 10 GB disk
Recommended small production node:
- 2 vCPU
- 2 GB RAM
- 20 GB SSD-backed disk
Additional notes:
- 1 GB RAM works, but leaves less headroom during package operations and service restarts
- EL-family hosts with 1 GB RAM may require temporary swap during package installation; the deploy scripts handle that automatically
- if you plan to run a reverse proxy, TLS, fail2ban, and longer spot retention on the same host, prefer 2 GB RAM or better
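As a rough illustration of that headroom concern, a check like the following can flag hosts where package operations will be tight; the 1.5 GB threshold is an assumption for illustration, not the deploy scripts' actual logic:

```shell
# Heuristic sketch only; the deploy scripts perform their own equivalent
# check on EL-family hosts and add temporary swap automatically.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -lt 1500000 ]; then
    echo "Low RAM (${mem_kb} kB): expect tight headroom during package operations"
else
    echo "RAM headroom looks adequate (${mem_kb} kB)"
fi
```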
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
Run the core service:
pycluster --config ./config/pycluster.toml serve
If you need machine-local settings, create ./config/pycluster.local.toml. pyCluster automatically loads it after the base pycluster.toml, so local edits do not need to live in the tracked repo file.
From the repo root:
cd /usr/src
git clone git@github.com:AI3I/pyCluster.git
cd pyCluster
sudo ./deploy/install.sh
sudo ./deploy/doctor.sh
Recommended layout for a host-level install:
- checkout under /usr/src/pyCluster
- let deploy/install.sh create the pycluster system user and group automatically
- use a clean standalone Linux host dedicated to pyCluster rather than trying to combine it with unrelated products or pre-existing service bundles
There is no separate pre-install account creation step for the operator to perform.
Interactive installs now also offer to run nginx setup before finishing. That prompt asks for:
- the public hostname for the user web UI
- an optional separate hostname for the sysop web UI
- whether nginx should publish ports 80 and 443
- whether to use Let's Encrypt or self-signed TLS
- the email address required by Let's Encrypt
If you skip nginx setup, pyCluster still installs and starts cleanly, but:
- the sysop web UI stays on 127.0.0.1:8080
- the public web UI stays on 127.0.0.1:8081
- those listeners are local-only until you publish them with nginx or another reverse proxy
Skipping nginx setup does not change the supported deployment model. The intended path is still a pyCluster-owned nginx configuration on the pyCluster host, not grafting pyCluster into some unrelated existing reverse-proxy layout later.
deploy/setup-nginx.sh is the supported out-of-the-gate path for wiring nginx to host ports 80 and 443, and it now fails fast if another non-nginx service already owns those ports.
This installs:
- application tree under /home/pycluster/pyCluster
- pycluster.service
- pyclusterweb.service
- pycluster-cty-refresh.timer - keeps CTY.DAT and wpxloc.raw current from the Country Files refresh path
- pycluster-retention.timer
- fail2ban filters and jails for pyCluster auth failures
- logrotate policy for /var/log/pycluster/authfail.log
- an initial SYSOP account bootstrap note at /root/pycluster-initial-sysop.txt
During install and repair, pyCluster now prints the bootstrap SYSOP credentials prominently in the terminal, saves the same note to /root/pycluster-initial-sysop.txt, and pauses interactive installs until the operator explicitly acknowledges that the credentials were reviewed.
sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh
The supported scripted upgrade path covers 1.0.0 and later. deploy/upgrade.sh runs the cumulative migration chain required by older installs before services restart. The current chain includes:
- run_upgrade_1_0_1
  - hash legacy plaintext user passwords
  - seed config/strings.toml when it is missing
- run_upgrade_1_0_6
  - move any embedded outbound peer password= values out of DSNs and into the separate peer-password preference path used by current pyCluster
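For orientation only, the 1.0.6-style DSN cleanup can be sketched as a standalone shell transform. The semicolon-delimited DSN shape below is an assumption, and real upgrades should always go through deploy/upgrade.sh rather than hand-editing DSNs:

```shell
# Split an embedded password= field out of a peer DSN so it can be stored
# on the separate peer-password preference path.
dsn="host=peer.example.net;port=7300;password=s3cret"   # illustrative DSN only
peer_password=$(printf '%s\n' "$dsn" | tr ';' '\n' | sed -n 's/^password=//p')
clean_dsn=$(printf '%s\n' "$dsn" | tr ';' '\n' | grep -v '^password=' | paste -sd ';' -)
echo "$clean_dsn"   # host=peer.example.net;port=7300
```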
deploy/upgrade.sh, deploy/repair.sh, and deploy/uninstall.sh also create timestamped runtime backups under /root/pycluster-backups/ before making destructive changes to the live tree.
Recommended before future upgrades:
- keep config/pycluster.toml close to upstream defaults
- move host-specific changes into config/pycluster.local.toml
- leave config/pycluster.local.toml untracked so git pull --ff-only stays clean
That local override file is also the right place for host-specific secrets such as QRZ credentials, SMTP credentials, and any node identity or listener changes that should survive repo updates.
SMTP should also be treated as part of the pyCluster host configuration. The supported path is to configure the mail settings pyCluster needs for its own delivery behavior, not to assume pyCluster is meant to be bolted onto some unrelated pre-existing mail stack without dedicated setup.
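A host-local override might then look like the sketch below. Every key name here is illustrative only, not a confirmed pyCluster setting; consult the shipped config/pycluster.toml for the real schema:

```toml
# config/pycluster.local.toml - untracked, host-specific (illustrative keys)
[node]
call = "N0CALL-2"            # node identity override (assumed key)

[smtp]
host = "smtp.example.net"    # outbound mail relay (assumed keys)
user = "cluster@example.net"
password = "change-me"

[qrz]
username = "N0CALL"          # QRZ lookup credentials (assumed keys)
password = "change-me"
```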
sudo ./deploy/repair.sh
Keep config and data:
sudo ./deploy/uninstall.sh
After pyCluster is installed, the first migration pass from DXSpider is available through:
sudo ./deploy/migrate.sh --from-dxspider /spider --dry-run
sudo ./deploy/migrate.sh --from-dxspider /spider
See Migration for details and current scope.
Current migration behavior also includes:
- simple outbound DXSpider peer import from connect/*
- exact badip.local IP entries exported to config/fail2ban-badip.local
- reconciliation of imported exact IPs into the active pyCluster fail2ban block set
- unsupported connect scripts and CIDR-style badip.local entries are reported, not guessed
Remove config and data too:
sudo KEEP_CONFIG=0 KEEP_DATA=0 ./deploy/uninstall.sh
systemctl status pycluster.service pyclusterweb.service
sudo ./deploy/doctor.sh
If the first install created the bootstrap account successfully, you should also see:
sudo ls -l /root/pycluster-initial-sysop.txt
That file contains the one-time generated SYSOP password for first web-based operator login.
If the install is interactive, the deploy script now stops and requires READ confirmation before it continues past the bootstrap credential notice.
Normal operator recovery paths:
- System Console: use Reset MFA on the user record
- Telnet sysop command: sysop/clearmfa <call>
Both paths:
- force mfa_email_otp=off for the principal/base callsign
- clear outstanding OTP challenges for that callsign and its SSIDs
- leave the password unchanged
- write an audit record
If every sysop is locked out, recover locally on the host:
cd /home/pycluster/pyCluster
sqlite3 data/pycluster.db "
DELETE FROM mfa_challenges;
DELETE FROM user_prefs WHERE call='AI3I' AND pref_key='mfa_email_otp';
INSERT INTO user_prefs(call,pref_key,pref_value,updated_epoch)
VALUES('AI3I','mfa_email_otp','off',strftime('%s','now'))
ON CONFLICT(call,pref_key) DO UPDATE SET
pref_value=excluded.pref_value,
updated_epoch=excluded.updated_epoch;
"
systemctl restart pycluster.service pyclusterweb.service
Replace AI3I with the principal/base sysop callsign you are recovering. If needed, also reset the password through the existing bootstrap or direct SQLite recovery path.
Ordinary-user registration now tracks:
- registration state
- verified email status
- grace logins remaining before lockout
Normal operator recovery paths:
- System Console user editor: Send Verification, Mark Verified, Unlock Account
If an account is locked or stuck pending verification and you need to recover it locally on the host:
cd /home/pycluster/pyCluster
sqlite3 data/pycluster.db "
DELETE FROM mfa_challenges;
DELETE FROM user_prefs WHERE call='AI3I' AND pref_key IN ('email_verified_epoch','registration_state','grace_logins_remaining');
INSERT INTO user_prefs(call,pref_key,pref_value,updated_epoch)
VALUES('AI3I','registration_state','pending',strftime('%s','now'))
ON CONFLICT(call,pref_key) DO UPDATE SET
pref_value=excluded.pref_value,
updated_epoch=excluded.updated_epoch;
INSERT INTO user_prefs(call,pref_key,pref_value,updated_epoch)
VALUES('AI3I','grace_logins_remaining','5',strftime('%s','now'))
ON CONFLICT(call,pref_key) DO UPDATE SET
pref_value=excluded.pref_value,
updated_epoch=excluded.updated_epoch;
"
systemctl restart pycluster.service pyclusterweb.service
Replace AI3I with the principal/base callsign and adjust 5 to match your configured initial_grace_logins policy if needed.
pyCluster supports scheduled age-based cleanup for:
- spots
- messages
- bulletins
The scheduler is installed as:
pycluster-retention.timer
You can manage retention from the System Operator web UI or run cleanup manually through the UI action.
The deploy scripts install logrotate coverage for:
/var/log/pycluster/authfail.log
That policy rotates weekly, keeps compressed history, and prevents the auth-failure log from growing without bound on long-running nodes.
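The installed policy has roughly the following shape; the rotation count and exact directives here are illustrative, and the deploy scripts own the authoritative file:

```
/var/log/pycluster/authfail.log {
    weekly
    rotate 8
    compress
    delaycompress
    missingok
    notifempty
}
```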
- telnet: 7300
- sysop web: 8080
- public web: 8081
Default bind behavior:
- telnet listens publicly unless you change telnet.host
- the sysop web service listens on 127.0.0.1:8080
- the public web service listens on 127.0.0.1:8081
That localhost binding is intentional. A fresh install is not meant to expose the web UI directly until you finish reverse-proxy setup. That is part of the standalone deployment model: pyCluster expects you to complete its own reverse-proxy setup cleanly, not to co-mingle it into an unrelated existing web stack and assume equivalent behavior.
The supported reverse-proxy path is:
sudo ./deploy/setup-nginx.sh
You can also let deploy/install.sh call that for you interactively during first install.
That script is intended to claim 80/443 for the pyCluster nginx deployment path on the host. If another non-nginx service is already bound there, it stops with a clear error instead of silently fighting the existing web stack.
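For orientation, a reverse-proxy server block equivalent to what the script sets up might look like this; the server name, certificate paths, and exact directives are illustrative, and the script-generated files are authoritative:

```nginx
# Illustrative shape only; deploy/setup-nginx.sh writes the real config.
server {
    listen 443 ssl;
    server_name cluster.example.net;

    ssl_certificate     /etc/letsencrypt/live/cluster.example.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cluster.example.net/privkey.pem;

    location / {
        # forward to the local-only public web UI listener
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```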
Typical nginx setup choices:
- --public-host cluster.example.net
- optional --sysop-host sysop.example.net
- --tls-mode none
- --tls-mode self-signed
- --tls-mode letsencrypt --email admin@example.net
Serial/KISS support:
pip install pyserial