███████╗ █████╗ ████████╗██████╗ ███████╗██████╗ ██╗ ██████╗ ██╗ ██╗
██╔════╝██╔══██╗╚══██╔══╝██╔══██╗██╔════╝██╔══██╗██║ ██╔═══██╗╚██╗ ██╔╝
███████╗███████║ ██║ ██║ ██║█████╗ ██████╔╝██║ ██║ ██║ ╚████╔╝
╚════██║██╔══██║ ██║ ██║ ██║██╔══╝ ██╔═══╝ ██║ ██║ ██║ ╚██╔╝
██████╔╝██║ ██║ ██║ ██████╔╝███████╗██║ ███████╗╚██████╔╝ ██║
╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚══════╝╚═╝ ╚══════╝ ╚═════╝ ╚═╝
OTA deployment for embedded Linux satellites. Push files over SSH or CSP, track versions, rollback with one command.
Deploying software to satellite hardware during development means USB drives, ad-hoc scripts, or hoping SSH works. No versioning, no rollback, no way to know what's running. satdeploy fixes this — it works over SSH for networked targets and CSP (CubeSat Space Protocol) over CAN bus for air-gapped ones.
Requires Docker Desktop running (free for personal and educational use).
```sh
pipx install satdeploy   # or: pip install satdeploy
satdeploy demo start     # starts a simulated satellite via Docker
```

This starts a simulated satellite, configures a test app, and prints a quick-start guide. Then:
```sh
satdeploy status                # See what's deployed
satdeploy push test_app         # Deploy a new version
satdeploy list test_app         # See version history
satdeploy rollback test_app     # Roll back to previous
satdeploy logs test_app         # View service logs
satdeploy demo shell            # Shell into the satellite
satdeploy init                  # Generate config for your real target
satdeploy demo stop             # Clean up
```

Docker is only used for the demo simulator. Real deployments use SSH or CSP directly.
```console
$ satdeploy push controller
[1/4] Stopping controller.service
[2/4] Backing up /opt/disco/bin/controller
[3/4] Uploading ./build/controller
[4/4] Starting controller.service
> Deployed controller (e5f6a7b9) main@3c940acf

$ satdeploy status
Target: som1 (192.168.1.50)

  APP         STATUS    HASH      SOURCE         TIMESTAMP
  --------------------------------------------------------------------------
> controller  running   e5f6a7b9  main@3c940acf  2024-01-15 14:35
> csp_server  running   b7e1d2a4  main@ddfa081f  2024-01-15 09:15
- libparam    deployed  c4d5e6f1  main@9c622a2b  2024-01-12 16:23

$ satdeploy list controller
Versions for controller:

  HASH      SOURCE         TIMESTAMP            STATUS
  ---------------------------------------------------------------
> e5f6a7b9  main@3c940acf  2024-01-15 14:35:10  deployed
- a3f2c9b8  main@ddfa081f  2024-01-15 14:30:22  backup
- d2c3b4a5  feat@17ad579b  2024-01-14 09:15:00  backup

$ satdeploy rollback controller
[1/3] Stopping controller.service
[2/3] Restoring a3f2c9b8
[3/3] Starting controller.service
> Rolled back controller to a3f2c9b8
```
Use SSH when your target has network access. You don't need any C components — just the Python CLI.
1. Create a config:
```sh
satdeploy init   # select "ssh", enter your target's IP
```

2. Edit `~/.satdeploy/config.yaml` for your target:
```yaml
name: flatsat
transport: ssh
host: 192.168.1.50
user: root
apps:
  controller:
    local: ./build/controller      # path to your local binary
    remote: /opt/bin/controller    # where it goes on target
    service: controller.service    # systemd service to restart (or null)
```

3. Test with a real file:
```sh
# Create a test file to deploy
echo "hello satellite" > /tmp/test.txt

# Deploy it ad-hoc (no config entry needed)
satdeploy push -f /tmp/test.txt -r /tmp/test.txt

# Check it landed
satdeploy status

# Or deploy a configured app
satdeploy push controller
```

4. See what happened:
```sh
satdeploy list controller       # version history
satdeploy rollback controller   # undo the deploy
satdeploy logs controller       # service logs
```

Use CSP when your target is connected via CAN bus or serial — no network. You need three pieces:
| Piece | Where it runs | How to get it |
|---|---|---|
| Python CLI or CSH APM | Ground station | `pip install satdeploy` or build the APM |
| satdeploy-agent | Target satellite | Yocto recipe or cross-compile |
| CSH | Ground station | Bridges ZMQ ↔ CAN/serial |
1. Start the agent on the target:
```sh
satdeploy-agent -i CAN -p can0          # CAN bus
satdeploy-agent -i KISS -p /dev/ttyS1   # Serial link
satdeploy-agent -i ZMQ -p localhost     # ZMQ (local testing only)
```

2. Create a config on the ground station:
```sh
satdeploy init   # select "csp", enter your node IDs
```

3. Edit `~/.satdeploy/config.yaml`:
```yaml
name: my-satellite
transport: csp
zmq_endpoint: tcp://localhost:9600   # CSH's ZMQ address
agent_node: 55                       # your satellite's CSP node ID
ground_node: 40                      # your ground station's CSP node ID
apps:
  controller:
    local: ./build/controller
    remote: /opt/bin/controller
```

4. Test with a real file:
```sh
# Ad-hoc deploy — no config entry needed
echo "hello satellite" > /tmp/test.txt
satdeploy push -f /tmp/test.txt -r /tmp/test.txt

# Check it arrived
satdeploy status

# Deploy a configured app
satdeploy push controller
```

How the pieces connect:
```
Local testing (ZMQ):
  Python CLI --> zmqproxy --> Agent (-i ZMQ)

Real satellite (CAN bus):
  Python CLI --> CSH --> CAN bus --> Agent (-i CAN)

Serial link (KISS):
  Python CLI --> CSH --> serial --> Agent (-i KISS)
```
zmqproxy is a simple ZMQ forwarder (demo/local only). For real hardware, you need CSH — it bridges between its ZMQ interface (where the CLI connects) and CAN or KISS interfaces (where the satellite lives).
If you use CSH as your ground station, satdeploy provides native slash commands via the APM module. The commands are identical to the Python CLI.
Build and install:
```sh
cd satdeploy-apm
meson setup build
ninja -C build
cp build/libcsh_satdeploy_apm.so ~/.local/lib/csh/
```

Then in CSH, run `apm load` to activate the satdeploy commands.
The APM also adds `-n/--node NUM` to each command for targeting a specific CSP node (defaults to `agent_node` from config).
CSH also acts as the CSP router for CAN and serial links — the Python CLI connects to CSH via ZMQ, and CSH routes to the satellite over CAN or KISS.
- Versioned backups - Every deploy saves the previous file with its content hash
- Git provenance - Every deploy records the git commit that built the file
- Dependency ordering - Services stop/start in the right order
- One-command rollback - Instantly restore any previous version
- Multi-transport - Works over SSH or CSP (satellite links)
- Per-target configs - Separate config dirs per target, switch with `--config`
The Python CLI and CSH APM share the same command interface. Every flag works in both.
```sh
satdeploy push <app>             # Deploy app from config
satdeploy push <app1> <app2>     # Deploy multiple apps
satdeploy push -a / --all        # Deploy all apps from config
satdeploy push -f PATH -r PATH   # Ad-hoc deploy (no config entry needed)
```
| Flag | Description |
|---|---|
| `-f, --local PATH` | Local file path (overrides config) |
| `-r, --remote PATH` | Remote path on target |
| `-F, --force` | Force deploy even if same version |
| `-a, --all` | Deploy all apps from config |
```sh
satdeploy status
satdeploy list <app>
satdeploy rollback <app>           # Roll back to previous version
satdeploy rollback <app> -H HASH   # Roll back to specific version
```
| Flag | Description |
|---|---|
| `-H, --hash HASH` | Specific backup hash to restore |
```sh
satdeploy logs <app>
satdeploy logs <app> -l 50   # Show last 50 lines
```
| Flag | Description |
|---|---|
| `-l, --lines NUM` | Number of lines to show (default: 100) |
```sh
satdeploy config

satdeploy demo start   # Start simulated satellite (Docker)
satdeploy demo stop    # Stop simulator
satdeploy demo shell   # Shell into the satellite
```
```sh
# Bash — add to ~/.bashrc
eval "$(_SATDEPLOY_COMPLETE=bash_source satdeploy)"

# Zsh — add to ~/.zshrc
eval "$(_SATDEPLOY_COMPLETE=zsh_source satdeploy)"
```

All commands also accept:
| Flag | Description |
|---|---|
| `-n, --node NUM` | Target CSP node (overrides `agent_node` from config) |
| `--config PATH` | Config file (default: `~/.satdeploy/config.yaml`) |
Each target gets its own config directory (e.g. `~/.satdeploy/som1/config.yaml`):
```yaml
name: som1
transport: csp
zmq_endpoint: tcp://localhost:9600
agent_node: 5425
ground_node: 40
appsys_node: 10
backup_dir: /opt/satdeploy/backups
max_backups: 10
apps:
  controller:
    local: ./build/controller
    remote: /opt/disco/bin/controller
    service: controller.service
    depends_on: [csp_server]
  csp_server:
    local: ./build/csp_server
    remote: /usr/bin/csp_server
    service: csp_server.service
  libparam:
    local: ./build/libparam.so
    remote: /usr/lib/libparam.so
    service: null
    restart: [csp_server, controller]
```

| Field | Description |
|---|---|
| `local` | Path to local file |
| `remote` | Deployment path on target |
| `service` | systemd service (`null` for libraries) |
| `depends_on` | Services this app depends on |
| `restart` | Services to restart when this library changes |
| `param` | libparam name for CSP start/stop |
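An `apps:` entry can be sanity-checked against the field table before deploying. The sketch below is an assumption about what validation could look like, not satdeploy's actual schema checking; only the field names come from the table above:

```python
REQUIRED = ("local", "remote")
KNOWN = REQUIRED + ("service", "depends_on", "restart", "param")


def validate_app(name, app):
    """Return a list of problems for one 'apps:' entry; empty list means OK."""
    problems = [f"{name}: missing required field '{field}'"
                for field in REQUIRED if field not in app]
    problems += [f"{name}: unknown field '{field}'"
                 for field in app if field not in KNOWN]
    return problems


# The 'controller' entry from the example config above, as a dict:
controller = {
    "local": "./build/controller",
    "remote": "/opt/disco/bin/controller",
    "service": "controller.service",
    "depends_on": ["csp_server"],
}
```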
SSH — Direct SSH/SFTP connection. Works with any Linux target.

```yaml
name: flatsat
transport: ssh
host: 192.168.1.50
user: root
```

CSP — CubeSat Space Protocol over ZMQ, CAN, or KISS serial. Requires satdeploy-agent on target.

```yaml
name: satellite
transport: csp
zmq_endpoint: tcp://localhost:9600
agent_node: 5425
ground_node: 40
```

When deploying an app with dependencies:
1. Stop services top-down (dependents first)
2. Deploy the file
3. Start services bottom-up (dependencies first)
For libraries with a restart list, those services are restarted directly.
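The stop/start ordering described above amounts to a topological sort over `depends_on`. A minimal sketch of that idea (a hypothetical model of the behaviour, not the actual implementation; it omits cycle detection, which a real tool would need):

```python
def start_order(depends_on):
    """Return apps with dependencies first (bottom-up start order).

    `depends_on` maps each app name to the list of apps it depends on.
    Reverse the result to get the top-down stop order.
    """
    order, seen = [], set()

    def visit(app):
        if app in seen:
            return
        seen.add(app)
        for dep in depends_on.get(app, []):  # start dependencies first
            visit(dep)
        order.append(app)

    for app in depends_on:
        visit(app)
    return order
```

With the example config's dependency graph, `start_order({"controller": ["csp_server"], "csp_server": []})` yields `["csp_server", "controller"]`, and reversing it gives the stop order.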
For contributors or development:
```sh
git clone --recursive https://github.com/MahmoodSeoud/satBuild.git
cd satBuild
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
python -m pytest
```

If you already cloned without `--recursive`, pull the submodules with:

```sh
git submodule update --init --recursive
```

| Component | Language | Runs on | Purpose |
|---|---|---|---|
| `satdeploy` | Python | Ground station | CLI — architecture-independent |
| `satdeploy-agent` | C | Target | Handles CSP deploy commands via libcsp — must be cross-compiled for the target architecture |
| `satdeploy-apm` | C | Ground station | Slash commands for CSH — compiled natively |
The agent runs on the target and is required for CSP transport. Two options:
Option A: Yocto recipe (recommended) — add meta-satdeploy to your Yocto build:
```sh
bitbake-layers add-layer /path/to/meta-satdeploy

# In local.conf:
IMAGE_INSTALL:append = " satdeploy-agent"
```

See `meta-satdeploy/` for details.
Option B: Manual cross-compile
System dependencies (Ubuntu/Debian — your Yocto SDK sysroot may already have these):
```sh
sudo apt install build-essential pkg-config meson ninja-build \
    libzmq3-dev libsocketcan-dev libyaml-dev libbsd-dev \
    libprotobuf-c-dev libssl-dev
```

Build (assumes you cloned with `--recursive` — see Install from Source):
```sh
source /opt/poky/environment-setup-armv8a-poky-linux
cd satdeploy-agent
meson setup build-arm --cross-file yocto_cross.ini
ninja -C build-arm
# Output: build-arm/satdeploy-agent
```

For other toolchains, point meson at your own cross-compilation file and build normally.
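For reference, a meson cross file for an aarch64 target typically looks like the sketch below. The binary names and CPU values here are illustrative assumptions, not the contents of the repo's `yocto_cross.ini`; take the real values from your SDK environment:

```ini
[binaries]
c = 'aarch64-poky-linux-gcc'
cpp = 'aarch64-poky-linux-g++'
ar = 'aarch64-poky-linux-ar'
strip = 'aarch64-poky-linux-strip'
pkg-config = 'pkg-config'

[host_machine]
system = 'linux'
cpu_family = 'aarch64'
cpu = 'cortex-a53'
endian = 'little'
```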
CSH ground station module. Compiled natively on the ground station (not cross-compiled):
```sh
# System dependencies (Ubuntu/Debian):
sudo apt install build-essential pkg-config meson ninja-build \
    libzmq3-dev libsocketcan-dev libbsd-dev

cd satdeploy-apm
meson setup build
ninja -C build
cp build/libcsh_satdeploy_apm.so ~/.local/lib/csh/
```

Note: libyaml, protobuf-c, and sqlite3 are bundled automatically via meson wraps — no system packages needed. OpenSSL is not required (SHA256 is built-in).
- Python 3.8+
- Docker (for demo mode only)
- SSH access to target (for SSH transport)
- `satdeploy-agent` on target (for CSP transport)
- CSH on ground station (for CAN/KISS transport — bridges ZMQ to physical bus)
- systemd on target
MIT