The default deployment path now runs nginx inside Docker Compose using
`nginx/docker.conf`. The legacy host-nginx example in `nginx/ela.conf` is still
kept as a reference for non-container deployments, but `nginx/install.sh` now
installs and starts the full Dockerized stack instead of copying a host nginx
site file.
```
Internet
 ├── HTTP  / WS  (port 80)  ──►  nginx container
 └── HTTPS / WSS (port 443) ──►   ├── /terminal/<mac>  ──► terminal-api:8080
                                  ├── /gdb/<in|out>/*  ──► gdb-api:9000
                                  └── /*               ──► agent-api:5000
```
`/terminal/<mac>` is routed to the WebSocket terminal server. `/gdb/` is routed
to the GDB bridge WebSocket server. Everything else is passed to the agent API
at its native web root.
Both HTTP (port 80) and HTTPS (port 443) are served independently — plain HTTP is never redirected to HTTPS so that agents on networks without TLS can still connect.
- Docker Engine with Compose support
- Access to the Docker daemon
- Free listeners on TCP ports 80 and 443
- `openssl` on the host (only needed when not supplying your own cert)
- `docker-proxy` with `cap_net_bind_service` (see below)
On Debian/Ubuntu systems you can install the container runtime and Compose plugins with:

```shell
sudo apt-get update
sudo apt-get install -y docker.io docker-buildx docker-compose-v2
```

Docker binds host ports through `docker-proxy`, which must hold
`cap_net_bind_service` to use ports below 1024 (80 and 443).
`install.sh` applies this automatically, but you can set or verify it
manually:

```shell
# Grant the capability
sudo setcap cap_net_bind_service=+ep $(command -v docker-proxy)

# Verify
getcap $(command -v docker-proxy)
# → /usr/bin/docker-proxy cap_net_bind_service=+ep
```

The capability must be re-applied after Docker is upgraded, since package upgrades replace the binary.
```shell
./nginx/install.sh ela.example.com
```

If `--cert` and `--key` are not supplied, a 10-year self-signed certificate is
generated automatically and stored in `nginx/ssl/` (gitignored). On subsequent
runs the existing cert is reused; delete `nginx/ssl/` to force regeneration.
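The auto-generated certificate can be approximated with a one-shot `openssl` invocation along these lines; the key size, subject, and filenames under `nginx/ssl/` are assumptions for illustration, not the installer's verbatim command:

```shell
# Hypothetical approximation of the installer's self-signed cert step.
# Key size, subject, and file names are assumptions.
mkdir -p nginx/ssl
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=ela.example.com" \
  -keyout nginx/ssl/ela.key -out nginx/ssl/ela.crt
```

With `-days 3650` the certificate is valid for roughly ten years, matching the behaviour described above.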
To supply your own certificate:

```shell
./nginx/install.sh ela.example.com --cert /path/to/ela.crt --key /path/to/ela.key
```

Common options:

```shell
./nginx/install.sh ela.example.com --env-file /path/to/ela.env
./nginx/install.sh ela.example.com --no-build
./nginx/install.sh ela.example.com --compile-locally
./nginx/install.sh ela.example.com --compile-locally --jobs 8
./nginx/install.sh ela.example.com --foreground
```
The installer runs `docker compose up` against the repository's
`docker-compose.yml` and starts:

- postgres
- agent-api
- terminal-api
- gdb-api
- nginx
It also generates a temporary nginx config for Compose by replacing
`example.com` in `nginx/docker.conf` with the `HOSTNAME` argument using `sed`.
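That substitution step can be sketched as follows; the temp-file paths and the exact `sed` invocation are illustrative, not the installer's verbatim code:

```shell
# Hypothetical sketch of the hostname substitution (paths are illustrative).
HOSTNAME="ela.example.com"
printf 'server_name example.com;\n' > /tmp/docker.conf.in   # stand-in for nginx/docker.conf
sed "s/example\.com/${HOSTNAME}/g" /tmp/docker.conf.in > /tmp/docker.conf.rendered
cat /tmp/docker.conf.rendered
# → server_name ela.example.com;
```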
If you want to build the release binaries locally instead of downloading them from GitHub Releases during agent API startup, pass:

```shell
./nginx/install.sh ela.example.com --compile-locally
```

To control the compiler parallelism:

```shell
./nginx/install.sh ela.example.com --compile-locally --jobs 8
```

This does the following:
- builds a local Docker builder image from `tests/release-builder.Dockerfile`
- runs `tests/compile_release_binaries_locally.sh` inside that container
- writes the compiled artifacts into `$ELA_DATA_DIR/release_binaries`
- sets `ELA_AGENT_SKIP_ASSET_SYNC=true` for the `agent-api` container so the GitHub fetch is disabled
- passes `--jobs <n>` through to the local release build script when requested
`--jobs` is only valid together with `--compile-locally`.

If `--compile-locally` is not set, the installer leaves release asset sync
enabled and the `agent-api` container fetches binaries from GitHub Releases on
startup.
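For reference, the effect of `--compile-locally` on the `agent-api` service is equivalent to a Compose override along these lines; only the `ELA_AGENT_SKIP_ASSET_SYNC` variable comes from this document, while the override structure is an assumed sketch rather than the installer's actual mechanism:

```yaml
# Hypothetical compose override; only ELA_AGENT_SKIP_ASSET_SYNC is documented.
services:
  agent-api:
    environment:
      ELA_AGENT_SKIP_ASSET_SYNC: "true"
```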
All agent API routes are served at the web root:
```
GET  http://localhost/                                     → agent-api:5000/
GET  http://localhost/tests/agent/shell/download_tests.sh  → agent-api:5000/tests/agent/shell/download_tests.sh
POST http://localhost/upload                               → agent-api:5000/upload
```

`client_max_body_size` remains `200m` in the bundled Docker nginx config.
The agent connects using the public URL:

```shell
./embedded_linux_audit transfer --remote ws://localhost
```

The agent always appends `/terminal/<mac>` to the base URL before connecting.
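The resulting endpoint can be sketched as a simple string join; whether the MAC's separators are stripped is an assumption, since the document only specifies the `/terminal/<mac>` path shape:

```shell
# Hypothetical sketch of the agent's endpoint derivation.
# Stripping the colons from the MAC is an assumption.
BASE="ws://localhost"
MAC="aa:bb:cc:dd:ee:ff"
URL="${BASE}/terminal/$(printf '%s' "$MAC" | tr -d ':')"
echo "$URL"
# → ws://localhost/terminal/aabbccddeeff
```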
The GDB bridge (gdb-api) relays GDB Remote Serial Protocol frames between the
embedded agent and a gdb-multiarch client over WebSocket. It is used by the
linux gdbserver tunnel agent subcommand.
Two URL paths are served, both requiring a valid Authorization: Bearer header:
| Path | Used by | Description |
|---|---|---|
| `/gdb/in/<32-hex-key>` | embedded agent | Agent connects here; sends RSP frames from gdbserver |
| `/gdb/out/<32-hex-key>` | `gdb-multiarch` | Analyst's GDB connects here; receives/sends RSP frames |
The <32-hex-key> is a 128-bit random session identifier generated by the agent
at tunnel startup (printed to stderr). Both sides must use the same key; the
bridge forwards binary frames bidirectionally between them.
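A 128-bit key in that format can be produced with standard tooling; this is only an illustration of the key's shape, and the agent's own generator may differ:

```shell
# Hypothetical illustration of the 32-hex-char session key format.
KEY=$(openssl rand -hex 16)   # 16 random bytes encoded as 32 hex chars
printf '%s\n' "$KEY" | grep -Eq '^[0-9a-f]{32}$' && echo "key shape ok"
```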
The nginx proxy sets `proxy_read_timeout 3600s` and `proxy_send_timeout 3600s`
on `/gdb/` to keep long-running debug sessions alive.
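A `location` block matching that description might look like the sketch below; the upstream name and surrounding directives are assumptions inferred from the architecture diagram above, not a verbatim copy of `nginx/docker.conf`:

```nginx
# Hypothetical sketch; see nginx/docker.conf for the real config.
location /gdb/ {
    proxy_pass http://gdb-api:9000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;        # WebSocket upgrade
    proxy_set_header Connection "upgrade";
    proxy_set_header Authorization $http_authorization;
    proxy_read_timeout 3600s;                      # keep long debug sessions alive
    proxy_send_timeout 3600s;
}
```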
See gdbserver tunnel
for the full workflow including wss-remote setup.
```nginx
proxy_set_header Authorization $http_authorization;
```

This passes the `Authorization: Bearer <token>` header from the agent through
to both backends. Each backend validates it independently against its own
`ela.key`. See API Key Authentication — server side.
If you need a non-container deployment, nginx/ela.conf is still available as
a reference reverse-proxy config for host-installed nginx in front of
host-installed agent and terminal services.