# feat: Docker Compose + CI/CD pipeline for staging & production
## Summary
Set up the foundational infrastructure for running the Faculytics API on the Hostinger VPS, including Docker Compose configurations for both environments and an automated GitHub Actions deployment pipeline.
## Background
The API previously had no containerised deployment setup. The full data layer (Postgres + Redis) runs self-hosted in Docker on the VPS. Postgres uses the `pgvector/pgvector:pg16` image to support vector similarity search. ML inference remains on RunPod Serverless endpoints.
## What this issue covers
### Docker Compose (staging + production)
- `docker-compose.staging.yml` — API on port `3001`, Postgres (pgvector), Redis capped at 256 MB
- `docker-compose.prod.yml` — API on port `3000`, Postgres (pgvector), Redis capped at 512 MB
- Both use `allkeys-lru` eviction + `appendonly yes` / `appendfsync everysec` for Redis durability
- Postgres and Redis are isolated to internal Docker bridge networks (not exposed to the host)
- API `depends_on` Postgres with a `service_healthy` condition (`pg_isready` healthcheck) — prevents startup race conditions
- Images pulled from GHCR
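A minimal sketch of what `docker-compose.staging.yml` could look like under these constraints. Service names, volume names, the API's internal container port, and the Redis image tag are illustrative assumptions, not the actual file:

```yaml
services:
  api:
    image: ghcr.io/ctrlaltelite-devs/api.faculytics:staging
    ports:
      - "3001:3000"               # assumption: the API container listens on 3000
    env_file: .env.staging
    depends_on:
      postgres:
        condition: service_healthy
    networks: [internal]

  postgres:
    image: pgvector/pgvector:pg16
    env_file: .env.staging
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 10
    networks: [internal]          # no ports: mapping, so not reachable from the host

  redis:
    image: redis:7-alpine         # assumption: exact Redis image not specified in the issue
    command: >
      redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
      --appendonly yes --appendfsync everysec
    volumes:
      - redisdata:/data
    networks: [internal]

networks:
  internal:
    driver: bridge

volumes:
  pgdata:
  redisdata:
```

The production file would differ only in the host port (`3000`), the image tag (`:latest`), the env file, and the Redis memory cap (`512mb`).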
### CI/CD — `.github/workflows/deploy.yml`
- Triggers on push to `main` (production) and `staging` (staging)
- Builds and pushes the API image to GHCR:
  - `main` → `ghcr.io/ctrlaltelite-devs/api.faculytics:latest`
  - `staging` → `ghcr.io/ctrlaltelite-devs/api.faculytics:staging`
- SCPs the relevant compose file to the VPS on every deploy (no manual VPS updates needed)
- Ensures Postgres and Redis are up (`up -d postgres redis`) — a no-op if they are already running
- Restarts only the `api` service via `--no-deps` — database and Redis state are preserved
- Runs `mikro-orm migration:up` inside the new API container after restart
- Existing lint and test workflows are unaffected
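The deploy portion of the workflow might look roughly like this production-branch excerpt. The action names and versions, the `npx` invocation of the MikroORM CLI, and the omitted build/push steps are assumptions for illustration:

```yaml
on:
  push:
    branches: [main, staging]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # ...build and push the image to GHCR here (docker/login-action + docker/build-push-action)...

      - name: Copy compose file to VPS
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          source: docker-compose.prod.yml
          target: /opt/faculytics/prod/

      - name: Restart API only, then run migrations
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/faculytics/prod
            docker compose -f docker-compose.prod.yml pull api
            docker compose -f docker-compose.prod.yml up -d postgres redis
            docker compose -f docker-compose.prod.yml up -d --no-deps --force-recreate api
            docker compose -f docker-compose.prod.yml exec -T api npx mikro-orm migration:up
```

`--no-deps` is what keeps the `postgres` and `redis` containers (and their volumes) untouched across deploys.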
### Developer docs — `docs/tableplus-ssh-tunnel.md`
- How to connect to VPS Postgres via TablePlus SSH tunnel (including pgvector setup)
- How to connect to VPS Redis via TablePlus SSH tunnel
- Read-only Postgres role setup for safer local browsing
- Tips on color-coding connections, migration discipline, and SSH key format
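For reference, the same tunnels can be opened from a plain terminal. The local ports, key path, and username below are illustrative, and this assumes Postgres and Redis publish their ports on the VPS loopback interface only (otherwise the tunnel has to target the container network another way):

```
# Forward local port 5433 to Postgres (5432) on the VPS loopback
ssh -i ~/.ssh/faculytics_vps -N -L 5433:localhost:5432 deploy@<vps-ip>

# Forward local port 6380 to Redis (6379) on the VPS loopback
ssh -i ~/.ssh/faculytics_vps -N -L 6380:localhost:6379 deploy@<vps-ip>
```

TablePlus would then connect to `localhost:5433` / `localhost:6380` as if the databases were local.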
## Required GitHub Secrets
| Secret | Description |
| --- | --- |
| `VPS_HOST` | Hostinger VPS IP address |
| `VPS_USER` | SSH username on the VPS |
| `VPS_SSH_KEY` | SSH private key (OpenSSH format) |
| `GHCR_TOKEN` | PAT with `read:packages` scope — used by the VPS to pull images from GHCR |
Note: `GITHUB_TOKEN` is used by the Actions runner to push images; `GHCR_TOKEN` is a separate PAT used by the VPS to pull them.
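On the VPS side, `GHCR_TOKEN` would be consumed roughly like this (the GitHub username is a placeholder; this is the standard `docker login` pattern, not a command taken from the workflow):

```
echo "$GHCR_TOKEN" | docker login ghcr.io -u <github-username> --password-stdin
```

Using `--password-stdin` keeps the token out of the shell history and process list.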
## VPS directory structure
The deploy workflow expects these directories to exist on the VPS:
```
/opt/faculytics/
├── staging/
│   ├── docker-compose.staging.yml   ← SCP'd by workflow
│   └── .env.staging                 ← provisioned manually
└── prod/
    ├── docker-compose.prod.yml      ← SCP'd by workflow
    └── .env.prod                    ← provisioned manually
```
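One way to provision this layout on a fresh VPS (run as a user with write access to `/opt`; the `.env` files still have to be created and filled in by hand):

```shell
# Create both environment directories in one go
mkdir -p /opt/faculytics/staging /opt/faculytics/prod

# Once the env files exist, lock them down, since they hold DB credentials, e.g.:
# chmod 600 /opt/faculytics/staging/.env.staging /opt/faculytics/prod/.env.prod
```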
## Required `.env` variables
```env
POSTGRES_DB=
POSTGRES_USER=
POSTGRES_PASSWORD=
# ...other API env vars
```
## pgvector setup note
The vector extension must be enabled once per database after first boot:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
This should be included as the first MikroORM migration so it runs automatically on deploy.
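A sketch of what that first migration could look like. The class name is illustrative; `this.addSql` is the standard MikroORM migration API for raw SQL:

```typescript
import { Migration } from '@mikro-orm/migrations';

// First migration in the sequence: enable pgvector before any
// migration that creates vector columns runs.
export class Migration00000000000000_EnablePgvector extends Migration {

  async up(): Promise<void> {
    this.addSql('CREATE EXTENSION IF NOT EXISTS vector;');
  }

  async down(): Promise<void> {
    // Dropping the extension would drop dependent vector columns,
    // so in practice this is rarely rolled back.
    this.addSql('DROP EXTENSION IF EXISTS vector;');
  }

}
```

Because the deploy step runs `mikro-orm migration:up` on every deploy and the statement is idempotent (`IF NOT EXISTS`), re-running it against an already-initialised database is harmless.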
## Out of scope (follow-up issues)
- Nginx reverse proxy + TLS termination
- Postgres database branching / snapshot workflow
- Automated Postgres backups
## Acceptance criteria
- `staging` → image tagged `:staging` appears in GHCR, API restarts on VPS port `3001`, Postgres and Redis untouched
- `main` → image tagged `:latest` appears in GHCR, API restarts on VPS port `3000`, Postgres and Redis untouched
- `mikro-orm migration:up` runs successfully as part of the deploy step
- `CREATE EXTENSION IF NOT EXISTS vector` migration exists and has been applied
- `docs/tableplus-ssh-tunnel.md` reviewed by at least one teammate