This document describes, in detail, how to deploy Unit09 in different environments, from a local development setup to a production-style cluster.
All examples are intended as reference. You should adapt hostnames, domains, credentials, and secrets to your own environment.
Unit09 is a multi-component system. A complete deployment usually consists of:
- A Solana cluster (localnet, devnet, testnet, or mainnet)
- The Unit09 on-chain program (Anchor-based)
- The core services:
- API service
- Worker service
- Scheduler service
- Optional applications:
- Dashboard app
- Docs site
- Supporting infrastructure:
- PostgreSQL database
- Job queue (for example, Redis or a message broker)
- Object storage (for example, S3-compatible)
- Metrics and logging (Prometheus, Grafana, log aggregation)
This guide covers multiple deployment modes:
- Local development (single machine)
- Local demo stack (Docker Compose)
- Staging / production (containerized, possibly Kubernetes)
- Program deployment to Solana
You should have the following installed on the machine where you perform builds and local development:
- Node.js 20+
- pnpm or npm
- Rust stable toolchain
- Solana CLI
- Anchor CLI
- Docker and Docker Compose
- Git
- A code editor (for example, VS Code)
Verify basic versions:
node -v
pnpm -v || npm -v
rustc -vV
solana --version
anchor --version
docker --version
Decide which cluster you are targeting.
For local development:
solana config set --url http://localhost:8899
For devnet:
solana config set --url https://api.devnet.solana.com
Confirm configuration:
solana config get
Ensure your keypair is set and funded appropriately for program deployment and test transactions.
Clone the Unit09 repository:
git clone https://github.com/unit09-labs/unit09.git
cd unit09
Install dependencies at the monorepo root:
pnpm install
If you prefer npm:
npm install
Make sure the workspace and scripts run without errors.
Unit09 uses a configuration directory such as:
config/
default.yaml
development.yaml
production.yaml
schema.json
Most runtime components (API, worker, scheduler, apps) will read configuration based on an environment variable, for example:
export UNIT09_CONFIG_ENV=development
The loader then merges:
- default.yaml (base settings)
- <env>.yaml (environment overrides)
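A minimal sketch of how such a merge can work, assuming the loader deep-merges plain objects. The function name and config values below are illustrative, not the actual loader API:

```typescript
// Illustrative deep merge of a base config with environment overrides.
// The real Unit09 loader may differ; section names follow this guide.
type Config = { [key: string]: unknown };

function deepMerge(base: Config, override: Config): Config {
  const out: Config = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      // Recurse into nested sections so untouched keys survive.
      out[key] = deepMerge(existing as Config, value as Config);
    } else {
      // Scalars and arrays in the override replace the base value.
      out[key] = value;
    }
  }
  return out;
}

// Stand-in for default.yaml
const defaults: Config = {
  app: { port: 8080, logLevel: "info" },
  solana: { cluster: "http://localhost:8899", commitment: "confirmed" },
};

// Stand-in for development.yaml: only overrides what it changes
const development: Config = {
  app: { logLevel: "debug" },
};

const merged = deepMerge(defaults, development);
// app.port and solana.* come from defaults; app.logLevel from the override
console.log(merged);
```

The key property to preserve, whatever the real loader does, is that an environment file only needs to state the keys it changes.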
Common configuration sections include:
- app — environment, port, log level
- solana — cluster URL and commitment level
- database — connection settings for PostgreSQL
- security — allowed origins, rate limiting options
- pipeline — limits on jobs, repository sizes, concurrency
- metrics — settings for Prometheus or a push gateway
You should customize development.yaml and production.yaml to reflect
your environment, domains, and resource constraints.
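For illustration, a development.yaml covering the sections above might look like the following. Every value here is a placeholder to adapt to your setup:

```yaml
# development.yaml -- illustrative values only
app:
  environment: development
  port: 8080
  logLevel: debug
solana:
  cluster: http://localhost:8899
  commitment: confirmed
database:
  host: localhost
  port: 5432
  name: unit09_dev
security:
  allowedOrigins:
    - http://localhost:3000
pipeline:
  maxConcurrentJobs: 4
metrics:
  enabled: false
```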
The Unit09 on-chain program lives under:
contracts/unit09-program/
From this directory:
cd contracts/unit09-program
anchor build
This will produce a program shared object (.so) and an IDL file under target/ and idl/, respectively.
Make sure Anchor.toml is configured with a programs section that
matches your intended deployment:
[programs.localnet]
unit09_program = "<LOCAL_PROGRAM_ID>"
[programs.devnet]
unit09_program = "<DEVNET_PROGRAM_ID>"
[programs.mainnet]
unit09_program = "<MAINNET_PROGRAM_ID>"
You can use solana-keygen to generate a new keypair for the program:
solana-keygen new -o target/deploy/unit09_program-keypair.json
The corresponding public key is the program ID. Update program-id.md and Anchor.toml accordingly.
To run a local validator and deploy the program:
# In one terminal
solana-test-validator
# In another terminal
cd contracts/unit09-program
anchor deploy --provider.cluster localnet
Monitor the logs in the validator terminal for any errors.
Ensure your keypair has enough SOL on devnet, then run:
cd contracts/unit09-program
anchor deploy --provider.cluster devnet
Confirm the program is visible on a Solana explorer and that the IDL matches the committed version.
To upgrade the program on a cluster where it is already deployed, follow the upgrade flow:
- Make code changes.
- Rebuild:
  anchor build
- Deploy using the same program ID, assuming you still have the upgrade authority:
  anchor deploy --provider.cluster devnet
- Make sure off-chain services are compatible with the new IDL if account layouts or instruction signatures have changed.
Document breaking changes in the changelog and deployment notes.
For a quick end-to-end experience, use the local demo stack under:
examples/unit09-local-demo/
This directory typically includes:
- docker-compose.yml — definitions for:
  - Solana localnet
  - PostgreSQL
  - API service
  - Worker service
  - Optional dashboard
- Helper scripts in scripts/
From the directory:
cd examples/unit09-local-demo
docker compose up -d
Wait for containers to start. You can inspect logs with:
docker compose logs -f api
docker compose logs -f worker
If the repo includes seeding scripts, they might look like:
pnpm ts-node scripts/seed_demo_data.ts
pnpm ts-node scripts/demo_workflow.ts
These scripts usually:
- Register a sample repository with Unit09
- Run the pipeline
- Populate the dashboard with sample modules and forks
Once the stack is running, access:
- Dashboard: http://localhost:<dashboard-port>
- API: http://localhost:<api-port> (for example, 8080)
You can test a typical endpoint such as:
curl http://localhost:8080/health
curl http://localhost:8080/repos
You may want to run pieces manually for debugging:
solana-test-validator
Configure your CLI to point at it:
solana config set --url http://localhost:8899
Deploy the program as described earlier.
You can use Docker just for dependencies.
Example for PostgreSQL:
docker run --name unit09-postgres -e POSTGRES_USER=unit09 -e POSTGRES_PASSWORD=unit09_password -e POSTGRES_DB=unit09_dev -p 5432:5432 -d postgres:15
If a queue such as Redis is used:
docker run --name unit09-redis -p 6379:6379 -d redis:7
From the monorepo root or services/api:
export UNIT09_CONFIG_ENV=development
cd services/api
pnpm dev
or
pnpm start
depending on the scripts defined in package.json.
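The exact scripts vary per service; a hypothetical scripts section in a service's package.json might look like this (the tsx and tsc invocations are assumptions, not the repository's actual tooling):

```json
{
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "build": "tsc -p tsconfig.json",
    "start": "node dist/index.js"
  }
}
```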
export UNIT09_CONFIG_ENV=development
cd services/worker
pnpm dev
The worker subscribes to job queues and interacts with the core engine.
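As a rough sketch of that loop, the following uses an in-memory array in place of the real queue (a real worker would block on something like Redis BRPOP); the job kinds and field names are illustrative:

```typescript
// Minimal worker-loop sketch. An array stands in for the job queue;
// kinds and payload shapes are assumptions, not the real schema.
interface Job {
  id: string;
  kind: "observe_repo" | "sync_metrics";
  payload: Record<string, unknown>;
}

const queue: Job[] = [
  { id: "1", kind: "observe_repo", payload: { repo: "example/repo" } },
  { id: "2", kind: "sync_metrics", payload: {} },
];

const processed: string[] = [];

function handle(job: Job): void {
  // A real handler would dispatch to the core engine per job kind.
  processed.push(`${job.kind}:${job.id}`);
}

function drain(): void {
  let job: Job | undefined;
  while ((job = queue.shift()) !== undefined) {
    try {
      handle(job);
    } catch (err) {
      // A real worker would retry with backoff or move the job
      // to a dead-letter queue instead of just logging.
      console.error(`job ${job.id} failed`, err);
    }
  }
}

drain();
console.log(processed); // processed: ["observe_repo:1", "sync_metrics:2"]
```

The important structural point is the per-job error boundary: one failing job should not take down the whole worker process.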
export UNIT09_CONFIG_ENV=development
cd services/scheduler
pnpm dev
The scheduler should periodically enqueue jobs such as repository observations and metrics sync.
cd apps/dashboard
pnpm dev
Visit the indicated URL (for example, http://localhost:3000).
In production, you will likely want to:
- Use container images published to a registry
- Deploy to a container orchestration platform (for example, Kubernetes)
- Use managed or hardened Postgres, Redis, and object storage
- Front the API with a reverse proxy or API gateway
- Use TLS certificates for all public endpoints
Dockerfiles might live under infra/docker/ or within each service
directory.
Example (from repository root):
docker build -f infra/docker/Dockerfile.api -t unit09-api:latest .
docker build -f infra/docker/Dockerfile.worker -t unit09-worker:latest .
docker build -f infra/docker/Dockerfile.scheduler -t unit09-scheduler:latest .
Tag and push to your registry:
docker tag unit09-api:latest registry.example.com/unit09-api:0.2.0
docker push registry.example.com/unit09-api:0.2.0
Repeat for other services.
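If you automate this in CI, a build-and-push step might look like the following GitHub Actions fragment; the registry name is a placeholder, and the workflow assumes registry credentials are configured elsewhere:

```yaml
# Illustrative CI step; registry and tagging scheme are placeholders
- name: Build and push API image
  run: |
    docker build -f infra/docker/Dockerfile.api \
      -t registry.example.com/unit09-api:${GITHUB_SHA} .
    docker push registry.example.com/unit09-api:${GITHUB_SHA}
```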
Kubernetes manifests may live under infra/k8s/:
infra/k8s/
namespaces.yaml
deployments/
api-deployment.yaml
worker-deployment.yaml
scheduler-deployment.yaml
services/
api-service.yaml
dashboard-service.yaml
ingress/
ingress.yaml
configmaps/
engine-config.yaml
worker-config.yaml
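As a sketch, api-deployment.yaml might contain something like the following; the image reference, namespace, port, and resource values are all placeholders to adapt:

```yaml
# Illustrative Deployment for the API service; adapt image, namespace,
# ports, and resource values to your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unit09-api
  namespace: unit09
spec:
  replicas: 2
  selector:
    matchLabels:
      app: unit09-api
  template:
    metadata:
      labels:
        app: unit09-api
    spec:
      containers:
        - name: api
          image: registry.example.com/unit09-api:0.2.0
          ports:
            - containerPort: 8080
          env:
            - name: UNIT09_CONFIG_ENV
              value: production
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```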
Apply manifests:
kubectl apply -f infra/k8s/namespaces.yaml
kubectl apply -f infra/k8s/configmaps/
kubectl apply -f infra/k8s/deployments/
kubectl apply -f infra/k8s/services/
kubectl apply -f infra/k8s/ingress/
Adjust resource requests and limits, replica counts, and environment variables as needed.
If the repository includes Terraform configuration in infra/terraform/,
you can use it to manage cloud infrastructure such as:
- Kubernetes clusters
- Databases
- Load balancers
- Storage buckets
Example workflow:
cd infra/terraform
terraform init
terraform plan
terraform apply
Review plans carefully before applying in production.
Never commit secrets to the repository. Use environment variables or a secret manager instead.
Common secrets include:
- Database passwords
- Queue or message broker credentials
- Object storage access keys
- API keys for external services
- Solana keypairs or signer URLs (when not using local files)
In Kubernetes, store secrets via kubectl create secret or an external
secret provider. In Docker Compose, use .env files that are not
committed to version control.
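For example, a Docker Compose .env file might hold placeholders like these (variable names are illustrative; never commit real values):

```shell
# .env -- keep out of version control; all values are placeholders
DATABASE_URL=postgres://unit09:CHANGE_ME@localhost:5432/unit09_dev
REDIS_URL=redis://localhost:6379
S3_ACCESS_KEY_ID=CHANGE_ME
S3_SECRET_ACCESS_KEY=CHANGE_ME
```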
For production safety, you should:
- Collect logs from API, worker, scheduler, and apps.
- Monitor:
- Request rates and error rates
- Job queue depth
- Pipeline failure rates
- Latency of key endpoints
- Resource usage (CPU, memory, disk, network)
If the repository includes infra/monitoring/ with Prometheus and
Grafana configuration, you can use those as a starting point.
Example components:
- prometheus.yml — scrape and alert rules
- grafana-dashboards/unit09-overview.json — dashboards for:
  - Repositories observed per hour
  - Module generation success rate
  - Worker job throughput
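A prometheus.yml scrape section for these services might look like the following; the job names, service hostnames, and metrics ports are assumptions to adjust:

```yaml
# Illustrative scrape config; targets and ports are placeholders
scrape_configs:
  - job_name: unit09-api
    static_configs:
      - targets: ["api:8080"]
  - job_name: unit09-worker
    static_configs:
      - targets: ["worker:9090"]
```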
For stateless services such as API and worker:
- Build new images.
- Update deployment manifests or service definitions.
- Use rolling deployments or blue-green strategies.
Ensure backward compatibility when possible, especially if the on-chain program or IDL has changed.
If you introduce schema changes:
- Use migrations managed by your chosen ORM or migration tool.
- Apply migrations in a step compatible with both old and new services.
- Roll out services after the migration step completes.
Document migrations and rollback strategies.
Program upgrades are high-impact events. Consider:
- Testing thoroughly on a staging cluster.
- Communicating upgrade windows to users.
- Pausing certain operations if needed during upgrade.
Keep program-id.md and Anchor.toml in sync with the deployed program
ID and cluster.
- Check the Solana validator or cluster logs.
- Verify the program keypair path in Anchor.toml is correct.
- Confirm your wallet has enough SOL to pay for deployment fees.
- Ensure the solana.cluster URL in your config is reachable.
- Check network access from the container or host.
- Verify commitment levels match your expectations.
- Inspect job queue status (for example, Redis or your chosen broker).
- Check worker logs for error messages and stack traces.
- Validate that the engine configuration is correct and that the program is deployed to the intended cluster.
- Confirm the API base URL configured in the dashboard matches your running API service.
- Test API routes directly with curl.
- Ensure that seeding or pipeline runs have populated repositories and modules on-chain and in the database.
A Unit09 deployment consists of:
- A Solana program that acts as the canonical on-chain brain
- A set of services and tools that observe, decompose, and evolve code
- Supporting infrastructure for storage, queues, metrics, and deployment
Start with:
- Localnet and program deployment
- Local demo stack via Docker Compose
- Manual services for debugging
Then evolve toward staging and production deployments.
Adopt the pieces that fit your use case, and extend or replace others when needed. Unit09 is designed to be modular, including how you deploy it.