This guide covers running and deploying the Open Hardware Manager (OHM) in containerized environments.
- Quick Start
- Configuration
- Usage Modes
- Production Deployment
- Cloud Platform Deployment
- Monitoring and Logging
- Security Considerations
- Troubleshooting
Clone and navigate to the project:
cd supply-graph-ai
Copy the environment template:
cp env.template .env
Edit the .env file with your configuration:
nano .env  # or your preferred editor
Start the API server:
docker-compose up ohm-api
Access the API:
- API Documentation: http://localhost:8001/docs
- Health Check: http://localhost:8001/health
- API Base URL: http://localhost:8001/v1
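Once the container is up, the health endpoint can be probed programmatically before routing traffic to it. A minimal sketch using only the Python standard library (the URL follows the defaults above; this helper is not part of the project):

```python
import urllib.error
import urllib.request


def check_health(base_url: str = "http://localhost:8001", timeout: float = 2.0) -> bool:
    """Return True if GET {base_url}/health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False
```

This is handy as a readiness gate in deployment scripts: poll `check_health()` in a loop before registering the container with a load balancer.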
Build the image:
docker build -t open-matching-engine .
Run the API server:
docker run -p 8001:8001 \
  -e API_KEYS="your-api-key" \
  -v $(pwd)/storage:/app/storage \
  -v $(pwd)/logs:/app/logs \
  open-matching-engine api
Run CLI commands:
docker run --rm \
  -v $(pwd)/storage:/app/storage \
  -v $(pwd)/test-data:/app/test-data \
  open-matching-engine cli okh validate /app/test-data/manifest.okh.json
The container supports configuration through environment variables. See env.template for a complete list of available options.
- API_HOST: API server host (default: 0.0.0.0)
- API_PORT: API server port (default: 8001)
- API_KEYS: Comma-separated list of API keys for authentication
- LOG_LEVEL: Logging level (default: INFO)
- DEBUG: Enable debug mode (default: false)
- STORAGE_PROVIDER: Storage provider (local, aws_s3, azure_blob, gcp_storage)
- STORAGE_BUCKET_NAME: Storage bucket/container name
- LLM_ENABLED: Enable LLM integration (default: false)
- LLM_PROVIDER: LLM provider (openai, anthropic, google, azure, local)
- LLM_MODEL: Specific model to use
- LLM_QUALITY_LEVEL: Quality level (hobby, professional, medical)
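Inside the application, these variables can be read with sensible fallbacks. A sketch of one way to do it (variable names follow env.template; the defaults are the ones listed above, and this helper is illustrative, not the project's actual loader):

```python
import os


def load_config(env=None):
    """Parse OHM-related environment variables into a plain config dict."""
    env = os.environ if env is None else env
    return {
        "api_host": env.get("API_HOST", "0.0.0.0"),
        "api_port": int(env.get("API_PORT", "8001")),
        # API_KEYS is a comma-separated list; strip whitespace, drop empties
        "api_keys": [k.strip() for k in env.get("API_KEYS", "").split(",") if k.strip()],
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "debug": env.get("DEBUG", "false").lower() == "true",
        "storage_provider": env.get("STORAGE_PROVIDER", "local"),
        "llm_enabled": env.get("LLM_ENABLED", "false").lower() == "true",
    }
```

Treating booleans as the literal string "true" (case-insensitive) avoids surprises where any non-empty value would otherwise count as enabled.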
The container expects the following volume mounts:
- /app/storage: Persistent storage directory
- /app/logs: Log files directory
- /app/test-data: Test data directory (optional)
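On the host side, the directories backing these mounts should exist (with usable permissions) before the container starts, otherwise Docker creates them root-owned. A small sketch of a pre-flight helper (the directory names mirror the mounts above; not part of the project):

```python
from pathlib import Path


def ensure_mount_dirs(base, names=("storage", "logs", "test-data")):
    """Create host-side directories for the container's volume mounts."""
    dirs = []
    for name in names:
        d = Path(base) / name
        d.mkdir(parents=True, exist_ok=True)  # no-op if already present
        dirs.append(d)
    return dirs
```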
Start the FastAPI server:
docker run -p 8001:8001 open-matching-engine api

Run CLI commands:
# Show CLI help
docker run --rm open-matching-engine cli --help
# Validate an OKH file
docker run --rm \
-v $(pwd)/test-data:/app/test-data \
open-matching-engine cli okh validate /app/test-data/manifest.okh.json
# List packages
docker run --rm open-matching-engine cli package list
# Run matching
docker run --rm \
-v $(pwd)/test-data:/app/test-data \
open-matching-engine cli match okh /app/test-data/manifest.okh.json
Build production image:
docker build -t ome-prod .
Run with production settings:
docker run -d \
  --name ome-api \
  -p 8001:8001 \
  -e API_KEYS="your-production-api-key" \
  -e LOG_LEVEL="INFO" \
  -e STORAGE_PROVIDER="aws_s3" \
  -e AWS_S3_BUCKET="your-bucket" \
  -v ome-storage:/app/storage \
  -v ome-logs:/app/logs \
  ome-prod
Create a docker-compose.prod.yml:
version: '3.8'
services:
  ohm-api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ohm-api-prod
    ports:
      - "8001:8001"
    environment:
      - API_HOST=0.0.0.0
      - API_PORT=8001
      - LOG_LEVEL=INFO
      - DEBUG=false
      - API_KEYS=${API_KEYS}
      - STORAGE_PROVIDER=${STORAGE_PROVIDER}
      - STORAGE_BUCKET_NAME=${STORAGE_BUCKET_NAME}
      - LLM_ENABLED=${LLM_ENABLED}
      - LLM_PROVIDER=${LLM_PROVIDER}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      - ohm-storage:/app/storage
      - ohm-logs:/app/logs
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
volumes:
  ohm-storage:
  ohm-logs:

Deploy with:
docker-compose -f docker-compose.prod.yml up -d
Build and push image:
gcloud builds submit --tag gcr.io/PROJECT_ID/open-matching-engine
Deploy to Cloud Run:
gcloud run deploy open-matching-engine \
  --image gcr.io/PROJECT_ID/open-matching-engine \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --port 8001 \
  --memory 4Gi \
  --cpu 2 \
  --max-instances 10 \
  --set-env-vars="API_KEYS=your-api-key,STORAGE_PROVIDER=gcp_storage"
Create an ECS task definition:
{
  "family": "open-matching-engine",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048",
  "executionRoleArn": "arn:aws:iam::ACCOUNT:role/ecsTaskExecutionRole",
  "containerDefinitions": [{
    "name": "ohm-api",
    "image": "ACCOUNT.dkr.ecr.REGION.amazonaws.com/open-matching-engine:latest",
    "portMappings": [{
      "containerPort": 8001,
      "protocol": "tcp"
    }],
    "environment": [
      {"name": "API_KEYS", "value": "your-api-key"},
      {"name": "STORAGE_PROVIDER", "value": "aws_s3"},
      {"name": "AWS_S3_BUCKET", "value": "your-bucket"}
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/open-matching-engine",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }]
}
Create ECS service:
aws ecs create-service \
  --cluster your-cluster \
  --service-name ohm-api \
  --task-definition open-matching-engine \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-12345],assignPublicIp=ENABLED}"
Deploy with Azure CLI:
az container create \
  --resource-group myResourceGroup \
  --name ome-api \
  --image your-registry.azurecr.io/open-matching-engine:latest \
  --cpu 2 \
  --memory 4 \
  --ports 8001 \
  --environment-variables \
    API_KEYS=your-api-key \
    STORAGE_PROVIDER=azure_blob \
    AZURE_STORAGE_ACCOUNT_NAME=your-account \
  --registry-login-server your-registry.azurecr.io \
  --registry-username your-username \
  --registry-password your-password
Apply Kubernetes manifests:
kubectl apply -f k8s-deployment.yaml
Check deployment status:
kubectl get pods -n ome
kubectl get services -n ome
kubectl get ingress -n ome
Access the application:
kubectl port-forward -n ohm service/ohm-api-service 8001:80
The application provides several health check endpoints:
- GET /health - Basic health check
- GET / - API information and status
Configure logging through environment variables:
LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
LOG_FILE=logs/app.log

Add Prometheus metrics endpoint:
# In your FastAPI app
from prometheus_fastapi_instrumentator import Instrumentator
instrumentator = Instrumentator()
instrumentator.instrument(app).expose(app)

For production, consider using:
- ELK Stack (Elasticsearch, Logstash, Kibana)
- Fluentd for log collection
- CloudWatch (AWS) or Stackdriver (GCP) for cloud logging
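The LOG_LEVEL and LOG_FILE settings above can be honored in-process with the standard library alone. A sketch (the variable names follow this guide; the format string and defaults are assumptions, not the project's actual logging setup):

```python
import logging
import os


def setup_logging(env=None) -> int:
    """Configure root logging from LOG_LEVEL / LOG_FILE; return the level used."""
    env = os.environ if env is None else env
    # Fall back to INFO for missing or unrecognized level names
    level = getattr(logging, env.get("LOG_LEVEL", "INFO").upper(), logging.INFO)
    kwargs = {
        "level": level,
        "format": "%(asctime)s %(levelname)s %(name)s: %(message)s",
    }
    log_file = env.get("LOG_FILE")
    if log_file:
        # Make sure the log directory exists before basicConfig opens the file
        os.makedirs(os.path.dirname(log_file) or ".", exist_ok=True)
        kwargs["filename"] = log_file
    logging.basicConfig(**kwargs)
    return level
```

Note that `logging.basicConfig` is a no-op if the root logger already has handlers, so this should run once, early in startup.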
Use strong API keys:
API_KEYS="$(openssl rand -hex 32),$(openssl rand -hex 32)"
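If openssl is unavailable, equivalent keys can be generated with Python's secrets module. A sketch producing the same format as the command above (this helper name is illustrative, not part of the project):

```python
import secrets


def generate_api_keys(count: int = 2) -> str:
    """Return `count` cryptographically random 64-hex-character keys,
    comma-separated, matching the `openssl rand -hex 32` output format."""
    return ",".join(secrets.token_hex(32) for _ in range(count))
```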
Enable HTTPS in production:
- Use reverse proxy (nginx, traefik)
- Configure SSL certificates
- Set secure headers
Network security:
- Use private networks where possible
- Configure firewall rules
- Implement rate limiting
- Use non-root user (already configured)
- Scan images for vulnerabilities:
docker scan open-matching-engine
- Keep base images updated
- Use secrets management for sensitive data
- Encrypt data at rest
- Use secure storage backends
- Implement data retention policies
- Regular security audits
Container won't start:
docker logs <container-id>
API not accessible:
- Check port mapping
- Verify firewall settings
- Check container health
Storage issues:
- Verify volume mounts
- Check storage provider credentials
- Ensure proper permissions
Memory issues:
- Monitor memory usage
- Adjust container limits
- Check for memory leaks
Enable debug mode for troubleshooting:
docker run -e DEBUG=true -e LOG_LEVEL=DEBUG open-matching-engine api
Monitor resource usage:
docker stats <container-id>
Scale horizontally:
docker-compose up --scale ohm-api=3
- Use environment-specific configurations
- Implement proper logging and monitoring
- Regular security updates
- Backup strategies for persistent data
- Disaster recovery planning
- Performance testing and optimization
For deployment issues:
- Check the logs for error messages
- Verify environment variable configuration
- Ensure all required volumes are mounted
- Check network connectivity for external services
- Review the troubleshooting section above