Deploy your FastAPI application to Kubernetes with this comprehensive guide for API Forge. Learn how to use the included Helm chart to deploy PostgreSQL, Redis, Temporal, and your FastAPI app to production Kubernetes clusters with proper secrets management, TLS encryption, and health checks.
API Forge provides a production-ready Helm chart for deploying your complete FastAPI stack to Kubernetes. This FastAPI Kubernetes deployment includes:
- FastAPI Application - Containerized app with health checks and auto-scaling
- Temporal Worker - Distributed workflow processing
- PostgreSQL - Production database with TLS and mTLS
- Redis - Caching and session storage with TLS (optional via config.yaml)
- Temporal Server - Workflow orchestration (optional via config.yaml)
- Kubernetes Secrets - Secure credential management
- NetworkPolicies - Service-to-service security
- ConfigMaps - Environment-specific configuration
The Helm chart automatically synchronizes with your config.yaml to enable/disable services (Redis, Temporal) and provides a single-command deployment experience.
Before deploying to Kubernetes, ensure you have:
- Kubernetes Cluster - v1.24+ (Minikube, GKE, EKS, AKS, or on-prem)
- kubectl - Configured and connected to your cluster
- Helm - v3.0+ (required for deployment)
- Docker - For building images
- Image Registry - Docker Hub, GCR, ECR, or private registry (Minikube can use local images)
Deploy the entire stack with the CLI (recommended):
# Deploy to Kubernetes using Helm
uv run api-forge-cli deploy up k8s
# Check deployment status
kubectl get pods -n api-forge-prod
# Get application URL
kubectl get svc -n api-forge-prod app

Access your FastAPI application:
kubectl port-forward -n api-forge-prod svc/app 8000:8000
open http://localhost:8000/docs

What the CLI does automatically:
- Synchronizes `config.yaml` settings (`redis.enabled`, `temporal.enabled`) to Helm `values.yaml`
- Builds Docker images if needed (or uses existing Minikube images)
- Generates secrets and certificates (if not already created)
- Creates namespace via Helm
- Creates Kubernetes secrets from generated files using `infra/helm/api-forge/scripts/apply-secrets.sh`
- Packages and deploys the Helm chart with all resources
- Runs initialization jobs (postgres-verifier, temporal schema setup)
- Forces pod recreation via timestamp annotations to ensure latest code
- Waits for services to be ready and validates deployment
For manual deployment or customization using Helm commands directly, see the detailed sections below.
Kubernetes deployment is managed with Helm under infra/helm/:
infra/helm/api-forge/
├── Chart.yaml # Helm chart metadata
├── values.yaml # Default configuration values
├── templates/ # Kubernetes resource templates
│ ├── namespace.yaml # Namespace definition
│ ├── configmaps/ # Configuration templates
│ │ ├── app-env.yaml # App environment ConfigMap
│ │ ├── postgres-config.yaml # PostgreSQL configuration
│ │ ├── redis-config.yaml # Redis configuration
│ │ ├── temporal-config.yaml # Temporal configuration
│ │ └── universal-entrypoint.yaml # Entrypoint script
│ ├── deployments/ # Deployment templates
│ │ ├── app.yaml # FastAPI application
│ │ ├── worker.yaml # Temporal worker
│ │ ├── postgres.yaml # PostgreSQL database
│ │ ├── redis.yaml # Redis cache (conditional)
│ │ └── temporal.yaml # Temporal server (conditional)
│ ├── services/ # Service templates
│ │ ├── app.yaml
│ │ ├── postgres.yaml
│ │ ├── redis.yaml
│ │ └── temporal.yaml
│ ├── jobs/ # Initialization job templates
│ │ ├── postgres-verifier.yaml
│ │ ├── temporal-namespace-init.yaml
│ │ └── temporal-schema-setup.yaml
│ ├── persistentvolumeclaims/ # Storage templates
│ │ ├── postgres-data.yaml
│ │ └── redis-data.yaml
│ ├── networkpolicies/ # Security policy templates
│ │ ├── app-netpol.yaml
│ │ └── postgres-netpol.yaml
│ └── _helpers.tpl # Template helpers
└── scripts/ # Deployment scripts
├── apply-secrets.sh # Deploy secrets to K8s
└── build-images.sh # Build Docker images
Key Features:
- Conditional Resources: Redis and Temporal are deployed only if enabled in `config.yaml`
- Dynamic Configuration: ConfigMaps generated from your project's `config.yaml` and `.env`
- Automatic Sync: CLI synchronizes settings before each deployment
- Timestamp Annotations: Forces pod recreation to ensure latest Docker images
The CLI provides comprehensive database management commands for Kubernetes deployments, supporting both bundled PostgreSQL (deployed in the cluster) and external databases (like Aiven, AWS RDS, Google Cloud SQL).
# Initialize database with roles, schemas, and permissions
uv run api-forge-cli k8s db init
# Verify database configuration and test authentication
uv run api-forge-cli k8s db verify
# Synchronize local password files to database (after password changes)
uv run api-forge-cli k8s db sync
# Check database health and performance metrics
uv run api-forge-cli k8s db status
# Create a backup of the database
uv run api-forge-cli k8s db backup
# Reset database to clean state (DESTRUCTIVE - dev/test only)
uv run api-forge-cli k8s db reset

To use an external managed PostgreSQL database instead of the bundled one:
1. Configure the external database:

   # Using connection string
   uv run api-forge-cli k8s db create --external \
     --connection-string "postgres://admin:secret@db.example.com:5432/mydb?sslmode=require"

   # Or using individual parameters
   uv run api-forge-cli k8s db create --external \
     --host db.aivencloud.com --port 20369 \
     --username avnadmin --password secret \
     --database defaultdb --sslmode require

   This command will:
   - Update `.env` with `PRODUCTION_DATABASE_URL`
   - Configure database credentials in `config.yaml`
   - Generate necessary password files in `infra/secrets/keys/`

2. Initialize the database (creates roles, schemas, grants permissions):

   uv run api-forge-cli k8s db init

3. Verify the setup (tests connectivity and credentials):

   uv run api-forge-cli k8s db verify

4. Deploy - the application will automatically use the external database:

   uv run api-forge-cli deploy up k8s
Important Notes:
- The `init` command creates application users (`appuser`, `backupuser`, `temporaluser`) and the `app` schema
- The `verify` command now tests password authentication to catch mismatches early
- The `sync` command updates database passwords to match your local secret files
- In production, the app automatically uses `search_path=app` to isolate tables from the `public` schema
- Connection strings preserve existing query parameters (like `?sslmode=require`) while adding production settings
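The last point — merging production settings into a DSN without clobbering parameters already present — can be illustrated with the standard library. This is a sketch of the described behavior, not the CLI's actual implementation:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def with_production_settings(url: str, extra: dict) -> str:
    """Merge production query parameters into a connection string while
    preserving any parameters already present (e.g. ?sslmode=require)."""
    parts = urlparse(url)
    params = dict(parse_qsl(parts.query))
    for key, value in extra.items():
        params.setdefault(key, value)  # existing values win
    return urlunparse(parts._replace(query=urlencode(params)))

dsn = "postgres://admin:secret@db.example.com:5432/mydb?sslmode=require"
print(with_production_settings(dsn, {"sslmode": "disable",
                                     "application_name": "api-forge"}))
```

Here the pre-existing `sslmode=require` survives, and only the missing `application_name` is added.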
Using the CLI (included in deploy up k8s):
The CLI automatically handles image building and checks for existing images.
Using the Helm script:
# Build all images with the Helm build script
./infra/helm/api-forge/scripts/build-images.shThis builds:
api-forge-app:latest- FastAPI applicationapi-forge-postgres:latest- PostgreSQL with custom configapi-forge-redis:latest- Redis with TLS supportapi-forge-temporal:latest- Temporal server
For Minikube (local development):
# Use Minikube's Docker daemon (no registry push needed)
eval $(minikube docker-env)
./infra/helm/api-forge/scripts/build-images.sh

For production clusters (requires registry):
# Build images
./infra/helm/api-forge/scripts/build-images.sh
# Tag for your registry
docker tag api-forge-app:latest your-registry/api-forge-app:v1.0.0
docker tag api-forge-postgres:latest your-registry/api-forge-postgres:v1.0.0
docker tag api-forge-redis:latest your-registry/api-forge-redis:v1.0.0
docker tag api-forge-temporal:latest your-registry/api-forge-temporal:v1.0.0
# Push to registry
docker push your-registry/api-forge-app:v1.0.0
docker push your-registry/api-forge-postgres:v1.0.0
docker push your-registry/api-forge-redis:v1.0.0
docker push your-registry/api-forge-temporal:v1.0.0
# Update values.yaml with registry paths
# Edit infra/helm/api-forge/values.yaml:
# image:
# app: your-registry/api-forge-app:v1.0.0
#   postgres: your-registry/api-forge-postgres:v1.0.0

Using the CLI:
# The CLI automatically generates secrets on first deployment
uv run api-forge-cli deploy up k8s

Using the script manually:
# Generate all secrets and certificates
./infra/secrets/generate_secrets.sh

This creates in infra/secrets/:
- `keys/postgres_password.txt` - PostgreSQL superuser password
- `keys/postgres_app_user_pw.txt` - Application database user password
- `keys/postgres_app_ro_pw.txt` - Read-only user password
- `keys/postgres_app_owner_pw.txt` - Schema owner password
- `keys/postgres_temporal_pw.txt` - Temporal database user password
- `keys/redis_password.txt` - Redis authentication password
- `keys/session_signing_secret.txt` - Session JWT signing key
- `keys/csrf_signing_secret.txt` - CSRF token signing key
- `keys/oidc_google_client_secret.txt` - Google OAuth client secret
- `keys/oidc_microsoft_client_secret.txt` - Microsoft OAuth client secret
- `keys/oidc_keycloak_client_secret.txt` - Keycloak OAuth client secret
- `certs/ca.crt`, `certs/ca.key` - Certificate Authority for mTLS
- `certs/postgres.crt`, `certs/postgres.key` - PostgreSQL TLS certificate
- `certs/redis.crt`, `certs/redis.key` - Redis TLS certificate
Note: The script is idempotent and will not overwrite existing secrets.
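Idempotent secret generation boils down to "only write the file if it does not exist yet". A minimal Python sketch of that pattern (illustrative, not the shell script itself):

```python
import secrets
import tempfile
from pathlib import Path

def ensure_secret(path: Path, nbytes: int = 32) -> str:
    """Create a random secret file only if it is missing, mirroring the
    idempotent behavior of generate_secrets.sh; return its contents."""
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(secrets.token_urlsafe(nbytes))
    return path.read_text()

base = Path(tempfile.mkdtemp())
first = ensure_secret(base / "session_signing_secret.txt")
second = ensure_secret(base / "session_signing_secret.txt")
assert first == second  # second call is a no-op, the secret is stable
```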
Using the CLI:
# The CLI creates the namespace automatically via Helm
uv run api-forge-cli deploy up k8s

Manual alternative with Helm:
# The namespace is created by the Helm chart
# Configured in infra/helm/api-forge/values.yaml:
# namespace: api-forge-prod
# Or create manually if needed
kubectl create namespace api-forge-prod

Using the CLI:
# The CLI creates all secrets from generated files automatically
uv run api-forge-cli deploy up k8s

Using the Helm script:
# Deploy all secrets to your namespace
./infra/helm/api-forge/scripts/apply-secrets.sh

This script reads all secret files from infra/secrets/keys/ and infra/secrets/certs/, then creates or updates the following Kubernetes secrets in the api-forge-prod namespace:
- `postgres-secrets` - Database passwords for all users
- `postgres-tls` - PostgreSQL TLS certificate and key
- `postgres-ca` - Certificate Authority for client verification
- `redis-secrets` - Redis authentication password
- `redis-tls` - Redis TLS certificate and key
- `app-secrets` - Session/CSRF signing keys and OIDC client secrets
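Under the hood, creating a Secret from files amounts to base64-encoding each file into the manifest's `data` map — the same thing `kubectl create secret generic --from-file=...` does. An illustrative sketch (not the script's actual code):

```python
import base64

def secret_manifest(name: str, namespace: str, files: dict) -> dict:
    """Build a Kubernetes Secret manifest equivalent to
    `kubectl create secret generic --from-file=...`."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "Opaque",
        # Secret data values must be base64-encoded strings.
        "data": {key: base64.b64encode(content.encode()).decode()
                 for key, content in files.items()},
    }

manifest = secret_manifest("redis-secrets", "api-forge-prod",
                           {"redis_password": "s3cret"})
print(manifest["data"])
```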
Manual alternative:
# Set namespace
NAMESPACE=api-forge-prod
# PostgreSQL secrets
kubectl create secret generic postgres-secrets \
--from-file=postgres_password=infra/secrets/keys/postgres_password.txt \
--from-file=postgres_app_user_pw=infra/secrets/keys/postgres_app_user_pw.txt \
--from-file=postgres_app_ro_pw=infra/secrets/keys/postgres_app_ro_pw.txt \
--from-file=postgres_app_owner_pw=infra/secrets/keys/postgres_app_owner_pw.txt \
--from-file=postgres_temporal_pw=infra/secrets/keys/postgres_temporal_pw.txt \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# PostgreSQL TLS
kubectl create secret tls postgres-tls \
--cert=infra/secrets/certs/postgres.crt \
--key=infra/secrets/certs/postgres.key \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# PostgreSQL CA
kubectl create secret generic postgres-ca \
--from-file=ca.crt=infra/secrets/certs/ca.crt \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# Redis secrets
kubectl create secret generic redis-secrets \
--from-file=redis_password=infra/secrets/keys/redis_password.txt \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# Redis TLS
kubectl create secret tls redis-tls \
--cert=infra/secrets/certs/redis.crt \
--key=infra/secrets/certs/redis.key \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# Application secrets
kubectl create secret generic app-secrets \
--from-file=session_signing_secret=infra/secrets/keys/session_signing_secret.txt \
--from-file=csrf_signing_secret=infra/secrets/keys/csrf_signing_secret.txt \
--from-file=oidc_google_client_secret=infra/secrets/keys/oidc_google_client_secret.txt \
--from-file=oidc_microsoft_client_secret=infra/secrets/keys/oidc_microsoft_client_secret.txt \
--from-file=oidc_keycloak_client_secret=infra/secrets/keys/oidc_keycloak_client_secret.txt \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

Using External Secrets Operator (production recommendation):
For production, use External Secrets Operator to sync secrets from cloud providers:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: api-forge-prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: app-secrets
  data:
    - secretKey: session_signing_secret
      remoteRef:
        key: api-forge/session-secret
    - secretKey: csrf_signing_secret
      remoteRef:
        key: api-forge/csrf-secret
    - secretKey: oidc_google_client_secret
      remoteRef:
        key: api-forge/google-client-secret

Using the CLI (recommended):
# Deploy everything with one command
uv run api-forge-cli deploy up k8s
# The CLI performs these steps:
# 1. Syncs config.yaml → values.yaml (redis.enabled, temporal.enabled)
# 2. Builds/checks Docker images
# 3. Generates secrets if needed
# 4. Applies secrets via script
# 5. Packages Helm chart
# 6. Installs/upgrades Helm release
# 7. Monitors deployment status

Manual Helm deployment:
# Navigate to Helm chart directory
cd infra/helm/api-forge
# Package the chart
helm package .
# Install the chart
helm install api-forge ./api-forge-0.1.0.tgz \
--namespace api-forge-prod \
--create-namespace
# Or upgrade if already installed
helm upgrade api-forge ./api-forge-0.1.0.tgz \
--namespace api-forge-prod \
--install
# Check release status
helm list -n api-forge-prod
helm status api-forge -n api-forge-prod

Customizing with values.yaml:
# Create custom values file
cat > custom-values.yaml <<EOF
redis:
  enabled: false  # Disable Redis deployment
app:
  replicaCount: 3  # Scale to 3 replicas
image:
  pullPolicy: Always  # Always pull latest images
EOF
# Deploy with custom values
helm install api-forge ./infra/helm/api-forge \
--namespace api-forge-prod \
--create-namespace \
--values custom-values.yaml
# Or override specific values via CLI
helm install api-forge ./infra/helm/api-forge \
--namespace api-forge-prod \
--set redis.enabled=false \
--set app.replicaCount=3

Check Helm release:
# Using the CLI (recommended)
uv run api-forge-cli deploy status k8s
uv run api-forge-cli deploy history
# Or using Helm directly
helm list -n api-forge-prod
helm status api-forge -n api-forge-prod
# View deployed resources
helm get manifest api-forge -n api-forge-prod

Check Kubernetes resources:
# Get all resources in namespace
kubectl get all -n api-forge-prod
# Check specific resource types
kubectl get pods -n api-forge-prod
kubectl get services -n api-forge-prod
kubectl get deployments -n api-forge-prod
kubectl get jobs -n api-forge-prod
kubectl get pvc -n api-forge-prod
# Check resource details
kubectl describe deployment app -n api-forge-prod
kubectl describe service app -n api-forge-prod

Check job status:
# List all jobs
kubectl get jobs -n api-forge-prod
# Check specific jobs
kubectl get job postgres-verifier -n api-forge-prod
kubectl get job temporal-namespace-init -n api-forge-prod
kubectl get job temporal-schema-setup -n api-forge-prod
# Wait for job completion
kubectl wait --for=condition=complete job/postgres-verifier \
-n api-forge-prod --timeout=300s
# View job logs
kubectl logs -n api-forge-prod job/postgres-verifier
kubectl logs -n api-forge-prod job/temporal-namespace-init
kubectl logs -n api-forge-prod job/temporal-schema-setup

Jobs run automatically after Helm deployment and perform these tasks:
- postgres-verifier: Validates PostgreSQL TLS certificates and permissions
- temporal-namespace-init: Creates Temporal namespace
- temporal-schema-setup: Initializes Temporal database schemas
Rerun jobs if needed:
# Delete and recreate a job (jobs are immutable)
kubectl delete job postgres-verifier -n api-forge-prod
helm upgrade api-forge ./infra/helm/api-forge -n api-forge-prod
# Or redeploy with CLI
uv run api-forge-cli deploy up k8s

Port forward to access the application:
# Forward application port
kubectl port-forward -n api-forge-prod svc/app 8000:8000
# In another terminal, test health endpoints
curl http://localhost:8000/health/live
curl http://localhost:8000/health/ready
curl http://localhost:8000/health
# Access FastAPI documentation
open http://localhost:8000/docs

View application logs:
# Application logs
kubectl logs -n api-forge-prod -l app.kubernetes.io/name=app --tail=100 -f
# Worker logs
kubectl logs -n api-forge-prod -l app.kubernetes.io/name=worker --tail=100 -f
# PostgreSQL logs
kubectl logs -n api-forge-prod -l app.kubernetes.io/name=postgres --tail=100
# Redis logs (if enabled)
kubectl logs -n api-forge-prod -l app.kubernetes.io/name=redis --tail=100
# Temporal logs (if enabled)
kubectl logs -n api-forge-prod -l app.kubernetes.io/name=temporal --tail=100

Exec into containers for debugging:
# Shell into app container
kubectl exec -it -n api-forge-prod deployment/app -- /bin/bash
# Test database connection
kubectl exec -it -n api-forge-prod deployment/app -- \
psql -h postgres -U appuser -d appdb -c "SELECT version();"
# Test Redis connection (if enabled)
kubectl exec -it -n api-forge-prod deployment/redis -- redis-cli ping

Configuration is managed through Helm's values.yaml file located at infra/helm/api-forge/values.yaml.
Key configuration sections:
# Namespace
namespace: api-forge-prod
# Image configuration
image:
  app: api-forge-app:latest
  postgres: api-forge-postgres:latest
  redis: api-forge-redis:latest
  temporal: api-forge-temporal:latest
  pullPolicy: IfNotPresent  # Use 'Always' for production registries

# Replica counts
app:
  replicaCount: 1
worker:
  replicaCount: 1

# Service enablement (synced from config.yaml)
redis:
  enabled: true
temporal:
  enabled: true

# Resource limits
resources:
  app:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "1000m"

The CLI automatically synchronizes settings from config.yaml to values.yaml before each deployment:
Synced settings:
- `config.redis.enabled` → `redis.enabled` in values.yaml
- `config.temporal.enabled` → `temporal.enabled` in values.yaml
This ensures your Kubernetes deployment matches your application configuration.
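In spirit, the sync copies two booleans between the parsed YAML documents and reports what changed. An illustrative sketch operating on already-parsed dicts (not the CLI's actual code):

```python
def sync_values(config: dict, values: dict) -> list[str]:
    """Copy service toggles from config.yaml data into Helm values data,
    returning the list of changed keys (mirrors the pre-deploy sync)."""
    changed = []
    for service in ("redis", "temporal"):
        enabled = config.get(service, {}).get("enabled", False)
        if values.setdefault(service, {}).get("enabled") != enabled:
            values[service]["enabled"] = enabled
            changed.append(f"{service}.enabled -> {enabled}")
    return changed

config = {"redis": {"enabled": False}, "temporal": {"enabled": True}}
values = {"redis": {"enabled": True}, "temporal": {"enabled": True}}
print(sync_values(config, values))  # only redis.enabled changes
```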
How it works:
# When you run:
uv run api-forge-cli deploy up k8s
# The CLI:
# 1. Reads config.yaml
# 2. Updates values.yaml with redis.enabled and temporal.enabled
# 3. Reports synced changes
# 4. Proceeds with Helm deployment

Option 1: Modify values.yaml directly
# Edit the values file
vim infra/helm/api-forge/values.yaml
# Deploy changes
uv run api-forge-cli deploy up k8s
# Or manually:
helm upgrade api-forge ./infra/helm/api-forge -n api-forge-prod

Option 2: Create custom values file
# Create custom overrides
cat > custom-values.yaml <<EOF
app:
  replicaCount: 3
redis:
  enabled: false
resources:
  app:
    requests:
      memory: "512Mi"
      cpu: "500m"
EOF
# Deploy with custom values
helm upgrade api-forge ./infra/helm/api-forge \
-n api-forge-prod \
--values custom-values.yaml

Option 3: Override via CLI flags
# Override specific values
helm upgrade api-forge ./infra/helm/api-forge \
-n api-forge-prod \
--set app.replicaCount=3 \
--set redis.enabled=false

Helm templates create ConfigMaps dynamically from your project files:
- app-env - Environment variables from `.env` and `config.yaml`
- postgres-config - PostgreSQL configuration files
- redis-config - Redis configuration
- temporal-config - Temporal configuration
- universal-entrypoint - Container entrypoint script
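Turning a `.env` file into a ConfigMap's `data` map is a simple line-by-line parse. A sketch of that transformation, under the assumption that the app-env template handles plain `KEY=VALUE` lines:

```python
def env_to_configmap_data(text: str) -> dict:
    """Parse simple KEY=VALUE lines (comments and blanks skipped)
    into a ConfigMap-style data map."""
    data = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        data[key.strip()] = value.strip()
    return data

sample = "# comment\nAPP_ENV=production\nLOG_LEVEL=info\n"
print(env_to_configmap_data(sample))
```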
Updating ConfigMaps:
# Update config.yaml or .env locally
vim config.yaml
# Redeploy with Helm (ConfigMaps are recreated)
helm upgrade api-forge ./infra/helm/api-forge -n api-forge-prod
# Or use CLI
uv run api-forge-cli deploy up k8s
# Restart pods to pick up changes (forced by timestamp annotation)
# Pods automatically restart on each deployment

Kubernetes automatically restarts unhealthy pods:

livenessProbe:
  httpGet:
    path: /health/live
    port: 8000
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 10
  failureThreshold: 3

Kubernetes only routes traffic to ready pods:
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8000
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 10
  failureThreshold: 3

API Forge provides comprehensive health endpoints:
- `/health/live` - Simple liveness check (returns 200 if app is running)
- `/health/ready` - Readiness check (validates database, Redis, Temporal connections)
- `/health` - Detailed health status with metrics
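The readiness semantics — any failing dependency yields a non-200 so Kubernetes withholds traffic — can be sketched as a pure aggregation function. This is illustrative, not the actual endpoint implementation:

```python
def readiness(checks: dict) -> tuple[int, dict]:
    """Aggregate dependency checks into an HTTP status and body,
    in the spirit of /health/ready: any failing dependency -> 503."""
    results = {name: ("ok" if ok else "fail") for name, ok in checks.items()}
    status = 200 if all(checks.values()) else 503
    body = {"status": "ready" if status == 200 else "not ready",
            "checks": results}
    return status, body

print(readiness({"database": True, "redis": True, "temporal": False}))
```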
Resource configuration is managed through values.yaml. The templates dynamically read these values:
# infra/helm/api-forge/values.yaml
app:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 1Gi

Production Sizing Guidelines:
| Component | Requests (CPU/Mem) | Limits (CPU/Mem) | Notes |
|---|---|---|---|
| App | 250m / 256Mi | 1000m / 1Gi | Scale horizontally with HPA |
| Worker | 250m / 256Mi | 1000m / 1Gi | Conservative scale-down for workflows |
| PostgreSQL | 500m / 1Gi | 2000m / 4Gi | Consider managed DB for HA |
| Redis | 250m / 256Mi | 1000m / 1Gi | Match maxMemory config |
| Temporal | 500m / 1Gi | 2000m / 4Gi | Single instance sufficient for most loads |
The Helm chart includes built-in HPA support for the app and worker deployments. Enable autoscaling in values.yaml:
# infra/helm/api-forge/values.yaml
app:
  replicas: 1  # Base replicas when HPA is disabled
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 70
    targetMemoryUtilizationPercentage: 80
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300  # Wait 5 min before scaling down
        percentValue: 10                 # Scale down 10% at a time
        periodSeconds: 60
      scaleUp:
        stabilizationWindowSeconds: 0    # Scale up immediately
        percentValue: 100
        podsValue: 4                     # Add up to 4 pods at once
        periodSeconds: 15
worker:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 600  # Workers scale down more conservatively
        periodSeconds: 120               # to avoid disrupting running workflows

When `autoscaling.enabled: true`, the HPA controller manages replica count automatically based on CPU/memory metrics.
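The core HPA scaling rule is documented as desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the min/max bounds. A quick sketch of that arithmetic:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_r: int, max_r: int) -> int:
    """HPA scaling rule: desired = ceil(current * currentMetric / target),
    clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

# 2 pods at 90% CPU against a 70% target -> scale to 3
print(desired_replicas(2, 90, 70, 1, 5))  # -> 3
```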
Check HPA status:
kubectl get hpa -n api-forge-prod
kubectl describe hpa app -n api-forge-prod

PDBs ensure service availability during voluntary disruptions (node drains, upgrades). The chart includes PDBs for all services:
# infra/helm/api-forge/values.yaml
app:
  podDisruptionBudget:
    enabled: true
    maxUnavailable: 1  # Allow 1 pod to be unavailable (works with any replica count)
    # Or use minAvailable (but blocks eviction when replicas=1):
    # minAvailable: 1
postgres:
  podDisruptionBudget:
    enabled: true
    maxUnavailable: 1
redis:
  podDisruptionBudget:
    enabled: true
    maxUnavailable: 1

Note: Use `maxUnavailable` instead of `minAvailable` when running single-replica deployments. With `minAvailable: 1` and only 1 replica, Kubernetes cannot evict the pod during voluntary disruptions (node drains, upgrades), causing a deadlock.
Check PDB status:
kubectl get pdb -n api-forge-prod
kubectl describe pdb app -n api-forge-prod

If you prefer manual HPA configuration or need custom metrics:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80

ClusterIP (internal only):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: postgres

LoadBalancer (external access):
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8000
  selector:
    app: app

API Forge includes built-in Ingress support via CLI flags. Enable external access with:
# Basic ingress (HTTP)
uv run api-forge-cli deploy up k8s --ingress
# Custom hostname with TLS
uv run api-forge-cli deploy up k8s --ingress --ingress-host api.example.com --ingress-tls-secret api-tls

For comprehensive Ingress documentation including TLS setup, cloud provider configurations, and troubleshooting, see the Ingress Configuration Guide.
Restrict pod-to-pod communication:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-netpol
spec:
  podSelector:
    matchLabels:
      app: app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app: redis
      ports:
        - protocol: TCP
          port: 6379

Request persistent storage for databases:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi

Mount in deployments:
volumeMounts:
  - name: data
    mountPath: /var/lib/postgresql/data
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres-data

Use appropriate storage classes for your cloud provider:
- AWS: `gp3` (General Purpose SSD)
- GCP: `standard-rwo` (Standard persistent disk)
- Azure: `managed-premium` (Premium SSD)
View logs for troubleshooting:
# Application logs
kubectl logs -n my-project-prod deployment/app --tail=100
# Worker logs
kubectl logs -n my-project-prod deployment/worker --tail=100
# PostgreSQL logs
kubectl logs -n my-project-prod deployment/postgres --tail=100
# Follow logs in real-time
kubectl logs -n my-project-prod deployment/app -f

Expose Prometheus metrics:
# In your FastAPI app
from prometheus_client import make_asgi_app
# Mount Prometheus metrics endpoint
metrics_app = make_asgi_app()
app.mount("/metrics", metrics_app)

If using Prometheus Operator:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-metrics
spec:
  selector:
    matchLabels:
      app: app
  endpoints:
    - port: http
      path: /metrics
      interval: 30s

API Forge provides built-in rollback capabilities using Helm's native release management. If a deployment fails or introduces issues, you can quickly restore to a previous working state.
Check the revision history to see all deployments:
# Using the CLI (recommended)
uv run api-forge-cli deploy history
# Or using Helm directly
helm history api-forge -n api-forge-prod

Example output:
📜 Release History: api-forge
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Revision ┃ Updated ┃ Status ┃ Chart ┃ Description ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ 3 │ 2025-12-02 22:30:00 │ deployed │ api-forge-0.1.0 │ Upgrade complete │
│ 2 │ 2025-12-02 20:00:00 │ superseded │ api-forge-0.1.0 │ Upgrade complete │
│ 1 │ 2025-12-01 10:00:00 │ superseded │ api-forge-0.1.0 │ Install complete │
└──────────┴─────────────────────┴────────────┴────────────────────┴─────────────────────┘
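`helm rollback` without an explicit revision targets the release just before the current one. A toy sketch of that selection over a parsed history like the table above (illustrative, not Helm's implementation):

```python
def previous_revision(history: list) -> int:
    """Given (revision, status) pairs, return the newest revision older
    than the currently deployed one -- the default rollback target."""
    current = max(rev for rev, status in history if status == "deployed")
    return max(rev for rev, _ in history if rev < current)

history = [(3, "deployed"), (2, "superseded"), (1, "superseded")]
print(previous_revision(history))  # -> 2
```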
Restore to the immediately previous working version:
# Using the CLI (recommended)
uv run api-forge-cli deploy rollback
# Skip confirmation prompt (for automation)
uv run api-forge-cli deploy rollback --yes
# Using Helm directly
helm rollback api-forge -n api-forge-prod

Restore to a specific revision number:
# Using the CLI
uv run api-forge-cli deploy rollback 2
# Using Helm directly
helm rollback api-forge 2 -n api-forge-prod

The deployment automatically rolls back if pods fail to start. For manual Helm deployments, this is enabled by the --atomic flag of helm upgrade:

# The CLI does this automatically, but for manual deployments:
helm upgrade --install api-forge ./infra/helm/api-forge \
  --namespace api-forge-prod \
  --wait \
  --atomic

When a deployment fails or causes issues:
1. Check current status:

   uv run api-forge-cli deploy status k8s
   kubectl get pods -n api-forge-prod

2. View release history:

   uv run api-forge-cli deploy history

3. Identify a working revision from the history table

4. Rollback to the working revision:

   uv run api-forge-cli deploy rollback <revision>

5. Verify the rollback succeeded:

   uv run api-forge-cli deploy status k8s
   kubectl get pods -n api-forge-prod
Kubernetes also maintains ReplicaSet history for quick pod rollbacks:
# View deployment rollout history
kubectl rollout history deployment/app -n api-forge-prod
# Rollback to previous ReplicaSet
kubectl rollout undo deployment/app -n api-forge-prod
# Rollback to specific revision
kubectl rollout undo deployment/app -n api-forge-prod --to-revision=2

Note: The `revisionHistoryLimit` setting in `values.yaml` controls how many old ReplicaSets are retained. Default is 3.
Check pod status:
kubectl get pods -n my-project-prod
kubectl describe pod -n my-project-prod <pod-name>

Common issues:
- ImagePullBackOff: Image doesn't exist or registry auth missing
- CrashLoopBackOff: Application crashes on startup
- Pending: Insufficient resources or PVC not bound
Verify PostgreSQL is running:
kubectl get pods -n my-project-prod -l app=postgres
kubectl logs -n my-project-prod deployment/postgres

Test connection from app pod:
kubectl exec -n my-project-prod deployment/app -- \
psql -h postgres -U appuser -d appdb -c "SELECT 1;"

Check service:
kubectl get svc -n my-project-prod
kubectl describe svc -n my-project-prod app

Check endpoints:
kubectl get endpoints -n my-project-prod app

Port forward for testing:
kubectl port-forward -n my-project-prod svc/app 8000:8000
curl http://localhost:8000/health

### GitHub Actions

name: Deploy to Kubernetes with Helm
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Helm
        uses: azure/setup-helm@v3
        with:
          version: 'v3.13.0'
      - name: Build and push Docker images
        run: |
          docker build -t ${{ secrets.REGISTRY }}/api-forge-app:${{ github.sha }} .
          docker push ${{ secrets.REGISTRY }}/api-forge-app:${{ github.sha }}
          # Build other images as needed
      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG }}
      - name: Deploy secrets
        run: |
          # Ensure secrets exist (idempotent)
          ./infra/helm/api-forge/scripts/apply-secrets.sh
        env:
          # Secrets should be stored in GitHub Secrets
          POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
          SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
      - name: Deploy with Helm
        run: |
          helm upgrade api-forge ./infra/helm/api-forge \
            --install \
            --namespace api-forge-prod \
            --create-namespace \
            --set image.app=${{ secrets.REGISTRY }}/api-forge-app:${{ github.sha }} \
            --set image.pullPolicy=Always \
            --wait \
            --timeout 10m
      - name: Verify deployment
        run: |
          helm status api-forge -n api-forge-prod
          kubectl get pods -n api-forge-prod
          kubectl rollout status deployment/app -n api-forge-prod

### GitLab CI

deploy:
  stage: deploy
  image: alpine/helm:3.13.0
  script:
    # Configure kubectl
    - kubectl config set-cluster k8s --server="$K8S_SERVER"
    - kubectl config set-credentials gitlab --token="$K8S_TOKEN"
    - kubectl config set-context default --cluster=k8s --user=gitlab
    - kubectl config use-context default
    # Build images (if using GitLab registry)
    - docker build -t $CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHA
    # Deploy secrets
    - ./infra/helm/api-forge/scripts/apply-secrets.sh
    # Deploy with Helm
    - helm upgrade api-forge ./infra/helm/api-forge
      --install
      --namespace api-forge-prod
      --create-namespace
      --set image.app=$CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHA
      --wait
      --timeout 10m
    # Verify
    - helm status api-forge -n api-forge-prod
    - kubectl rollout status deployment/app -n api-forge-prod
  only:
    - main
### ArgoCD GitOps
For GitOps-style deployments with ArgoCD:
```yaml
# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-forge
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo
    targetRevision: main
    path: infra/helm/api-forge
    helm:
      valueFiles:
        - values.yaml
      parameters:
        - name: image.app
          value: your-registry/api-forge-app:v1.0.0
        - name: app.replicaCount
          value: "3"
  destination:
    server: https://kubernetes.default.svc
    namespace: api-forge-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

Deploy ArgoCD application:
kubectl apply -f argocd-application.yaml

- Use Helm for deployments - Provides templating, versioning, and rollback capabilities
- Sync config.yaml settings - Let the CLI handle redis.enabled and temporal.enabled synchronization
- Set resource requests and limits - Configure in `values.yaml` for all containers
- Enable HPA for production - Set `app.autoscaling.enabled: true` for automatic scaling
- Enable PDBs - Ensure `podDisruptionBudget.enabled: true` for service availability during maintenance
- Implement health checks - Configure liveness and readiness probes
- Use secrets properly - Never store sensitive data in ConfigMaps or values.yaml
- Enable NetworkPolicies - Restrict pod-to-pod communication
- Use Ingress with TLS - Secure external access with TLS certificates
- Use PersistentVolumes - Ensure data persistence for stateful services
- Tag images with versions - Avoid using `latest` in production
- Monitor and log - Implement comprehensive monitoring and logging
- Test locally first - Use Minikube to test deployments before production
- Use External Secrets Operator - For production secret management
- Leverage Helm rollbacks - Use the `deploy rollback` CLI command if issues arise
- Use `helm diff` - Preview changes before applying (requires the helm-diff plugin)
- Leverage hooks - Use Helm hooks for pre/post-install actions
- Version your charts - Increment Chart.yaml version for each change
- Test templates - Use `helm template` to render templates locally
- Use `.helmignore` - Exclude unnecessary files from chart packages
- Ingress Configuration - External access, TLS, and routing
- Docker Dev Environment - Local testing before deployment
- Docker Compose Production - Alternative deployment
- Testing Strategy - Test before deploying
- Secrets Management - Comprehensive secrets guide
- Helm Migration Plan - Migration from Kustomize to Helm
- Kubernetes Documentation
- Helm Documentation
- Helm Best Practices
- External Secrets Operator
- cert-manager
- ArgoCD - GitOps continuous delivery