This is a simplified, production-ready deployment of Sentry using a single Docker Compose file.
- **Single Command Deployment**: Just `docker compose up -d`
- **Automatic Initialization**: Secret generation, database migrations, and setup are handled automatically
- **Volume-Based Configuration**: All configuration is stored in Docker volumes
- **Minimal File Clutter**: Only volumes for user data are external
- **Production Ready**: Optimized for production deployments
- Docker Engine 20.10+ with Compose V2
- At least 4GB RAM (8GB+ recommended)
- 20GB+ disk space
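A quick preflight check can catch undersized hosts before the first start. This is only a sketch: it assumes a Linux host with GNU coreutils, and the thresholds simply mirror the requirements above.

```shell
# Preflight sketch (assumes Linux + GNU coreutils; thresholds mirror the list above)
min_ram_gb=4
min_disk_gb=20

ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail . | tail -n1 | tr -dc '0-9')

[ "$ram_gb" -ge "$min_ram_gb" ]   || echo "WARN: ${ram_gb}GB RAM is below the ${min_ram_gb}GB minimum"
[ "$disk_gb" -ge "$min_disk_gb" ] || echo "WARN: ${disk_gb}GB free disk is below the ${min_disk_gb}GB minimum"
```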
```shell
# Copy the environment file
cp .env.example .env

# Edit the configuration
nano .env
```

Required configuration:

- Set `SENTRY_MAIL_HOST` to your mail server hostname
- Optionally customize `SENTRY_SYSTEM_SECRET_KEY` (auto-generated if not set)
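A minimal `.env` for a first deployment might look like this (the hostname is illustrative):

```shell
# Example .env (values are illustrative)
SENTRY_MAIL_HOST=smtp.example.com
SENTRY_BIND=9000
# Leave SENTRY_SYSTEM_SECRET_KEY unset to have it auto-generated on first start
```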
```shell
# Start all services
docker compose -f docker-compose.production.yml --env-file .env up -d

# Watch the initialization (first run only)
docker compose -f docker-compose.production.yml --env-file .env logs -f init

# Once init is complete, all services will start automatically
```

On first deployment, you'll be prompted to create an admin user:

```shell
docker compose -f docker-compose.production.yml --env-file .env run --rm web createuser
```

Open your browser to http://localhost:9000 (or your configured `SENTRY_BIND` port).
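Since every command below repeats the same `-f`/`--env-file` flags, a small wrapper function can shorten them. This is just a convenience sketch; `sentry_compose` is an illustrative name, not part of the deployment.

```shell
# Wrapper so you don't have to retype the compose flags each time
sentry_compose() {
  docker compose -f docker-compose.production.yml --env-file .env "$@"
}

# Usage example: sentry_compose logs -f web
```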
All configuration is done through `.env`:

| Variable | Required | Description | Default |
|---|---|---|---|
| `SENTRY_SYSTEM_SECRET_KEY` | Yes | Secret key for encryption | Auto-generated |
| `SENTRY_MAIL_HOST` | Yes | Mail server hostname | - |
| `SENTRY_BIND` | No | Port to expose Sentry | 9000 |
| `COMPOSE_PROFILES` | No | Features to enable | `feature-complete` |
| `SENTRY_EVENT_RETENTION_DAYS` | No | Days to retain events | 90 |

See `.env.example` for all available options.
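If you would rather pin `SENTRY_SYSTEM_SECRET_KEY` than rely on auto-generation, one common approach is to derive it from `openssl` (an assumption about your host; any strong random source works):

```shell
# Generate a 64-character hex secret suitable for SENTRY_SYSTEM_SECRET_KEY
openssl rand -hex 32
```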
Choose the features you need:

- `feature-complete` (default): All Sentry features, including performance monitoring, profiling, replays, etc.
- `errors-only`: Minimal setup for error monitoring only (uses fewer resources)

Set in `.env`:

```shell
COMPOSE_PROFILES=errors-only
```

Advanced configuration can be done by editing files in the Docker volumes after the first run:
```shell
# List configuration volumes
docker volume ls | grep 'sentry-.*-config'

# Edit Sentry config
docker run --rm -v sentry-config:/config -it alpine vi /config/config.yml

# Edit Sentry Python config
docker run --rm -v sentry-config:/config -it alpine vi /config/sentry.conf.py
```

To view logs:

```shell
# All services
docker compose -f docker-compose.production.yml logs -f

# Specific service
docker compose -f docker-compose.production.yml logs -f web

# Last 100 lines
docker compose -f docker-compose.production.yml logs --tail=100
```

To restart services:

```shell
# Restart all
docker compose -f docker-compose.production.yml restart

# Restart a specific service
docker compose -f docker-compose.production.yml restart web
```

To stop and start the deployment:

```shell
# Stop all services (data is preserved)
docker compose -f docker-compose.production.yml stop

# Start all services
docker compose -f docker-compose.production.yml start

# Stop and remove containers (data still preserved in volumes)
docker compose -f docker-compose.production.yml down
```

To upgrade:

```shell
# Update .env with new image versions
nano .env

# Pull new images
docker compose -f docker-compose.production.yml pull

# Recreate containers with new images
docker compose -f docker-compose.production.yml up -d

# Watch the upgrade process
docker compose -f docker-compose.production.yml logs -f init web
```

Migrations run automatically during init. To run them manually:

```shell
docker compose -f docker-compose.production.yml run --rm web upgrade
```

To create an additional user:

```shell
docker compose -f docker-compose.production.yml run --rm web createuser
```

To open a Sentry shell:

```shell
docker compose -f docker-compose.production.yml run --rm web shell
```

All persistent data is stored in Docker volumes:
```shell
# List data volumes
docker volume ls | grep sentry-postgres
docker volume ls | grep sentry-redis
docker volume ls | grep sentry-kafka
docker volume ls | grep sentry-clickhouse
docker volume ls | grep sentry-seaweedfs

# Backup a volume (example: postgres)
docker run --rm \
  -v sentry-postgres:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/sentry-postgres-backup.tar.gz -C /data .
```

To restore from a backup:

```shell
# Stop services
docker compose -f docker-compose.production.yml down

# Restore a volume (example: postgres)
docker run --rm \
  -v sentry-postgres:/data \
  -v $(pwd):/backup \
  alpine sh -c "cd /data && tar xzf /backup/sentry-postgres-backup.tar.gz"

# Start services
docker compose -f docker-compose.production.yml up -d
```

Cleanup runs automatically via the `sentry-cleanup` cron job based on `SENTRY_EVENT_RETENTION_DAYS`. To run it manually:

```shell
docker compose -f docker-compose.production.yml run --rm web cleanup --days 90
```

To start over from scratch, remove everything. **WARNING: This will delete all data!**

```shell
# Stop and remove containers
docker compose -f docker-compose.production.yml down

# Remove all volumes
docker volume rm $(docker volume ls -q | grep sentry-)

# Start fresh
docker compose -f docker-compose.production.yml up -d
```

All services have built-in health checks:
```shell
# View service health status
docker compose -f docker-compose.production.yml ps

# View resource usage
docker stats

# View disk usage
docker system df -v
```

Sentry can send metrics to a StatsD server. Set in `.env`:
```shell
STATSD_ADDR=your-statsd-server:8125
```

If services fail to start, check the logs:

```shell
# Check logs
docker compose -f docker-compose.production.yml logs

# Check a specific service
docker compose -f docker-compose.production.yml logs postgres
docker compose -f docker-compose.production.yml logs kafka
docker compose -f docker-compose.production.yml logs clickhouse
```

If initialization fails, re-run it:

```shell
# Re-run initialization
docker compose -f docker-compose.production.yml up init --force-recreate

# Check init logs
docker compose -f docker-compose.production.yml logs init
```

If you run out of memory, increase the Docker memory limit or lower ClickHouse's memory usage:

```shell
# Lower ClickHouse memory usage (default is 0.3 = 30% of host memory)
# Set MAX_MEMORY_USAGE_RATIO via a custom ClickHouse config
```

For performance issues:

- Increase Docker resources (CPU, memory, disk)
- Use SSD storage for Docker volumes
- Adjust the retention period to reduce data volume
- Consider the `errors-only` profile if you don't need full features
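Another way to bound ClickHouse's memory without touching its internal config is a Compose-level limit. The fragment below is illustrative (the `4g` value is arbitrary), placed in a Compose override file:

```yaml
# docker-compose.override.yml (illustrative): cap ClickHouse memory
services:
  clickhouse:
    deploy:
      resources:
        limits:
          memory: 4g
```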
- nginx: Reverse proxy and load balancer
- web: Sentry web application
- cron: Scheduled cleanup tasks
- worker: Background task workers
- relay: Event ingestion proxy
- postgres: Main database
- clickhouse: Analytics database
- kafka: Message queue
- redis: Cache and task queue
- symbolicator: Debug symbol processing
- snuba: Analytics query engine
- vroom: Profiling service (feature-complete only)
- uptime-checker: Uptime monitoring (feature-complete only)
Data Volumes (persist user data):

- `sentry-data`: Uploaded files and artifacts
- `sentry-postgres`: PostgreSQL database
- `sentry-redis`: Redis data
- `sentry-kafka`: Kafka message logs
- `sentry-clickhouse`: ClickHouse analytics data
- `sentry-seaweedfs`: S3-compatible object storage
- `sentry-symbolicator`: Cached debug symbols
- `sentry-vroom`: Profiling data

Configuration Volumes (managed by init container):

- `sentry-config`: Sentry configuration files
- `sentry-relay-config`: Relay configuration
- `sentry-*-config`: Various service configurations

Ephemeral Volumes (can be deleted):

- `sentry-nginx-cache`: Nginx cache
- `sentry-kafka-log`: Kafka logs
- **Change Default Credentials**: Ensure `SENTRY_SYSTEM_SECRET_KEY` is set to a strong random value
- **Network Isolation**: Consider using Docker networks to isolate services
- **TLS/SSL**: Place nginx behind a TLS-terminating reverse proxy (Traefik, nginx, etc.)
- **Firewall**: Only expose port `SENTRY_BIND` (default 9000) to the network
- **Regular Updates**: Keep Sentry images updated with security patches
- **Backups**: Regularly back up data volumes
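As a sketch of TLS termination in front of the bundled nginx (the hostname and certificate paths are illustrative and must be adapted to your certificate setup):

```nginx
# Reverse proxy terminating TLS and forwarding to Sentry on SENTRY_BIND (9000)
server {
    listen 443 ssl;
    server_name sentry.example.com;

    ssl_certificate     /etc/ssl/certs/sentry.example.com.pem;
    ssl_certificate_key /etc/ssl/private/sentry.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```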
- **Use Specific Image Versions**: Don't use `:nightly` tags in production:

  ```shell
  SENTRY_IMAGE=ghcr.io/getsentry/sentry:24.1.0
  ```

- **Set Resource Limits**: Configure Docker resource limits for services
- **Use External Database**: For large deployments, use managed PostgreSQL/ClickHouse
- **Enable Monitoring**: Set up StatsD/Prometheus monitoring
- **Configure Backups**: Automate volume backups
- **Use a Reverse Proxy**: Add TLS termination with Let's Encrypt
- **Scale Workers**: Add more worker containers for high-volume installations
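For example, backup automation can be as simple as a host crontab entry reusing the volume backup command shown earlier (the path and schedule are illustrative; note that `%` must be escaped in crontab command fields):

```shell
# m h dom mon dow  command — nightly postgres volume backup at 02:00
0 2 * * * docker run --rm -v sentry-postgres:/data -v /srv/backups:/backup alpine tar czf /backup/sentry-postgres-$(date +\%F).tar.gz -C /data .
```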
This simplified deployment differs from the official installation:
- ✅ Single compose file: No multiple YAML files to manage
- ✅ No install script: Everything handled by Docker Compose
- ✅ Volume-based config: Configuration persists in volumes
- ✅ Init container: Setup runs automatically on first start
- ✅ Production focused: Optimized for deployment simplicity
For issues specific to this deployment setup, open an issue in this repository.
For general Sentry questions, see the official Sentry documentation (https://docs.sentry.io/).

License: same as Sentry self-hosted (BSL 1.1).