This guide explains how to use PostgreSQL as the database and Redis for caching in the Phantom API project.
The Phantom API now supports:
- Database flexibility: SQLite (development) or PostgreSQL (production)
- Redis caching: Intelligent caching layer for improved performance
- Graceful degradation: Application continues if Redis is unavailable
- Migration utilities: Easy migration from SQLite to PostgreSQL
```bash
# Start all services (PostgreSQL + Redis + Phantom API)
docker compose up -d

# View logs
docker compose logs -f phantom-api
```

```bash
# Install dependencies
cd phantom-api-backend
yarn install

# Set environment variables
cp .env.example .env
# Edit .env file with your configuration

# Start PostgreSQL and Redis (using Docker)
docker compose up -d postgres redis

# Start the application
yarn dev
```

```env
# Database Configuration
DATABASE_TYPE=postgresql   # or 'sqlite'

# PostgreSQL Settings
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=phantom_api
POSTGRES_USER=phantom_user
POSTGRES_PASSWORD=phantom_password
POSTGRES_SSL=false
POSTGRES_POOL_SIZE=10

# Redis Caching
REDIS_ENABLED=true
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DB=0
CACHE_TTL=300   # 5 minutes default
```

The application automatically selects the database based on `DATABASE_TYPE`:

- `sqlite` (default): Uses SQLite for development
- `postgresql`: Uses PostgreSQL for production
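The selection logic can be sketched as a small factory that reads the environment variables listed above. This is an illustrative sketch, not the project's actual code; the helper name `selectDatabaseConfig` and the `SQLITE_PATH` variable are assumptions.

```javascript
// Hypothetical sketch: choose a database config from DATABASE_TYPE.
// Anything other than 'postgresql' falls back to the SQLite default.
function selectDatabaseConfig(env) {
  const type = (env.DATABASE_TYPE || 'sqlite').toLowerCase();
  if (type === 'postgresql') {
    return {
      type: 'postgresql',
      host: env.POSTGRES_HOST || 'localhost',
      port: Number(env.POSTGRES_PORT || 5432),
      database: env.POSTGRES_DB || 'phantom_api',
      user: env.POSTGRES_USER || 'phantom_user',
      password: env.POSTGRES_PASSWORD || '',
      ssl: env.POSTGRES_SSL === 'true',
      poolSize: Number(env.POSTGRES_POOL_SIZE || 10),
    };
  }
  // SQLITE_PATH is an assumed variable name for illustration.
  return { type: 'sqlite', file: env.SQLITE_PATH || './data/phantom.db' };
}
```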
- Tables are created automatically from API calls
- Schema evolution: new columns added when new fields are detected
- Foreign key relationships maintained
- All existing dynamic functionality preserved
| JavaScript | SQLite  | PostgreSQL   |
| ---------- | ------- | ------------ |
| string     | TEXT    | VARCHAR(255) |
| text       | TEXT    | TEXT         |
| integer    | INTEGER | INTEGER      |
| boolean    | INTEGER | BOOLEAN      |
| number     | REAL    | DECIMAL      |
| date       | TEXT    | DATE         |
| datetime   | TEXT    | TIMESTAMP    |
| json       | TEXT    | TEXT         |
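The mapping above can be expressed as a lookup table. This is a sketch of the idea only; `inferColumnType` is a hypothetical helper name, not the project's actual API.

```javascript
// Logical type -> column type per dialect, mirroring the table above.
const TYPE_MAP = {
  string:   { sqlite: 'TEXT',    postgresql: 'VARCHAR(255)' },
  text:     { sqlite: 'TEXT',    postgresql: 'TEXT' },
  integer:  { sqlite: 'INTEGER', postgresql: 'INTEGER' },
  boolean:  { sqlite: 'INTEGER', postgresql: 'BOOLEAN' },
  number:   { sqlite: 'REAL',    postgresql: 'DECIMAL' },
  date:     { sqlite: 'TEXT',    postgresql: 'DATE' },
  datetime: { sqlite: 'TEXT',    postgresql: 'TIMESTAMP' },
  json:     { sqlite: 'TEXT',    postgresql: 'TEXT' },
};

// Hypothetical helper: resolve a column type for the active dialect.
function inferColumnType(logicalType, dialect) {
  const entry = TYPE_MAP[logicalType];
  if (!entry) throw new Error(`Unknown logical type: ${logicalType}`);
  return entry[dialect];
}
```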
- Query Results: API responses cached by query parameters
- Metadata: Table schemas and resource definitions
- Table Schemas: Dynamic table structure information
- Sessions: User authentication data (future enhancement)
```
phantom:api:resource:{tableName}:query:{hash}
phantom:metadata:{resourceName}
phantom:schema:{tableName}
phantom:session:user:{userId}
```
- Automatic: Cache cleared on CREATE, UPDATE, DELETE operations
- Manual: Admin endpoints for cache management
- TTL-based: Configurable time-to-live for all cache entries
- Query Results: 50-90% faster response times for repeated queries
- Metadata: Near-instant schema lookups
- Reduced Database Load: Fewer database queries
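The caching behavior described above is essentially the cache-aside pattern with a TTL. The sketch below uses an in-memory `Map` as a stand-in for Redis so it stays self-contained; `cacheGetOrLoad` is a hypothetical helper, not the real Phantom API code.

```javascript
const store = new Map(); // key -> { value, expiresAt }; Redis stand-in

// Cache-aside read: return a fresh cached value, otherwise run the loader
// (e.g. the actual database query) and cache its result for ttlSeconds.
async function cacheGetOrLoad(key, ttlSeconds, loader) {
  const hit = store.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return { value: hit.value, cache: 'HIT' };
  }
  const value = await loader();
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  return { value, cache: 'MISS' };
}
```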
```bash
# Basic migration
yarn migrate:postgres

# With backup and validation
yarn migrate:postgres --backup --validate

# Show help
yarn migrate:postgres:help
```

1. Backup your SQLite database:

   ```bash
   cp data/phantom.db data/phantom.db.backup
   ```

2. Set up PostgreSQL:

   ```bash
   docker compose up -d postgres
   ```

3. Run migration:

   ```bash
   yarn migrate:postgres --sqlite-path ./data/phantom.db --backup --validate
   ```

4. Update environment:

   ```env
   # In .env file
   DATABASE_TYPE=postgresql
   ```

5. Restart application:

   ```bash
   yarn dev
   ```
The migration tool validates:
- All tables are created in PostgreSQL
- Record counts match between SQLite and PostgreSQL
- Schema integrity is maintained
- Foreign key relationships are preserved
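The record-count check can be sketched as a comparison across two database adapters. This is an illustrative sketch only; the adapter shape (`countRows`) and the helper name are assumptions, not the migration tool's actual interface.

```javascript
// Compare per-table row counts between two adapters exposing countRows(table).
// Returns a list of mismatches; an empty array means all counts match.
async function validateCounts(sqliteDb, postgresDb, tables) {
  const mismatches = [];
  for (const table of tables) {
    const [a, b] = await Promise.all([
      sqliteDb.countRows(table),
      postgresDb.countRows(table),
    ]);
    if (a !== b) mismatches.push({ table, sqlite: a, postgres: b });
  }
  return mismatches;
}
```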
```bash
curl http://localhost:3000/health
```

Response includes:
- Database connection status
- Redis cache status
- Memory usage
- System information
API responses include cache headers:
- `X-Cache: HIT`: Response served from cache
- `X-Cache: MISS`: Response served from the database
Structured logs include:
- Cache hit/miss rates
- Database query performance
- Redis connection status
- Migration progress
GET endpoints are automatically cached:

```
GET /api/users       # Cached for 5 minutes
GET /api/users/123   # Cached for 10 minutes
```

Write operations invalidate the cache:

```
POST /api/users        # Invalidates users cache
PUT /api/users/123     # Invalidates users cache
DELETE /api/users/123  # Invalidates users cache
```
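Invalidation on write can be sketched as clearing every cached query key that belongs to the written table. A `Set` stands in for Redis key scanning here; the helper names are illustrative assumptions.

```javascript
const cacheKeys = new Set(); // tracked cache keys; Redis stand-in

function rememberKey(key) { cacheKeys.add(key); }

// On POST/PUT/DELETE, drop every cached query for the affected table,
// matching the phantom:api:resource:{tableName}: prefix.
function invalidateTable(tableName) {
  const prefix = `phantom:api:resource:${tableName}:`;
  for (const key of [...cacheKeys]) {
    if (key.startsWith(prefix)) cacheKeys.delete(key);
  }
}
```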
- All existing APIs work unchanged
- SQLite remains the default for development
- Graceful fallback if Redis is unavailable
```yaml
services:
  postgres:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes

  phantom-api:
    environment:
      DATABASE_TYPE: postgresql
      REDIS_ENABLED: "true"   # quoted: compose environment values must be strings
```

- Use strong passwords for PostgreSQL
- Enable Redis authentication in production
- Use SSL connections where applicable
- Configure proper firewall rules
```env
# Connection pooling
POSTGRES_POOL_SIZE=20

# SSL for production
POSTGRES_SSL=true
```

```env
# Increase cache TTL for stable data
CACHE_TTL=1800   # 30 minutes

# Use Redis password
REDIS_PASSWORD=secure_password
```

- Use pagination for large result sets
- Implement query optimization
- Monitor cache hit rates
- Tune cache TTL based on data volatility
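The pagination tip above can be sketched as a small helper that clamps the requested page and size and derives `LIMIT`/`OFFSET`. `paginate` and its defaults are illustrative assumptions, not the project's API.

```javascript
// Clamp page/pageSize and derive SQL LIMIT/OFFSET values.
// Defaults (25 per page, 100 max) are assumed for illustration.
function paginate(page, pageSize, maxPageSize = 100) {
  const p = Math.max(1, Math.trunc(page) || 1);
  const size = Math.min(Math.max(1, Math.trunc(pageSize) || 25), maxPageSize);
  return { limit: size, offset: (p - 1) * size };
}
```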
1. PostgreSQL Connection Failed

   ```bash
   # Check PostgreSQL status
   docker compose logs postgres

   # Verify credentials
   psql -h localhost -U phantom_user -d phantom_api
   ```

2. Redis Connection Issues

   ```bash
   # Check Redis status
   docker compose logs redis

   # Test connection
   redis-cli -h localhost ping
   ```

3. Migration Failures

   ```bash
   # Check logs
   yarn migrate:postgres --verbose

   # Validate data manually
   yarn migrate:postgres --dry-run
   ```
Enable verbose logging:

```env
NODE_ENV=development
DEBUG=phantom:*
```

```bash
# Start databases only
docker compose up -d postgres redis

# Develop with hot reload
yarn dev

# Run tests
yarn test

# Check cache status
curl http://localhost:3000/health
```

```bash
# Run all tests
yarn test

# Test with PostgreSQL
DATABASE_TYPE=postgresql yarn test

# Test cache functionality
REDIS_ENABLED=true yarn test
```

- Monitor performance improvements
- Implement additional caching strategies
- Add cache warming for common queries
- Consider read replicas for PostgreSQL
- Implement cache analytics
For more information, see: