# Distributed Locks

_Nick edited this page Mar 10, 2026_
PATAS supports horizontal scaling across multiple instances using distributed locks to coordinate operations and prevent duplicate work.
When running PATAS on multiple instances (e.g., for high availability or load distribution), distributed locks ensure that:
- Pattern mining operations don't run simultaneously on different instances
- Rule promotion/deprecation operations are coordinated
- No duplicate work is performed
PATAS uses a two-tier locking strategy:
- Redis (Preferred): Fast, distributed locks with automatic expiration
- PostgreSQL Advisory Locks (Fallback): Database-level locks when Redis is unavailable
Two categories of locks are used:
- Pattern Mining Locks: Prevent concurrent pattern mining operations
- Rule Promotion Locks: Coordinate rule promotion/deprecation across instances
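The two-tier strategy can be sketched as follows. This is a minimal illustration, not PATAS's actual implementation: `FakeRedis` is an in-memory stand-in for a real Redis client, and `acquire_lock` and the advisory-lock callback are hypothetical helpers.

```python
import time

class FakeRedis:
    """In-memory stand-in for a Redis client (hypothetical; real code would
    use redis-py). Mimics SET key value NX EX semantics."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, nx=False, ex=None):
        now = time.monotonic()
        current = self._store.get(key)
        if nx and current is not None and current[1] > now:
            return None  # NX: key already held and not yet expired
        self._store[key] = (value, now + (ex or float("inf")))
        return True

def acquire_lock(redis_client, pg_try_advisory_lock, key, ttl=3600):
    """Tier 1: try Redis (fast, auto-expiring). Tier 2: fall back to a
    PostgreSQL advisory lock if Redis is unreachable. Returns True if
    the lock was acquired."""
    try:
        return bool(redis_client.set(key, "locked", nx=True, ex=ttl))
    except ConnectionError:
        # Non-blocking fallback, mirroring SELECT pg_try_advisory_lock(...)
        return pg_try_advisory_lock(key)

r = FakeRedis()
print(acquire_lock(r, lambda k: True, "pattern_mining:30:5"))  # first caller wins
print(acquire_lock(r, lambda k: True, "pattern_mining:30:5"))  # second caller refused
```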
To enable Redis-backed locks, install and start Redis:

```shell
# Using Docker
docker run -d -p 6379:6379 redis:latest

# Or using the system package manager
sudo apt-get install redis-server
```

Then configure PATAS to use Redis:

```shell
# In .env or environment variables
REDIS_URL=redis://localhost:6379/0
ENABLE_DISTRIBUTED_LOCKS=true
LOCK_TIMEOUT_SECONDS=3600
```

If Redis is not configured, PATAS automatically falls back to PostgreSQL advisory locks. No additional configuration is needed.
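PostgreSQL advisory locks are keyed by integers rather than strings, so a string lock key has to be mapped to a stable 64-bit id. A hypothetical sketch of such a mapping (PATAS's actual hashing scheme is not documented here):

```python
import hashlib

def advisory_lock_id(key: str) -> int:
    """Derive a stable signed 64-bit id from a string lock key, suitable
    for passing to pg_try_advisory_lock. Hypothetical mapping."""
    digest = hashlib.sha256(key.encode()).digest()
    unsigned = int.from_bytes(digest[:8], "big")
    # Fold into PostgreSQL's signed bigint range
    return unsigned - 2**64 if unsigned >= 2**63 else unsigned

lock_id = advisory_lock_id("pattern_mining:30:5")
# The fallback would then issue, for example:
#   SELECT pg_try_advisory_lock(%s);  -- non-blocking, returns true/false
#   SELECT pg_advisory_unlock(%s);    -- release when the operation finishes
assert -2**63 <= lock_id < 2**63
```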
For single-instance deployments, you can disable distributed locks:
```shell
ENABLE_DISTRIBUTED_LOCKS=false
```

**Pattern Mining Locks**

- Lock Key: `pattern_mining:{days}:{min_spam_count}`
- Timeout: Configurable (default: 3600 seconds)
- Behavior: If another instance is already mining patterns with the same parameters, the operation returns immediately with `already_in_progress` status
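As a sketch of this non-blocking behavior (the helper names are hypothetical, not PATAS's actual API; only the key format and the `already_in_progress` status come from the docs above):

```python
def pattern_mining_lock_key(days: int, min_spam_count: int) -> str:
    """Build the documented key: pattern_mining:{days}:{min_spam_count}."""
    return f"pattern_mining:{days}:{min_spam_count}"

def run_pattern_mining(try_acquire, days: int, min_spam_count: int) -> dict:
    """Non-blocking: if the lock is held elsewhere, return immediately
    with already_in_progress instead of waiting for it."""
    key = pattern_mining_lock_key(days, min_spam_count)
    if not try_acquire(key):
        return {"status": "already_in_progress", "lock_key": key}
    # ... mining would run here, then the lock would be released ...
    return {"status": "started", "lock_key": key}

print(run_pattern_mining(lambda key: False, 30, 5))
# {'status': 'already_in_progress', 'lock_key': 'pattern_mining:30:5'}
```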
**Rule Promotion Locks**

- Lock Keys:
  - `rule_promotion:shadow` - For shadow rule promotion
  - `rule_promotion:monitor` - For active rule monitoring
- Timeout: Configurable (default: 3600 seconds)
- Behavior: Only one instance can promote or monitor rules at a time
**Lock Heartbeat**

When using Redis, locks are automatically refreshed (heartbeat) to prevent expiration during long-running operations:
- Heartbeat Interval: 60 seconds (configurable)
- Automatic Extension: Lock TTL is refreshed while the operation is running
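A minimal sketch of the heartbeat idea, with assumed mechanics: `HeartbeatLock` and its `extend` callback are hypothetical (in a real deployment `extend` would refresh the Redis key's TTL, e.g. via PEXPIRE), and the sub-second interval below is only for demonstration.

```python
import threading
import time

class HeartbeatLock:
    """Periodically extends a lock's TTL while work runs, so a long
    operation cannot outlive its lock."""
    def __init__(self, extend, interval=60):
        self._extend = extend          # callback that refreshes the TTL
        self._interval = interval      # heartbeat interval in seconds
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # wait() returns False on timeout, True once stop is signaled
        while not self._stop.wait(self._interval):
            self._extend()  # refresh the TTL before it can expire

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

refreshes = []
with HeartbeatLock(lambda: refreshes.append(1), interval=0.01):
    time.sleep(0.05)  # simulate a long-running operation
print(len(refreshes) > 0)  # the TTL was refreshed at least once
```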
**Error Handling**

- Redis Connection Failure: Automatically falls back to PostgreSQL advisory locks
- Lock Acquisition Failure: Operation returns with an appropriate error message
- Lock Release Failure: Logged as a warning; doesn't affect the operation result
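The "release failure is only a warning" rule might be implemented along these lines (a sketch with hypothetical names; the key point is that a failed release is logged but never changes the operation's result, since an expiring TTL will clean the lock up anyway):

```python
import logging

logger = logging.getLogger("patas.locks")

def finish_with_lock_release(release, result):
    """Return the operation's result even if releasing the lock fails;
    the failure is only logged as a warning."""
    try:
        release()
    except Exception as exc:
        logger.warning("Failed to release lock: %s", exc)
    return result

def flaky_release():
    raise ConnectionError("redis connection lost")

print(finish_with_lock_release(flaky_release, {"status": "completed"}))
# {'status': 'completed'}  -- the result is unaffected
```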
**Best Practices**

- Use Redis for Production: Provides better performance and automatic expiration
- Set Appropriate Timeouts: Ensure lock timeout exceeds expected operation duration
- Monitor Lock Contention: Watch for frequent `already_in_progress` responses
- Single Instance: Disable distributed locks for single-instance deployments to reduce overhead
**Troubleshooting**

If you see `already_in_progress` errors:
- Check if another instance is running the same operation
- Verify lock timeout is sufficient for your operation duration
- Check Redis/PostgreSQL connectivity
If Redis is unavailable:
- PATAS automatically falls back to PostgreSQL locks
- Check Redis connectivity: `redis-cli ping`
- Verify `REDIS_URL` configuration
If locks expire during long operations:
- Increase `LOCK_TIMEOUT_SECONDS` configuration
- Check Redis memory and connection pool settings
- Monitor operation duration and optimize if needed
**Related Pages**

- Scaling Guide - Horizontal scaling strategies
- Production Deployment Guide - Production deployment best practices
- Configuration - Complete configuration reference