
Fix rate limiter Redis fallback for healthcheck#191

Merged
welshDog merged 1 commit into main from railway/code-change-YCUJLZ on Apr 29, 2026

Conversation

Contributor

@railway-app railway-app Bot commented Apr 29, 2026

Problem

The /health endpoint returns HTTP 500 on every healthcheck because the slowapi rate limiter middleware attempts to connect to redis://redis:6379 — the hardcoded internal Docker Compose hostname — when no Redis service exists in the deployment. The DNS lookup fails with redis.exceptions.ConnectionError: Error -2 connecting to redis:6379. Name or service not known, which propagates through the middleware stack and crashes the response. The previous guard (if os.getenv("RAILWAY_ENVIRONMENT") and redis_url.startswith("redis://redis:")) only fired when the RAILWAY_ENVIRONMENT variable was present, leaving a gap for Railway deployments where that variable may not be set.

Solution

Updated _rate_limit_storage_uri() in backend/app/middleware/rate_limiting.py to unconditionally return "memory://" whenever the resolved Redis URL starts with redis://redis:, removing the dependency on RAILWAY_ENVIRONMENT being set. Also reordered the env var lookup to check REDIS_URL (Railway's standard variable) before HYPERCODE_REDIS_URL, and fixed _with_redis_db to return "memory://" instead of re-introducing the bare hostname as a fallback. When a real Redis URL is provided via REDIS_URL or HYPERCODE_REDIS_URL, it is used as before; otherwise the limiter runs entirely in-memory, which is correct for single-instance deployments.

Changes

  • Modified backend/app/middleware/rate_limiting.py

Generated by Railway

@welshDog welshDog merged commit 79c7231 into main Apr 29, 2026
1 of 9 checks passed
