feat: Improved rate limit handling #945
Review Summary by Qodo

Implement per-host rate limit coordination with separate client groups.

Walkthrough

• Separates rate limit tracking into two HTTP client groups for independent pacing
• Introduces RateLimitCoordinator to track per-host cooldowns across requests
• Adds TORBOX_CLIENT_SLOW for slow operations with dedicated rate limit handling
• Improves 429 response handling without requiring timeout overrides
• Adds comprehensive test coverage for rate limit coordination and handler logic

Diagram

```mermaid
flowchart LR
    A["HTTP Requests"] --> B["RateLimitHandler"]
    B --> C["RateLimitCoordinator"]
    C --> D["Per-Host Cooldowns"]
    E["Fast Operations"] --> F["TORBOX_CLIENT"]
    G["Slow Operations"] --> H["TORBOX_CLIENT_SLOW"]
    F --> B
    H --> B
    B --> I["429 Response"]
    I --> D
    D --> J["Pause Dequeuing"]
```
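The diagram above shows 429 responses feeding per-host cooldowns that pause dequeuing. A minimal sketch of that idea follows; the PR's actual implementation is in C#, and the method names here (`report_429`, `wait_time`) are illustrative, not the project's real API.

```python
import time
from threading import Lock

class RateLimitCoordinator:
    """Track a cooldown deadline per host; callers check it before dequeuing work."""

    def __init__(self):
        self._cooldowns = {}  # host -> monotonic deadline after which requests may resume
        self._lock = Lock()

    def report_429(self, host, retry_after_seconds):
        # Record when this host may be contacted again; keep the latest deadline
        # if several requests hit the limit concurrently.
        with self._lock:
            deadline = time.monotonic() + retry_after_seconds
            self._cooldowns[host] = max(self._cooldowns.get(host, 0.0), deadline)

    def wait_time(self, host):
        # Seconds to pause before dequeuing for this host; 0 if no cooldown is active.
        with self._lock:
            return max(0.0, self._cooldowns.get(host, 0.0) - time.monotonic())
```

Because the coordinator is keyed by host, the two client groups can share it while still pacing each host independently.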
File Changes

1. server/RdtClient.Service.Test/Helpers/RateLimitCoordinatorTest.cs
Code Review by Qodo

1. TorBox user call unhandled
…and improve test consistency across methods. Updated logging and error handling for rate-limiting scenarios.
Description

Needed to move the retry handler before the timeout handler, and to drive the throw from the timeout configured by the user for the provider.
Separates calls into two groups, which allows pacing headers to be tracked separately, greatly increasing the reliability of the rate limiting.
Also improves the handling of 429 responses, so it is no longer necessary to override request timeouts to signal rate limiting on slow queries.
The custom handler is no longer needed on the standard HTTP client resiliency pipeline, since rate limiting on that pipeline lasts only a few seconds and typically cannot exceed the timeout. On the slow pipeline (used for creating new downloads on TorBox), however, the throttles are exceptionally long, so the handler is needed to determine when a timeout would result from the throttle and to report it properly to the RateLimitCoordinator.
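The slow-pipeline behavior described above can be sketched as follows. This is an assumption-laden illustration, not the PR's code: the function and exception names are hypothetical, and `coordinator` stands in for any object exposing a `report_429`-style method.

```python
class RateLimitTimeoutError(Exception):
    """Raised when the server-mandated wait exceeds the user-configured timeout."""

def handle_429(host, retry_after_seconds, user_timeout_seconds, coordinator):
    # Always record the cooldown first, so other queued requests for this
    # host pause dequeuing instead of piling on more 429s.
    coordinator.report_429(host, retry_after_seconds)

    # On the slow pipeline the throttle can far exceed the configured timeout;
    # fail fast with a clear error rather than blocking until the HTTP client
    # times out on its own.
    if retry_after_seconds > user_timeout_seconds:
        raise RateLimitTimeoutError(
            f"{host} asked to wait {retry_after_seconds}s, longer than the "
            f"configured timeout of {user_timeout_seconds}s"
        )

    # Short throttle: the caller sleeps this long, then retries.
    return retry_after_seconds
```

On the standard pipeline this check is unnecessary, because the wait is a few seconds at most and the built-in retry policy can absorb it.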
Other providers should consider testing with the TorBox standard pipeline and adopting it: any provider behind Cloudflare, or implementing standard headers for pacing and rate limits, would benefit without needing changes. Providers with very low rate limits on some API groups, like TorBox, should use the slow pipeline.
For future consideration: if other providers have more rate-limit groups to monitor, or need notably different settings, it might be better to use a factory implemented by the provider classes to generate the needed pipelines independently.
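The factory idea could look roughly like this. All class and field names here are hypothetical (the project's real provider classes are C#); the point is only the shape: each provider declares its own pipeline settings instead of sharing two hard-coded ones.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineSettings:
    # Per-pipeline pacing configuration (illustrative fields only).
    name: str
    max_retries: int
    timeout_seconds: float

class Provider:
    """Base class: providers override this to declare their own pipelines."""

    def rate_limit_pipelines(self):
        # Default: a single standard pipeline suits most providers.
        return [PipelineSettings("standard", max_retries=3, timeout_seconds=30.0)]

class TorBoxLikeProvider(Provider):
    def rate_limit_pipelines(self):
        # A provider with very slow API groups adds a second pipeline whose
        # throttles can run far longer than the standard timeout.
        return [
            PipelineSettings("standard", max_retries=3, timeout_seconds=30.0),
            PipelineSettings("slow", max_retries=5, timeout_seconds=600.0),
        ]
```

This would keep pipeline construction next to the provider that understands its own rate-limit groups, rather than in shared client setup code.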