
feat: Improved rate limit handling #945

Merged
rogerfar merged 3 commits into rogerfar:main from omgbeez:upstream/even-better-ratelimitng
Mar 14, 2026

Conversation

omgbeez (Contributor) commented Mar 7, 2026

Separates calls into two groups, which allows pacing headers to be tracked separately, greatly increasing the reliability of the rate limiting.

Also improves the handling of 429 responses, so it is no longer necessary to override request timeouts to signal rate limiting on slow queries.

The custom handler is no longer needed on the standard HTTP client resiliency pipeline, since rate-limit waits on that pipeline last only a few seconds and typically cannot exceed the timeout. On the slow pipeline (used for creating new downloads on TorBox), the throttle delays are exceptionally long, so the handler is still needed there to determine when the throttle would cause a timeout and to report it properly to the RateLimitCoordinator.

Other providers should consider testing with the TorBox standard pipeline and adopting it: any provider behind Cloudflare, or implementing standard headers for pacing and rate limits, would benefit without needing changes. Providers that apply very low rate limits to some API groups, as TorBox does, should use the slow pipeline for those calls.

For future consideration: if other providers have more rate-limit groups to monitor, or need notably different settings, it might be better to use a factory implemented by the provider classes to generate the needed pipelines independently.
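For readers unfamiliar with the coordinator concept described above, here is a minimal sketch of per-host cooldown tracking. It is written in TypeScript purely for illustration — the actual implementation is C#, and all names and signatures below are hypothetical:

```typescript
// Hypothetical sketch of the per-host cooldown idea behind RateLimitCoordinator.
// The real implementation is C# and may differ in structure and naming.
class CooldownCoordinator {
  // host -> epoch ms at which requests to that host may resume
  private cooldowns = new Map<string, number>();

  // Record a 429 (or Retry-After) observed for a host.
  reportRateLimit(host: string, retryAfterSeconds: number, now: number = Date.now()): void {
    const until = now + retryAfterSeconds * 1000;
    const existing = this.cooldowns.get(host) ?? 0;
    if (until > existing) {
      this.cooldowns.set(host, until); // only ever extend a cooldown, never shorten it
    }
  }

  // May we send a request to this host right now?
  isAllowed(host: string, now: number = Date.now()): boolean {
    return (this.cooldowns.get(host) ?? 0) <= now;
  }

  // Latest resume time across all hosts, or null when nothing is cooling down.
  getMaxNextAllowedAt(now: number = Date.now()): number | null {
    let max: number | null = null;
    for (const until of this.cooldowns.values()) {
      if (until > now && (max === null || until > max)) {
        max = until;
      }
    }
    return max;
  }
}
```

Tracking cooldowns per host (rather than one global pause) is what lets the two client groups pace independently while still sharing a single source of truth for "when may dequeuing resume".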

@qodo-code-review

Review Summary by Qodo

Implement per-host rate limit coordination with separate client groups

✨ Enhancement

Walkthroughs

Description
• Separates rate limit tracking into two HTTP client groups for independent pacing
• Introduces RateLimitCoordinator to track per-host cooldowns across requests
• Adds TORBOX_CLIENT_SLOW for slow operations with dedicated rate limit handling
• Improves 429 response handling without requiring timeout overrides
• Adds comprehensive test coverage for rate limit coordination and handler logic
Diagram
flowchart LR
  A["HTTP Requests"] --> B["RateLimitHandler"]
  B --> C["RateLimitCoordinator"]
  C --> D["Per-Host Cooldowns"]
  E["Fast Operations"] --> F["TORBOX_CLIENT"]
  G["Slow Operations"] --> H["TORBOX_CLIENT_SLOW"]
  F --> B
  H --> B
  B --> I["429 Response"]
  I --> D
  D --> J["Pause Dequeuing"]

File Changes

1. server/RdtClient.Service.Test/Helpers/RateLimitCoordinatorTest.cs 🧪 Tests +68/-0
   New unit tests for rate limit coordinator

2. server/RdtClient.Service.Test/Helpers/RateLimitHandlerTest.cs 🧪 Tests +47/-3
   Enhanced tests for rate limit handler behavior

3. server/RdtClient.Service.Test/Services/TorrentClients/TorBoxDebridClientTest.cs 🧪 Tests +86/-178
   Updated tests to use rate limit coordinator mock

4. server/RdtClient.Service/DiConfig.cs ⚙️ Configuration changes +10/-13
   Register coordinator and split HTTP clients

5. server/RdtClient.Service/Helpers/IRateLimitCoordinator.cs ✨ Enhancement +10/-0
   New interface for rate limit coordination

6. server/RdtClient.Service/Helpers/RateLimitCoordinator.cs ✨ Enhancement +84/-0
   New coordinator implementation for per-host cooldowns

7. server/RdtClient.Service/Helpers/RateLimitHandler.cs ✨ Enhancement +40/-19
   Refactored to use coordinator and extract retry logic

8. server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs ✨ Enhancement +238/-222
   Integrated coordinator and separated client usage

9. server/RdtClient.Service/Services/TorrentRunner.cs ✨ Enhancement +10/-21
   Use coordinator instead of static dequeue time

10. server/RdtClient.Web.Test/Controllers/TorrentsControllerNzbTest.cs 🧪 Tests +6/-3
    Updated test to inject coordinator mock

11. server/RdtClient.Web/Controllers/TorrentsController.cs ✨ Enhancement +4/-4
    Use coordinator for rate limit status endpoint

12. server/RdtClient.Web/Program.cs ⚙️ Configuration changes +2/-1
    Add Polly logging level configuration

13. client/src/app/torrent-table/torrent-table.component.html 📝 Documentation +1/-1
    Update rate limit notification message text

14. client/src/app/torrent-table/torrent-table.component.ts ✨ Enhancement +3/-6
    Simplify sort direction toggle logic

qodo-code-review bot commented Mar 7, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0) 📎 Requirement gaps (0)


Action required

1. TorBox user call unhandled 🐞 Bug ⛯ Reliability
Description
AddTorrentMagnet/AddTorrentFile call GetClient().User.GetAsync(true) outside HandleErrors
and via the default TorBox client, which is registered without RateLimitHandler. A 429/cooldown on
the user endpoint can therefore bypass the new coordinator-based rate-limit handling and bubble up
as a non-RateLimitException failure.
Code

server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs[R269-270]

            var user = await GetClient().User.GetAsync(true);
-            var result = await GetClient().Torrents.AddMagnetAsync(magnetLink, user.Data?.Settings?.SeedTorrents ?? 3, as_queued: asQueued);
-
+            var result = await GetClient(DiConfig.TORBOX_CLIENT_SLOW).Torrents.AddMagnetAsync(magnetLink, user.Data?.Settings?.SeedTorrents ?? 3, as_queued: asQueued);
Evidence
The default TorBox named client (TORBOX_CLIENT) is configured without RateLimitHandler, while
only TORBOX_CLIENT_SLOW includes it. TorBoxDebridClient.GetClient() defaults to TORBOX_CLIENT,
and the add methods call User.GetAsync(true) through this default client and not wrapped by
HandleErrors, so any rate limit response (e.g., 429 translated by RateLimitHandler) will not be
converted into RateLimitException nor update the coordinator for this call path.

server/RdtClient.Service/DiConfig.cs[79-90]
server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs[20-21]
server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs[265-282]
server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs[203-226]
server/RdtClient.Service/Helpers/RateLimitHandler.cs[48-54]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`AddTorrentMagnet` and `AddTorrentFile` call `GetClient().User.GetAsync(true)` outside of the `HandleErrors` wrapper and through the default TorBox client. Since the default TorBox named HttpClient is registered without `RateLimitHandler`, 429-based cooldown behavior can bypass the coordinator and cause non-RateLimitException failures.

### Issue Context
- `TORBOX_CLIENT` currently has resilience policies but does not have `RateLimitHandler`, which is the component that turns 429 into `RateLimitException` and updates `IRateLimitCoordinator`.
- `TORBOX_CLIENT_SLOW` does have `RateLimitHandler`, and add operations use it for the add call—but not for the user preflight call.

### Fix Focus Areas
- server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs[265-282]
- server/RdtClient.Service/Services/DebridClients/TorBoxDebridClient.cs[203-226]
- server/RdtClient.Service/DiConfig.cs[79-90]

### Suggested approach
- Option A (minimal/local): In `AddTorrentMagnet` / `AddTorrentFile`, fetch the user with the slow client and through `HandleErrors`, e.g. `var user = await HandleErrors(() => GetClient(DiConfig.TORBOX_CLIENT_SLOW).User.GetAsync(true));`.
- Option B (systemic): Add `.AddHttpMessageHandler<RateLimitHandler>()` to `TORBOX_CLIENT` as well, so all TorBox calls participate in 429->cooldown coordination.
- Consider adding/adjusting unit tests to cover rate limiting occurring on the user endpoint during add flows.
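To make Option B concrete — the idea that a shared handler should turn every 429 into a typed exception and report the cooldown so no call path bypasses coordination — here is a minimal sketch. It is TypeScript for illustration only; the project's real handler is a C# `DelegatingHandler` in a Polly pipeline, and the names below are hypothetical:

```typescript
// Illustrative sketch of the 429 -> typed-exception pattern RateLimitHandler
// implements. Names (RateLimitError, handleResponse, Coordinator) are hypothetical.
class RateLimitError extends Error {
  constructor(public readonly retryAfterSeconds: number) {
    super(`Rate limited; retry after ${retryAfterSeconds}s`);
  }
}

interface Coordinator {
  reportRateLimit(host: string, retryAfterSeconds: number): void;
}

// Inspect a response before handing it back to the caller: on 429, update the
// coordinator and throw a typed error instead of letting a raw failure bubble up.
function handleResponse(
  response: { status: number; headers: Map<string, string> },
  host: string,
  coordinator: Coordinator
): void {
  if (response.status === 429) {
    const retryAfter = Number(response.headers.get("retry-after") ?? "5");
    coordinator.reportRateLimit(host, retryAfter);
    throw new RateLimitError(retryAfter);
  }
}
```

The point of the bug report is that any request sent through a client lacking this handler (here, the user preflight on the default client) produces an untyped failure and never updates the coordinator.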




Remediation recommended

2. Rate limit banner stuck 🐞 Bug ✧ Quality
Description
When cooldowns expire, the server no longer pushes a SignalR rateLimitStatus update to clear the
UI, and the UI doesn’t poll for changes. As a result, the banner can keep showing “Processing paused
until …” after processing has already resumed, until refresh or another rate-limit event.
Code

server/RdtClient.Service/Services/TorrentRunner.cs[R350-356]

+            var nextAllowedAt = coordinator.GetMaxNextAllowedAt();
+            if (nextAllowedAt > DateTimeOffset.UtcNow)
            {
-                logger.LogDebug($"Dequeuing torrents is paused until {NextDequeueTime}, {NextDequeueTime - DateTimeOffset.Now} remaining");
+                logger.LogDebug($"Dequeuing torrents is paused until {nextAllowedAt}, {nextAllowedAt - DateTimeOffset.Now} remaining");
            }
            else
            {
-                if (NextDequeueTime != DateTimeOffset.MinValue)
-                {
-                    NextDequeueTime = DateTimeOffset.MinValue;
-
-                    await remoteService.UpdateRateLimitStatus(new()
-                    {
-                        NextDequeueTime = null,
-                        SecondsRemaining = 0
-                    });
-                }
-
Evidence
TorrentRunner.Tick() checks the coordinator and either pauses or dequeues, but it does not notify
clients when the pause condition is no longer true. The only SignalR rateLimitStatus push is in
SetRateLimit(), and the Angular UI updates rateLimitStatus only from the initial HTTP call and
subsequent SignalR pushes—so without a “clear” push, the UI state can remain stale.

server/RdtClient.Service/Services/TorrentRunner.cs[344-387]
server/RdtClient.Service/Services/TorrentRunner.cs[741-753]
server/RdtClient.Service/Services/RemoteService.cs[91-94]
client/src/app/torrent-table/torrent-table.component.ts[86-94]
client/src/app/torrent-table/torrent-table.component.html[16-21]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Rate-limit status is pushed to clients only when a rate limit is encountered. When the cooldown expires, the server resumes dequeuing but never pushes an update to clear the rate-limit banner, so the Angular UI can remain stuck showing a paused state.

### Issue Context
- The UI updates `rateLimitStatus` via one initial HTTP request and then via SignalR `rateLimitStatus` events.
- The server currently emits `rateLimitStatus` only from `TorrentRunner.SetRateLimit()`.

### Fix Focus Areas
- server/RdtClient.Service/Services/TorrentRunner.cs[344-387]
- server/RdtClient.Service/Services/TorrentRunner.cs[741-753]
- server/RdtClient.Service/Services/RemoteService.cs[91-94]
- client/src/app/torrent-table/torrent-table.component.ts[86-94]
- client/src/app/torrent-table/torrent-table.component.html[16-21]

### Suggested approach
- Track the last broadcast `nextAllowedAt` in `TorrentRunner` and, during `Tick()`, when `GetMaxNextAllowedAt()` becomes `null` or is in the past after previously being in the future, call `remoteService.UpdateRateLimitStatus(new RateLimitStatus { NextDequeueTime = null, SecondsRemaining = 0 })`.
- Optionally: compute `SecondsRemaining` based on `nextDequeueTime - now` when emitting updates, rather than using the input `retryAfter`.




Advisory comments

3. Sort direction not reset 🐞 Bug ✓ Correctness
Description
Switching to a different sort column no longer resets sort direction (it keeps the previous
direction), which is a behavior regression from the prior implementation. This can lead to
unexpected initial ordering when clicking a new column header.
Code

client/src/app/torrent-table/torrent-table.component.ts[R111-113]

+    const isSameProperty = this.sortProperty === property;
+    this.sortProperty = property;
+    this.sortDirection = isSameProperty ? (this.sortDirection === 'asc' ? 'desc' : 'asc') : this.sortDirection;
Evidence
The new logic only toggles direction when the same property is clicked; when a new property is
selected, it keeps this.sortDirection unchanged instead of resetting to a known default
(previously desc).

client/src/app/torrent-table/torrent-table.component.ts[110-114]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The table sort handler no longer resets the sort direction when switching to a new column, which can yield surprising sort results.

### Issue Context
Previously, changing columns set the direction to a known default (desc). Now it retains the prior direction from the previously-sorted column.

### Fix Focus Areas
- client/src/app/torrent-table/torrent-table.component.ts[110-114]

### Suggested approach
Change the assignment to:
- If same column: toggle asc/desc
- Else: set `this.sortDirection = 'desc'` (or your preferred default)



…and improve test consistency across methods. Updated logging and error handling for rate-limiting scenarios.
omgbeez force-pushed the upstream/even-better-ratelimitng branch from fdc8c55 to 67afe18 on March 7, 2026 at 22:57
omgbeez added 2 commits March 7, 2026 18:04
…ption

Needed to move the retry handler before the TimeOut handler, and drive the throw from the TimeOut configured by the user for the provider.
@rogerfar rogerfar merged commit a4706a5 into rogerfar:main Mar 14, 2026
1 check passed