
Conversation

@flyrain
Contributor

@flyrain flyrain commented Jan 13, 2026

The old implementation silently dropped fractional token accrual. As a result, any rate below 1000 tokens/sec could never refill the bucket: each refill computes less than one token, which truncates to 0 when cast to long, so tokens stays at 0 forever. Switching to double preserves the fractional accumulation so the bucket refills correctly over time.
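
To illustrate the truncation (a standalone sketch using the variable names from the old refill line discussed in the review below, not the actual class):

// Hypothetical sketch of the old refill math, not the actual implementation:
double tokensPerMilli = 0.5;                          // 500 tokens/sec, i.e. below 1000 tokens/sec
long millisPassed = 1;                                // refill evaluated roughly every millisecond
long grant = (long) (millisPassed * tokensPerMilli);  // 0.5 truncates to 0
// tokens += grant;  -> adds 0 on every call, so the bucket never refills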

Checklist

  • 🛡️ Don't disclose security issues! (contact security@apache.org)
  • 🔗 Clearly explained why the changes are needed, or linked related issues: Fixes #
  • 🧪 Added/updated tests with good coverage, or manually tested (and explained how)
  • 💡 Added comments for complex logic
  • 🧾 Updated CHANGELOG.md (if needed)
  • 📚 Updated documentation in site/content/in-dev/unreleased (if needed)

dimas-b
dimas-b previously approved these changes Jan 13, 2026
Contributor

@dimas-b dimas-b left a comment


Thanks for the fix, @flyrain 👍

I think this PR is an improvement, but an even more robust solution is probably possible.

long millisPassed = Math.subtractExact(t, lastTokenGenerationMillis);
lastTokenGenerationMillis = t;
-tokens = Math.min(maxTokens, tokens + ((long) (millisPassed * tokensPerMilli)));
+tokens = Math.min((double) maxTokens, tokens + (millisPassed * tokensPerMilli));
Contributor


I believe a more robust fix would be to count the elapsed time in longer periods rather than computing small fractional increments. Note that floating point addition may not increase the value if given sufficient difference in exponents.

jshell> double x = 1e100;
x ==> 1.0E100

jshell> x + 0.0000001
$2 ==> 1.0E100

Contributor Author


That's a valid point, but it only happens in an extreme case. For IEEE 754 double precision, the relative epsilon is ~2.2e-16. The issue would occur when tokens + increment == tokens, which happens roughly when increment < tokens * epsilon.

For example:

  • If maxTokens = 1e15 and tokensPerMilli = 0.001, the increment would be lost
  • But for practical values like maxTokens = 10000, adding 0.001 will always register

The cutoff for a 0.001 increment is a token count around 1e13 (0.001 / 2.2e-16), which is at the edge of losing the increment but well beyond practical bucket sizes. Most likely it won't happen; users would probably just disable the rate limiter at that scale.
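
A quick jshell check of the two bullets above (an illustrative session added here, not taken from the PR):

jshell> 1e15 + 0.001 == 1e15
$1 ==> true

jshell> 10000.0 + 0.001 == 10000.0
$2 ==> false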

Contributor Author


I have changed it to use long only, please take another look.

@github-project-automation github-project-automation bot moved this from PRs In Progress to Ready to merge in Basic Kanban Board Jan 13, 2026

tokens = maxTokens;
lastTokenGenerationMillis = instantSource.millis();
milliTokens = maxMilliTokens;
Contributor


nit: why not this.*** as with the above assignments?

// After 900ms total, we should have 0.9 tokens - not enough yet
// Add 100ms more to reach exactly 1 token
clock.add(Duration.ofMillis(100));
Assertions.assertTrue(tokenBucket.tryAcquire());
Contributor


nit: why not assertCanAcquire(tokenBucket, 1) as above?

long grant = millisPassed * tokensPerSecond;
if (tokensPerSecond != 0 && grant / tokensPerSecond != millisPassed) {
// Overflow occurred. If this much time passed, the bucket would be full anyway.
milliTokens = maxMilliTokens;
Contributor


This case is not tested by TokenBucketTest. Is it necessary?
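
For context, a rough arithmetic sketch of when the guard would actually fire (the rate below is a made-up value, not from the PR):

long tokensPerSecond = 1_000_000_000L;                        // hypothetical, very high rate
long millisPassed = Long.MAX_VALUE / tokensPerSecond + 1;     // roughly 107 days of elapsed millis at this rate
long grant = millisPassed * tokensPerSecond;                  // wraps around to a negative value
boolean overflowed = grant / tokensPerSecond != millisPassed; // true, so the guard is reachable but only after a very long idle period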

Comment on lines +60 to +61
long millisPassed = Math.subtractExact(t, lastUpdateMillis);
lastUpdateMillis = t;
Contributor


IMHO, it would be more robust (and easier to understand) to advance lastUpdateMillis only when millisPassed is large enough to grant at least one token, and to advance it only to the exact boundary that corresponds to that grant; a rough sketch follows below. This way we can count exact tokens (not milli tokens), which increases the operational range of token values (without overflow).
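
A minimal sketch of that idea (field names follow the snippets above; assumes tokensPerSecond > 0 and leaves out the existing overflow handling):

long t = instantSource.millis();
long millisPassed = Math.subtractExact(t, lastUpdateMillis);
long grant = millisPassed * tokensPerSecond / 1000;  // whole tokens earned since the last update
if (grant > 0) {
  tokens = Math.min(maxTokens, tokens + grant);
  // Advance only to the earliest instant that yields exactly `grant` tokens,
  // so the leftover sub-token time keeps accumulating for the next call.
  lastUpdateMillis += (grant * 1000 + tokensPerSecond - 1) / tokensPerSecond;
}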

