
Conversation

sanity (Collaborator) commented Dec 6, 2025

Problem

The current seeding logic in SeedingManager::should_seed() uses ad-hoc thresholds that don't reflect principled design:

const CACHING_DISTANCE: f64 = 0.05;  // Magic number - why 5%?
const MAX_SEEDING_CONTRACTS: usize = 100;  // Arbitrary count
const MIN_SEEDING_CONTRACTS: usize = 25;   // Another arbitrary count

The three-tier decision tree treats all contracts equally regardless of size, doesn't consider actual demand, and conflates storage limits with proximity requirements.

Why This Matters

  1. Resource unfairness: A 10MB contract consumes the same "slot" as a 1KB contract
  2. No demand signal: Contracts persist based on distance, not whether anyone uses them
  3. Magic thresholds: The 0.05 distance cutoff has no principled basis
  4. Young network problem: Small networks should cache broadly, but threshold logic kicks in early

Design Discussion

This PR emerged from a design discussion (see issue #2231) about how seeding should work from first principles:

Core insight: Proximity emerges naturally from routing - peers near a contract's location see more GETs, keeping it fresh in their caches. We don't need explicit distance thresholds.

Key properties of the new design:

  1. Byte-budget based: Storage limit is in bytes (default 100MB), not contract count
  2. LRU eviction: Least recently accessed contracts evict first
  3. Demand-driven: Only GET/PUT/SUBSCRIBE refresh position - not UPDATE (manipulation resistant)
  4. Self-regulating: Busy networks evict faster, quiet networks retain longer
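
To make these properties concrete, here is a condensed sketch of the cache's shape (not the PR's exact code: a plain integer key stands in for ContractKey, and Instant stands in for the TimeSource abstraction):

use std::collections::{HashMap, VecDeque};
use std::time::Instant;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum AccessType { Get, Put, Subscribe }

type Key = u64; // stands in for ContractKey

struct CachedContract {
    size_bytes: u64,
    last_accessed: Instant,
    last_access_type: AccessType,
}

struct SeedingCache {
    budget_bytes: u64,
    current_bytes: u64,
    contracts: HashMap<Key, CachedContract>,
    lru_order: VecDeque<Key>, // front = least recently used
}

impl SeedingCache {
    fn new(budget_bytes: u64) -> Self {
        Self { budget_bytes, current_bytes: 0, contracts: HashMap::new(), lru_order: VecDeque::new() }
    }

    // Record a GET/PUT/SUBSCRIBE; returns whatever had to be evicted to stay near budget.
    fn record_access(&mut self, key: Key, size_bytes: u64, access_type: AccessType) -> Vec<Key> {
        let now = Instant::now();
        if let Some(existing) = self.contracts.get_mut(&key) {
            // Already cached: refresh recency (UPDATE traffic never reaches this path).
            existing.last_accessed = now;
            existing.last_access_type = access_type;
            self.lru_order.retain(|k| *k != key);
            self.lru_order.push_back(key);
            return Vec::new();
        }
        // Evict least recently used entries until the new contract fits the byte budget.
        let mut evicted = Vec::new();
        while self.current_bytes + size_bytes > self.budget_bytes {
            let Some(oldest) = self.lru_order.pop_front() else { break };
            if let Some(removed) = self.contracts.remove(&oldest) {
                self.current_bytes -= removed.size_bytes;
                evicted.push(oldest);
            }
        }
        self.contracts.insert(key, CachedContract { size_bytes, last_accessed: now, last_access_type: access_type });
        self.lru_order.push_back(key);
        self.current_bytes += size_bytes;
        evicted
    }
}

Note that the eviction loop has the soft-limit behaviour the reviews below dig into: a contract larger than the remaining budget can still land in the cache once there is nothing left to evict.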

This PR

What's Included

  • New SeedingCache<T: TimeSource> struct in crates/core/src/ring/seeding_cache.rs
  • Byte-budget aware LRU eviction with 100MB default
  • AccessType enum (Get, Put, Subscribe) for manipulation-resistant refresh
  • Full integration with GET and PUT operations
  • Removal of old should_seed logic and magic thresholds
  • Eviction handling: subscribers are cleared when contracts are evicted
  • 28 unit tests covering:
    • Basic add/remove/contains
    • LRU eviction order
    • Byte budget enforcement
    • Large contracts displacing multiple small ones
    • Access refreshing LRU position
    • Contract size change handling
    • Subscriber cleanup on eviction
    • Edge cases (zero budget, oversized contracts)
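
For illustration, a test in the style of that suite might look like the following (helpers such as make_key and MockTimeSource mirror the ones visible in the PR's own tests; the exact signatures are assumptions):

#[test]
fn lru_eviction_order_sketch() {
    let time_source = MockTimeSource::new(Instant::now());
    // Budget fits exactly two 100-byte contracts.
    let mut cache = SeedingCache::new(200, time_source);
    let (a, b, c) = (make_key(1), make_key(2), make_key(3));

    cache.record_access(a, 100, AccessType::Get);
    cache.record_access(b, 100, AccessType::Get);

    // Touch `a` so that `b` becomes the least recently used entry.
    cache.record_access(a, 100, AccessType::Get);

    // Adding `c` must evict `b`, not `a`.
    let evicted = cache.record_access(c, 100, AccessType::Put);
    assert_eq!(evicted, vec![b]);
    assert!(cache.contains(&a));
    assert!(!cache.contains(&b));
    assert_eq!(cache.current_bytes(), 200);
}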

Integration Changes

  • GET operations: Now call record_get_access(key, size_bytes) instead of should_seed + seed_contract
  • PUT operations: Now call seed_contract(key, size_bytes) with state size
  • Test utilities: Updated MockRing to match new API
  • SeedingManager: Refactored to use internal SeedingCache instead of DashMap<ContractKey, Score>
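
Roughly, the call sites look like this (handler names and signatures here are illustrative, not the exact freenet-core code paths; only record_get_access and seed_contract come from this PR):

fn handle_get_result(seeding: &mut SeedingManager, key: ContractKey, state: &[u8]) {
    // A served GET refreshes (or establishes) the contract's LRU position.
    seeding.record_get_access(key, state.len() as u64);
}

fn handle_put(seeding: &mut SeedingManager, key: ContractKey, state: &[u8]) {
    // A PUT seeds the contract at its current state size; the SeedingManager
    // internally clears subscribers for anything the byte budget forces out.
    seeding.seed_contract(key, state.len() as u64);
}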

What's Deferred to #2164

  • Full subscription tree pruning: When a contract is evicted, sending Unsubscribed to upstream peers requires tracking upstream->downstream relationships per contract. This is documented with a TODO comment.

Testing

All 28 seeding-related tests pass:

  • ring::seeding_cache::* - LRU cache behavior
  • ring::seeding::* - SeedingManager integration
  • operations::test_utils::* - Mock ring behavior

Related

  • Issue #2231: seeding design discussion that motivated this PR
  • Issue #2164: subscription tree pruning (deferred follow-up)

[AI-assisted - Claude]

Introduce SeedingCache, a new data structure for contract caching that
replaces the ad-hoc count-based and distance-threshold logic in
SeedingManager.

Key design principles:
- Resource-aware: Uses byte budget, not contract count
- Demand-driven: Evicts based on LRU access patterns
- Manipulation-resistant: Only GET/PUT/SUBSCRIBE refresh position,
  not UPDATE (which contract creators control)

This is the foundation for issue #2231. The cache is not yet integrated
with the rest of the system - that will come in subsequent commits.

Includes comprehensive unit tests for:
- Basic add/remove/contains operations
- LRU eviction order
- Byte budget enforcement
- Large contracts displacing multiple small ones
- Access refreshing LRU position
- Edge cases (zero budget, oversized contracts)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@sanity sanity marked this pull request as ready for review December 6, 2025 15:58
@sanity sanity requested review from Copilot and iduartgomez December 6, 2025 15:58
sanity (Collaborator, Author) commented Dec 6, 2025

@claude Please do a critical review of this PR

claude bot commented Dec 6, 2025

Claude finished @sanity's task.


Critical Review of PR #2232

Overview

This PR introduces a principled LRU byte-budget cache to replace the current ad-hoc seeding logic. The design is well-motivated and the implementation is solid as a foundation. However, there are several critical issues that need addressing before integration.


🔴 Critical Issues

1. Incorrect LRU Eviction Budget Check (crates/core/src/ring/seeding_cache.rs:117)

The eviction loop has a subtle but serious bug:

while self.current_bytes + size_bytes > self.budget_bytes && !self.lru_order.is_empty()

Problem: This allows the cache to exceed its budget when adding contracts larger than the budget.

Scenario:

  • Budget: 100 bytes
  • Current: 0 bytes
  • Adding: 150-byte contract
  • Condition: 0 + 150 > 100 → true, evicts everything
  • Result: Cache has 150 bytes (50% over budget!)

This is intentional per the test at line 418 (test_contract_larger_than_budget), but it's a design flaw. The cache should either:

  1. Reject oversized contracts (return them in evicted list)
  2. Accept them but document the budget is a soft limit
  3. Make budget enforcement strict with a separate "max contract size" parameter

Recommendation: Option 3 is best. Add a max_contract_size parameter and reject contracts that exceed it. This prevents a single huge contract from consuming unlimited resources.
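
A sketch of that option (max_contract_size is a hypothetical field, not code from this PR):

pub fn record_access(
    &mut self,
    key: ContractKey,
    size_bytes: u64,
    access_type: AccessType,
) -> Vec<ContractKey> {
    // Strict budget: refuse to seed anything above the per-contract cap or the
    // whole budget instead of blowing past the limit.
    if size_bytes > self.max_contract_size || size_bytes > self.budget_bytes {
        return Vec::new();
    }
    // ... existing refresh/eviction logic, which can now assume the entry fits
}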


2. Zero-Budget Edge Case is Broken (crates/core/src/ring/seeding_cache.rs:390-402)

The test test_zero_budget_edge_case reveals broken behavior:

assert!(cache.is_empty() || cache.current_bytes() == 100);

This test accepts either outcome, which is a red flag. With zero budget:

  • The contract gets added (line 136: self.current_bytes += size_bytes)
  • Nothing can be evicted: 0 + 100 > 0 is true, but the LRU list is empty, so the eviction loop body never runs
  • Result: Cache has 100 bytes when budget is 0

Fix: Add an early return when size_bytes > budget_bytes:

pub fn record_access(...) -> Vec<ContractKey> {
    // Early reject for oversized contracts
    if size_bytes > self.budget_bytes {
        return vec![]; // Or return an error type
    }
    // ... rest of implementation
}

3. Missing Integration with Access Operations

The PR description states "GET/PUT/SUBSCRIBE refresh position" but there's no integration code showing where these operations call record_access. Critical questions:

  • Does every GET trigger record_access? (Could be expensive)
  • Does SUBSCRIBE add to the cache or just to subscribers?
  • When a contract leaves the cache, who initiates upstream unsubscription?

Recommendation: Add integration points or at least stub comments showing where the hooks will go:

// In get_op.rs (example):
// cache.record_access(key, state.len() as u64, AccessType::Get);

4. No Eviction Callback Mechanism

When contracts are evicted, the caller receives Vec<ContractKey>, but there's no mechanism to trigger cleanup:

  • Unsubscribe from upstream peer
  • Delete cached state
  • Notify subscribers

The current SeedingManager has both seeding_contract (what we cache) and subscribers (who we notify). This cache only tracks caching decisions but doesn't integrate with subscription management.

Recommendation: Add an eviction callback:

pub struct SeedingCache<T: TimeSource, F: FnMut(ContractKey)> {
    // ...
    on_evict: Option<F>,
}

// In eviction:
if let Some(ref mut callback) = self.on_evict {
    callback(oldest_key);
}

Or document that the caller MUST process the returned eviction list.


⚠️ Design Concerns

5. LRU Order Maintenance is O(n)

Line 112: self.lru_order.retain(|k| k != &key);

This is O(n) for every refresh. For a cache with thousands of contracts and frequent GET operations, this could become a bottleneck.

Better approach: Use a doubly-linked intrusive list or the lru crate which maintains O(1) operations. The VecDeque is only efficient for push/pop, not for arbitrary removal.

Mitigation: If keeping VecDeque, add a comment about the performance trade-off and consider benchmarking.
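
For reference, a byte-budget wrapper around the lru crate (an assumption; the PR keeps the VecDeque) keeps refreshes O(1):

use lru::LruCache;
use std::hash::Hash;

struct ByteBudgetLru<K: Hash + Eq> {
    entries: LruCache<K, u64>, // value = contract size in bytes
    current_bytes: u64,
    budget_bytes: u64,
}

impl<K: Hash + Eq> ByteBudgetLru<K> {
    fn new(budget_bytes: u64) -> Self {
        // Unbounded by count; the byte budget is enforced by hand below.
        Self { entries: LruCache::unbounded(), current_bytes: 0, budget_bytes }
    }

    fn record_access(&mut self, key: K, size_bytes: u64) -> Vec<K> {
        // put() refreshes recency in O(1) and returns the old value if the key existed.
        if let Some(old_size) = self.entries.put(key, size_bytes) {
            self.current_bytes = self.current_bytes - old_size + size_bytes;
        } else {
            self.current_bytes += size_bytes;
        }
        // Pop from the cold end until we are back within budget, keeping at least
        // the entry just touched (mirrors the PR's soft-limit behaviour).
        let mut evicted = Vec::new();
        while self.current_bytes > self.budget_bytes && self.entries.len() > 1 {
            if let Some((old_key, old_size)) = self.entries.pop_lru() {
                self.current_bytes -= old_size;
                evicted.push(old_key);
            }
        }
        evicted
    }
}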


6. Manipulation Resistance is Incomplete

The design excludes UPDATE from refreshing cache position (good!), but there's a subtle attack:

Attack: A malicious peer could subscribe/unsubscribe repeatedly to keep a contract fresh in everyone's cache, consuming resources without actually using the contract.

Mitigation: Consider:

  • Rate-limiting SUBSCRIBE refresh (e.g., only refresh if >1 minute since last refresh; see the sketch after this list)
  • Tracking unique subscribers rather than subscription count
  • Making eviction also consider "coldness" (time since last GET, weighted higher)
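
The rate-limiting idea could be as simple as gating the refresh on the access type (a hypothetical helper reusing the PR's AccessType enum, not code from this PR):

use std::time::{Duration, Instant};

const SUBSCRIBE_REFRESH_INTERVAL: Duration = Duration::from_secs(60);

// GET/PUT always refresh; SUBSCRIBE only refreshes if the entry has not been
// refreshed recently, which blunts subscribe/unsubscribe churn as an attack.
fn refresh_allowed(access: AccessType, last_accessed: Instant, now: Instant) -> bool {
    match access {
        AccessType::Get | AccessType::Put => true,
        AccessType::Subscribe => now.duration_since(last_accessed) >= SUBSCRIBE_REFRESH_INTERVAL,
    }
}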

7. No Configuration Validation

The new() constructor accepts any budget_bytes, including 0. This leads to pathological behavior. Add validation:

pub fn new(budget_bytes: u64, time_source: T) -> Result<Self, &'static str> {
    if budget_bytes == 0 {
        return Err("budget_bytes must be non-zero");
    }
    Ok(Self { /* ... */ })
}

Or at minimum document the behavior in zero/low budget scenarios.


🟡 Code Quality Issues

8. Test Coverage Gaps

The tests are comprehensive for the cache itself, but missing:

  • Concurrent access: The cache will be used from multiple threads. No tests verify thread safety (though the types look correct).
  • Integration scenarios: What happens when a contract is in both the cache AND has active subscribers?
  • Subscription cascade: The PR description mentions cascade unsubscription but there are no tests.
  • Size changes: What if a contract's state changes size after caching?

9. Type Safety: Size Can Change

Contracts can grow or shrink with updates. The cache stores size_bytes at access time, but if the contract updates, the cached size becomes stale.

Scenario:

  1. Cache contract A (100 bytes)
  2. Contract updates to 1MB
  3. Cache evicts based on old 100-byte size
  4. Budget accounting is now completely wrong

Fix: Either:

  • Require caller to update size on state changes: cache.update_size(key, new_size) (sketched after this list)
  • Make record_access always update size (requires passing size even on refresh)
  • Document that size is immutable per contract (seems unrealistic)
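
A sketch of the first option (update_size and evict_until_within_budget are hypothetical names, not part of this PR):

pub fn update_size(&mut self, key: &ContractKey, new_size: u64) -> Vec<ContractKey> {
    let Some(entry) = self.contracts.get_mut(key) else { return Vec::new() };
    // Re-base the byte accounting on the contract's new state size.
    self.current_bytes = self.current_bytes - entry.size_bytes + new_size;
    entry.size_bytes = new_size;
    // Growth can push the cache over budget, so evict afterwards if needed.
    self.evict_until_within_budget()
}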

10. Unsafe Arithmetic

Line 121: self.current_bytes = self.current_bytes.saturating_sub(removed.size_bytes);

Using saturating_sub hides bugs. If current_bytes would underflow, something is seriously wrong (double-eviction, size mismatch, etc.). Better to use checked arithmetic and panic:

self.current_bytes = self.current_bytes.checked_sub(removed.size_bytes)
    .expect("BUG: current_bytes underflow - cache accounting broken");

This makes bugs visible during testing rather than silently hiding them.


🟢 What's Good

Strengths

  1. Well-documented motivation: The PR description clearly explains WHY this is needed
  2. Time abstraction: Using TimeSource makes tests deterministic
  3. Comprehensive unit tests: Good coverage of core logic (modulo the gaps above)
  4. Clear API: record_access returning evictions is a clean design
  5. AccessType enum: Good manipulation resistance thinking
  6. Foundation-only approach: Smart to land the cache separately before integration

📋 Recommendations for Moving Forward

Before Merging

  1. Fix zero-budget edge case (critical)
  2. Fix over-budget contracts (add max contract size or document soft limit)
  3. Address LRU O(n) performance (document or fix)
  4. Add size update mechanism (document or implement)
  5. Validate budget in constructor (or document zero-budget behavior)

Before Integration

  1. Add eviction callbacks (or document caller responsibility)
  2. Show integration points (at least in comments/stubs)
  3. Test subscription cascade (integration test in follow-up PR)
  4. Consider manipulation resistance refinements (rate limiting)
  5. Benchmark with realistic workloads (thousands of contracts, high GET rate)

🎯 Verdict

Direction: ✅ Yes, this is the right approach. Replacing magic numbers with principled LRU+byte-budget is a clear improvement.

Implementation: ⚠️ Needs work before merging. The critical bugs (zero budget, over-budget) must be fixed. The integration story needs clarity.

Recommendation:

  • Fix critical issues 1-2
  • Address design concerns 5, 7, 9 (at minimum document them)
  • Add TODO comments showing integration points
  • Merge as foundation
  • Follow up with integration PR that includes cascade tests

The design philosophy is sound. The execution just needs refinement before this becomes production-ready.

Copilot AI (Contributor) left a comment


Pull request overview

This PR introduces a new LRU-based seeding cache with byte-budget awareness as a foundation for replacing the current ad-hoc seeding logic. The design shifts from magic-number thresholds to a principled, demand-driven approach where storage limits are byte-based and contracts are retained based on actual access patterns (GET/PUT/SUBSCRIBE).

Key Changes

  • Implements SeedingCache<T: TimeSource> with byte-budget aware LRU eviction
  • Introduces AccessType enum to distinguish manipulation-resistant operations (GET/PUT/SUBSCRIBE) from UPDATE operations
  • Provides comprehensive test coverage (12 unit tests) for cache operations, eviction logic, and edge cases

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.

Changed files:

  • crates/core/src/ring/seeding_cache.rs: New module implementing the byte-budget LRU cache with access tracking, eviction logic, and comprehensive unit tests
  • crates/core/src/ring/mod.rs: Adds the seeding_cache module to the ring module


Comment on lines 106 to 113
if let Some(existing) = self.contracts.get_mut(&key) {
    // Already cached - refresh position
    existing.last_accessed = now;
    existing.last_access_type = access_type;

    // Move to back of LRU (most recently used)
    self.lru_order.retain(|k| k != &key);
    self.lru_order.push_back(key);
Copilot AI Dec 6, 2025


When refreshing an existing contract (line 106), the code doesn't handle the case where size_bytes has changed. Contract state can grow or shrink over time (e.g., through PUT operations), but the existing contract's size_bytes field and current_bytes tracking are not updated to reflect the new size.

This will cause incorrect byte budget accounting. For example:

  • Contract initially added with 100 bytes
  • Contract state grows to 200 bytes
  • record_access(key, 200, AccessType::Put) is called
  • The cached size_bytes remains 100, and current_bytes is not adjusted
  • The cache now underestimates its memory usage by 100 bytes

Recommendation: Update the existing contract's size and adjust current_bytes accordingly:

if let Some(existing) = self.contracts.get_mut(&key) {
    // Update byte accounting if size changed
    let size_delta = size_bytes as i64 - existing.size_bytes as i64;
    self.current_bytes = if size_delta >= 0 {
        self.current_bytes + size_delta as u64
    } else {
        self.current_bytes.saturating_sub((-size_delta) as u64)
    };
    
    existing.size_bytes = size_bytes;
    existing.last_accessed = now;
    existing.last_access_type = access_type;
    
    // Move to back of LRU (most recently used)
    self.lru_order.retain(|k| k != &key);
    self.lru_order.push_back(key);
}

Additionally, consider whether size increases should trigger eviction if the new size would exceed the budget.

Comment on lines 394 to 399
// With zero budget, any contract should be immediately evicted
// Actually, this means we can never cache anything
let evicted = cache.record_access(key, 100, AccessType::Get);

// With zero budget, the contract is added but we can't evict anything
// (nothing to evict). This is a pathological edge case.
Copilot AI Dec 6, 2025


The test comment is confusing and contradicts the assertion. The comment says "With zero budget, any contract should be immediately evicted" and "this means we can never cache anything", but then the assertion allows for cache.current_bytes() == 100, meaning the contract was actually cached.

Looking at the record_access implementation, with a zero budget the while loop condition self.current_bytes + size_bytes > self.budget_bytes (0 + 100 > 0) is true, but !self.lru_order.is_empty() is false (nothing to evict), so the loop doesn't execute and the contract is added.

Recommendation: Update the comment to accurately describe the actual behavior:

// With zero budget, the contract is added anyway because there's nothing
// to evict. This is a pathological edge case - the cache allows exceeding
// the budget when there are no contracts to evict.
Suggested change:

- // With zero budget, any contract should be immediately evicted
- // Actually, this means we can never cache anything
- let evicted = cache.record_access(key, 100, AccessType::Get);
- // With zero budget, the contract is added but we can't evict anything
- // (nothing to evict). This is a pathological edge case.
+ // With zero budget, the contract is added anyway because there's nothing
+ // to evict. This is a pathological edge case - the cache allows exceeding
+ // the budget when there are no contracts to evict.
+ let evicted = cache.record_access(key, 100, AccessType::Get);

Comment on lines 398 to 399
// With zero budget, the contract is added but we can't evict anything
// (nothing to evict). This is a pathological edge case.
Copilot AI Dec 6, 2025


The comment on lines 398-399 is also confusing and doesn't accurately describe the code behavior. The comment says "we can't evict anything (nothing to evict)" but this applies to any empty cache, not specifically the zero budget case. The more significant issue is that the cache allows adding a contract that exceeds the budget when there's nothing to evict.

Recommendation: Remove or update this comment to focus on the actual edge case behavior (contracts added despite exceeding budget when cache is empty).

Suggested change:

- // With zero budget, the contract is added but we can't evict anything
- // (nothing to evict). This is a pathological edge case.
+ // Edge case: When the cache is empty (including zero budget), a contract larger than the budget
+ // can still be added because there is nothing to evict. This test documents that behavior.

Comment on lines +234 to +258
#[test]
fn test_refresh_existing_contract() {
    let time_source = MockTimeSource::new(Instant::now());
    let mut cache = SeedingCache::new(1000, time_source.clone());
    let key = make_key(1);

    // First access
    cache.record_access(key, 100, AccessType::Get);
    let first_access = cache.get(&key).unwrap().last_accessed;

    // Advance time and access again
    let mut new_time_source = time_source.clone();
    new_time_source.advance_time(Duration::from_secs(10));
    cache.time_source = new_time_source;

    cache.record_access(key, 100, AccessType::Put);

    // Should still be one contract, but updated
    assert_eq!(cache.len(), 1);
    assert_eq!(cache.current_bytes(), 100); // Size unchanged

    let info = cache.get(&key).unwrap();
    assert_eq!(info.last_access_type, AccessType::Put);
    assert!(info.last_accessed > first_access);
}
Copilot AI Dec 6, 2025


Missing test coverage for contract size changes. The test_refresh_existing_contract test passes the same size (100 bytes) on both accesses, but doesn't test what happens when a contract's size changes between accesses.

This is important because:

  1. Contract state can grow/shrink through PUT operations
  2. The current implementation has a bug where size changes aren't reflected in byte accounting (see related bug comment)

Recommendation: Add a test case like:

#[test]
fn test_contract_size_change() {
    let mut cache = make_cache(1000);
    let key = make_key(1);
    
    // Add contract with initial size
    cache.record_access(key, 100, AccessType::Get);
    assert_eq!(cache.current_bytes(), 100);
    
    // Update with larger size
    cache.record_access(key, 200, AccessType::Put);
    assert_eq!(cache.current_bytes(), 200);
    
    // Update with smaller size
    cache.record_access(key, 150, AccessType::Put);
    assert_eq!(cache.current_bytes(), 150);
}

Comment on lines +112 to +113
self.lru_order.retain(|k| k != &key);
self.lru_order.push_back(key);
Copilot AI Dec 6, 2025


Using VecDeque::retain to move an element to the back of the LRU is O(n) where n is the number of cached contracts. This operation happens on every access to an already-cached contract, which could be a hot path.

For better performance with large caches, consider using a doubly-linked list or maintaining a secondary index (e.g., HashMap<ContractKey, usize> mapping to position in lru_order). Alternatively, consider a crate like lru which implements this efficiently.

That said, this may be acceptable depending on expected cache sizes. If the cache typically holds dozens of contracts, this is fine. If it could hold thousands, this becomes a bottleneck.

Recommendation: Either:

  1. Document the performance characteristics and expected cache size
  2. Consider a more efficient data structure if cache sizes are expected to be large
  3. Add a TODO to optimize this if it becomes a bottleneck during integration

};
self.contracts.insert(key, contract);
self.lru_order.push_back(key);
self.current_bytes += size_bytes;
Copilot AI Dec 6, 2025


Potential integer overflow when adding to current_bytes on line 136. If self.current_bytes + size_bytes overflows u64, this will wrap around, causing incorrect byte accounting.

While this is unlikely in practice (would require petabytes of data), defensive programming suggests using checked or saturating arithmetic.

Recommendation: Use saturating arithmetic so pathological cases degrade gracefully instead of wrapping:

self.current_bytes = self.current_bytes.saturating_add(size_bytes);

This ensures the cache gracefully handles pathological cases rather than silently wrapping around.

Suggested change:

- self.current_bytes += size_bytes;
+ self.current_bytes = self.current_bytes.saturating_add(size_bytes);

sanity and others added 2 commits December 6, 2025 12:45
Addresses review feedback from Claude and Copilot on PR #2232:

1. **Contract size changes now update byte accounting**
   - When a contract's size changes (e.g., state grows via PUT), the
     cache now correctly updates `current_bytes` to reflect the new size
   - Previously, size changes were ignored, causing incorrect accounting

2. **Use saturating arithmetic for safety**
   - `current_bytes` now uses `saturating_add` to prevent theoretical
     overflow in pathological cases (petabytes of data)

3. **Document O(n) LRU performance**
   - Added comment explaining that `VecDeque::retain` is O(n)
   - Acceptable for typical cache sizes (dozens to hundreds)
   - Note to consider `lru` crate if scaling to thousands

4. **Clarify edge case behavior in tests**
   - Zero-budget test: Clearly document that budget is a soft limit
     when the cache is empty (nothing to evict)
   - Over-budget test: Explain design decision to accept oversized
     contracts rather than making them unfindable

5. **Add tests for contract size changes**
   - test_contract_size_change_increases: Verify byte accounting on grow
   - test_contract_size_change_decreases: Verify byte accounting on shrink
   - test_contract_size_change_triggers_no_eviction: Document that size
     changes don't trigger eviction (only new contracts do)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@sanity sanity enabled auto-merge December 6, 2025 18:48
@sanity sanity disabled auto-merge December 6, 2025 18:53
sanity and others added 2 commits December 6, 2025 13:16
Complete integration of the byte-budget aware LRU seeding cache:

- Replace score-based seeding with LRU access tracking
- Update GET operations to call record_get_access with state size
- Update PUT operations to call seed_contract with state size
- Update test utilities and test infrastructure
- Remove old should_seed logic and magic distance thresholds
- Add eviction handling: clear subscribers when contracts evicted
- Update tests to use new API

The seeding decision is now simple: cache contracts accessed via
GET/PUT/SUBSCRIBE, let LRU eviction handle limits when byte budget
is exceeded (default 100MB).

Full upstream unsubscription on eviction (sending Unsubscribed
messages) is deferred to #2164 (subscription tree branch pruning)
as it requires tracking upstream->downstream relationships.

Also fixes pre-existing clippy warnings in transport_perf benchmark.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Resolve conflicts:
- Keep LRU byte-budget seed_contract(key, size) API
- Add announce_contract_cached() calls from main
- Remove dropped contract notification code (handled internally by SeedingManager)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@sanity sanity added this pull request to the merge queue Dec 6, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 6, 2025
@sanity sanity added this pull request to the merge queue Dec 6, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 6, 2025
Resolve conflicts:
- Keep simplified subscription handling (eviction handled internally)
- Keep comment in transport_perf benchmark

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@sanity sanity enabled auto-merge December 6, 2025 21:07
The subscription request in BroadcastTo state was incorrectly using
start_subscription_request_internal with track_parent=false, which
meant the parent operation wasn't being tracked. This caused subscribe
operations to have state=None when receiving ReturnSub messages,
triggering invalid_transition errors.

Changed to use start_subscription_request which properly tracks the
parent-child relationship.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@sanity sanity added this pull request to the merge queue Dec 6, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 6, 2025
@sanity sanity merged commit 7f82c45 into main Dec 6, 2025
14 of 15 checks passed
@sanity sanity deleted the issue-2231-lru-seeding branch December 6, 2025 22:12