refactor(harness): drive ui::approval events from agent::events stream#136

Open

ytallo wants to merge 1 commit into main from refactor/approval-reactive-fanout


Conversation


@ytallo ytallo commented May 14, 2026

Summary

The approval-gate already emits approval_requested / approval_resolved frames into the agent::events stream, and the harness fanout already runs a durable:subscriber against that stream. The 1-second approval poll in spawn_approval_poll was a second, duplicate path for the same data — diffing approval::list_pending snapshots and emitting ui::approval::* events with up to a second of latency.

This change forwards approval frames directly from the existing stream subscriber and hydrates new browsers on subscribe instead of on a timer.

What changed

Live updates — the agent::events stream handler now classifies each frame and forwards approval frames to all-sessions subscribers as ui::approval::requested::<browser_id> / ui::approval::resolved::<browser_id>. Latency drops from ≤1s (poll cadence) to one stream RTT.
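The classification step can be sketched as a pure function over frame fields. This is a simplified model under stated assumptions: the real code reads a serde_json frame from the stream, and `classify` / `Frame` / `Push` here are illustrative names, not the actual `classify_approval_frame` signature.

```rust
// Simplified stand-in for an agent::events frame; the real code reads
// these fields out of a serde_json::Value.
struct Frame<'a> {
    frame_type: &'a str,
    session_id: &'a str,
    function_call_id: &'a str,
}

// Which UI push, if any, a frame should produce (mirrors the ApprovalUiPush idea).
#[derive(Debug, PartialEq)]
enum Push {
    Requested,
    Resolved,
}

// Only well-formed approval frames are forwarded; everything else is ignored.
fn classify(frame: &Frame) -> Option<Push> {
    if frame.session_id.is_empty() || frame.function_call_id.is_empty() {
        return None; // drop malformed frames instead of pushing broken payloads
    }
    match frame.frame_type {
        "approval_requested" => Some(Push::Requested),
        "approval_resolved" => Some(Push::Resolved),
        _ => None, // non-approval frames take the normal session-event path
    }
}

fn main() {
    let ok = Frame { frame_type: "approval_requested", session_id: "s1", function_call_id: "c1" };
    assert_eq!(classify(&ok), Some(Push::Requested));
    let no_session = Frame { frame_type: "approval_requested", session_id: "", function_call_id: "c1" };
    assert_eq!(classify(&no_session), None);
}
```

Keeping the classifier a pure function is what makes the drop-rules (missing ids, non-approval types) cheap to pin in unit tests.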

Reconnect hydration — ui::subscribe for all-sessions browsers now triggers a one-shot replay: enumerate sessions via state::list, call approval::list_pending per session, push one ui::approval::requested per pending entry to the new browser only. Fire-and-forget; the subscribe response doesn't block on it.
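The per-entry replay reduces to a pure function from pending call ids to (channel, payload) pairs. A minimal sketch, assuming string payloads in place of the real JSON entries and `hydration_pushes` as an illustrative name:

```rust
// One push per pending entry, addressed to the newly subscribed browser only.
// Channel naming follows the existing wire contract.
fn hydration_pushes(browser_id: &str, pending_call_ids: &[&str]) -> Vec<(String, String)> {
    pending_call_ids
        .iter()
        .map(|call_id| {
            (
                format!("ui::approval::requested::{browser_id}"),
                call_id.to_string(), // real code carries the full pending entry as JSON
            )
        })
        .collect()
}

fn main() {
    let pushes = hydration_pushes("b1", &["call-1", "call-2"]);
    assert_eq!(pushes.len(), 2); // one push per pending entry
    assert_eq!(pushes[0].0, "ui::approval::requested::b1");
    assert!(hydration_pushes("b1", &[]).is_empty()); // empty input is a no-op
}
```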

Deleted — spawn_approval_poll, APPROVAL_POLL_INTERVAL_MS, the diff_approvals helper, and FanoutPumps.approval_poll. approval::list_pending stays as the hydration RPC.

Contract

The wire format (ui::approval::requested::<browser_id> / ui::approval::resolved::<browser_id> payloads) is unchanged. The web reducer (harness/web/src/useStatus.ts) and the TUI (harness-tui/src/bus.rs) need no changes. Legacy field shapes (tool_call_id, tool_name) keep working — covered by tests.
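The legacy-field tolerance amounts to a fallback lookup when extracting the call id. A sketch over a string map standing in for the JSON payload (`call_id` is a hypothetical helper name):

```rust
use std::collections::HashMap;

// Prefer the current field name, fall back to the legacy one.
// Simplified: real payloads are serde_json objects, not string maps.
fn call_id(payload: &HashMap<&str, &str>) -> Option<String> {
    payload
        .get("function_call_id")
        .or_else(|| payload.get("tool_call_id")) // legacy shape still accepted
        .map(|s| s.to_string())
}

fn main() {
    let legacy: HashMap<&str, &str> = [("tool_call_id", "c1")].into_iter().collect();
    assert_eq!(call_id(&legacy), Some("c1".to_string()));
    assert_eq!(call_id(&HashMap::new()), None);
}
```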

Tests

48 → 62 tests in the harness suite. New coverage:

  • classify_approval_frame: happy path for approval_requested / approval_resolved; ignores non-approval types; drops missing function_call_id; drops empty session_id; accepts legacy tool_call_id / tool_name.
  • hydration_payloads: emits one push per pending entry; skips malformed entries; empty input is a no-op; filters non-pending status (guards against timed-out approvals reappearing on reconnect).
  • hydration_pushes_for: orchestration across multiple sessions; empty input; sessions with no pending skipped.
  • approval_pushes_for: channel naming convention pinned per browser; zero-browser case.

Removed: two diff_approvals tests (dead with the poll).
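The non-pending status filter called out above can be sketched as follows, modeling entries as (call_id, status) pairs; the pair shape and `filter_pending` name are assumptions for illustration:

```rust
// Keep only entries still pending, so resolved or timed-out approvals
// don't reappear when a browser reconnects mid-session.
fn filter_pending<'a>(entries: &[(&'a str, &'a str)]) -> Vec<&'a str> {
    entries
        .iter()
        .filter(|(_, status)| *status == "pending")
        .map(|(call_id, _)| *call_id)
        .collect()
}

fn main() {
    let entries = [("c1", "pending"), ("c2", "timed_out"), ("c3", "pending")];
    assert_eq!(filter_pending(&entries), vec!["c1", "c3"]);
}
```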

Test plan

  • cargo test -p harness — 62 passed (5 suites)
  • cargo clippy --lib --tests -- -D warnings — clean
  • Manual smoke: prompt the agent to shell::fs::write; approval row appears within stream RTT (not 1s); reload mid-pending → row reappears; multi-tab → both tabs clear on resolve
  • Verify durable:subscriber does not replay historical stream frames on attach. If it does, hydration + replay would double-push; the web reducer's id-dedup mitigates but doesn't fully prevent it.

Summary by CodeRabbit

  • Performance Improvements
    • Approval notifications now use real-time event streaming instead of periodic polling, improving responsiveness and reducing latency.
    • Enhanced reconnection handling with faster synchronization of pending approvals when users rejoin sessions.

Review Change Stack

The approval-gate already writes approval_requested / approval_resolved
frames to the agent::events stream, and the fanout already subscribes to
that stream. The 1s approval poll was duplicate machinery layered on top.

Forward approval frames from the existing stream subscriber to
all-sessions browsers as ui::approval::{requested,resolved}::<browser_id>.
Hydrate pending approvals once per new all-sessions subscriber by calling
approval::list_pending at subscribe time instead of on a timer.

Removes: spawn_approval_poll, APPROVAL_POLL_INTERVAL_MS, diff_approvals,
and FanoutPumps.approval_poll. approval::list_pending stays as the
hydration RPC.

coderabbitai Bot commented May 14, 2026

📝 Walkthrough

Walkthrough

This PR replaces approval UI fanout from periodic polling to a reactive event-driven pipeline. The approval polling pump and diff logic are removed, replaced with reactive classification of approval_requested/approval_resolved frames in the agent::events stream. A new ApprovalUiPush model and hydration helpers synthesize and distribute pending-approval payloads. Subscribe-time hydration on all-sessions connection replays pending approvals for late joins.

Changes

Approval fanout reactive migration

Layer / File(s) Summary
Remove old approval polling infrastructure
harness/src/fanout.rs
Approval polling cadence constant, approval_poll task handles in FanoutPumps and spawn_subscribers, and the poll-based diff_approvals and spawn_approval_poll implementation are removed. Existing approval-diff tests are deleted.
Add approval push intent model and helpers
harness/src/fanout.rs
ApprovalUiPush enum classifies approval frames; classify_approval_frame detects approval events; hydration_payloads synthesizes requested-ready payloads from approval::list_pending (filtering status=pending); hydration_pushes_for and approval_pushes_for compute (channel, payload) pairs for browser/session distribution.
Integrate reactive approval forwarding into event stream
harness/src/fanout.rs
Extended agent::events stream subscriber classifies approval frames via classify_approval_frame and reactively forwards them to all-sessions UI subscribers on ui::approval::requested/resolved::<browser_id> channels, eliminating reliance on polling.
Wire subscribe-time hydration and capture III handle
harness/src/lib.rs, harness/src/fanout.rs
iii_for_subscribe clone is captured in ui::subscribe closure to enable async reference; subscribe handler branches on session_id.is_none() and spawns hydrate_all_sessions_subscriber for all-sessions browsers, fetching active sessions and pending approvals to replay on new subscriptions. Fanout documentation updated to reflect new hydration behavior.
Add comprehensive tests for reactive approval pipeline
harness/src/fanout.rs
New unit tests cover approval-frame classification across approval intent types, hydration payload generation with status=pending filtering, and channel-pair computation helpers across browsers and sessions.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Farewell, polling loop so slow,
Approval frames now reactively flow,
Subscribe and hydrate, late-joiners bloom,
Async hands dispatch to the browser room,
Events classified with intent so clear—
The fanout whispers, "approval is here!" 🎉

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the PR title accurately summarizes the main change: replacing polling-based approval events with a reactive stream-driven approach using agent::events.
  • Docstring Coverage — ✅ Passed: docstring coverage is 100.00%, above the required 80.00% threshold.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.


@github-actions
Contributor

skill-check — worker

0 verified, 25 skipped (no docs/).

Layers checked: structure, vale, ai — three for three. Nicely done.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (5)
harness/src/fanout.rs (4)

522-540: 💤 Low value

hydration_payloads silently overwrites any incoming type field on the entry.

Today approval::list_pending doesn't put a type on entries, so this is a no-op. But the function inserts "type": "approval_requested" unconditionally to feed classify_approval_frame, so if list_pending's response ever grows a type field (e.g. some future "type": "approval_resolved" straggler), this code will rewrite it before classifying. The status=="pending" pre-filter mostly saves you, but it's worth a sentence in the doc comment so a future reader doesn't wonder why we'd ever stamp type over a real value.
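The guard being suggested is an entry-or-insert. Sketched over a string map rather than serde_json, with `stamp_default_type` as a hypothetical helper name:

```rust
use std::collections::HashMap;

// Stamp the default type only when the entry doesn't already carry one,
// so a future upstream-provided "type" is never clobbered.
fn stamp_default_type(entry: &mut HashMap<String, String>) {
    entry
        .entry("type".to_string())
        .or_insert_with(|| "approval_requested".to_string());
}

fn main() {
    let mut fresh = HashMap::new();
    stamp_default_type(&mut fresh);
    assert_eq!(fresh["type"], "approval_requested");

    let mut upstream = HashMap::new();
    upstream.insert("type".to_string(), "approval_resolved".to_string());
    stamp_default_type(&mut upstream);
    assert_eq!(upstream["type"], "approval_resolved"); // preserved, not overwritten
}
```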

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/fanout.rs` around lines 522 - 540, hydration_payloads currently
unconditionally overwrites any existing "type" field on each pending entry
before calling classify_approval_frame; change the logic inside
hydration_payloads so it only inserts "type": "approval_requested" when the
entry does not already have a "type" key (i.e., check obj.contains_key("type")
first), preserve the rest of the flow/classification with
classify_approval_frame, and add a brief doc comment above hydration_payloads
explaining that we only stamp a default type when missing to avoid clobbering
upstream-provided types such as "approval_resolved".

8-19: 💤 Low value

Module doc comment is stale: reactive approval forwarding isn't mentioned.

Item 1 still describes only ui::session::event::<browser_id> forwarding, but the same agent::events subscriber now also classifies approval frames and pushes ui::approval::{requested,resolved}::<browser_id>. While you're here, item 2 also lists only the sessions-changed poll — the file actually wires cost/workers polls and the iii-directory on-change pumps too. Worth a one-paragraph refresh so new readers don't trust the count.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/fanout.rs` around lines 8 - 19, The module doc comment is out of
date: update the header paragraph to list all upstream pumps and what they do —
mention the agent::events stream subscriber now also classifies approval frames
and pushes ui::approval::{requested,resolved}::<browser_id> in addition to
ui::session::event::<browser_id>, and extend the second item to enumerate the
sessions-changed poll plus the cost/workers polls and the iii-directory
on-change pumps that are wired in this file; keep the description concise and
accurate so readers see the full set of pumps and their high-level
responsibilities (agent::events handler, sessions-changed poll, cost/workers
polls, iii-directory on-change).

567-583: 💤 Low value

Hydration fans approval::list_pending sequentially across all sessions.

For each session this is a wire RTT under a 5s timeout, so a host with N active sessions makes the late-joiner wait up to N × RTT before the first synthesized requested lands. futures::future::join_all (or try_join_all) over the per-session calls would cut this to a single RTT without changing semantics. Not a blocker — hydration is fire-and-forget — but a cheap win on systems with many sessions.

♻️ Sketch
-    let mut per_session: Vec<(String, Vec<Value>)> = Vec::with_capacity(sessions.len());
-    for sid in &sessions {
-        let resp = iii
-            .trigger(TriggerRequest {
-                function_id: "approval::list_pending".into(),
-                payload: json!({ "session_id": sid }),
-                action: None,
-                timeout_ms: Some(STATE_LIST_TIMEOUT_MS),
-            })
-            .await;
-        let entries = resp
-            .ok()
-            .and_then(|v| v.get("pending").and_then(|p| p.as_array()).cloned())
-            .unwrap_or_default();
-        if !entries.is_empty() {
-            per_session.push((sid.clone(), entries));
-        }
-    }
+    let calls = sessions.iter().map(|sid| {
+        let iii = Arc::clone(&iii);
+        let sid = sid.clone();
+        async move {
+            let resp = iii
+                .trigger(TriggerRequest {
+                    function_id: "approval::list_pending".into(),
+                    payload: json!({ "session_id": sid }),
+                    action: None,
+                    timeout_ms: Some(STATE_LIST_TIMEOUT_MS),
+                })
+                .await;
+            let entries = resp
+                .ok()
+                .and_then(|v| v.get("pending").and_then(|p| p.as_array()).cloned())
+                .unwrap_or_default();
+            (sid, entries)
+        }
+    });
+    let per_session: Vec<(String, Vec<Value>)> = futures::future::join_all(calls)
+        .await
+        .into_iter()
+        .filter(|(_, entries)| !entries.is_empty())
+        .collect();
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/fanout.rs` around lines 567 - 583, The loop that calls
iii.trigger(TriggerRequest { function_id: "approval::list_pending", ... }) for
each sid runs sequentially and causes N×RTT latency; change it to collect each
trigger future (referencing sessions, iii.trigger, TriggerRequest,
STATE_LIST_TIMEOUT_MS) into a Vec of futures and use futures::future::join_all
(or try_join_all) to await them concurrently, then iterate the joined results to
extract "pending" arrays and push non-empty (sid.clone(), entries) into
per_session preserving the same extraction logic
(ok().and_then(...).unwrap_or_default()) so behavior is unchanged but all RPCs
happen in parallel.

384-400: ⚡ Quick win

Reactive approval pushes bypass backpressure and stale-browser GC.

Two consistency gaps versus the rest of the file:

  1. Unlike push_to_browser (used by cost/workers/hydration paths), these tokio::spawn-ed triggers don't count against PER_BROWSER_QUEUE_CAP or arm the ui::session::resync deduper. A fast burst of approval_* frames to a slow browser can run unbounded in-flight, undercutting the per-browser cap guarantee.
  2. Unlike the per-session forward immediately below (lines 432-446), there's no is_function_not_found → evict_browser path. An all-sessions browser that closed without ui::unsubscribe is only GC'd when it happens to also be subscribed to a specific session; pure all-sessions clients will silently log function_not_found on every approval frame forever.

Routing these pushes through push_to_browser (and folding in the same GC branch on function_not_found) would fix both. If you keep the raw-spawn form, at least add the GC arm — it's the same six lines as 432-442.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/fanout.rs` around lines 384 - 400, The approval push loop using
approval_pushes_for currently tokio::spawn(s) direct iii.trigger calls which
bypass per-browser backpressure (PER_BROWSER_QUEUE_CAP) and the
ui::session::resync deduper and also omits the is_function_not_found →
evict_browser GC path; change the code to call the existing push_to_browser path
(reuse push_to_browser(...) with the same TriggerRequest/payload) so pushes
honor PER_BROWSER_QUEUE_CAP and arm resync, and ensure the function_not_found
handling from the per-session forward (the evict_browser branch guarded by
is_function_not_found) is executed for approval pushes too; if you decide to
keep raw tokio::spawn, then at minimum implement the same queue
accounting/resync arming and replicate the six-line
is_function_not_found→evict_browser GC branch used in lines ~432-442 around the
per-session forward.
harness/src/lib.rs (1)

278-278: ⚡ Quick win

Simplify to use iii.clone() directly, matching the pattern elsewhere in the file.

Wrapping iii.clone() in Arc::new creates unnecessary double indirection: Arc<Arc<T>>. Lines 145, 194, and 379 all capture iii.clone() directly into closures without Arc wrapping and work fine. Since the comment at lines 371–372 confirms III is already internally Arc-wrapped, the outer Arc::new() is redundant. Use let iii_for_subscribe = iii.clone(); instead for consistency.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/lib.rs` at line 278, The variable iii_for_subscribe is wrapped
with Arc::new(iii.clone()) causing an Arc<Arc<T>> double indirection; replace
that line with let iii_for_subscribe = iii.clone(); so the closure captures the
existing Arc-wrapped III directly (match the pattern used for iii.clone()
elsewhere and avoid the redundant outer Arc::new()).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ca9ca733-d3dc-4844-8692-f1792d452c32

📥 Commits

Reviewing files that changed from the base of the PR and between 673f85b and d84a235.

📒 Files selected for processing (2)
  • harness/src/fanout.rs
  • harness/src/lib.rs

Comment thread harness/src/fanout.rs
Comment on lines +509 to +512
"approval_resolved" => Some(ApprovalUiPush::Resolved(json!({
"function_call_id": call_id,
"tool_call_id": call_id,
}))),

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect resolved-frame consumers to confirm they only need the call id.
rg -nP -C5 '\bapproval::resolved\b|approval_resolved|ApprovalResolved' --type=ts --type=rust
# And the old diff_approvals/poll-resolved payload shape, if still in history.
rg -nP -C5 '\bdiff_approvals\b|approval::list_pending'

Repository: iii-hq/workers

Length of output: 40684


🏁 Script executed:

rg -nP -C3 'decision.*allow.*deny|decision.*"allow"|decision.*"deny"' harness/web/src --type=ts -A2

Repository: iii-hq/workers

Length of output: 1255


🏁 Script executed:

rg -nP 'resolvedCallId|payload\.(decision|reason)' harness/web/src --type=ts -B2 -A2

Repository: iii-hq/workers

Length of output: 727


The synthesized payload correctly sends only the call id, but the TypeScript type definition mismatch could mislead future developers.

The payload at lines 509-512 intentionally drops decision and reason fields from the upstream approval_resolved event. The web consumer in harness/web/src/useStatus.ts confirms this is safe — it only extracts the call id via resolvedCallId() to filter pending approvals and never accesses the decision or reason fields from the payload.

However, the TypeScript type definition in harness/web/src/types.ts (lines 241-242) declares decision and reason as part of the approval_resolved payload, creating a contract mismatch. If future changes attempt to render or audit the outcome based on these type hints, they will find the fields undefined. Consider either sending the full fields or updating the type definition to match the actual minimal payload.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/fanout.rs` around lines 509 - 512, The synthesized
"approval_resolved" push in harness/src/fanout.rs uses ApprovalUiPush::Resolved
with a payload containing only "function_call_id" and "tool_call_id" (call_id),
so update the TypeScript shape to match that minimal payload: edit the
approval_resolved type in harness/web/src/types.ts (the type referenced by
resolvedCallId()) to either remove the declared decision and reason fields or
mark them optional, leaving only function_call_id and tool_call_id present;
alternatively, if you prefer to keep the existing TS shape, change the Rust
fanout to include decision and reason in the JSON payload so both sides agree.

Comment thread harness/src/lib.rs
Comment on lines +312 to +319
tokio::spawn(async move {
fanout::hydrate_all_sessions_subscriber(
iii_for_hydrate,
fanout_for_hydrate,
browser_for_hydrate,
)
.await;
});

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add error logging for hydration failures.

The fire-and-forget tokio::spawn silently discards any errors from hydrate_all_sessions_subscriber. If hydration fails (e.g., due to state::list or approval::list_pending errors), late-joining all-sessions subscribers won't see pending approvals, and the failure won't be observable.

Consider wrapping the spawned task with error logging to improve observability.

📋 Proposed fix to add error logging
                 if is_all_sessions {
                     let fanout_for_hydrate = Arc::clone(&fanout);
                     let browser_for_hydrate = browser_id.clone();
                     tokio::spawn(async move {
-                        fanout::hydrate_all_sessions_subscriber(
+                        if let Err(e) = fanout::hydrate_all_sessions_subscriber(
                             iii_for_hydrate,
                             fanout_for_hydrate,
                             browser_for_hydrate,
                         )
-                        .await;
+                        .await {
+                            eprintln!("hydrate_all_sessions_subscriber failed: {e:?}");
+                        }
                     });
                 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@harness/src/lib.rs` around lines 312 - 319, The tokio::spawn is
fire-and-forget and drops any errors from
fanout::hydrate_all_sessions_subscriber, so wrap the call to
fanout::hydrate_all_sessions_subscriber(iii_for_hydrate, fanout_for_hydrate,
browser_for_hydrate).await inside an error-checking block in the spawned task
and log failures (e.g., using error! or processLogger) with the error and
contextual info; locate the existing tokio::spawn invocation and replace it with
an async move that calls the hydrate_all_sessions_subscriber, matches on the
Result, and logs any Err along with identifying context
(iii_for_hydrate/fanout_for_hydrate/browser_for_hydrate) so hydration failures
are observable.
