Add auto-continuation for tool approval workflow #217
sethconvex wants to merge 22 commits into main
Conversation
📝 Walkthrough
Adds a tool-approval workflow across the agent: new Agent methods to approve/deny tool calls, persistent approval metadata and indexes, streaming/atomic finish coordination, UI and Convex example integrations, message post-processing to apply approvals, and extensive tests covering approval behaviors and edge cases.
Sequence Diagram(s)
sequenceDiagram
participant User as User
participant UI as ChatApproval UI
participant Chat as approval.ts (Convex)
participant Agent as Agent
participant DB as Database
User->>UI: Send prompt
UI->>Chat: sendMessage({prompt, threadId})
Chat->>DB: Save user message
Chat->>Chat: Schedule generateResponse
Chat-->>UI: Return messageId
Chat->>Agent: streamText() -> generation
Agent->>Agent: Encounter tool-call needing approval
Agent->>DB: Save tool-approval-request
Agent-->>UI: Stream approval prompt
User->>UI: Click approve/deny
UI->>Chat: submitApproval({approvalId, approved})
Chat->>Chat: Schedule handleApproval/handleDenial
alt Approved
Chat->>Agent: approveToolCall({approvalId})
Agent->>Agent: _findToolCallInfo -> execute tool
Agent->>DB: Save tool-approval-response + tool-result
Agent->>Agent: Continue generation (streamText)
else Denied
Chat->>Agent: denyToolCall({approvalId, reason})
Agent->>DB: Save denial + execution-denied result
Agent->>Agent: Continue generation (streamText)
end
Agent->>Chat: Stream final output
Chat->>DB: Save final messages
Chat-->>UI: Stream completion
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
This stack of pull requests is managed by Graphite. Learn more about stacking.
@coderabbitai review
✅ Actions performed: Review triggered.
this is not ready yet
Force-pushed 24a4a3f to 8640c1a
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
example/convex/usage_tracking/usageHandler.ts (1)
40-57: New token detail fields are validated but not persisted.

The fields `inputTokenDetails`, `outputTokenDetails`, and `raw` are added to the args schema (lines 41-43) but are not included in the `usage` object being inserted into the database (lines 51-57). This means AI SDK v6 token breakdown data is silently discarded.

Proposed fix to persist the new fields:
  usage: {
    promptTokens: args.usage.inputTokens ?? 0,
    completionTokens: args.usage.outputTokens ?? 0,
    totalTokens: args.usage.totalTokens ?? 0,
    reasoningTokens: args.usage.reasoningTokens,
    cachedInputTokens: args.usage.cachedInputTokens,
+   inputTokenDetails: args.usage.inputTokenDetails,
+   outputTokenDetails: args.usage.outputTokenDetails,
+   raw: args.usage.raw,
  },

Verify that the `vUsage` type in the `rawUsage` table schema (tables.ts line 15) supports these new optional fields.

src/component/messages.ts (1)
203-221: Clarify `overrideOrder` semantics to avoid a first user-message off-by-one.

When `forceNewOrder=true`, start.ts increments the order and passes it as `overrideOrder` (the intended final order). However, `addMessages` then applies the user-message increment again: when `promptMessageId` is not passed (which is the case for `forceNewOrder`), `promptMessage` is undefined, so the condition at line 289 fails and `order++` executes unconditionally at line 294. This results in the first user message getting `overrideOrder + 1` instead of `overrideOrder`.

Consider either documenting that `overrideOrder` is the previous order (not the target order for the first message), or skipping the increment for the first user message when `overrideOrder` is set.

♻️ Possible adjustment:

- if (overrideOrder !== undefined) {
+ const hasOverrideOrder = overrideOrder !== undefined;
+ if (hasOverrideOrder) {
    order = overrideOrder;
    stepOrder = -1; // Will be incremented to 0 for the first message
  }
  ...
- if (message.message.role === "user") {
-   if (promptMessage && promptMessage.order === order) {
-     // ...
-   } else {
-     order++;
-   }
-   stepOrder = 0;
- } else {
+ if (message.message.role === "user") {
+   if (hasOverrideOrder && stepOrder === -1) {
+     // keep order as overrideOrder for the first user message
+   } else if (promptMessage && promptMessage.order === order) {
+     // ...
+   } else {
+     order++;
+   }
+   stepOrder = 0;
+ } else {
src/client/start.ts (1)
84-171: Guard `forceNewOrder` against missing `promptMessageId`.

When `forceNewOrder` is true, `saveInputMessages` is skipped. If a caller passes a raw `prompt`/`messages` without `promptMessageId`, the input won't be persisted and the computed order can be wrong. Consider asserting this usage or only skipping when a prompt message already exists.

💡 Proposed guard:

@@
-  const saveMessages = opts.storageOptions?.saveMessages ?? "promptAndOutput";
+  const saveMessages = opts.storageOptions?.saveMessages ?? "promptAndOutput";
+  if (args.forceNewOrder && !args.promptMessageId) {
+    throw new Error("forceNewOrder requires promptMessageId");
+  }
🤖 Fix all issues with AI agents
In `@src/client/index.ts`:
- Around line 1551-1558: The approval path is zeroing out userId which breaks
tools expecting ctx.userId; update the toolCtx construction (the toolCtx object
used when creating wrappedTool, and the branch using tool.__acceptsCtx) to
mirror the normal generation context by propagating the thread’s user id instead
of setting userId: undefined — e.g., set userId to the existing ctx.userId (or
the thread's userId if stored elsewhere) so wrappedTool receives the correct
ctx.userId for auth/scoping.
- Around line 1534-1666: Update _findToolCallInfo to make approval handling
idempotent by scanning the thread/message history for any existing content part
with type "tool-approval-response" and the same approvalId; if found, return
null so callers (approveToolCall and denyToolCall) short-circuit and do not
re-run tool execution. Specifically, inside _findToolCallInfo examine each
message.content parts (look for parts where part.type ===
"tool-approval-response" && part.approvalId === approvalId) and return null
immediately when matched; keep the existing behavior of returning tool call info
when no approval response exists.
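The scan this prompt describes can be sketched as a pure function over stored messages. The message and content-part shapes below are hypothetical simplifications of the component's real types, used only to illustrate the short-circuit check:

```typescript
// Hypothetical shapes for illustration; the real types in src/client differ.
type ContentPart =
  | { type: "tool-approval-response"; approvalId: string; approved: boolean }
  | { type: "tool-call"; toolCallId: string; toolName: string }
  | { type: "text"; text: string };

type StoredMessage = { id: string; content: ContentPart[] };

// Returns true if any message already contains an approval response for this
// approvalId, so callers can short-circuit instead of re-running the tool.
function hasApprovalResponse(
  messages: StoredMessage[],
  approvalId: string,
): boolean {
  return messages.some((m) =>
    m.content.some(
      (p) => p.type === "tool-approval-response" && p.approvalId === approvalId,
    ),
  );
}

const history: StoredMessage[] = [
  { id: "m1", content: [{ type: "tool-call", toolCallId: "tc1", toolName: "deleteX" }] },
  { id: "m2", content: [{ type: "tool-approval-response", approvalId: "ap1", approved: true }] },
];
```

In the real method the early `null` return would replace the scan's boolean, but the matching condition is the same.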
In `@src/mapping.ts`:
- Around line 626-639: The code wrongly JSON.stringify's non-validated
part.output, corrupting structured values; remove JSON.stringify and pass
part.output directly into normalizeToolOutput so normalizeToolOutput handles
validation/wrapping consistently. Specifically, in the block computing output
(referencing part.output, validate, vToolResultOutput, and normalizeToolOutput),
replace the branch that does JSON.stringify(part.output) with simply
normalizeToolOutput(part.output) when validate(...) is false, keeping the
subsequent execution-denied conversion logic unchanged.
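To see why the pre-stringify corrupts structured outputs, here is a toy model. `normalizeToolOutput` below is a hypothetical stand-in for the real function in src/mapping.ts, assumed only to wrap untagged values as `{ type: "json", value }`:

```typescript
type ToolOutput = { type: "json"; value: unknown } | { type: "text"; text: string };

// Stand-in: wrap anything that is not already a tagged output.
function normalizeToolOutput(output: unknown): ToolOutput {
  const o = output as { type?: unknown } | null;
  if (o && (o.type === "json" || o.type === "text")) {
    return output as ToolOutput;
  }
  return { type: "json", value: output };
}

const structured = { rows: [1, 2, 3] };

// Buggy path: pre-stringifying turns the structured value into a string,
// so downstream consumers see '{"rows":[1,2,3]}' instead of the object.
const corrupted = normalizeToolOutput(JSON.stringify(structured));
// Fixed path: pass the value through and let normalizeToolOutput wrap it.
const preserved = normalizeToolOutput(structured);

const corruptedIsString =
  corrupted.type === "json" && typeof corrupted.value === "string";
const preservedIsObject =
  preserved.type === "json" && typeof preserved.value === "object";
```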
Force-pushed 3031eb4 to fc02d99
- `createTool({ args:` - should be `inputSchema`
- `createTool({ handler:` - should be `execute`
we own createTool so these are technically up to us - I guess inputSchema is fine since it's zod which might be a bit unexpected to be for args, but handler takes different arguments than execute so I'm nervous we're leading people astray by making them match exactly... I know this is the wrong PR for this, but curious what the intent here was.
    approvalId,
    reason,
  });
  await result.consumeStream();
Are streams default for approving/ denying? Or does it remember the previous setting? Do you have to stream? Just curious
src/client/index.ts
Outdated
  // Continue generation with forceNewOrder to create a separate message
  return this.streamText(
    ctx,
    { threadId },
    { promptMessageId: toolResultId, forceNewOrder: true },
    { saveStreamDeltas: { chunking: "word", throttleMs: 100 } },
We shouldn't assume they want to save deltas or assume the chunking strategy here. I suggest we save the message and return the messageId and have them call streamText with promptMessageId themself to make it explicit. Then they can also decide whether to do it, e.g. if there were multiple tool responses that need to finish before continuing.
I also will look into what this forceNewOrder parameter is, because I don't think that should be necessary, and fear it may break other assumptions about how messages are combined. e.g. the assumption that only user messages increment the order.
src/client/streamText.ts
Outdated
  } catch (error) {
    // If an error occurs during streaming (e.g., in onStepFinish callbacks),
    // make sure to abort the streaming message so it doesn't get stuck
    if (streamer && !streamConsumed) {
it seems impossible for streamConsumed to be true at this point, based on my understanding of JS awaits.
Thanks for your work on this. I hoped the needsApproval flow on createTool was what we needed, but I'm running into an issue with runtime tools and wondering if I'm missing something. We have ~20 tools passed to streamText at call time (they need per-request workspace context via closures) and 4 destructive tools on the agent definition via createTool with needsApproval. The initial streamText call gets all tools via the tools parameter. But after approveToolCall, the continuation only sees this.options.tools (the 4 agent-level ones). So if the user instructs our agent to "delete X and then create Y" and approves the delete, the agent can't use createWhatever in the continuation, because it was a runtime tool. We worked around it, but the workaround is super fragile since we're depending on internal message formats and the __acceptsCtx pattern. Is there a cleaner way to handle this that I'm missing? Or would it make sense for approveToolCall/denyToolCall to accept an optional tools parameter for the continuation?
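One shape the suggested optional tools parameter could take is a simple merge of runtime tools over the agent-level ones, matching how per-call options usually override agent defaults. The names and shapes below are hypothetical, not the Agent API:

```typescript
// Minimal stand-in for a tool definition; the real type carries schemas,
// execute functions, and approval flags.
type Tool = { description: string };

// Hypothetical helper: what a continuation could use if
// approveToolCall/denyToolCall accepted an optional tools parameter.
function toolsForContinuation(
  agentTools: Record<string, Tool>,
  runtimeTools?: Record<string, Tool>,
): Record<string, Tool> {
  // Runtime tools win on name collisions, mirroring per-call overrides.
  return { ...agentTools, ...runtimeTools };
}

const agentTools = { deleteX: { description: "destructive, needs approval" } };
const runtimeTools = { createWhatever: { description: "per-request workspace tool" } };
const merged = toolsForContinuation(agentTools, runtimeTools);
```

With this, a continuation after approving `deleteX` would still see `createWhatever`.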
@kerns
Fixes the intermittent crash "TypeError: The stream is not in a state that permits enqueue" when using saveStreamDeltas with tool calls.

The bug occurs when:
1. Tool executes successfully -> onStepFinish callback saves to DB
2. Stream finishes reading -> AI SDK closes stream via attemptClose()
3. Delta-save mutation still in flight -> tries to write to closed stream
4. Crash: Cannot enqueue chunk on closed stream

This fix (from commit 87e3657 on seth/fix-193 branch) implements atomic stream finish by:
- Deferring final step save when streaming is enabled
- Saving step atomically with stream finish in same mutation
- Stream stays open until database confirms all saves complete

Changes:
- streamText.ts: Track pendingFinalStep, defer save, atomic finish
- streaming.ts: Add markFinishedExternally() and getOrCreateStreamId()
- start.ts: Add finishStreamId parameter to save()
- messages.ts: Atomically finish stream with message save

Resolves issue #181
Addresses user report: #217

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fix Applied: Atomic Stream Finish

This commit fixes the race condition causing the crash.

Root cause: When tools execute quickly, the stream can close before delta-save mutations complete.

Solution: Restored the atomic finish pattern from commit 87e3657 (which existed on an unmerged branch).

Verification: The fix is safe. It was developed in parallel but never merged during the AI SDK 6.0 upgrade.
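The race and the atomic fix can be modeled with a small synchronous toy (not the component's actual streaming code; `DeltaSink` and both finishers are illustrative stand-ins, with the atomic variant playing the role of the single mutation):

```typescript
// A sink that rejects writes after close, like a closed ReadableStream controller.
class DeltaSink {
  closed = false;
  written: string[] = [];
  enqueue(chunk: string) {
    if (this.closed) {
      throw new Error("The stream is not in a state that permits enqueue");
    }
    this.written.push(chunk);
  }
  close() {
    this.closed = true;
  }
}

// Buggy order: attemptClose() fires before in-flight delta saves flush.
function buggyFinish(sink: DeltaSink, pending: string[]) {
  sink.close();
  for (const c of pending) sink.enqueue(c); // crashes on a closed sink
}

// Atomic order: flush pending saves, then finish, in one step.
function atomicFinish(sink: DeltaSink, pending: string[]) {
  for (const c of pending) sink.enqueue(c);
  sink.close();
}

const buggy = new DeltaSink();
let crashed = false;
try {
  buggyFinish(buggy, ["delta-1"]);
} catch {
  crashed = true; // reproduces the reported TypeError
}

const fixed = new DeltaSink();
atomicFinish(fixed, ["delta-1", "delta-2"]);
```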
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
These methods encapsulate the complexity of the AI SDK v6 tool approval workflow, reducing boilerplate in application code.

Before: ~340 lines of manual tool finding, execution, and message handling
After: ~140 lines using the clean helper API

The helpers handle:
- Finding the tool call info from the approval ID
- Executing the tool (for approval) with proper context injection
- Saving tool-approval-response and tool-result messages
- Continuing generation with forceNewOrder for clean message separation

This is a workaround for AI SDK v6 issue #10980 where the native approval flow doesn't generate proper tool_result blocks for Anthropic.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When pagination cuts off a tool-result from its tool-call, we handle it gracefully by creating a standalone tool part. The warning was noisy for this expected pagination behavior.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- inputTokenDetails: noCacheTokens, cacheReadTokens, cacheWriteTokens
- outputTokenDetails: textTokens, reasoningTokens
- raw: kept as v.any() since it's provider-specific

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add tool approval flow to Key Patterns architecture section
- Replace upgrade section with detailed AI-friendly guidance including:
  - Detection patterns for v5 code (parameters, handler, textEmbeddingModel)
  - Dependency update commands
  - Before/after transformation examples for tools, embeddings, step limits
  - Verification steps and common issues
  - New v6 features (tool approval, reasoning streaming, token details)
- Remove outdated TODO comment in deltas.ts (partial tool calls now handled)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When users upgrade to v0.6.0 with old AI SDK v5 dependencies, TypeScript now shows helpful error messages pointing to the migration guide:
- languageModel: Shows error if model has specificationVersion "v2" instead of "v3"
- createTool args: Shows "⚠️ 'args' was removed... Rename to 'inputSchema'"
- createTool handler: Shows "⚠️ 'handler' was removed... Rename to 'execute'"

This helps users (and AI assistants) understand what needs to change before they try to run the code. Also adds scripts/check-upgrade.js CLI tool for scanning codebases.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add idempotency guard to prevent duplicate tool execution on retry (_findToolCallInfo now returns null if approval already handled)
- Propagate thread userId to tool context for proper auth/scoping
- Remove unnecessary JSON.stringify that corrupted tool outputs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add MIGRATION.md to package.json files array so it ships with npm package
- Update compile-time error messages to point to local MIGRATION.md
- Enhance CLAUDE.md with:
  - AI SDK v6 requirement note upfront
  - Compatible sibling packages (@convex-dev/rag@^0.7.0)
  - Type import changes (LanguageModelV2 → V3, EmbeddingModel → EmbeddingModelV3)
  - generateObject mode: "json" removal documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update createTool calls in examples: args -> inputSchema, handler -> execute
- Fix lint errors in check-upgrade.js (Node.js globals, unused catch var)
- Remove unused ToolApprovalRequest/Response imports from mapping.ts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tests cover:
- _findToolCallInfo: finding tool calls, idempotency guard, missing approvals
- UIMessage approval states: pending, approved, denied
- Message grouping with mixed approval/non-approval tools
- Tool result handling when tool call is on previous page
- execution-denied output conversion for provider compatibility
- createTool with needsApproval function and boolean

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tests that document actual bugs and edge cases.

BUGS FOUND:
1. Pagination limit: Approvals older than 20 messages won't be found (_findToolCallInfo uses hardcoded numItems: 20)
2. Duplicate toolCallId: With duplicate IDs (shouldn't happen but possible), code finds first match in iteration order, not the one with the approval
3. Cross-agent tool lookup: approveToolCall fails if called on an agent that doesn't have the tool registered, even if approval is valid
4. Tool errors swallowed: Execution errors become string results instead of failing the approval workflow properly
5. Race condition (TOCTOU): Concurrent approvals could both pass the idempotency check due to non-atomic check-then-act

EDGE CASES HANDLED CORRECTLY:
- Tool call and approval request in different messages
- Missing approvalId in request
- Undefined 'approved' field in response
- String content (non-array) messages
- Tool input with only 'args' (no 'input')
- Tool input with neither 'args' nor 'input'

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The approval workflow had a race condition where concurrent approvals
could both pass the idempotency check (in a query) and then both save
duplicate approval responses (in separate mutations).
Fix: Add approvalIdempotencyKey parameter to addMessages mutation.
When provided, the mutation atomically checks if an approval response
with that ID already exists before saving. If it does, returns the
existing message instead of creating a duplicate.
Changes:
- Add approvalIdempotencyKey to SaveMessagesArgs and addMessages mutation
- Update _findToolCallInfo to return { alreadyHandled, existingMessageId }
instead of null for already-processed approvals
- Update approveToolCall/denyToolCall to:
1. Handle alreadyHandled case by continuing from existing message
2. Pass approvalIdempotencyKey to saveMessage for atomic check
- Update tests to reflect new idempotent behavior
The check-and-write is now atomic within a single Convex mutation,
eliminating the race condition.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
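The atomic check-then-write described above can be modeled with an in-memory toy, where a Map stands in for the database and the whole method plays the role of one Convex mutation (so two callers cannot interleave between the existence check and the insert). Names are illustrative, not the component's API:

```typescript
type SavedResponse = { approvalId: string; messageId: string };

class ApprovalStore {
  private byApprovalId = new Map<string, SavedResponse>();
  private nextId = 1;

  // Runs as one atomic unit, like a single Convex mutation: check for an
  // existing approval response and insert only if absent. A retry or a
  // concurrent double-fire gets the existing message back instead of a
  // duplicate.
  saveApprovalResponse(approvalId: string): SavedResponse {
    const existing = this.byApprovalId.get(approvalId);
    if (existing) return existing; // idempotent: return prior message
    const saved = { approvalId, messageId: `msg_${this.nextId++}` };
    this.byApprovalId.set(approvalId, saved);
    return saved;
  }
}

const store = new ApprovalStore();
const first = store.saveApprovalResponse("ap1");
const second = store.saveApprovalResponse("ap1"); // retry / double-fire
```

The real fix relies on Convex serializing mutations, which gives the same no-interleaving guarantee this synchronous toy gets for free.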
- Add third-party provider compatibility info to MIGRATION.md
- Add OpenRouter provider upgrade instructions (v1.x → v2.x)
- Add guidance for handling peer dependency conflicts
- Add compatibility table for AI SDK v5 vs v6 providers
- Enhance check-upgrade.js to detect outdated package versions
- Script now checks package.json for incompatible provider versions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove the complex approvalIdempotencyKey machinery. Instead, rely on _findToolCallInfo detecting already-handled approvals and throw an error. Convex's atomicity guarantees within the mutation are sufficient.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
After tool approval, the UI would flash (disappear and reappear) because:
1. Streaming messages and saved messages could have the same stepOrder but different content (saved has tool parts, streaming starts empty)
2. The deduplication logic was choosing one over the other instead of merging
3. React key changes caused component remounting

Changes:
- dedupeMessages now merges parts from saved and streaming messages at the same stepOrder, preserving tool context while showing streaming
- Use stable React keys (order only, not stepOrder) to prevent remounting
- Filter stale streaming messages based on max saved stepOrder
- Keep assistant messages in the same order after approval (remove forceNewOrder)
- Add AI SDK v6 token detail validators (inputTokenDetails, outputTokenDetails)
- Render ChatApproval parts in chronological order

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
approveToolCall and denyToolCall now save a pending message instead of executing tools and continuing generation. Tool execution moves to startGeneration where runtime tools and streaming options are available. The caller explicitly calls streamText/generateText with the returned messageId as promptMessageId.

Also: remove forceNewOrder, add content merge logic in addMessages for pending approval messages, pin convex devDependency, update migration URL to GitHub, and expand approval test coverage.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, approveToolCall and denyToolCall required database lookups to find tool context (toolCallId, toolName, args, parentMessageId) from just the approvalId. This was O(n) and had pagination bugs when approvals were beyond the 20-message window.

This commit changes the API to accept all required context directly:
- approveToolCall now requires: toolCallId, toolName, args, parentMessageId, approvalId
- denyToolCall now requires: toolCallId, toolName, parentMessageId, approvalId

Frontend already has this information when displaying approval UI, so this eliminates the inefficient _findToolCallInfo method entirely.

Changes:
- Replace _findToolCallInfo (87 lines, O(3n)) with _checkAlreadyHandled (27 lines, O(n))
- Update approveToolCall and denyToolCall signatures to require full context
- Update example/convex/chat/approval.ts to pass new parameters
- Update example/ui/chat/ChatApproval.tsx to pass tool context from UIMessage
- Skip obsolete tests that tested _findToolCallInfo internals

Benefits:
- O(1) instead of O(n) - no database lookup needed
- No pagination bugs - frontend passes data directly
- More explicit API - clear what data is required
- Idempotency still preserved via _checkAlreadyHandled

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Force-pushed a2a4e81 to 76ec038
Replace inefficient O(n) message scanning with O(1) indexed database queries. This eliminates pagination bugs and keeps the simple approveToolCall API.

## Changes

**Schema (src/component/schema.ts):**
- Add indexed fields to messages table: approvalId, approvalToolCallId, approvalToolName, approvalToolInput, approvalStatus
- Add indexes: by_approvalId, by_threadId_approvalStatus
- Auto-populated from message content when saving approval requests

**Component (src/component/messages.ts):**
- extractApprovalFields(): Extract tool context from approval-request parts
- Auto-populate indexed fields during message insertion
- getByApprovalId query: O(1) lookup by approvalId
- updateApprovalStatus mutation: Mark approvals as handled

**Client (src/client/index.ts):**
- Revert approveToolCall to simple API (threadId + approvalId only)
- Revert denyToolCall to simple API
- Replace _checkAlreadyHandled with indexed _findToolCallInfo
- Use O(1) database queries instead of O(n) message scanning

**Example & Frontend:**
- Revert to passing only approvalId (simple API)
- No complex tool context extraction needed

**Tests:**
- Update tests to reflect FIXED behavior (no longer documenting bugs)
- Pagination bug: FIXED - approvals found regardless of position
- Split message bug: Now correctly requires both parts in same message
- Duplicate toolCallId: FIXED - uses indexed data from approval message
- Idempotency: Works via approvalStatus field updates

## Benefits

✅ O(1) indexed lookup instead of O(n) message scanning
✅ No pagination bugs (20-message limit eliminated)
✅ Simple API preserved (just threadId + approvalId)
✅ Database as source of truth with validation
✅ Frontend stays simple (no complex extraction)
✅ Standard database optimization pattern (denormalization for performance)

Storage cost: ~105 bytes per approval request (negligible)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
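The two lookup strategies contrasted in this commit can be sketched with in-memory stand-ins: a paginated array scan for the old approach, and a Map keyed by approvalId standing in for the by_approvalId database index. Shapes are illustrative, not the real schema:

```typescript
type ApprovalMessage = { approvalId?: string; toolCallId?: string };

// O(n): scan a paginated slice of messages (the old approach, which missed
// approvals that fell outside the pagination window).
function findByScan(messages: ApprovalMessage[], approvalId: string) {
  return messages.find((m) => m.approvalId === approvalId) ?? null;
}

// O(1) per lookup: a denormalized index keyed by approvalId, built once at
// write time (standing in for the by_approvalId index populated on save).
function buildIndex(messages: ApprovalMessage[]) {
  const index = new Map<string, ApprovalMessage>();
  for (const m of messages) {
    if (m.approvalId) index.set(m.approvalId, m);
  }
  return index;
}

// 50 approvals; a 20-item page cannot see the 31st one.
const allMessages: ApprovalMessage[] = Array.from({ length: 50 }, (_, i) => ({
  approvalId: `ap_${i}`,
  toolCallId: `tc_${i}`,
}));
const index = buildIndex(allMessages);
```

The scan over one 20-message page misses `ap_30`, while the index finds it regardless of position, which is the pagination bug this commit eliminates.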
Force-pushed 76ec038 to 9ae2e58
## Summary
- Update example `createTool` calls to AI SDK v6 syntax (`args` → `inputSchema`, `handler` → `execute`)
- Remove noisy console.warn for paginated tool results (expected pagination behavior, not an error)
- Remove unused `ToolApprovalRequest`/`ToolApprovalResponse` imports from `mapping.ts`

Cherry-picked from #217 (commits `8640c1a`, `77817f7`).

## Test plan
- [ ] `npm run build` passes
- [ ] `npm run typecheck` passes
- [ ] Example code compiles with AI SDK v6

🤖 Generated with [Claude Code](https://claude.com/claude-code)
## Summary
Fixes the intermittent crash "TypeError: The stream is not in a state that permits enqueue" when using `saveStreamDeltas` with tool calls (issue #181).

The bug occurs when:
1. Tool executes successfully → `onStepFinish` callback saves to DB
2. Stream finishes reading → AI SDK closes stream via `attemptClose()`
3. Delta-save mutation still in flight → tries to write to closed stream
4. Crash: Cannot enqueue chunk on closed stream

The fix implements atomic stream finish by:
- Deferring final step save when streaming is enabled
- Saving step atomically with stream finish in same mutation
- Stream stays open until database confirms all saves complete

Cherry-picked from #217 (commit `5cd4c5e`). Resolves #181.

## Test plan
- [ ] `npm run build` passes
- [ ] `npm test` passes
- [ ] No more "stream is not in a state that permits enqueue" errors during tool calling with streaming

🤖 Generated with [Claude Code](https://claude.com/claude-code)
## Summary
- Add TypeScript compile-time errors when users accidentally use AI SDK v5 patterns with v6:
- `createTool({ args: ... })` → helpful error pointing to `inputSchema`
- `createTool({ handler: ... })` → helpful error pointing to `execute`
- `languageModel` with v2 spec → error about needing v3
- Include MIGRATION.md in npm package with step-by-step upgrade instructions
- Enhance CLAUDE.md with comprehensive AI SDK v6 upgrade guidance
- Remove TODO comment in `deltas.ts` (partial tool calls now handled)
Cherry-picked from #217 (commits `a81e3e3`, `5919965`, `74ce289`, `6f7009d`).
## Test plan
- [ ] `npm run build` passes
- [ ] `npm run typecheck` passes
- [ ] Using v5 patterns shows helpful compile-time errors
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
example/convex/rag/ragAsTools.ts (1)
17-19: ⚠️ Potential issue | 🟡 Minor

Add missing `returns` validator to `sendMessage` action.

The `sendMessage` action is missing the `returns` validator. As per coding guidelines, all Convex functions must include `returns` validators.

🛡️ Proposed fix:

 export const sendMessage = action({
   args: { threadId: v.string(), prompt: v.string() },
+  returns: v.null(),
   handler: async (ctx, { threadId, prompt }) => {

As per coding guidelines: "ALWAYS include argument and return validators for all Convex functions. If a function doesn't return anything, include `returns: v.null()`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/rag/ragAsTools.ts` around lines 17 - 19, The sendMessage Convex action is missing a returns validator; update the action definition for sendMessage to include a returns entry (e.g., returns: v.null() if it doesn’t return a value) alongside args so it satisfies the Convex validators requirement—modify the action({...}) call that defines sendMessage to add returns: v.null().
🧹 Nitpick comments (21)
src/client/types.ts (1)
96-97: `AgentPrompt.model` does not apply the `AssertAISDKv6` guard, creating a gap.

`Config.languageModel` (line 110) is guarded by `AssertAISDKv6<LanguageModel>`, but the per-call `model` override in `AgentPrompt` uses plain `LanguageModel`. A caller using AI SDK v5 can silently pass a v5 model via the per-call override without any type-level error.

♻️ Proposed fix:

- model?: LanguageModel;
+ model?: AssertAISDKv6<LanguageModel>;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/types.ts` around lines 96-97: AgentPrompt.model currently uses plain LanguageModel while Config.languageModel is wrapped with AssertAISDKv6<LanguageModel>, so change the AgentPrompt.model type to AssertAISDKv6<LanguageModel> (i.e., model?: AssertAISDKv6<LanguageModel>) to enforce the v6 guard; update imports to ensure AssertAISDKv6 is available and adjust any call sites passing a model override to satisfy the new constrained type.

src/component/schema.ts (1)
97-101: Index naming is inconsistent with the existing convention in this file.

Existing indexes (`threadId_status_tool_order_stepOrder`, `embeddingId_threadId`, etc.) use no prefix, while the two new indexes use a `by_` prefix. Consider aligning on one style — either rename to `approvalId` and `threadId_approvalStatus`, or plan to migrate existing indexes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/schema.ts` around lines 97 - 101, The new indexes index("by_approvalId", ["approvalId"]) and index("by_threadId_approvalStatus", ["threadId", "approvalStatus"]) use a by_ prefix that breaks the existing naming convention (e.g., threadId_status_tool_order_stepOrder, embeddingId_threadId); rename them to approvalId and threadId_approvalStatus respectively and update any code that references index("by_approvalId", ...) or index("by_threadId_approvalStatus", ...) to the new names so naming is consistent across the schema.

src/client/search.ts (1)
282-299: `hasApprovalResponse` is dead code — subsumed by the new `approvalRequestsByToolCallId.has` condition.

`hasApprovalResponse(id)` = `approvalRequestsByToolCallId.has(id) && approvalResponseIds.has(…)`, so whenever condition 3 is true, condition 4 (`approvalRequestsByToolCallId.has(p.toolCallId)`) is already true. Condition 3 never independently keeps a tool-call, and the `hasApprovalResponse` helper itself is now unreachable.

♻️ Proposed simplification
```diff
-  // Helper: check if tool call has a corresponding approval response
-  const hasApprovalResponse = (toolCallId: string) => {
-    const approvalId = approvalRequestsByToolCallId.get(toolCallId);
-    return approvalId !== undefined && approvalResponseIds.has(approvalId);
-  };
-
   for (const doc of docs) {
     if (
       doc.message?.role === "assistant" &&
       Array.isArray(doc.message.content)
     ) {
       const content = doc.message.content.filter(
         (p) =>
           p.type !== "tool-call" ||
           toolResultIds.has(p.toolCallId) ||
-          hasApprovalResponse(p.toolCallId) ||
           approvalRequestsByToolCallId.has(p.toolCallId),
       );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/search.ts` around lines 282 - 299, The hasApprovalResponse helper is now redundant because its logic is subsumed by approvalRequestsByToolCallId.has; remove the hasApprovalResponse function and simplify the content filter in the docs loop to only keep a tool-call when toolResultIds.has(p.toolCallId) || approvalRequestsByToolCallId.has(p.toolCallId); update any references to hasApprovalResponse accordingly and delete approvalResponseIds-related checks if they are no longer used elsewhere.

example/convex/usage_tracking/usageHandler.ts (1)
40-69: New token detail fields are accepted in args but silently dropped in the handler.

Lines 40-55 add `inputTokenDetails`, `outputTokenDetails`, and `raw` to the `usage` validator, but the handler (lines 63-69) only persists `promptTokens`, `completionTokens`, `totalTokens`, `reasoningTokens`, and `cachedInputTokens`. The new detailed breakdown data is discarded on write.

If this is intentional for a minimal example, consider adding a comment noting this. Otherwise, persist the new fields to `rawUsage`:

💡 Option: persist the new fields
```diff
       usage: {
         promptTokens: args.usage.inputTokens ?? 0,
         completionTokens: args.usage.outputTokens ?? 0,
         totalTokens: args.usage.totalTokens ?? 0,
         reasoningTokens: args.usage.reasoningTokens,
         cachedInputTokens: args.usage.cachedInputTokens,
+        inputTokenDetails: args.usage.inputTokenDetails,
+        outputTokenDetails: args.usage.outputTokenDetails,
       },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/usage_tracking/usageHandler.ts` around lines 40 - 69, The handler for the insert into rawUsage (handler function calling ctx.db.insert("rawUsage")) accepts new fields inputTokenDetails, outputTokenDetails, and raw in the validator but does not persist them; update the handler to include these fields when building the record (e.g., include args.usage.inputTokenDetails, args.usage.outputTokenDetails and args.usage.raw or persist args.raw) so the detailed breakdown isn't dropped, or if dropping is intentional add a brief comment in the handler noting the omission; reference inputTokenDetails, outputTokenDetails, raw, handler, ctx.db.insert, and rawUsage to locate where to change.

example/convex/tools/agentAsTool.ts (1)
21-36: Nit: inner `tool()` calls name the first execute parameter `args` instead of `input`.

For consistency with the outer `createTool` usage (line 50) and the v0.6.0 migration convention, consider renaming `args` → `input` in these inner `tool()` execute functions. Not a bug — just a naming consistency nit.

📝 Suggested rename
```diff
   doSomething: tool({
     description: "Call this function when asked to do something",
     inputSchema: z.object({}),
-    execute: async (args, options) => {
+    execute: async (input, options) => {
       console.log("doingSomething", options.toolCallId);
       return "hello";
     },
   }),
   doSomethingElse: tool({
     description: "Call this function when asked to do something else",
     inputSchema: z.object({}),
-    execute: async (args, options) => {
+    execute: async (input, options) => {
       console.log("doSomethingElse", options.toolCallId);
       return "hello";
     },
   }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/tools/agentAsTool.ts` around lines 21 - 36, The inner tool() definitions for doSomething and doSomethingElse use the parameter name args in their execute functions; rename that first parameter to input to match the outer createTool usage and the v0.6.0 migration convention. Update the execute signatures in the doSomething and doSomethingElse tool({ ... execute: async (input, options) => { ... } }) implementations and adjust any local references inside those functions from args to input so naming is consistent with createTool and other tool() usages.

CLAUDE.md (1)
91-225: Comprehensive upgrade guide — looks good overall.

The v0.6.0 migration guide clearly documents the key changes (inputSchema/execute, embeddingModel, stopWhen, type imports, generateObject mode removal). The before/after examples are consistent with the actual code changes in this PR.

One minor clarification: Line 107 — the parenthetical "(v0.6.0 has type conflicts with AI SDK v6)" is ambiguous since v0.6.0 refers to `@convex-dev/rag` v0.6.0, not `@convex-dev/agent`. Consider clarifying as "(rag v0.6.0 has type conflicts with AI SDK v6)" for clarity.

📝 Suggested clarification
```diff
- - `@convex-dev/rag@^0.7.0` (v0.6.0 has type conflicts with AI SDK v6)
+ - `@convex-dev/rag@^0.7.0` (rag v0.6.0 has type conflicts with AI SDK v6)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 91 - 225, Update the ambiguous parenthetical under "Compatible sibling packages" that currently reads "(v0.6.0 has type conflicts with AI SDK v6)" to explicitly reference rag by changing it to "(rag v0.6.0 has type conflicts with AI SDK v6)" so the sentence clearly indicates the conflict is with `@convex-dev/rag` v0.6.0; locate the phrase near the "Compatible sibling packages:" section and replace the parenthetical accordingly.

src/deltas.ts (1)
72-104: Fragile string match couples `updateFromUIMessageChunks` to AI SDK internal error text.

The suppression check at line 78 (`errorMessage.toLowerCase().includes("no tool invocation found")`) silently drops errors based on an exact substring of an AI SDK internal error message. If the SDK changes this wording, the error will no longer be suppressed and will begin surfacing as failed streams; conversely, any error message that coincidentally contains this phrase would also be silently swallowed.

Consider a more targeted approach — e.g., a dedicated flag on the call site or an error-type discriminant — rather than substring matching.

♻️ Suggested approach
```diff
 export async function updateFromUIMessageChunks(
   uiMessage: UIMessage,
   parts: UIMessageChunk[],
+  options?: { suppressToolInvocationErrors?: boolean },
 ) {
   ...
   onError: (e) => {
     const errorMessage = e instanceof Error ? e.message : String(e);
-    if (errorMessage.toLowerCase().includes("no tool invocation found")) {
+    if (options?.suppressToolInvocationErrors &&
+        errorMessage.toLowerCase().includes("no tool invocation found")) {
       suppressError = true;
       return;
     }
```
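As a standalone illustration of the error-type-discriminant alternative mentioned above, a sketch under the assumption that the throwing side can be wrapped (neither name below is an existing AI SDK export — both are hypothetical):

```typescript
// Hypothetical error class: wrapping the suppressible failure in a dedicated
// type means the check survives any change to the SDK's error wording.
class ToolInvocationNotFoundError extends Error {
  constructor(toolCallId: string) {
    super(`No tool invocation found for ${toolCallId}`);
    this.name = "ToolInvocationNotFoundError";
  }
}

function shouldSuppress(e: unknown): boolean {
  // instanceof is robust to message changes; substring matching is not
  return e instanceof ToolInvocationNotFoundError;
}

console.log(shouldSuppress(new ToolInvocationNotFoundError("call_1"))); // true
console.log(shouldSuppress(new Error("no tool invocation found"))); // false
```

Note that a plain `Error` whose text happens to contain the phrase is no longer swallowed, which is exactly the failure mode the substring approach has.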
{ suppressToolInvocationErrors: true }explicitly.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/deltas.ts` around lines 72 - 104, The current silent suppression is based on a fragile substring match inside the onError handler; change updateFromUIMessageChunks to accept an explicit boolean option (e.g., suppressToolInvocationErrors) and only allow the suppression logic to run when that option is true. Concretely: add the suppressToolInvocationErrors parameter to the updateFromUIMessageChunks call/site and function signature, update the onError closure to check that suppressToolInvocationErrors is true before setting suppressError = true (keep the existing string check if you must for backward compatibility), and ensure callers that are on the approval-continuation path pass suppressToolInvocationErrors: true while other callers omit it or pass false. This confines suppression to intended call sites and removes the implicit coupling to SDK error text while leaving suppressError and the rest of the flow intact.

src/client/approval-bugs.test.ts (3)
555-576: Unused destructured variable `parentMessageId`.

`parentMessageId` from `saveMessage` on line 555 is never referenced in the test body.

♻️ Remove unused destructuring
```diff
-    const { messageId: parentMessageId } = await t.run(async (ctx) =>
+    await t.run(async (ctx) =>
       throwingAgent.saveMessage(ctx, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/approval-bugs.test.ts` around lines 555 - 576, The test destructures a messageId into parentMessageId from throwingAgent.saveMessage but never uses it; remove the unused variable by not destructuring messageId (call await t.run(async (ctx) => throwingAgent.saveMessage(...)) without capturing messageId) or replace the destructure with a single unused binding (e.g., discard the result) to eliminate the unused parentMessageId variable referenced around throwingAgent.saveMessage in this test.
275-326: Misleading comments on message ordering.

Line 275 says "Second message has CORRECT" but this is the first `saveMessage` call. Line 299 says "First message (older) has WRONG" but this is the second call (chronologically newer). The test logic and assertion are correct, but the comments create confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/approval-bugs.test.ts` around lines 275 - 326, The inline comments around the two testAgent.saveMessage calls are misleading about message order; update the comment above the first saveMessage (which creates the approval-containing message) to say it is the newer message containing "CORRECT" and the approval request, and update the comment above the second saveMessage to say it is the older message containing "WRONG" with the duplicate toolCallId; ensure references to testAgent.saveMessage, toolCallId "duplicate-id", approvalId "dup-approval", and the subsequent _findToolCallInfo assertion remain unchanged.
592-618: Placeholder test doesn't validate anything.

`expect(true).toBe(true)` will always pass. While documenting the TOCTOU issue is valuable, consider converting this to a `test.todo(...)` or at minimum marking it as a documentation-only test with a skip, so it doesn't inflate passing test counts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/approval-bugs.test.ts` around lines 592 - 618, The placeholder test "documents TOCTOU issue - check and write are separate transactions" currently contains a pointless assertion expect(true).toBe(true); replace this with a proper non-passing placeholder by using test.todo("documents TOCTOU issue - check and write are separate transactions") or mark the test to skip via test.skip(...) so it doesn't count as a successful assertion; locate the test inside the "Bug: Race condition with concurrent approvals" describe block and remove or replace the expect(true).toBe(true) line accordingly.

src/react/useUIMessages.ts (2)
247-260: Heavy reliance on `as any` for parts merging is fragile.

The generic `M` only constrains `order`, `stepOrder`, and `status`, so every parts access requires `as any`. This works at runtime but bypasses type safety entirely. Consider tightening the generic constraint to include an optional `parts` field, or introducing a narrower overload for message types that include parts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/react/useUIMessages.ts` around lines 247 - 260, The code in useUIMessages.ts is using unsafe casts ((last as any).parts, (msg as any).parts) because the generic M doesn't declare an optional parts field; tighten the generic constraint or add a narrower overload so TypeScript knows messages may have parts: update the generic type parameter for the function (or the specific overload) to extend { parts?: unknown[] } or create a MessageWithParts<M> type and use it when computing lastParts/msgParts, then replace the as any casts with proper typed access to parts (references: lastParts, msgParts, hasParts in useUIMessages.ts and the generic M used by that function).
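A minimal sketch of the tightened-constraint option suggested above. The shapes here are simplified stand-ins for the real message types, not the library's actual generics:

```typescript
// Stand-in bound: declaring an optional `parts` field on the generic
// constraint lets parts be accessed with full typing, no `as any` casts.
type WithParts = {
  order: number;
  stepOrder: number;
  status: string;
  parts?: { type: string }[];
};

function mergeParts<M extends WithParts>(last: M, msg: M): M {
  const lastParts = last.parts ?? []; // typed access
  const msgParts = msg.parts ?? [];
  return { ...msg, parts: [...lastParts, ...msgParts] };
}

const merged = mergeParts(
  { order: 0, stepOrder: 0, status: "pending", parts: [{ type: "text" }] },
  { order: 0, stepOrder: 1, status: "success", parts: [{ type: "tool-call" }] },
);
console.log(merged.parts?.length); // 2
```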
221-300: Repeated array spreading in `reduce` is quadratic.

Each `[...msgs.slice(0, -1), msg]` copies the entire accumulator. For `n` messages this is O(n²). Since message counts are typically small in chat UIs this is not critical, but the fix is simple: mutate the accumulator in-place via `pop`/`push` (as `combineUIMessages` does in `src/UIMessages.ts`).

♻️ Example: mutate in place
```diff
   if (last.order !== msg.order || last.stepOrder !== msg.stepOrder) {
-    return [...msgs, msg];
+    msgs.push(msg);
+    return msgs;
   }
   ...
   if (lastIsFinalized && !msgIsFinalized) {
     return msgs;
   }
   if (msgIsFinalized && !lastIsFinalized) {
-    return [...msgs.slice(0, -1), msg];
+    msgs.pop();
+    msgs.push(msg);
+    return msgs;
   }
```
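For intuition, the in-place pattern in isolation, with simplified stand-in message objects rather than the real UIMessage type:

```typescript
// Simplified stand-in for a streamed message.
type Msg = { order: number; stepOrder: number; finalized: boolean };

function dedupe(messages: Msg[]): Msg[] {
  return messages.reduce<Msg[]>((msgs, msg) => {
    const last = msgs[msgs.length - 1];
    if (!last || last.order !== msg.order || last.stepOrder !== msg.stepOrder) {
      msgs.push(msg); // append in place instead of [...msgs, msg]
      return msgs;
    }
    if (last.finalized && !msg.finalized) {
      return msgs; // keep the already-finalized version
    }
    // replace the last element in place instead of [...msgs.slice(0, -1), msg]
    msgs.pop();
    msgs.push(msg);
    return msgs;
  }, []);
}

const out = dedupe([
  { order: 0, stepOrder: 0, finalized: false },
  { order: 0, stepOrder: 0, finalized: true },
  { order: 1, stepOrder: 0, finalized: true },
]);
console.log(out.length); // 2
console.log(out[0].finalized); // true
```

Each element is visited once and mutations are O(1), so the pass is linear.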
[...msgs.slice(0, -1), ...]sites.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/react/useUIMessages.ts` around lines 221 - 300, The reduce callback in useUIMessages (the return sorted(...).reduce((msgs, msg) => { ... })) repeatedly constructs new arrays with patterns like [ ...msgs.slice(0, -1), msg ] which makes the algorithm O(n²); change the reduce to mutate the accumulator in-place: when the accumulator is empty push msg instead of returning [msg], when replacing the last element do msgs.pop() followed by msgs.push(msg or merged) instead of returning a new array, and where code currently returns msgs leave it as-is (just return msgs). Update every site that uses [...msgs.slice(0, -1), ...] (and the initial [msg]) inside the reduce callback (referencing variables msgs, msg, last, merged, mergedParts, lastParts, msgParts) to use pop/push mutations and return the same msgs array to achieve linear performance.

src/UIMessages.ts (2)
604-609: Consider making `finalStates` a module-level constant.

`finalStates` is recreated on every call to `createAssistantUIMessage`. Since it's a fixed set, hoisting it to module scope avoids repeated allocations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/UIMessages.ts` around lines 604 - 609, Hoist the fixed Set called finalStates out of createAssistantUIMessage to a module-level constant (e.g., const FINAL_STATES = new Set([...]) ) so it is created once instead of on every call; update createAssistantUIMessage to reference this module-level constant and remove the local finalStates declaration, keeping the Set immutable by using const and an upper-case name to indicate its constant nature.
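A minimal sketch of the hoist. The state names are illustrative, drawn from the tool-state strings discussed elsewhere in this review, and may not match the actual set in `createAssistantUIMessage`:

```typescript
// Allocated once at module scope instead of once per call.
const FINAL_STATES = new Set(["output-available", "output-error", "output-denied"]);

function isFinalState(state: string): boolean {
  return FINAL_STATES.has(state);
}

console.log(isFinalState("output-denied")); // true
console.log(isFinalState("input-streaming")); // false
```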
655-681: `execution-denied` unconditionally overwrites state, unlike approval processing.

Lines 612-652 respect `finalStates` before updating state, but line 664 sets `"output-denied"` unconditionally. This means if a tool-result already set `"output-available"` (because the main handler doesn't parse the `execution-denied` output type), it gets corrected here. If that's the intended design (post-processing as a correction layer), a brief comment explaining the asymmetry would help future readers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/UIMessages.ts` around lines 655 - 681, execution-denied handling unconditionally sets toolCallPart.state = "output-denied", unlike the approval branch which respects finalStates; change the code that processes executionDeniedResults (loop using executionDeniedResults, allParts, ToolUIPart) so you only mutate state if the current part state is not in the same final-state guard used earlier (e.g., check finalStates or the equivalent predicate before assigning "output-denied"), and preserve the existing approval assignment logic; if the unconditional overwrite is intentional, add a short clarifying comment above the state mutation explaining that this pass is a correction layer that must override prior states.

src/client/start.ts (2)
265-272: Mutating `context.messages` in place could surprise callers.

`context.messages` is filtered and then pushed to. If the caller of `startGeneration` retains a reference to the context, they'll see modified messages. This is likely fine since `context` is local to this function's caller, but a brief comment clarifying the mutation intent would help.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/start.ts` around lines 265 - 272, The code mutates context.messages in startGeneration by filtering out empty tool messages which can surprise callers holding a reference; either avoid in-place mutation by replacing the assignment with a new array copy (e.g., const filtered = context.messages.filter(...); use filtered locally) or, if intentional, add a brief inline comment above the context.messages = ... line stating that in-place mutation is deliberate and that startGeneration expects to own/modify the context.messages array; reference the context.messages property and startGeneration function when making this change.
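If the mutation turns out to be surprising in practice, the non-mutating alternative is straightforward. The message shape below is a simplified stand-in for illustration:

```typescript
// Simplified stand-in for a model message.
type Message = { role: string; content: unknown[] };

function withoutEmptyToolMessages(messages: Message[]): Message[] {
  // filter returns a new array, so callers holding the original are unaffected
  return messages.filter(
    (m) => !(m.role === "tool" && m.content.length === 0),
  );
}

const original: Message[] = [
  { role: "user", content: ["hi"] },
  { role: "tool", content: [] },
];
const filtered = withoutEmptyToolMessages(original);
console.log(filtered.length); // 1
console.log(original.length); // 2 (caller's reference unchanged)
```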
375-376: `as any` cast on `finishStreamId` bypasses type safety.

The `addMessages` mutation expects `Id<"streamingMessages">` but the parameter is typed as `string`. While Convex IDs are strings at runtime, consider typing the `save` parameter as the branded ID type to get compile-time safety, or add a brief comment explaining why the cast is needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/start.ts` around lines 375 - 376, The finishStreamId is being cast with "as any", which bypasses type safety for the addMessages mutation that expects Id<\"streamingMessages\">; update the signature where finishStreamId is passed (the save parameter in the function that calls addMessages) to be typed as Id<\"streamingMessages\"> (or optional Id<\"streamingMessages\">) so you pass a correctly branded Convex ID, or if that change is not feasible add a concise comment explaining why a cast is required and replace "as any" with a safer cast to Id<\"streamingMessages\">; ensure references to finishStreamId and the addMessages mutation use the branded type instead of string.

example/ui/chat/ChatApproval.tsx (1)
72-74: `scrollToBottom` fires on every streaming chunk.

`messages` gets a new reference on each streaming update, so `useEffect` triggers constantly during streaming. For an example UI this is fine, but in production you'd want to debounce or only scroll when actually near the bottom.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/ui/chat/ChatApproval.tsx` around lines 72 - 74, The useEffect that calls scrollToBottom on every change of messages causes continual scrolling during streaming because messages receives a new reference for each chunk; update the effect to avoid frequent firing by either debouncing calls to scrollToBottom (wrap scrollToBottom with a short debounce/timer) or gating it with a proximity check (only call scrollToBottom when the viewport is near the bottom) and/or an explicit streaming flag (e.g., check isStreaming or compare previous vs new messages length) so scrollToBottom is only invoked when appropriate; locate the useEffect and scrollToBottom references in ChatApproval to implement the debounce/near-bottom check.

src/client/approval.test.ts (2)
8-11: Unused imports and variables.

`actionGeneric`, `ActionBuilder` (lines 9-10), `v` (line 11), and the derived `DataModel`/`action` (lines 20-21) are never used in the test file.

♻️ Remove unused imports/variables
```diff
-import type { DataModelFromSchemaDefinition } from "convex/server";
-import { actionGeneric } from "convex/server";
-import type { ActionBuilder } from "convex/server";
-import { v } from "convex/values";
 import { defineSchema } from "convex/server";
 ...
 const schema = defineSchema({});
-type DataModel = DataModelFromSchemaDefinition<typeof schema>;
-const action = actionGeneric as ActionBuilder<DataModel, "public">;
```

Also applies to: 20-21
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/approval.test.ts` around lines 8 - 11, Remove the unused imports and variables: delete the unused imports actionGeneric, ActionBuilder, and v from the top import block and remove the unused derived variables DataModel and action (the DataModelFromSchemaDefinition-derived type/variable and the action variable declared around lines 20-21) from the test file; keep only the imports and declarations that are actually referenced by the tests to eliminate lint warnings and dead code.
421-487: Several tests verify structure but not behavior.

"needsApproval receives correct input" (line 422) doesn't invoke the spy. "continuation messages stay in the same order" (line 439) doesn't test continuation. "tool receives correct context fields" (line 471) never executes the tool. These are setup validations, not behavioral tests. Consider renaming them (e.g., "tool is configured with needsApproval") or marking incomplete ones with `test.todo`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/approval.test.ts` around lines 421 - 487, These tests only assert configuration but not behavior—either make them real behavioral tests or mark/rename them as configuration checks/todo. For "needsApproval receives correct input" call the tool (invoke testTool.execute with a valid input matching inputSchema) so needsApprovalSpy is invoked and assert it was called with the expected args; reference needsApprovalSpy and testTool.execute. For "continuation messages stay in the same order" simulate the approval + tool continuation flow after creating the thread (use initConvexTest, createThread, approvalAgent.saveMessage and then trigger the approval path) and then call approvalAgent.listMessages to assert continuation messages preserve/increment stepOrder rather than creating a new order. For "tool receives correct context fields" actually run the tool (call contextCaptureTool.execute via the test runtime or directly) so capturedCtx is populated and assert required fields exist instead of only checking __acceptsCtx; reference contextCaptureTool.execute and capturedCtx. If you prefer not to implement behavior now, rename these tests to "is configured with ..." or replace with test.todo entries.

src/component/messages.ts (2)
1109-1126: `updateApprovalStatus` silently no-ops when message not found.

If the `approvalId` doesn't match any message (e.g., typo, stale ID), the mutation returns `null` without any indication of failure. Consider logging a warning or returning a boolean success indicator so callers can detect this case.

♻️ Add a warning when message not found
```diff
   if (message) {
     await ctx.db.patch(message._id, { approvalStatus: status });
+  } else {
+    console.warn(
+      `updateApprovalStatus: no message found for approvalId ${approvalId}`,
+    );
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/messages.ts` around lines 1109 - 1126, The mutation updateApprovalStatus currently silently no-ops when no message matches approvalId; change its contract to return a boolean and surface a warning on misses: update the mutation's returns from v.null() to v.boolean(), log a warning via ctx.logger.warn (or ctx.log.warn) when message is not found, and return false in that branch; after a successful ctx.db.patch of message._id return true. Ensure references to updateApprovalStatus, ctx.db, message._id, and ctx.logger.warn are updated accordingly.
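A stand-in sketch of the boolean-contract suggestion, with a Map playing the role of the database (the real mutation would query via `ctx.db` and the approval-ID index, and return `v.boolean()`):

```typescript
// Map stands in for the messages table keyed by approvalId.
const approvalStatusById = new Map<string, string>();

function updateApprovalStatus(approvalId: string, status: string): boolean {
  if (!approvalStatusById.has(approvalId)) {
    // Surface the miss instead of silently no-opping.
    console.warn(`updateApprovalStatus: no message found for approvalId ${approvalId}`);
    return false;
  }
  approvalStatusById.set(approvalId, status);
  return true;
}

approvalStatusById.set("appr_1", "pending");
console.log(updateApprovalStatus("appr_1", "approved")); // true
console.log(updateApprovalStatus("missing", "approved")); // false
```

Callers can then branch on the result rather than assuming the patch succeeded.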
348-369: Content merge via `as any` cast is necessary but worth a runtime guard.

The cast at line 361 bypasses TypeScript's role discriminant checks. While the swap/finalize logic above ensures role compatibility, a defensive runtime assertion would catch unexpected states:

🛡️ Add a runtime assertion
```diff
   if (
     Array.isArray(pendingContent) &&
     pendingContent.length > 0 &&
     Array.isArray(messageDoc.message?.content)
   ) {
+    // Roles must match for content merge to be semantically valid
+    assert(
+      pendingMessage.message?.role === messageDoc.message?.role,
+      `Role mismatch during content merge: ${pendingMessage.message?.role} vs ${messageDoc.message?.role}`,
+    );
     (messageDoc.message as any).content = [
       ...pendingContent,
       ...messageDoc.message.content,
     ];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/messages.ts` around lines 348 - 369, The code uses an unsafe cast ((messageDoc.message as any).content) to merge pendingContent into messageDoc.message.content; add a defensive runtime check that the role/discriminant matches before assigning: verify pendingMessage.message?.role === messageDoc.message?.role and that both content arrays are present and of compatible item shapes, and if the check fails log or throw an error and skip the merge; then perform the assignment and update messageDoc.tool = isTool(messageDoc.message) and messageDoc.text = hideFromUserIdSearch ? undefined : extractText(messageDoc.message) only when the guard passes to avoid masking real type mismatches at runtime.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.cursor/rules/convex_rules.mdc:
- Line 437: The project is using Convex v1.31.2 which requires passing the table
name as the first argument to ctx.db methods; update all old-style calls like
ctx.db.get(id), ctx.db.patch(id, ...), ctx.db.replace(id, ...), and
ctx.db.delete(id) to the new form ctx.db.get("tableName", id),
ctx.db.patch("tableName", id, ...), ctx.db.replace("tableName", id, ...), and
ctx.db.delete("tableName", id). Locate usages of ctx.db.get / ctx.db.patch /
ctx.db.replace / ctx.db.delete across src/component/ and either run the
automated codemod `npx @convex-dev/codemod@latest explicit-ids` or enable and run
ESLint rule `@convex-dev/explicit-table-ids` with `--fix` to perform the migration
consistently, then run tests to confirm no regressions.
In `@src/client/saveInputMessages.ts`:
- Around line 43-58: The catch block around the ctx.runQuery call is swallowing
all errors; update the logic in the if (args.promptMessageId && "runQuery" in
ctx) block (where ctx.runQuery(component.messages.getMessagesByIds, {
messageIds: [...] }) is invoked) to only suppress known ID-validation errors and
otherwise re-throw or log the error before falling through: catch the error,
detect whether it is the specific Convex/ID-validation error (by error.name,
error.code, or message content), and if not, re-throw the error (or call a
logger with context including args.promptMessageId and the caught error) so real
network/auth/Convex failures are not silently ignored.
In `@src/fromUIMessages.test.ts`:
- Around line 228-231: The test currently asserts toolMessages[1].stepOrder
after only checking expect(toolMessages.length).toBeGreaterThan(0); update the
guard to ensure there are at least two messages before accessing index 1 — e.g.
replace the length check with expect(toolMessages.length).toBeGreaterThan(1) or
add an explicit conditional that fails with a clear message if
toolMessages.length < 2; target the test that references toolMessages and the
consecutive stepOrder assertion to implement this guard.
---
Outside diff comments:
In `@example/convex/rag/ragAsTools.ts`:
- Around line 17-19: The sendMessage Convex action is missing a returns
validator; update the action definition for sendMessage to include a returns
entry (e.g., returns: v.null() if it doesn’t return a value) alongside args so
it satisfies the Convex validators requirement—modify the action({...}) call
that defines sendMessage to add returns: v.null().
needsApprovalSpy is invoked and assert it was called with the expected args;
reference needsApprovalSpy and testTool.execute. For "continuation messages stay
in the same order" simulate the approval + tool continuation flow after creating
the thread (use initConvexTest, createThread, approvalAgent.saveMessage and then
trigger the approval path) and then call approvalAgent.listMessages to assert
continuation messages preserve/increment stepOrder rather than creating a new
order. For "tool receives correct context fields" actually run the tool (call
contextCaptureTool.execute via the test runtime or directly) so capturedCtx is
populated and assert required fields exist instead of only checking
__acceptsCtx; reference contextCaptureTool.execute and capturedCtx. If you
prefer not to implement behavior now, rename these tests to "is configured with
..." or replace with test.todo entries.
In `@src/client/search.ts`:
- Around line 282-299: The hasApprovalResponse helper is now redundant because
its logic is subsumed by approvalRequestsByToolCallId.has; remove the
hasApprovalResponse function and simplify the content filter in the docs loop to
only keep a tool-call when toolResultIds.has(p.toolCallId) ||
approvalRequestsByToolCallId.has(p.toolCallId); update any references to
hasApprovalResponse accordingly and delete approvalResponseIds-related checks if
they are no longer used elsewhere.
In `@src/client/start.ts`:
- Around line 265-272: The code mutates context.messages in startGeneration by
filtering out empty tool messages which can surprise callers holding a
reference; either avoid in-place mutation by replacing the assignment with a new
array copy (e.g., const filtered = context.messages.filter(...); use filtered
locally) or, if intentional, add a brief inline comment above the
context.messages = ... line stating that in-place mutation is deliberate and
that startGeneration expects to own/modify the context.messages array; reference
the context.messages property and startGeneration function when making this
change.
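A minimal sketch of the copy-instead-of-mutate option, with `ContextMessage` and the emptiness test as simplified stand-ins for the real types in start.ts:

```typescript
type ContextMessage = { role: string; content: unknown[] };

// Returns a new array; callers holding a reference to `messages` are unaffected.
function withoutEmptyToolMessages(
  messages: ContextMessage[],
): ContextMessage[] {
  return messages.filter(
    (m) => !(m.role === "tool" && m.content.length === 0),
  );
}
```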
- Around line 375-376: The finishStreamId is being cast with "as any", which
bypasses type safety for the addMessages mutation that expects
Id<"streamingMessages">; update the signature where finishStreamId is passed
(the save parameter in the function that calls addMessages) to be typed as
Id<"streamingMessages"> (or optional Id<"streamingMessages">) so you pass a
correctly branded Convex ID, or if that change is not feasible add a concise
comment explaining why a cast is required and replace "as any" with a safer cast
to Id<"streamingMessages">; ensure references to finishStreamId and the
addMessages mutation use the branded type instead of string.
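The branded-ID idea can be sketched in plain TypeScript. This `Id` type and the `addMessages` signature are simplified stand-ins for Convex's generated types, not the real API:

```typescript
// Simplified stand-in for Convex's branded Id type: a string tagged with its
// table name at the type level, so "streamingMessages" IDs can't be confused
// with plain strings.
type Id<TableName extends string> = string & { __tableName: TableName };

// Hypothetical signature: accepting the branded type removes any need for
// `as any` at call sites.
function addMessages(finishStreamId?: Id<"streamingMessages">): string {
  return finishStreamId ?? "no-stream";
}

// A raw string needs one explicit, documented cast at the boundary:
const streamId = "s123" as Id<"streamingMessages">;
```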
In `@src/client/types.ts`:
- Around line 96-97: AgentPrompt.model currently uses plain LanguageModel while
Config.languageModel is wrapped with AssertAISDKv6<LanguageModel>, so change the
AgentPrompt.model type to AssertAISDKv6<LanguageModel> (i.e., model?:
AssertAISDKv6<LanguageModel>) to enforce the v6 guard; update imports to ensure
AssertAISDKv6 is available and adjust any call sites passing a model override to
satisfy the new constrained type.
In `@src/component/messages.ts`:
- Around line 1109-1126: The mutation updateApprovalStatus currently silently
no-ops when no message matches approvalId; change its contract to return a
boolean and surface a warning on misses: update the mutation's returns from
v.null() to v.boolean(), log a warning via ctx.logger.warn (or ctx.log.warn)
when message is not found, and return false in that branch; after a successful
ctx.db.patch of message._id return true. Ensure references to
updateApprovalStatus, ctx.db, message._id, and ctx.logger.warn are updated
accordingly.
- Around line 348-369: The code uses an unsafe cast ((messageDoc.message as
any).content) to merge pendingContent into messageDoc.message.content; add a
defensive runtime check that the role/discriminant matches before assigning:
verify pendingMessage.message?.role === messageDoc.message?.role and that both
content arrays are present and of compatible item shapes, and if the check fails
log or throw an error and skip the merge; then perform the assignment and update
messageDoc.tool = isTool(messageDoc.message) and messageDoc.text =
hideFromUserIdSearch ? undefined : extractText(messageDoc.message) only when the
guard passes to avoid masking real type mismatches at runtime.
In `@src/component/schema.ts`:
- Around line 97-101: The new indexes index("by_approvalId", ["approvalId"]) and
index("by_threadId_approvalStatus", ["threadId", "approvalStatus"]) use a by_
prefix that breaks the existing naming convention (e.g.,
threadId_status_tool_order_stepOrder, embeddingId_threadId); rename them to
approvalId and threadId_approvalStatus respectively and update any code that
references index("by_approvalId", ...) or index("by_threadId_approvalStatus",
...) to the new names so naming is consistent across the schema.
In `@src/deltas.ts`:
- Around line 72-104: The current silent suppression is based on a fragile
substring match inside the onError handler; change updateFromUIMessageChunks to
accept an explicit boolean option (e.g., suppressToolInvocationErrors) and only
allow the suppression logic to run when that option is true. Concretely: add the
suppressToolInvocationErrors parameter to the updateFromUIMessageChunks
call/site and function signature, update the onError closure to check that
suppressToolInvocationErrors is true before setting suppressError = true (keep
the existing string check if you must for backward compatibility), and ensure
callers that are on the approval-continuation path pass
suppressToolInvocationErrors: true while other callers omit it or pass false.
This confines suppression to intended call sites and removes the implicit
coupling to SDK error text while leaving suppressError and the rest of the flow
intact.
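One way to model the explicit opt-in is a small factory for the `onError` closure. The option name follows the review comment, while the error-message check and return shape are assumptions rather than the real deltas.ts code:

```typescript
type UpdateOptions = { suppressToolInvocationErrors?: boolean };

function makeOnError(opts: UpdateOptions) {
  let suppressError = false;
  const onError = (err: Error) => {
    // Suppress only when the caller opted in AND the error looks like the
    // known tool-invocation case (keeping the string check for compatibility).
    if (
      opts.suppressToolInvocationErrors &&
      err.message.includes("tool invocation")
    ) {
      suppressError = true;
      return;
    }
    throw err;
  };
  return { onError, wasSuppressed: () => suppressError };
}
```

Callers on the approval-continuation path would construct the handler with `{ suppressToolInvocationErrors: true }`; every other caller gets rethrow behavior by default.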
In `@src/react/useUIMessages.ts`:
- Around line 247-260: The code in useUIMessages.ts is using unsafe casts ((last
as any).parts, (msg as any).parts) because the generic M doesn't declare an
optional parts field; tighten the generic constraint or add a narrower overload
so TypeScript knows messages may have parts: update the generic type parameter
for the function (or the specific overload) to extend { parts?: unknown[] } or
create a MessageWithParts<M> type and use it when computing lastParts/msgParts,
then replace the as any casts with proper typed access to parts (references:
lastParts, msgParts, hasParts in useUIMessages.ts and the generic M used by that
function).
- Around line 221-300: The reduce callback in useUIMessages (the return
sorted(...).reduce((msgs, msg) => { ... })) repeatedly constructs new arrays
with patterns like [ ...msgs.slice(0, -1), msg ] which makes the algorithm
O(n²); change the reduce to mutate the accumulator in-place: when the
accumulator is empty push msg instead of returning [msg], when replacing the
last element do msgs.pop() followed by msgs.push(msg or merged) instead of
returning a new array, and where code currently returns msgs leave it as-is
(just return msgs). Update every site that uses [...msgs.slice(0, -1), ...] (and
the initial [msg]) inside the reduce callback (referencing variables msgs, msg,
last, merged, mergedParts, lastParts, msgParts) to use pop/push mutations and
return the same msgs array to achieve linear performance.
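The pop/push pattern can be sketched on a simplified message shape; `Msg` and the key-based merge rule stand in for the real grouping logic in useUIMessages:

```typescript
type Msg = { key: string; parts: string[] };

// O(n) accumulator: mutate `msgs` with pop/push instead of rebuilding it
// with [...msgs.slice(0, -1), msg] on every step.
function mergeByKey(messages: Msg[]): Msg[] {
  return messages.reduce<Msg[]>((msgs, msg) => {
    const last = msgs[msgs.length - 1];
    if (last && last.key === msg.key) {
      // Replace the last element in place: O(1) instead of copying the array.
      msgs.pop();
      msgs.push({ key: msg.key, parts: [...last.parts, ...msg.parts] });
    } else {
      msgs.push(msg);
    }
    return msgs;
  }, []);
}
```

Mutating the accumulator is safe here because `reduce` creates it fresh and no other code holds a reference to it mid-fold.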
In `@src/UIMessages.ts`:
- Around line 604-609: Hoist the fixed Set called finalStates out of
createAssistantUIMessage to a module-level constant (e.g., const FINAL_STATES =
new Set([...]) ) so it is created once instead of on every call; update
createAssistantUIMessage to reference this module-level constant and remove the
local finalStates declaration, keeping the Set immutable by using const and an
upper-case name to indicate its constant nature.
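A sketch of the hoisted constant; the member states are illustrative, and the real set lives in UIMessages.ts:

```typescript
// Created once at module load instead of on every createAssistantUIMessage call.
const FINAL_STATES: ReadonlySet<string> = new Set([
  "output-available",
  "output-error",
  "output-denied",
]);

function isFinalState(state: string): boolean {
  return FINAL_STATES.has(state);
}
```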
- Around line 655-681: execution-denied handling unconditionally sets
toolCallPart.state = "output-denied", unlike the approval branch which respects
finalStates; change the code that processes executionDeniedResults (loop using
executionDeniedResults, allParts, ToolUIPart) so you only mutate state if the
current part state is not in the same final-state guard used earlier (e.g.,
check finalStates or the equivalent predicate before assigning "output-denied"),
and preserve the existing approval assignment logic; if the unconditional
overwrite is intentional, add a short clarifying comment above the state
mutation explaining that this pass is a correction layer that must override
prior states.
"version": "1.0.0",
"dependencies": {
-   "convex": "^1.17.4",
+   "convex": "^1.31.2",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's find the convex_rules.mdc file and see the current state
find . -name "convex_rules.mdc" -type f
Repository: get-convex/agent
Length of output: 92
🏁 Script executed:
#!/bin/bash
# Find all ctx.db calls in the codebase to assess migration needs
rg -n --type=ts --type=tsx --type=js --type=jsx 'ctx\.db\.(get|patch|replace|delete)\(' \
--glob '!node_modules/**' \
--glob '!.cursor/**' \
-A 2
Repository: get-convex/agent
Length of output: 87
🌐 Web query:
Convex ctx.db API changes version 1.31.0 table name parameter
💡 Result:
In convex npm package v1.31.0, Convex changed several ctx.db methods so they accept the table name as the first argument (the old “infer table from the ID” form is still supported for now, but the table name is now recommended and is expected to be required for future features like custom/bring-your-own IDs). [1] [2]
What changed (table name parameter added)
These methods now take tableName first: db.get, db.patch, db.replace, db.delete. [1]
Old (still works for now):
await ctx.db.get(id)
await ctx.db.patch(id, value)
await ctx.db.replace(id, value)
await ctx.db.delete(id)
New (v1.31.0+):
await ctx.db.get("messages", id)
await ctx.db.patch("messages", id, value)
await ctx.db.replace("messages", id, value)
await ctx.db.delete("messages", id) [1] [2]
Migration help
- ESLint rule: @convex-dev/explicit-table-ids (autofix via npx eslint . --fix) [1]
- Codemod: npx @convex-dev/codemod@latest explicit-ids [1]
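The table-name-first shape can be illustrated with an in-memory stub. `MockDb` is a toy stand-in, not the Convex `ctx.db` implementation (the real `insert` generates its own IDs and takes only a table name and document):

```typescript
type Doc = Record<string, unknown>;

// Minimal in-memory stand-in mimicking the v1.31.0+ call shape, where the
// table name comes first.
class MockDb {
  private tables = new Map<string, Map<string, Doc>>();

  insert(table: string, id: string, doc: Doc): void {
    if (!this.tables.has(table)) this.tables.set(table, new Map());
    this.tables.get(table)!.set(id, doc);
  }

  // New style: ctx.db.get("messages", id)
  get(table: string, id: string): Doc | null {
    return this.tables.get(table)?.get(id) ?? null;
  }

  // New style: ctx.db.patch("messages", id, value) merges fields.
  patch(table: string, id: string, value: Doc): void {
    const existing = this.get(table, id);
    if (!existing) throw new Error(`missing ${table}:${id}`);
    this.tables.get(table)!.set(id, { ...existing, ...value });
  }
}
```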
🏁 Script executed:
#!/bin/bash
# Search for ctx.db calls without specifying file types
rg -n 'ctx\.db\.(get|patch|replace|delete)\(' \
--glob '!node_modules/**' \
--glob '!.cursor/**' \
-A 2
Repository: get-convex/agent
Length of output: 13897
🏁 Script executed:
#!/bin/bash
# Check the current convex version in package.json
cat package.json | grep -A 2 '"convex"'
Repository: get-convex/agent
Length of output: 281
Version bump to ^1.31.2 aligns with Convex's v1.31.0 API changes requiring table names as the first argument to ctx.db methods.
Since version 1.31.0, ctx.db.get(), ctx.db.patch(), ctx.db.replace(), and ctx.db.delete() accept the table name as the first argument. The old API (e.g., ctx.db.get(id)) remains functional for now but will be required for custom ID generation in future versions.
The codebase has 70+ instances of old-style calls throughout src/component/ that need migration to the new format (e.g., ctx.db.get("tableName", id)). Consider using the automated migration tool: npx @convex-dev/codemod@latest explicit-ids or enable the ESLint rule @convex-dev/explicit-table-ids with --fix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.cursor/rules/convex_rules.mdc at line 437, The project is using Convex
v1.31.2 which requires passing the table name as the first argument to ctx.db
methods; update all old-style calls like ctx.db.get(id), ctx.db.patch(id, ...),
ctx.db.replace(id, ...), and ctx.db.delete(id) to the new form
ctx.db.get("tableName", id), ctx.db.patch("tableName", id, ...),
ctx.db.replace("tableName", id, ...), and ctx.db.delete("tableName", id). Locate
usages of ctx.db.get / ctx.db.patch / ctx.db.replace / ctx.db.delete across
src/component/ and either run the automated codemod npx
@convex-dev/codemod@latest explicit-ids or enable and run the ESLint rule
@convex-dev/explicit-table-ids with --fix to perform the migration consistently, then
run tests to confirm no regressions.
if (args.promptMessageId && "runQuery" in ctx) {
  try {
    const [msg] = await ctx.runQuery(component.messages.getMessagesByIds, {
      messageIds: [args.promptMessageId],
    });
    if (msg?.status === "pending") {
      return {
        promptMessageId: args.promptMessageId,
        pendingMessage: msg,
        savedMessages: [],
      };
    }
  } catch {
    // ID validation may fail in test environments with mock IDs — fall through.
  }
}
Bare catch swallows non-ID-validation errors silently.
The comment at line 56 documents that this handles test-environment mock-ID failures, but the catch block also suppresses real errors — network failures, authorization errors, unexpected Convex exceptions — from the runQuery call. This can make production failures silent and very hard to diagnose.
At minimum, re-throw for non-ID-validation errors, or add a warning log before falling through:
🛡️ Proposed fix
} catch (e) {
- // ID validation may fail in test environments with mock IDs — fall through.
+ // ID validation may fail in test environments with mock IDs — fall through.
+ // Log unexpected errors so they are visible in production.
+ const msg = e instanceof Error ? e.message : String(e);
+ if (!msg.includes("Invalid ID") && !msg.includes("validation")) {
+ console.warn("saveInputMessages: unexpected error checking promptMessageId", e);
+ }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/client/saveInputMessages.ts` around lines 43 - 58, The catch block around
the ctx.runQuery call is swallowing all errors; update the logic in the if
(args.promptMessageId && "runQuery" in ctx) block (where
ctx.runQuery(component.messages.getMessagesByIds, { messageIds: [...] }) is
invoked) to only suppress known ID-validation errors and otherwise re-throw or
log the error before falling through: catch the error, detect whether it is the
specific Convex/ID-validation error (by error.name, error.code, or message
content), and if not, re-throw the error (or call a logger with context
including args.promptMessageId and the caught error) so real network/auth/Convex
failures are not silently ignored.
expect(toolMessages.length).toBeGreaterThan(0);
expect(toolMessages[0].stepOrder).toBe(1);
expect(toolMessages[1].stepOrder).toBe(2);
// stepOrders should be consecutive (the base is now the last message's stepOrder)
expect(toolMessages[1].stepOrder).toBe(toolMessages[0].stepOrder + 1);
}
Guard against out-of-bounds access before asserting toolMessages[1].
expect(toolMessages.length).toBeGreaterThan(0) only ensures at least one element. toolMessages[1] will be undefined if exactly one tool message is returned, turning the stepOrder assertion into an opaque TypeError rather than a meaningful assertion failure.
🛡️ Proposed fix
- expect(toolMessages.length).toBeGreaterThan(0);
+ expect(toolMessages.length).toBeGreaterThanOrEqual(2);
// stepOrders should be consecutive (the base is now the last message's stepOrder)
expect(toolMessages[1].stepOrder).toBe(toolMessages[0].stepOrder + 1);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fromUIMessages.test.ts` around lines 228 - 231, The test currently
asserts toolMessages[1].stepOrder after only checking
expect(toolMessages.length).toBeGreaterThan(0); update the guard to ensure there
are at least two messages before accessing index 1 — e.g. replace the length
check with expect(toolMessages.length).toBeGreaterThan(1) or add an explicit
conditional that fails with a clear message if toolMessages.length < 2; target
the test that references toolMessages and the consecutive stepOrder assertion to
implement this guard.

When a tool has needsApproval: true and the user provides an approval
response, generation now automatically continues to execute the approved
tools instead of stopping.
Changes:
in messages when tool calls lack results
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary by CodeRabbit