
Add text streaming to search page #51

Merged
NexWasTaken merged 3 commits into main from feat/text-streaming
Jan 26, 2026

Conversation


@NexWasTaken NexWasTaken commented Jan 23, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Paper summaries now generate and display in real-time using streaming instead of waiting for a complete result.
    • Follow-up questions appear once summary generation completes.
    • Enhanced UI state management during summary generation with improved error handling.
  • Chores

    • Updated backend dependencies to support streaming functionality.


- Changed double quotes to single quotes for package names in pnpm-lock.yaml for consistency.
- Added new dependency "@convex-dev/persistent-text-streaming" version "^0.3.0" in packages/backend/package.json.
…pport

- Replaced the existing summarizePaper action with a new streamSummary httpAction for real-time paper summarization.
- Updated imports to include necessary modules for streaming and chat functionality.
- Enhanced the SearchResults component to utilize the new chat-based summarization, sending messages with paper data and user settings.
- Removed deprecated code related to the previous summarization method to streamline the implementation.
- Updated package.json to change the backend filter to '@workspace/backend' for consistency.
- Added new dependencies including '@ai-sdk/react' and '@convex-dev/persistent-text-streaming' in the web app's package.json.
- Refactored imports in various files to replace 'posthog' with 'analytics' for event capturing.
- Introduced a new summaryStreams table in the backend schema for managing AI-generated summaries.
- Removed deprecated dodoPayments webhook handling code to streamline the backend logic.

coderabbitai Bot commented Jan 23, 2026

📝 Walkthrough

The PR refactors paper summarization from a synchronous query to a streaming chat-based workflow. Frontend now uses a chat transport for real-time streaming; backend replaces the summarizePaper action with a streamSummary HTTP action; analytics module migrates from PostHog imports to a new analytics wrapper; schema adds a summaryStreams table; and AI SDK dependencies are updated to newer versions supporting streaming.

Changes

Cohort / File(s): Summary

  • Frontend Search Client Streaming Integration (apps/web/app/app/(user)/search/client.tsx):
    Replaces useSummarizePaperQuery with useChat and DefaultChatTransport for streaming summaries. Adds CONVEX_SITE_URL configuration, a local hasSentMessage state guard, and derives the summary from assistant messages. Introduces isStreaming, summaryPending, and summaryError states to coordinate with the follow-up questions query.
  • Dependency Updates (apps/web/app/package.json, packages/backend/package.json):
    Adds "@ai-sdk/react" and "@convex-dev/persistent-text-streaming" to web dependencies; updates "@ai-sdk/google" (^1.2.18 → ^3.0.13) and "ai" (^4.3.16 → ^6.0.48) in backend dependencies.
  • Backend Paper Summary Streaming (packages/backend/convex/externalActions/ai/paperSummary.ts):
    Replaces the synchronous summarizePaper action with a streamSummary HTTP action. Parses a JSON body with paper metadata and user settings, uses streamText for streaming responses with CORS headers, and removes credit deduction and event capture logic.
  • Schema and HTTP Routing (packages/backend/convex/schema.ts, packages/backend/convex/http.ts):
    Adds a summaryStreams table (streamId, query, papers, userSummarySettings fields with a byStreamId index). Updates http.ts to wire the streamSummary POST /ai endpoint, adds CORS preflight handling, and simplifies DoDo payment webhook extraction logic.
  • Analytics Module Migration (packages/backend/convex/lib/analytics.ts, packages/backend/convex/externalActions/ai/..., packages/backend/convex/externalActions/semanticScholar/..., packages/backend/convex/folders/..., packages/backend/convex/searches/..., packages/backend/convex/users/..., packages/backend/convex/httpActions/clerk.ts):
    Introduces a new analytics.ts module providing captureEvent and identifyUser functions via the PostHog HTTP API. Migrates 15+ files from importing captureEvent from ../lib/posthog to ../lib/analytics.
  • Dodo Payments Webhook Removal (packages/backend/convex/httpActions/dodoPayments.ts):
    Removes the entire dodoPayments webhook handler file and its associated route configuration.
  • Minor Fixes and Refactoring (apps/web/app/queries/semantic-scholar.ts, apps/web/app/queries/stream.ts, packages/backend/convex/convex.config.ts, packages/backend/convex/dodo.ts, packages/backend/convex/externalActions/ai/studySnapshot.ts, packages/backend/convex/externalActions/semanticScholar/relevantPapers.ts, packages/backend/convex/users/mutations.ts, package.json):
    Adds trailing commas to semantic-scholar calls; creates a new useCreateStreamMutation hook; adjusts convex.config imports; exports dodo.api() destructuring; adds optional chaining in relevantPapers; awaits db.patch in addCredits; updates the dev-backend Turbo filter to @workspace/backend.

Sequence Diagram

sequenceDiagram
    participant Browser as Client (Browser)
    participant Chat as Chat Transport
    participant HTTP as HTTP Action
    participant AI as AI Provider (Google)
    participant DB as Convex DB

    Note over Browser,DB: New Streaming Flow
    
    Browser->>Chat: useSendMessage(papers, query, settings)
    Chat->>HTTP: POST /ai { messages, papers, settings }
    HTTP->>DB: Read stream context
    HTTP->>AI: streamText(query, papers)
    AI-->>HTTP: Stream text chunks
    HTTP-->>Chat: Stream response (CORS enabled)
    Chat-->>Browser: Real-time message updates
    Browser->>Browser: Extract summary from assistant message
    Browser->>HTTP: followUpQuestions query (when summary ready)
    HTTP->>DB: Generate follow-up questions
    HTTP-->>Browser: Display questions

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Clean up code base #48: Modifies the same apps/web/app/app/(user)/search/client.tsx file, indicating concurrent work on the search client component.
  • add specific run commands #31: Updates the same package.json dev-backend script filter, suggesting related tooling or infrastructure changes.
  • PostHog integration #45: Migrates analytics event capture across the same Convex backend files, indicating coordinated telemetry refactoring.

Poem

🐰 Whiskers twitch with streaming delight,
Chat flows through the digital night,
Papers dance in HTTP's embrace,
Real-time summaries light up the space,
No more waiting—responses take flight!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately summarizes the main change: replacing synchronous summary queries with a streaming-based chat workflow on the search page, enabling real-time summary generation.


✨ Finishing touches
  • 📝 Generate docstrings



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
apps/web/app/package.json (1)

13-31: Correct package.json versions—@ai-sdk/react@3.0.50 and ai@6.0.48 do not exist on npm.

  • @ai-sdk/react@3.0.50 is not published; latest is 3.0.41. The 3.0.x line does support React 19 (specific patch ranges: ~19.0.1, ~19.1.2, ^19.2.1).
  • ai@6.0.48 is not published; latest in 6.x is 6.0.6. Note: AI SDK 6 introduced breaking changes; verify against upgrade guidance.
  • @convex-dev/persistent-text-streaming@0.3.0 exists but peer dependencies require manual verification via npm view @convex-dev/persistent-text-streaming@0.3.0 peerDependencies --json.

Update to actual published versions before merging.

apps/web/app/app/(user)/search/client.tsx (1)

313-342: Summary can get stuck on the first query/settings because the hasSentMessage ref never resets and chat messages are never cleared.

hasSentMessage never resets, so new searches (query or filters) or late-loaded user?.summarySettings won't trigger a new summary. Additionally, useChat does not auto-clear the messages array between searches—previous summaries remain visible and prior messages persist in the conversation history sent to the server.

Consider keying the request by query + filters + user settings, detecting state changes via a requestKey, and calling setMessages([]) to clear conversation history when any of these change.

💡 One possible approach
-  const hasSentMessage = useRef(false);
+  const lastRequestKey = useRef<string | null>(null);
+  const requestKey = useMemo(
+    () =>
+      JSON.stringify({
+        q,
+        minimumCitations,
+        openAccess,
+        publicationTypes,
+        fieldsOfStudy,
+        userSummarySettings: user?.summarySettings ?? "",
+      }),
+    [q, minimumCitations, openAccess, publicationTypes, fieldsOfStudy, user?.summarySettings],
+  );

   // Send message when papers are loaded
   useEffect(() => {
-    if (papers && papers.length > 0 && !hasSentMessage.current) {
-      hasSentMessage.current = true;
+    if (!papers?.length || user === undefined) return;
+    if (lastRequestKey.current === requestKey) return;
+    lastRequestKey.current = requestKey;
       sendMessage(
         { text: q },
         {
           body: {
             papers: JSON.stringify(
               papers.map((p) => ({
                 title: p.title,
                 abstract: p.abstract,
                 tldr: p.tldr,
               })),
             ),
             userSummarySettings: user?.summarySettings ?? "",
           },
         },
       );
-    }
-  }, [papers, q, sendMessage, user?.summarySettings]);
+  }, [papers, q, requestKey, sendMessage, user]);

Additionally, call setMessages([]) (available from useChat) when the requestKey changes to avoid sending prior conversation history to the server.

Also applies to: 348-380

🤖 Fix all issues with AI agents
In `@apps/web/app/app/`(user)/search/client.tsx:
- Around line 343-347: The current replacement for CONVEX_SITE_URL uses an
unescaped regex /.cloud$/ which incorrectly treats the first dot as any
character and drops that preceding character; update the call to
process.env.NEXT_PUBLIC_CONVEX_URL!.replace to use an escaped dot pattern
(/\.cloud$/) so only the literal ".cloud" suffix is matched and replaced with
".site" (refer to the CONVEX_SITE_URL constant and the .replace invocation).
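The regex difference is easy to verify in isolation. A quick sketch (the hostnames below are made up; only the pattern matters):

```typescript
// /.cloud$/ treats the leading dot as "any character", so any host ending in
// "cloud" matches and the character before "cloud" is consumed by the match.
const unescaped = /.cloud$/;
const escaped = /\.cloud$/;

// A host ending in "cloud" without a dot is silently mangled:
console.log("https://myappcloud".replace(unescaped, ".site")); // https://myap.site

// The escaped pattern leaves it alone, and rewrites only a literal ".cloud":
console.log("https://myappcloud".replace(escaped, ".site")); // unchanged
console.log("https://myapp.convex.cloud".replace(escaped, ".site")); // https://myapp.convex.site
```

Note that for hosts that genuinely end in `.cloud` the unescaped pattern happens to produce the same output, because `.` also matches a literal dot; the escaped dot simply states the intent and removes the edge case.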

In `@apps/web/app/queries/stream.ts`:
- Around line 1-10: The hook references a non-existent backend function
api.externalActions.ai.paperSummary.createStream which causes TypeScript errors;
fix by either (A) updating useCreateStreamMutation to call the existing endpoint
api.externalActions.ai.paperSummary.streamSummary (replace createStream with
streamSummary and ensure the mutation args/signature match), or (B) implement
and export a createStream function on the backend (paperSummary.ts) under
externalActions.ai.paperSummary so the symbol exists; adjust the mutationFn in
useCreateStreamMutation accordingly to match the chosen endpoint name and its
parameter/return types.

In `@packages/backend/convex/dodo.ts`:
- Around line 6-101: Remove the large commented-out Dodo Payments webhook
scaffold to clean up the file: delete the commented imports and the commented
functions/blocks including validateDodoPaymentsWebhookRequest, dodoHandler
(httpAction block), the http.route/createDodoWebhookHandler block and any
dangling commented calls to internal mutations (e.g.,
internal.webhooks.createPayment, internal.users.mutations.addCredits) so the
file only contains active code; retain nothing of the webhook scaffold in
comments (rely on git history if needed).

In `@packages/backend/convex/externalActions/ai/paperSummary.ts`:
- Around line 68-112: Remove the commented-out legacy action code block to avoid
confusion: delete the commented definitions and references for
summarizePapersCache, summarizePaper, and summarizePaperInternal (the entire
commented-out action/cache implementations and their handlers) so only the
active implementation remains; ensure no leftover commented stubs for
PAPER_SUMMARY_CREDITS, captureEvent, or internal.users mutation calls remain in
this file to prevent drift.
- Around line 38-45: The code in paperSummary.ts currently reads message content
from body.messages[].content which fails for AI SDK v6 where messages use a
parts array; update the extraction for messages, lastUserMessage and query to
pull text from message.parts (e.g., join parts into a single string) instead of
using .content so query correctly reflects the user's last message; ensure you
handle missing parts/null safely and preserve the existing type expectations for
messages, lastUserMessage and query.
- Around line 54-66: The endpoint currently calls streamText and returns
toUIMessageStreamResponse without enforcing authentication or charging credits;
restore the previous guard by validating the user and credits before invoking
the model: call the auth check used elsewhere (e.g., checkAuth or the same auth
helper used by summarizePaperInternal) to get the userId, then call
checkCredits(userId, estimatedCost) and if sufficient call deductCredits(userId,
cost) (or perform atomic reserve) before building the prompt and invoking
streamText (MODEL: MODELS.PAPER_SUMMARY, PROMPT: SUMMARIZE_PAPER_PROMPT). If any
check fails, return an appropriate error response and do not call streamText;
mirror the error/rollback behavior from summarizePaperInternal and ensure to
still return the result via result.toUIMessageStreamResponse with the same
headers on success.

In `@packages/backend/convex/http.ts`:
- Around line 114-137: In the OPTIONS handler for the "/ai" route (the
http.route with method "OPTIONS" and handler httpAction), replace the hard-coded
"Access-Control-Allow-Headers" value ("Content-Type, Digest") with the actual
value from the incoming "Access-Control-Request-Headers" header: read
request.headers.get("Access-Control-Request-Headers") (it's already checked for
non-null) and set that value into the Response headers for
"Access-Control-Allow-Headers" so the preflight echoes the requested headers
instead of using a fixed list.
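A sketch of the echo behavior using the standard Request/Response types (the wildcard origin, 204 status, and max-age are placeholders; the real route may use different values):

```typescript
// Preflight handler that echoes Access-Control-Request-Headers back instead
// of a hard-coded "Content-Type, Digest" list.
function buildPreflightResponse(request: Request): Response {
  const requestedHeaders = request.headers.get(
    "Access-Control-Request-Headers",
  );
  if (requestedHeaders === null) {
    // Not a CORS preflight; nothing to allow.
    return new Response(null, { status: 204 });
  }
  return new Response(null, {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "POST",
      // Echo whatever the browser asked for.
      "Access-Control-Allow-Headers": requestedHeaders,
      "Access-Control-Max-Age": "86400",
    },
  });
}
```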

In `@packages/backend/convex/lib/analytics.ts`:
- Around line 46-57: The payload currently sets timestamp twice (once inside
properties and once at the root) which causes the properties timestamp to be
treated as a custom property; in the JSON.stringify call that builds the PostHog
payload (where apiKey, eventName, distinctId and properties are used), remove
the timestamp field from the properties object and only keep timestamp at the
root level; specifically update the object constructed in the body so properties
spreads ...properties and $lib remain but do not include timestamp, and set
timestamp only once at the top-level timestamp: new Date().toISOString().
- Around line 41-57: The fetch call in analytics.ts that posts to
`${host}/capture/` currently awaits the promise but never consumes the Response,
which can leak resources; update the block (the POST using variables apiKey,
eventName, distinctId, properties) to capture the Response (e.g., const res =
await fetch(...)) and then consume the body minimally (for example call
res.text() or res.arrayBuffer() and .catch(() => {}) or check res.ok and then
consume) so the response stream is drained even when this is effectively
fire-and-forget.
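Both analytics fixes together, sketched with stand-in names (the real module's variable names and its $lib value may differ):

```typescript
// Build the PostHog /capture/ payload with a single root-level timestamp;
// properties carries only the custom fields plus $lib.
function buildCapturePayload(
  apiKey: string,
  eventName: string,
  distinctId: string,
  properties: Record<string, unknown>,
): string {
  return JSON.stringify({
    api_key: apiKey,
    event: eventName,
    distinct_id: distinctId,
    properties: { ...properties, $lib: "convex" }, // no timestamp in here
    timestamp: new Date().toISOString(), // root level only
  });
}

// Fire-and-forget POST that still drains the response body so the
// underlying stream/socket is released.
async function postCapture(host: string, body: string): Promise<void> {
  const res = await fetch(`${host}/capture/`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
  await res.text().catch(() => {});
}
```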

In `@packages/backend/convex/schema.ts`:
- Around line 5-10: The summaryStreams table lacks retention and ownership
metadata; update the defineTable call for summaryStreams to add a createdAt
timestamp field (e.g., createdAt using the project’s timestamp validator) to
enable TTL-based cleanup and add an optional userId field to associate streams
with users for access control; also add appropriate indexes (e.g., byCreatedAt
and/or byUserId) so background cleanup jobs or queries can efficiently find old
or user-scoped streams.
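A sketch of the table definition with those additions. The field validators are assumptions based on the PR summary (e.g. `papers` as a serialized string); adjust them to the real stream payload:

```typescript
import { defineTable } from "convex/server";
import { v } from "convex/values";

export const summaryStreams = defineTable({
  streamId: v.string(),
  query: v.string(),
  papers: v.string(), // serialized paper metadata (assumed shape)
  userSummarySettings: v.string(),
  createdAt: v.number(), // epoch millis; lets a cron delete old streams
  userId: v.optional(v.string()), // owner, for access control
})
  .index("byStreamId", ["streamId"])
  .index("byCreatedAt", ["createdAt"])
  .index("byUserId", ["userId"]);
```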

In `@packages/backend/package.json`:
- Around line 10-19: Replace deprecated ai v5 APIs: in
packages/backend/convex/externalActions/ai/suggestedQuery.ts and
studySnapshot.ts swap generateObject calls for generateText and provide the new
output parameter shape (map previous object schema into output), and replace any
streamObject usages (e.g., in paperSummary.ts) with streamText using output
similarly; update any places importing or referencing CoreMessage and
convertToModelMessages to the v6 shape by removing CoreMessage types and calling
await convertToModelMessages(...) since it is now async (update callers to
await); run the official codemod npx `@ai-sdk/codemod` v6 and then verify/adjust
model imports in _models.ts to the new named exports and changed types.

Comment on lines +343 to +347
// Set up streaming chat for summary
const CONVEX_SITE_URL = process.env.NEXT_PUBLIC_CONVEX_URL!.replace(
/.cloud$/,
".site",
);

⚠️ Potential issue | 🟠 Major

Fix the .cloud replacement regex (the dot is unescaped).

/.cloud$/ matches any character followed by cloud, so a hostname ending in cloud without a dot loses its preceding character (e.g., myappcloud → myap.site); for hosts that do end in .cloud the unescaped dot happens to match the literal dot, but the pattern is fragile and obscures the intent. Use an escaped dot.

🐛 Suggested fix
   const CONVEX_SITE_URL = process.env.NEXT_PUBLIC_CONVEX_URL!.replace(
-    /.cloud$/,
+    /\.cloud$/,
     ".site",
   );
🤖 Prompt for AI Agents
In `@apps/web/app/app/`(user)/search/client.tsx around lines 343 - 347, The
current replacement for CONVEX_SITE_URL uses an unescaped regex /.cloud$/ which
incorrectly treats the first dot as any character and drops that preceding
character; update the call to process.env.NEXT_PUBLIC_CONVEX_URL!.replace to use
an escaped dot pattern (/\.cloud$/) so only the literal ".cloud" suffix is
matched and replaced with ".site" (refer to the CONVEX_SITE_URL constant and the
.replace invocation).

Comment on lines +1 to +10
import { useConvexMutation } from "@convex-dev/react-query";
import { useMutation } from "@tanstack/react-query";
import { api } from "@workspace/backend/_generated/api";

export const useCreateStreamMutation = () =>
  useMutation({
    mutationFn: useConvexMutation(
      api.externalActions.ai.paperSummary.createStream,
    ),
  });

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Search for createStream in the backend
rg -n "createStream" packages/backend --type ts --type tsx

Repository: QuriousAI/qurious

Length of output: 88


🏁 Script executed:

# Find the backend structure and api definitions
fd -type f "\.ts$" packages/backend | grep -E "(api|action|stream)" | head -20


Length of output: 231


🏁 Script executed:

# Look for externalActions or ai.paperSummary references
rg -n "paperSummary\|externalActions" packages/backend --type ts --type tsx


Length of output: 88


🏁 Script executed:

# Search for createStream in backend (correct rg syntax)
rg "createStream" packages/backend -t ts


Length of output: 43


🏁 Script executed:

# Find TypeScript files in backend that might contain API definitions
fd "\.ts$" packages/backend | head -30


Length of output: 1557


🏁 Script executed:

# Search for paperSummary and externalActions
rg "paperSummary|externalActions" packages/backend -t ts


Length of output: 4792


🏁 Script executed:

cat -n packages/backend/convex/externalActions/ai/paperSummary.ts


Length of output: 4152


🏁 Script executed:

rg "useCreateStreamMutation" apps/web -t ts


Length of output: 136


🏁 Script executed:

# Also check if there are other stream-related actions/functions being defined
rg "stream" packages/backend/convex/externalActions/ai/paperSummary.ts -i


Length of output: 341


createStream does not exist in the Convex backend—this will cause TypeScript compilation failure.

The hook targets api.externalActions.ai.paperSummary.createStream, but the paperSummary.ts file only exports streamSummary (an HTTP action). The createStream function is not defined anywhere in the backend. Either add the missing createStream function or update this hook to target the correct endpoint.

🤖 Prompt for AI Agents
In `@apps/web/app/queries/stream.ts` around lines 1 - 10, The hook references a
non-existent backend function api.externalActions.ai.paperSummary.createStream
which causes TypeScript errors; fix by either (A) updating
useCreateStreamMutation to call the existing endpoint
api.externalActions.ai.paperSummary.streamSummary (replace createStream with
streamSummary and ensure the mutation args/signature match), or (B) implement
and export a createStream function on the backend (paperSummary.ts) under
externalActions.ai.paperSummary so the symbol exists; adjust the mutationFn in
useCreateStreamMutation accordingly to match the chosen endpoint name and its
parameter/return types.

Comment on lines +6 to +101
// import { Webhook as standardWebhook } from "standardwebhooks";
import { httpAction } from "./_generated/server";

// import type { WebhookEvent as DodoPaymentsWebhookEvent } from "../types/dodopayments";
// import { envVariables } from "../env";

// async function validateDodoPaymentsWebhookRequest(req: Request) {
// const dodoPaymentsStandardWebhook = new standardWebhook(
// envVariables.DODO_PAYMENTS_WEBHOOK_SECRET
// );

// const payloadString = await req.text();
// const webhookHeaders = {
// "webhook-id": req.headers.get("webhook-id") || "",
// "webhook-timestamp": req.headers.get("webhook-timestamp") || "",
// "webhook-signature": req.headers.get("webhook-signature") || "",
// };

// let event = null;

// try {
// event = dodoPaymentsStandardWebhook.verify(
// payloadString,
// webhookHeaders
// ) as DodoPaymentsWebhookEvent;
// } catch (error) {
// console.error("Error verifying dodo payments webhook event", error);
// }

// return event;
// }

// export const dodoHandler = httpAction(async (ctx, request) => {
// const event = await validateDodoPaymentsWebhookRequest(request);
// if (event === null) {
// return new Response("Error occurred while validating webhook", {
// status: 400,
// });
// }

// switch (event.type) {
// case "subscription.active":
// await ctx.runMutation(
// internal.subscriptions.mutations.createNewSubscription,
// {
// payload: event.data,
// }
// );
// break;
// default:
// console.warn(`⚠️ Ignored Dodo Payments webhook event: ${event.type}`);
// }

// return new Response("OK", { status: 200 });
// });

// http.route({
// path: "/dodopayments-webhook",
// method: "POST",
// handler: createDodoWebhookHandler({
// // Handle successful payments
// onPaymentSucceeded: async (ctx, payload) => {
// console.log("🎉 Payment Succeeded!");
// // Use Convex context to persist payment data
// await ctx.runMutation(internal.webhooks.createPayment, {
// paymentId: payload.data.payment_id,
// businessId: payload.business_id,
// customerEmail: payload.data.customer.email,
// amount: payload.data.total_amount,
// currency: payload.data.currency,
// status: payload.data.status,
// webhookPayload: JSON.stringify(payload),
// });

// await ctx.runMutation(internal.users.mutations.addCredits, {

// })
// },

// // // Handle subscription activation
// // onSubscriptionActive: async (ctx, payload) => {
// // console.log("🎉 Subscription Activated!");
// // // Use Convex context to persist subscription data
// // await ctx.runMutation(internal.webhooks.createSubscription, {
// // subscriptionId: payload.data.subscription_id,
// // businessId: payload.business_id,
// // customerEmail: payload.data.customer.email,
// // status: payload.data.status,
// // webhookPayload: JSON.stringify(payload),
// // });
// // },
// // Add other event handlers as needed
// }),
// });

// export default http;

🧹 Nitpick | 🔵 Trivial

Consider removing the commented-out webhook scaffold.

Keeping large commented blocks makes the file harder to scan; git history can preserve this if needed.

🤖 Prompt for AI Agents
In `@packages/backend/convex/dodo.ts` around lines 6 - 101, Remove the large
commented-out Dodo Payments webhook scaffold to clean up the file: delete the
commented imports and the commented functions/blocks including
validateDodoPaymentsWebhookRequest, dodoHandler (httpAction block), the
http.route/createDodoWebhookHandler block and any dangling commented calls to
internal mutations (e.g., internal.webhooks.createPayment,
internal.users.mutations.addCredits) so the file only contains active code;
retain nothing of the webhook scaffold in comments (rely on git history if
needed).

Comment on lines +38 to +45
// AI SDK useChat sends: { messages: Message[], id: string, ...body }
const messages = body.messages as Array<{ role: string; content: string }>;
const papers = body.papers as string;
const userSummarySettings = (body.userSummarySettings as string) ?? "";

const prompt = SUMMARIZE_PAPER_PROMPT(
args.query,
JSON.stringify(args.papers),
args.userSummarySettings,
);
// Extract query from the last user message
const lastUserMessage = messages?.filter((m) => m.role === "user").pop();
const query = lastUserMessage?.content ?? "";

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

AI SDK 6 DefaultChatTransport message format content parts structure

💡 Result:

In AI SDK v6, DefaultChatTransport (used by useChat) sends UIMessage[] to your /api/chat endpoint. Each message’s content is in message.parts (not a single content string). (ai-sdk.dev)

Request body (what your API should expect)

By default, the transport prepares a request that includes the chat id, the messages array, and metadata about why the request is happening (submit vs regenerate). (ai-sdk.dev)

A typical shape is:

{
  "id": "chat-id",
  "messages": [ /* UIMessage[] */ ],
  "trigger": "submit-message" | "regenerate-message",
  "messageId": "optional-message-id"
}

(trigger / messageId are part of the request preparation contract in useChat.) (ai-sdk.dev)

UIMessage structure

type UIMessage = {
  id: string;
  role: 'system' | 'user' | 'assistant';
  metadata?: unknown;
  parts: UIMessagePart[];
}

(ai-sdk.dev)

parts (“content parts”) structure (v6)

Common built-in part types include: (ai-sdk.dev)

Text

{ type: 'text'; text: string; state?: 'streaming' | 'done' }

Reasoning (if enabled by your stream/agent)

{ type: 'reasoning'; text: string; state?: 'streaming' | 'done'; providerMetadata?: Record<string, any> }

Tool parts (typed by tool name)

Tool parts are named tool-${toolName} and carry a toolCallId plus a state machine (input-streaming → input-available → output-available or output-error) with input, output, and/or errorText. (ai-sdk.dev)

Files (for UI rendering)

{ type: 'file'; mediaType: string; url: string; filename?: string }

Custom data parts

{ type: `data-${name}`; data: any; id?: string }

Sources

{ type: 'source-url'; sourceId: string; url: string; title?: string; providerMetadata?: Record<string, any> }
{ type: 'source-document'; sourceId: string; mediaType: string; title: string; filename?: string; providerMetadata?: Record<string, any> }

Step boundary

{ type: 'step-start' }

(ai-sdk.dev)



🏁 Script executed:

# Find package.json to determine AI SDK version
find . -name "package.json" -type f | head -20


Length of output: 178


🏁 Script executed:

# Explore the file structure around the mentioned file
ls -la packages/backend/convex/externalActions/ai/ 2>/dev/null || echo "Path check needed"


Length of output: 497


🏁 Script executed:

# Search for useChat references in the codebase
rg "useChat" --type ts --type tsx -B 2 -A 2 | head -50


Length of output: 88


🏁 Script executed:

# Check the actual implementation of paperSummary.ts
cat packages/backend/convex/externalActions/ai/paperSummary.ts 2>/dev/null | head -100


Length of output: 3046


Fix message extraction to handle AI SDK v6's parts structure; currently query will always be empty.

AI SDK v6's DefaultChatTransport sends messages with parts array (not content). The current code assumes a content field that doesn't exist, causing query to be an empty string and summaries to ignore the user's input. Extract text from message parts instead.

💡 Fix message extraction
-  const messages = body.messages as Array<{ role: string; content: string }>;
+  const messages = Array.isArray(body.messages) ? body.messages : [];
-  const papers = body.papers as string;
+  const papers = typeof body.papers === "string" ? body.papers : "";
   const userSummarySettings = (body.userSummarySettings as string) ?? "";

   // Extract query from the last user message
-  const lastUserMessage = messages?.filter((m) => m.role === "user").pop();
-  const query = lastUserMessage?.content ?? "";
+  const lastUserMessage = messages.filter((m) => m.role === "user").pop();
+  const query =
+    (lastUserMessage?.parts ?? [])
+      .filter((p: any) => p.type === "text")
+      .map((p: any) => p.text)
+      .join("") || "";
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  // AI SDK useChat sends: { messages: Message[], id: string, ...body }
-  const messages = body.messages as Array<{ role: string; content: string }>;
-  const papers = body.papers as string;
-  const userSummarySettings = (body.userSummarySettings as string) ?? "";
-
-  const prompt = SUMMARIZE_PAPER_PROMPT(
-    args.query,
-    JSON.stringify(args.papers),
-    args.userSummarySettings,
-  );
-  // Extract query from the last user message
-  const lastUserMessage = messages?.filter((m) => m.role === "user").pop();
-  const query = lastUserMessage?.content ?? "";
+  // AI SDK useChat sends: { messages: Message[], id: string, ...body }
+  const messages = Array.isArray(body.messages) ? body.messages : [];
+  const papers = typeof body.papers === "string" ? body.papers : "";
+  const userSummarySettings = (body.userSummarySettings as string) ?? "";
+
+  // Extract query from the last user message
+  const lastUserMessage = messages.filter((m) => m.role === "user").pop();
+  const query =
+    (lastUserMessage?.parts ?? [])
+      .filter((p: any) => p.type === "text")
+      .map((p: any) => p.text)
+      .join("") || "";
🤖 Prompt for AI Agents
In `@packages/backend/convex/externalActions/ai/paperSummary.ts` around lines 38-45: the code currently reads message content from
body.messages[].content which fails for AI SDK v6 where messages use a parts
array; update the extraction for messages, lastUserMessage and query to pull
text from message.parts (e.g., join parts into a single string) instead of using
.content so query correctly reflects the user's last message; ensure you handle
missing parts/null safely and preserve the existing type expectations for
messages, lastUserMessage and query.
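The parts-based extraction the review proposes can be exercised in isolation. The sketch below is a minimal, self-contained version; the `UiMessage`/`UiPart` shapes are assumptions mirroring AI SDK v6's `UIMessage` parts array, not types from this repository.

```typescript
// Minimal sketch of parts-based query extraction (assumed message shapes,
// mirroring AI SDK v6's UIMessage with a `parts` array instead of `content`).
type UiPart = { type: string; text?: string };
type UiMessage = { role: string; parts?: UiPart[] };

function extractQuery(messages: UiMessage[]): string {
  // Take the last user message and concatenate its text parts
  const lastUser = messages.filter((m) => m.role === "user").pop();
  return (lastUser?.parts ?? [])
    .filter((p) => p.type === "text")
    .map((p) => p.text ?? "")
    .join("");
}
```

Messages without any text parts (or an empty message list) yield an empty string rather than throwing, matching the null-safe handling the review asks for.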

Comment on lines +54 to +66
```diff
 const prompt = SUMMARIZE_PAPER_PROMPT(query, papers, userSummarySettings);

 // Track paper summarization event
 await captureEvent(ctx, "paper_summarized", {
   query: args.query,
   paperCount: Array.isArray(args.papers) ? args.papers.length : 1,
   creditsUsed: PAPER_SUMMARY_CREDITS,
 });
 const result = streamText({
   model: MODELS.PAPER_SUMMARY,
   prompt,
 });

-const { text } = result;
-return text;
-},
+return result.toUIMessageStreamResponse({
+  headers: {
+    "Access-Control-Allow-Origin": "*",
+    Vary: "Origin",
+  },
+});
```
Contributor
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (repository: QuriousAI/qurious):

```shell
cd packages/backend && find . -name "paperSummary.ts" -o -name "paperSummary.ts" | head -5
cd packages/backend && find . -type f -name "*paperSummary*"
cd packages/backend && fd "paperSummary" --type f
find . -path "*/convex/externalActions/ai/paperSummary.ts" 2>/dev/null
ls -la packages/backend/convex/externalActions/ai/
cat -n packages/backend/convex/externalActions/ai/paperSummary.ts
rg "httpAction" packages/backend/convex --type ts -A 5 | head -100
rg "ctx\.auth\.getUserIdentity" packages/backend/convex --type ts -B 3 -A 1
cat -n packages/backend/convex/http.ts
cat packages/backend/convex/_generated/server.d.ts | grep -A 30 "HttpActionBuilder\|HttpActionCtx"
rg "export.*interface.*HttpAction" packages/backend/convex --type ts -A 5
rg "httpAction.*auth\|httpAction.*identity" packages/backend/convex --type ts -B 2 -A 5
rg "httpAction" packages/backend/convex -B 2 -A 15 | grep -A 15 "handler.*async"
grep -r "ctx\." packages/backend/convex/externalActions/ai/paperSummary.ts
rg "Authorization\|Bearer\|token" packages/backend/convex/externalActions --type ts -B 2 -A 2
rg "checkCredits\|deductCredits" packages/backend/convex --type ts -B 3 -A 3
rg "streamSummary\|/ai" packages/frontend --type ts --type tsx -B 2 -A 5 | head -60
rg "Authorization.*Bearer\|Bearer.*token" packages --type ts --type tsx | head -20
find packages/frontend -type f \( -name "*.ts" -o -name "*.tsx" \) | xargs rg "streamSummary\|/ai" -B 2 -A 5 2>/dev/null | head -80
find packages -type f -name "*.ts" | xargs rg "Authorization.*Bearer" 2>/dev/null | head -20
find . -type d -name "frontend" -o -type d -name "web" -o -type d -name "app" | head -10
git ls-files | grep -i "streamSummary\|/ai" | head -20
cat -n apps/web/app/queries/ai.ts
rg "summarizePaper\|streamSummary" packages/backend/convex/_generated/api.d.ts -B 2 -A 2
grep -n "paperSummary\|summarizePaper\|streamSummary" packages/backend/convex/_generated/api.d.ts
sed -n '17p; 62p' packages/backend/convex/_generated/api.d.ts && rg "externalActions_ai_paperSummary" packages/backend/convex/_generated/api.d.ts -A 5
grep -A 20 "typeof externalActions_ai_paperSummary" packages/backend/convex/_generated/api.d.ts

python3 << 'EOF'
import subprocess
import json

# Get the exports from the generated API
result = subprocess.run(
    ["cat", "packages/backend/convex/externalActions/ai/paperSummary.ts"],
    capture_output=True,
    text=True
)

content = result.stdout
# Find all export statements
exports = [line for line in content.split('\n') if 'export const' in line or 'export' in line and 'const' in line]

for exp in exports[:10]:
    print(exp.strip())
EOF

grep "^export" packages/backend/convex/externalActions/ai/paperSummary.ts
rg "PAPER_SUMMARY_CREDITS" packages/backend/convex --type ts -B 2 -A 2
grep -r "credits\|billing" packages/backend/convex/users/mutations.ts | head -20
```

🌐 Web query:

Convex httpAction authentication: can you access ctx.auth.getUserIdentity() in httpAction handlers?

💡 Result:

Yes. In Convex HTTP Actions you can call await ctx.auth.getUserIdentity() inside an httpAction handler, as long as the request includes an Authorization: Bearer <JWT> header. Convex's HTTP Actions docs explicitly describe this pattern for authenticating requests to an HTTP endpoint. [1]

One important difference vs queries/mutations/actions: ctx.auth.getUserIdentity() throws in HTTP Actions when there's no valid identity (instead of returning null). [2]

Sources:
[1] Convex docs — HTTP Actions (Authentication section) (docs.convex.dev)
[2] Convex API reference — Auth.getUserIdentity() behavior (throws on HTTP Actions) (docs.convex.dev)

Reintroduce auth and credit enforcement before streaming.

The streamSummary endpoint now streams without authentication or credit deduction, making it publicly callable and completely bypassing the checkCredits/deductCredits flow. Since credits are required for paper summarization, this is a billing regression and security issue.

Add authentication check and credit deduction before the model call:

Guard pattern
```diff
 export const streamSummary = httpAction(async (ctx, request) => {
   console.log("Request received");
   const body = await request.json();
+
+  const identity = await ctx.auth.getUserIdentity();
+  if (!identity) {
+    return new Response("Unauthorized", { status: 401 });
+  }
+
+  await ctx.runMutation(internal.users.mutations.checkCredits, {
+    amount: PAPER_SUMMARY_CREDITS,
+  });
+  await ctx.runMutation(internal.users.mutations.deductCredits, {
+    amount: PAPER_SUMMARY_CREDITS,
+  });
   const messages = body.messages as Array<{ role: string; content: string }>;
```

Note: The commented-out summarizePaperInternal (lines 87–112) shows the previous credit-enforcing pattern.

🤖 Prompt for AI Agents
In `@packages/backend/convex/externalActions/ai/paperSummary.ts` around lines 54-66: the endpoint currently calls streamText and returns
toUIMessageStreamResponse without enforcing authentication or charging credits;
restore the previous guard by validating the user and credits before invoking
the model: call the auth check used elsewhere (e.g., checkAuth or the same auth
helper used by summarizePaperInternal) to get the userId, then call
checkCredits(userId, estimatedCost) and if sufficient call deductCredits(userId,
cost) (or perform atomic reserve) before building the prompt and invoking
streamText (MODEL: MODELS.PAPER_SUMMARY, PROMPT: SUMMARIZE_PAPER_PROMPT). If any
check fails, return an appropriate error response and do not call streamText;
mirror the error/rollback behavior from summarizePaperInternal and ensure to
still return the result via result.toUIMessageStreamResponse with the same
headers on success.

Comment on lines +114 to +137
```ts
http.route({
  path: "/ai",
  method: "OPTIONS",
  handler: httpAction(async (_, request) => {
    // Make sure the necessary headers are present
    // for this to be a valid pre-flight request
    const headers = request.headers;
    if (
      headers.get("Origin") !== null &&
      headers.get("Access-Control-Request-Method") !== null &&
      headers.get("Access-Control-Request-Headers") !== null
    ) {
      return new Response(null, {
        headers: new Headers({
          "Access-Control-Allow-Origin": "*",
          "Access-Control-Allow-Methods": "POST",
          "Access-Control-Allow-Headers": "Content-Type, Digest",
          "Access-Control-Max-Age": "86400",
        }),
      });
    } else {
      return new Response();
    }
  }),
```
🧹 Nitpick | 🔵 Trivial

Echo requested headers in preflight to avoid future CORS breaks.

The fixed "Content-Type, Digest" list can cause unexpected failures when the client adds headers later. Reflecting Access-Control-Request-Headers is safer.

♻️ Suggested tweak
```diff
   const headers = request.headers;
   if (
     headers.get("Origin") !== null &&
     headers.get("Access-Control-Request-Method") !== null &&
     headers.get("Access-Control-Request-Headers") !== null
   ) {
+    const requested = headers.get("Access-Control-Request-Headers");
     return new Response(null, {
       headers: new Headers({
         "Access-Control-Allow-Origin": "*",
         "Access-Control-Allow-Methods": "POST",
-        "Access-Control-Allow-Headers": "Content-Type, Digest",
+        "Access-Control-Allow-Headers": requested ?? "Content-Type",
+        Vary: "Origin, Access-Control-Request-Headers",
         "Access-Control-Max-Age": "86400",
       }),
     });
   }
```
🤖 Prompt for AI Agents
In `@packages/backend/convex/http.ts` around lines 114-137: in the OPTIONS
handler for the "/ai" route (the http.route with method "OPTIONS" and handler
httpAction), replace the hard-coded "Access-Control-Allow-Headers" value
("Content-Type, Digest") with the actual value from the incoming
"Access-Control-Request-Headers" header: read
request.headers.get("Access-Control-Request-Headers") (it's already checked for
non-null) and set that value into the Response headers for
"Access-Control-Allow-Headers" so the preflight echoes the requested headers
instead of using a fixed list.
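The echo logic above can be factored into a small pure function and tested without a server. This is an illustrative sketch, not code from the PR: `buildPreflightHeaders` is a hypothetical helper name, and it relies on the web-standard `Headers` API (a global in Node 18+).

```typescript
// Hypothetical helper (not in the PR): builds preflight response headers,
// echoing the requested headers instead of a fixed allow-list.
function buildPreflightHeaders(requestHeaders: Headers): Headers {
  const requested = requestHeaders.get("Access-Control-Request-Headers");
  return new Headers({
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "POST",
    // Echo whatever the client asked for; fall back to a safe default
    "Access-Control-Allow-Headers": requested ?? "Content-Type",
    // Vary on the request header so caches keep per-client responses apart
    Vary: "Origin, Access-Control-Request-Headers",
    "Access-Control-Max-Age": "86400",
  });
}
```

A client that later adds a custom header (say `X-Request-Id`) then passes preflight automatically, since the allow-list mirrors `Access-Control-Request-Headers` rather than a hard-coded pair.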

Comment on lines +41 to +57
```ts
await fetch(`${host}/capture/`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    api_key: apiKey,
    event: eventName,
    distinct_id: distinctId,
    properties: {
      ...properties,
      timestamp: new Date().toISOString(),
      $lib: "convex",
    },
    timestamp: new Date().toISOString(),
  }),
});
```
🧹 Nitpick | 🔵 Trivial

Response not consumed from fetch call.

The fetch call creates a response promise that is never consumed. While fire-and-forget is intentional here, not reading the response body can cause resource leaks in some runtimes. Consider consuming the response minimally.

Proposed improvement
```diff
-    await fetch(`${host}/capture/`, {
+    const response = await fetch(`${host}/capture/`, {
       method: "POST",
       headers: {
         "Content-Type": "application/json",
       },
       body: JSON.stringify({
         api_key: apiKey,
         event: eventName,
         distinct_id: distinctId,
         properties: {
           ...properties,
-          timestamp: new Date().toISOString(),
           $lib: "convex",
         },
         timestamp: new Date().toISOString(),
       }),
     });
+    // Consume response to prevent resource leaks
+    await response.text();
```
📝 Committable suggestion

Before:

```ts
await fetch(`${host}/capture/`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    api_key: apiKey,
    event: eventName,
    distinct_id: distinctId,
    properties: {
      ...properties,
      timestamp: new Date().toISOString(),
      $lib: "convex",
    },
    timestamp: new Date().toISOString(),
  }),
});
```

After:

```ts
const response = await fetch(`${host}/capture/`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    api_key: apiKey,
    event: eventName,
    distinct_id: distinctId,
    properties: {
      ...properties,
      timestamp: new Date().toISOString(),
      $lib: "convex",
    },
    timestamp: new Date().toISOString(),
  }),
});
// Consume response to prevent resource leaks
await response.text();
```
🤖 Prompt for AI Agents
In `@packages/backend/convex/lib/analytics.ts` around lines 41-57: the fetch
call in analytics.ts that posts to `${host}/capture/` currently awaits the
promise but never consumes the Response, which can leak resources; update the
block (the POST using variables apiKey, eventName, distinctId, properties) to
capture the Response (e.g., const res = await fetch(...)) and then consume the
body minimally (for example call res.text() or res.arrayBuffer() and .catch(()
=> {}) or check res.ok and then consume) so the response stream is drained even
when this is effectively fire-and-forget.

Comment on lines +46 to +57
```ts
body: JSON.stringify({
  api_key: apiKey,
  event: eventName,
  distinct_id: distinctId,
  properties: {
    ...properties,
    timestamp: new Date().toISOString(),
    $lib: "convex",
  },
  timestamp: new Date().toISOString(),
}),
```
⚠️ Potential issue | 🟡 Minor

Duplicate timestamp field in payload.

The timestamp is set both inside properties (line 52) and at the root level (line 55). PostHog expects timestamp at the root level for event timing. The one inside properties is redundant and will appear as a custom property rather than affecting event timing.

Proposed fix
```diff
       body: JSON.stringify({
         api_key: apiKey,
         event: eventName,
         distinct_id: distinctId,
         properties: {
           ...properties,
-          timestamp: new Date().toISOString(),
           $lib: "convex",
         },
         timestamp: new Date().toISOString(),
       }),
```
📝 Committable suggestion

Before:

```ts
body: JSON.stringify({
  api_key: apiKey,
  event: eventName,
  distinct_id: distinctId,
  properties: {
    ...properties,
    timestamp: new Date().toISOString(),
    $lib: "convex",
  },
  timestamp: new Date().toISOString(),
}),
```

After:

```ts
body: JSON.stringify({
  api_key: apiKey,
  event: eventName,
  distinct_id: distinctId,
  properties: {
    ...properties,
    $lib: "convex",
  },
  timestamp: new Date().toISOString(),
}),
```
🤖 Prompt for AI Agents
In `@packages/backend/convex/lib/analytics.ts` around lines 46-57: the payload
currently sets timestamp twice (once inside properties and once at the root)
which causes the properties timestamp to be treated as a custom property; in the
JSON.stringify call that builds the PostHog payload (where apiKey, eventName,
distinctId and properties are used), remove the timestamp field from the
properties object and only keep timestamp at the root level; specifically update
the object constructed in the body so properties spreads ...properties and $lib
remain but do not include timestamp, and set timestamp only once at the
top-level timestamp: new Date().toISOString().
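The shape of the corrected payload can be pinned down with a tiny builder. This is an illustrative sketch, not code from the PR: `buildCaptureEvent` is a hypothetical helper, shown only to make the "timestamp at root, not in properties" invariant testable.

```typescript
// Hypothetical helper (not in the PR): builds a PostHog /capture/ payload
// with `timestamp` only at the root level, never inside `properties`.
function buildCaptureEvent(
  apiKey: string,
  eventName: string,
  distinctId: string,
  properties: Record<string, unknown>,
) {
  return {
    api_key: apiKey,
    event: eventName,
    distinct_id: distinctId,
    // Custom properties plus the library tag; no timestamp here
    properties: { ...properties, $lib: "convex" },
    // Root-level timestamp is what PostHog uses for event timing
    timestamp: new Date().toISOString(),
  };
}
```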

Comment on lines +5 to +10
```ts
summaryStreams: defineTable({
  streamId: v.string(),
  query: v.string(),
  papers: v.string(), // JSON stringified papers
  userSummarySettings: v.string(),
}).index("byStreamId", ["streamId"]),
```
🧹 Nitpick | 🔵 Trivial

Consider adding a cleanup strategy for stream records.

The summaryStreams table lacks a TTL or timestamp field for managing data retention. Over time, completed or abandoned streams will accumulate. Consider:

  • Adding a createdAt timestamp to enable periodic cleanup
  • Adding a userId field if streams should be associated with users for access control
🤖 Prompt for AI Agents
In `@packages/backend/convex/schema.ts` around lines 5-10: the summaryStreams
table lacks retention and ownership metadata; update the defineTable call for
summaryStreams to add a createdAt timestamp field (e.g., createdAt using the
project’s timestamp validator) to enable TTL-based cleanup and add an optional
userId field to associate streams with users for access control; also add
appropriate indexes (e.g., byCreatedAt and/or byUserId) so background cleanup
jobs or queries can efficiently find old or user-scoped streams.
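The cleanup logic over a `createdAt` field is straightforward to sketch outside of Convex. The snippet below is framework-independent and illustrative only: `StreamRecord`, `TTL_MS`, and `expiredStreamIds` are hypothetical names, and a real cleanup job would query by a `byCreatedAt` index and delete the matches.

```typescript
// Convex-independent sketch of TTL-based cleanup over a createdAt timestamp.
// All names here are illustrative, not part of the PR's schema.
type StreamRecord = { streamId: string; createdAt: number };

const TTL_MS = 24 * 60 * 60 * 1000; // retain stream records for 24 hours

function expiredStreamIds(records: StreamRecord[], now: number): string[] {
  const cutoff = now - TTL_MS;
  // Anything created before the cutoff is eligible for deletion
  return records.filter((r) => r.createdAt < cutoff).map((r) => r.streamId);
}
```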

Comment on lines +10 to +19
"@ai-sdk/google": "^3.0.13",
"@clerk/backend": "^1.33.1",
"@convex-dev/action-cache": "^0.2.7",
"@convex-dev/persistent-text-streaming": "^0.3.0",
"@convex-dev/resend": "^0.1.0",
"@dodopayments/convex": "^0.2.8",
"@posthog/ai": "^6.4.3",
"@t3-oss/env-core": "^0.13.6",
"@workspace/semantic-scholar": "workspace:*",
"ai": "^4.3.16",
"ai": "^6.0.48",
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (repository: QuriousAI/qurious):

```shell
# First, let's examine the package.json file to confirm the versions
cat -n packages/backend/package.json | head -30

# Search for usage of `@ai-sdk/google` in the backend codebase
rg "@ai-sdk/google" packages/backend/ --type ts --type js -A 3 -B 1

# Search for usage of the 'ai' package in the backend codebase
rg "from ['\"]ai['\"]|import.*from ['\"]ai['\"]|require\(['\"]ai['\"]\)" packages/backend/ --type ts --type js -A 2 -B 1
```

🌐 Web query:

ai-sdk google 3.0 breaking changes migration guide

💡 Result:

@ai-sdk/google v3.x ships with AI SDK 6 (provider API/spec v3). There isn't a separate "Google 3.0 migration guide"; you migrate by following the AI SDK 5.x → 6.0 migration guide and upgrading all @ai-sdk/* providers to ^3.0.0. [1]

What changed / what to do

1. Upgrade packages
   • ai → ^6.0.0
   • @ai-sdk/* (including @ai-sdk/google) → ^3.0.0
   • @ai-sdk/provider → ^3.0.0
   • @ai-sdk/provider-utils → ^4.0.0 [1]
2. Run the official codemods (recommended)

   npx @ai-sdk/codemod v6

   or individually:

   npx @ai-sdk/codemod v6/rename-text-embedding-to-embedding src/
   npx @ai-sdk/codemod v6/rename-vertex-provider-metadata-key src/

   Key ones you're likely to hit with Google/Vertex:

   • textEmbeddingModel → embeddingModel (and textEmbedding → embedding) [1][2]
   • providerMetadata / providerOptions key for Vertex: google → vertex [1]
3. Be aware of the underlying "provider spec v3" shift
   • @ai-sdk/google@3.0.0 is part of the "AI SDK 6 / provider spec v3" release train. [2]

If you share your current versions (e.g. ai@5.x + @ai-sdk/google@2.x) and which features you use (tools, embeddings, Vertex files, etc.), I can list the exact diffs you'll hit in your codebase.

🌐 Web query:

ai package version 6.0 breaking changes from 4.x

💡 Result:

Assuming you mean the npm ai package (Vercel AI SDK), there isn't a single "4.x → 6.0" jump doc; you need to apply the 4.x → 5.0 breaking changes, then 5.x → 6.0. [2] [1]

4.x → 5.0 (major breaking changes)

• Renames
  • maxTokens → maxOutputTokens (e.g., in generateText/streamText). [2]
  • Core types: CoreMessage → ModelMessage, Message → UIMessage, CreateMessage → CreateUIMessage. [2]
  • convertToCoreMessages → convertToModelMessages. [2]
• UI messages shape changed
  • UIMessage.content is replaced by a parts array structure. [2]
• Streaming/data streaming changed
  • The data stream protocol moved to Server-Sent Events (SSE) and streaming was restructured from "data streams" to "UI message streams". [2]
  • processDataStream removed; use readUIMessageStream (or Chat/useChat APIs). [2]
• Usage accounting changed
  • usage becomes "final step only"; the new totalUsage contains the total across steps. [2]
• Package layout
  • @ai-sdk/ui-utils removed; exports moved to ai. [2]

5.x → 6.0 (major breaking changes)

• Agents API
  • Experimental_Agent replaced by ToolLoopAgent; system → instructions; default stopWhen changed to stepCountIs(20). [1]
• CoreMessage fully removed
  • Deprecated CoreMessage and related functions removed; use ModelMessage + convertToModelMessages. [1]
  • convertToModelMessages is now async (so you must await it). [1]
• APIs/types renamed
  • ToolCallOptions → ToolExecutionOptions. [1]
  • textEmbeddingModel → embeddingModel and textEmbedding → embedding on providers. [1]
  • MockLanguageModelV2 etc. renamed to V3 in ai/test. [1]
• Object helpers deprecated
  • generateObject / streamObject deprecated; use generateText / streamText with an output setting. [1]

Practical migration tip

Run codemods in order (or all at once):

• v4 → v5: npx @ai-sdk/codemod v5 (or upgrade) [2]
• v5 → v6: npx @ai-sdk/codemod v6 [1]

If you meant a different "ai" package (Python, Rust, etc.), tell me which registry/link and I'll map the 4.x → 6.0 breaking changes for that package instead.


Address deprecated generateObject usage in ai v6.0.

The upgrade to ai@^6.0.48 introduces breaking changes that require immediate code updates. Specifically, generateObject (used in packages/backend/convex/externalActions/ai/suggestedQuery.ts and studySnapshot.ts) and streamObject are deprecated in v6.0 and should be replaced with generateText/streamText using the new output parameter. Additionally, CoreMessage has been removed entirely and convertToModelMessages is now async.

Run the official codemod to automate migration: npx @ai-sdk/codemod v6. Ensure all three AI files (suggestedQuery.ts, streamText usage in paperSummary.ts, and model imports in _models.ts) are updated to comply with the v6.0 API.

🤖 Prompt for AI Agents
In `@packages/backend/package.json` around lines 10-19: replace deprecated ai v5
APIs: in packages/backend/convex/externalActions/ai/suggestedQuery.ts and
studySnapshot.ts swap generateObject calls for generateText and provide the new
output parameter shape (map previous object schema into output), and replace any
streamObject usages (e.g., in paperSummary.ts) with streamText using output
similarly; update any places importing or referencing CoreMessage and
convertToModelMessages to the v6 shape by removing CoreMessage types and calling
await convertToModelMessages(...) since it is now async (update callers to
await); run the official codemod npx `@ai-sdk/codemod` v6 and then verify/adjust
model imports in _models.ts to the new named exports and changed types.

@NexWasTaken NexWasTaken merged commit 922e156 into main Jan 26, 2026
2 of 3 checks passed
@NexWasTaken NexWasTaken mentioned this pull request Jan 26, 2026
@NexWasTaken NexWasTaken deleted the feat/text-streaming branch January 26, 2026 07:31
@coderabbitai coderabbitai Bot mentioned this pull request Jan 27, 2026
@coderabbitai coderabbitai Bot mentioned this pull request Feb 5, 2026