Commit 1480671 (parent 33eb060)

docs(ai-chat): add Types page, link toolExecute and withUIMessage, fix MDX headings

11 files changed: +310 −64 lines

docs/ai-chat/backend.mdx — 4 additions, 0 deletions

@@ -8,6 +8,10 @@ description: "Three approaches to building your chat backend — chat.task(), se
 
 The highest-level approach. Handles message accumulation, stop signals, turn lifecycle, and auto-piping automatically.
 
+<Tip>
+To fix a **custom** `UIMessage` subtype (typed custom data parts, tool map, etc.), use [`chat.withUIMessage<...>().task({...})`](/ai-chat/types) instead of `chat.task({...})`. Options are the same; defaults for `toUIMessageStream()` can be set on `withUIMessage`.
+</Tip>
+
 ### Simple: return a StreamTextResult
 
 Return the `streamText` result from `run` and it's automatically piped to the frontend:

docs/ai-chat/features.mdx — 22 additions, 10 deletions

@@ -8,7 +8,7 @@ description: "Per-run data, deferred work, custom streaming, subtask integration
 
 Use `chat.local` to create typed, run-scoped data that persists across turns and is accessible from anywhere — the run function, tools, nested helpers. Each run gets its own isolated copy, and locals are automatically cleared between runs.
 
-When a subtask is invoked via `ai.tool()`, initialized locals are automatically serialized into the subtask's metadata and hydrated on first access — no extra code needed. Subtask changes to hydrated locals are local to the subtask and don't propagate back to the parent.
+When a subtask is invoked via `ai.toolExecute()` (or the deprecated `ai.tool()`), initialized locals are automatically serialized into the subtask's metadata and hydrated on first access — no extra code needed. Subtask changes to hydrated locals are local to the subtask and don't propagate back to the parent.
 
 ### Declaring and initializing
 
@@ -76,18 +76,18 @@ const premiumTool = tool({
 
 ### Accessing from subtasks
 
-When you use `ai.tool()` to expose a subtask, chat locals are automatically available read-only:
+When you use `ai.toolExecute()` inside AI SDK `tool()` to expose a subtask, chat locals are automatically available read-only:
 
 ```ts
 import { chat, ai } from "@trigger.dev/sdk/ai";
 import { schemaTask } from "@trigger.dev/sdk";
-import { streamText } from "ai";
+import { streamText, tool } from "ai";
 import { openai } from "@ai-sdk/openai";
 import { z } from "zod";
 
 const userContext = chat.local<{ name: string; plan: "free" | "pro" }>({ id: "userContext" });
 
-export const analyzeData = schemaTask({
+export const analyzeDataTask = schemaTask({
   id: "analyze-data",
   schema: z.object({ query: z.string() }),
   run: async ({ query }) => {
@@ -97,6 +97,12 @@ export const analyzeData = schemaTask({
   },
 });
 
+const analyzeData = tool({
+  description: analyzeDataTask.description ?? "",
+  inputSchema: analyzeDataTask.schema!,
+  execute: ai.toolExecute(analyzeDataTask),
+});
+
 export const myChat = chat.task({
   id: "my-chat",
   onChatStart: async ({ clientData }) => {
@@ -106,7 +112,7 @@ export const myChat = chat.task({
     return streamText({
       model: openai("gpt-4o"),
       messages,
-      tools: { analyzeData: ai.tool(analyzeData) },
+      tools: { analyzeData },
       abortSignal: signal,
     });
   },
@@ -227,7 +233,8 @@ When a tool invokes a subtask via `triggerAndWait`, the subtask can stream direc
 ```ts
 import { chat, ai } from "@trigger.dev/sdk/ai";
 import { schemaTask } from "@trigger.dev/sdk";
-import { streamText, generateId } from "ai";
+import { streamText, tool, generateId } from "ai";
+import { openai } from "@ai-sdk/openai";
 import { z } from "zod";
 
 // A subtask that streams progress back to the parent chat
@@ -271,7 +278,12 @@ export const researchTask = schemaTask({
   },
 });
 
-// The chat task uses it as a tool via ai.tool()
+const research = tool({
+  description: researchTask.description ?? "",
+  inputSchema: researchTask.schema!,
+  execute: ai.toolExecute(researchTask),
+});
+
 export const myChat = chat.task({
   id: "my-chat",
   run: async ({ messages, signal }) => {
@@ -280,7 +292,7 @@ export const myChat = chat.task({
       messages,
       abortSignal: signal,
       tools: {
-        research: ai.tool(researchTask),
+        research,
       },
     });
   },
@@ -311,9 +323,9 @@ The `target` option accepts:
 
 ---
 
-## ai.tool() — subtask integration
+## Task tool subtasks (`ai.toolExecute`)
 
-When a subtask runs via `ai.tool()`, it can access the tool call context and chat context from the parent:
+When a subtask runs through **`execute: ai.toolExecute(task)`** (or the deprecated `ai.tool()`), it can access the tool call context and chat context from the parent:
 
 ```ts
 import { ai, chat } from "@trigger.dev/sdk/ai";
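The run-scoped locals behavior described in this file can be pictured with a small sketch. This is a mental model only; `Local` and `createLocalStore` are hypothetical names, not SDK internals:

```typescript
// Hypothetical mental model of run-scoped locals. Not the SDK implementation.
type Local<T> = { id: string; init: () => T };

function createLocalStore() {
  // One bucket of locals per run id; dropping a bucket "clears" that run's locals.
  const runs = new Map<string, Map<string, unknown>>();

  function get<T>(runId: string, local: Local<T>): T {
    let bucket = runs.get(runId);
    if (!bucket) {
      bucket = new Map();
      runs.set(runId, bucket);
    }
    if (!bucket.has(local.id)) {
      bucket.set(local.id, local.init()); // lazily initialized per run
    }
    return bucket.get(local.id) as T;
  }

  return { get };
}

// Two runs see isolated copies of the same declared local.
const store = createLocalStore();
const counter: Local<{ n: number }> = { id: "counter", init: () => ({ n: 0 }) };
store.get("run-a", counter).n += 5;
const a = store.get("run-a", counter).n; // 5
const b = store.get("run-b", counter).n; // 0, untouched by run-a
```

In this picture, the subtask hydration the docs describe amounts to copying the parent's bucket into the child run, which is why child writes never propagate back.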

docs/ai-chat/frontend.mdx — 17 additions, 0 deletions

@@ -31,6 +31,23 @@ The transport is created once on first render and reused across re-renders. Pass
 The hook keeps `onSessionChange` up to date via a ref internally, so you don't need to memoize the callback or worry about stale closures.
 </Tip>
 
+## Typed messages (`chat.withUIMessage`)
+
+If your chat task is defined with [`chat.withUIMessage<YourUIMessage>()`](/ai-chat/types) (custom `data-*` parts, typed tools, etc.), pass the same message type through `useChat` so `messages` and `message.parts` are narrowed on the client:
+
+```tsx
+import { useChat } from "@ai-sdk/react";
+import { useTriggerChatTransport, type InferChatUIMessage } from "@trigger.dev/sdk/chat/react";
+import type { myChat } from "./myChat";
+
+type Msg = InferChatUIMessage<typeof myChat>;
+
+const transport = useTriggerChatTransport<typeof myChat>({ task: "my-chat", accessToken: getChatToken });
+const { messages } = useChat<Msg>({ transport });
+```
+
+See the [Types](/ai-chat/types) guide for defining `YourUIMessage`, default stream options, and backend examples.
+
 ### Dynamic access tokens
 
 For token refresh, pass a function instead of a string. It's called on each `sendMessage`:
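The file's own code example is not shown in this diff context. As an illustration only (with a hypothetical `fetchToken`), such an `accessToken` function often caches the token until it expires, since it is called on every `sendMessage`:

```typescript
// Hypothetical caching token provider for the accessToken option. Illustration only.
type Token = { value: string; expiresAt: number };

function createTokenProvider(fetchToken: () => Promise<Token>) {
  let cached: Token | null = null;
  // Invoked on each sendMessage; refetches only when the cached token is stale.
  return async function getChatToken(): Promise<string> {
    if (!cached || cached.expiresAt <= Date.now()) {
      cached = await fetchToken();
    }
    return cached.value;
  };
}

// Example wiring with a fake fetcher that counts calls.
let fetches = 0;
const getChatToken = createTokenProvider(async () => {
  fetches += 1;
  return { value: `token-${fetches}`, expiresAt: Date.now() + 60_000 };
});
```

Repeated calls within the expiry window reuse the cached value rather than hitting the token endpoint again.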

docs/ai-chat/overview.mdx — 1 addition, 0 deletions

@@ -157,5 +157,6 @@ There are three ways to build the backend, from most opinionated to most flexibl
 - [Quick Start](/ai-chat/quick-start) — Get a working chat in 3 steps
 - [Backend](/ai-chat/backend) — Backend approaches in detail
 - [Frontend](/ai-chat/frontend) — Transport setup, sessions, client data
+- [Types](/ai-chat/types) — TypeScript patterns, including custom `UIMessage` with `chat.withUIMessage`
 - [Features](/ai-chat/features) — Per-run data, deferred work, streaming, subtasks
 - [API Reference](/ai-chat/reference) — Complete reference tables

docs/ai-chat/quick-start.mdx — 5 additions, 0 deletions

@@ -28,6 +28,10 @@ description: "Get a working AI chat in 3 steps — define a task, generate a tok
   },
 });
 ```
+
+<Tip>
+For a **custom** [`UIMessage`](https://sdk.vercel.ai/docs/reference/ai-sdk-core/ui-message) subtype (typed `data-*` parts, tool map, etc.), define the task with [`chat.withUIMessage<...>().task({...})`](/ai-chat/types) instead of `chat.task`.
+</Tip>
 </Step>
 
 <Step title="Generate an access token">

@@ -105,4 +109,5 @@ description: "Get a working AI chat in 3 steps — define a task, generate a tok
 
 - [Backend](/ai-chat/backend) — Lifecycle hooks, persistence, session iterator, raw task primitives
 - [Frontend](/ai-chat/frontend) — Session management, client data, reconnection
+- [Types](/ai-chat/types) — `chat.withUIMessage`, `InferChatUIMessage`, and related typing
 - [Features](/ai-chat/features) — Per-run data, deferred work, streaming, subtasks

docs/ai-chat/reference.mdx — 44 additions, 0 deletions

@@ -298,6 +298,50 @@ All methods available on the `chat` object from `@trigger.dev/sdk/ai`.
 | `chat.cleanupAbortedParts(message)` | Remove incomplete parts from a stopped response message |
 | `chat.stream` | Typed chat output stream — use `.writer()`, `.pipe()`, `.append()`, `.read()` |
 | `chat.MessageAccumulator` | Class that accumulates conversation messages across turns |
+| `chat.withUIMessage(config?).task(options)` | Same as `chat.task`, but fixes a custom `UIMessage` subtype and optional default stream options. See [Types](/ai-chat/types) |
+
+## `chat.withUIMessage`
+
+Returns `{ task }`, where `task` is like [`chat.task`](#chat-namespace) but parameterized on a UI message type `TUIM`.
+
+```ts
+chat.withUIMessage<TUIM>(config?: ChatWithUIMessageConfig<TUIM>): {
+  task: (options: ChatTaskOptions<..., ..., TUIM>) => Task<...>;
+};
+```
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `config.streamOptions` | `ChatUIMessageStreamOptions<TUIM>` | Optional defaults for `toUIMessageStream()`. Shallow-merged with `uiMessageStreamOptions` on the inner `.task({ ... })` (task wins on key conflicts). |
+
+Use this when you need [`InferChatUIMessage`](#inferchatuimessage) / typed `data-*` parts / `InferUITools` to line up across backend hooks and `useChat`. Full guide: [Types](/ai-chat/types).
+
+## `ChatWithUIMessageConfig`
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `streamOptions` | `ChatUIMessageStreamOptions<TUIM>` | Default `toUIMessageStream()` options for tasks created via `.task()` |
+
+## `InferChatUIMessage`
+
+Type helper: extracts the `UIMessage` subtype from a chat task's wire payload.
+
+```ts
+import type { InferChatUIMessage } from "@trigger.dev/sdk/ai";
+// or from "@trigger.dev/sdk/chat/react"
+
+type Msg = InferChatUIMessage<typeof myChat>;
+```
+
+Use with `useChat<Msg>({ transport })` when using [`chat.withUIMessage`](/ai-chat/types). For tasks defined with plain `chat.task()` (no custom generic), this resolves to the base `UIMessage`.
+
+## AI helpers (`ai` from `@trigger.dev/sdk/ai`)
+
+| Export | Status | Description |
+|--------|--------|-------------|
+| `ai.toolExecute(task)` | **Preferred** | Returns the `execute` function for AI SDK `tool()`. Runs the task via `triggerAndSubscribe` and attaches tool/chat metadata (same behavior the deprecated wrapper used internally). |
+| `ai.tool(task, options?)` | **Deprecated** | Wraps `tool()` / `dynamicTool()` and the same execute path. Migrate to `tool({ ..., execute: ai.toolExecute(task) })`. See [Task-backed AI tools](/tasks/schemaTask#task-backed-ai-tools). |
+| `ai.toolCallId`, `ai.chatContext`, `ai.chatContextOrThrow`, `ai.currentToolOptions` | Supported | Work for any task-backed tool execute path, including `ai.toolExecute`. |
 
 ## ChatUIMessageStreamOptions

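The `streamOptions` precedence documented in this section (defaults from `withUIMessage`, task-level `uiMessageStreamOptions` winning on key conflicts, per-turn overrides applied on top) behaves like plain shallow object spreads. A sketch of those semantics only, not the SDK's source:

```typescript
// Illustration of the documented shallow-merge order. Not SDK code.
type StreamOpts = { sendReasoning?: boolean; sendSources?: boolean };

const withUIMessageDefaults: StreamOpts = { sendReasoning: true, sendSources: true };
const taskLevel: StreamOpts = { sendReasoning: false }; // uiMessageStreamOptions on .task({ ... })
const perTurn: StreamOpts = { sendSources: false }; // via chat.setUIMessageStreamOptions()

// Later spreads win on key conflicts, so task beats defaults and per-turn beats both.
const effective = { ...withUIMessageDefaults, ...taskLevel, ...perTurn };
// effective is { sendReasoning: false, sendSources: false }
```

Because the merge is shallow, a task-level object value replaces the default object wholesale rather than deep-merging its keys.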
docs/ai-chat/types.mdx — new file, 137 additions

@@ -0,0 +1,137 @@
+---
+title: "Types"
+sidebarTitle: "Types"
+description: "TypeScript types for AI Chat tasks, UI messages, and the frontend transport."
+---
+
+TypeScript patterns for [AI Chat](/ai-chat/overview). This page will expand over time; it currently documents how to pin a custom AI SDK [`UIMessage`](https://sdk.vercel.ai/docs/reference/ai-sdk-core/ui-message) subtype with `chat.withUIMessage` and align types on the client.
+
+## Custom `UIMessage` with `chat.withUIMessage`
+
+`chat.task()` types the wire payload with the base AI SDK `UIMessage`. That is enough for many apps.
+
+When you add **custom `data-*` parts** (via `chat.stream` / `writer`) or a **typed tool map** (e.g. `InferUITools<typeof tools>`), you want a **narrower** `UIMessage` generic so that:
+
+- `onTurnStart`, `onTurnComplete`, and similar hooks expose correctly typed `uiMessages`
+- Stream options like `sendReasoning` align with your message shape
+- The frontend can treat `useChat` messages as the same subtype end-to-end
+
+`chat.withUIMessage<YourUIMessage>(config?)` returns `{ task }`, where `task(...)` accepts the **same options as** [`chat.task()`](/ai-chat/backend#chat-task) but fixes `YourUIMessage` as the UI message type for that chat task.
+
+### Defining a `UIMessage` subtype
+
+Build the type from AI SDK helpers and your tools object:
+
+```ts
+import type { InferUITools, UIDataTypes, UIMessage } from "ai";
+import { tool } from "ai";
+import { z } from "zod";
+
+const myTools = {
+  lookup: tool({
+    description: "Look up a record",
+    inputSchema: z.object({ id: z.string() }),
+    execute: async ({ id }) => ({ id, label: "example" }),
+  }),
+};
+
+type MyChatTools = InferUITools<typeof myTools>;
+
+type MyChatDataTypes = UIDataTypes & {
+  "turn-status": { status: "preparing" | "streaming" | "done" };
+};
+
+export type MyChatUIMessage = UIMessage<unknown, MyChatDataTypes, MyChatTools>;
+```
+
+Task-backed tools should use AI SDK [`tool()`](https://sdk.vercel.ai/docs/ai-sdk-core/tools-and-tool-calling) with `execute: ai.toolExecute(schemaTask)` where needed — see [Task-backed AI tools](/tasks/schemaTask#task-backed-ai-tools).
+
+### Backend: `chat.withUIMessage(...).task(...)`
+
+Call `withUIMessage` **once**, then chain `.task({ ... })` instead of `chat.task({ ... })`:
+
+```ts
+import { chat } from "@trigger.dev/sdk/ai";
+import { streamText, tool } from "ai";
+import { openai } from "@ai-sdk/openai";
+import { z } from "zod";
+import type { MyChatUIMessage } from "./my-chat-types";
+
+const myTools = {
+  lookup: tool({
+    description: "Look up a record",
+    inputSchema: z.object({ id: z.string() }),
+    execute: async ({ id }) => ({ id, label: "example" }),
+  }),
+};
+
+export const myChat = chat.withUIMessage<MyChatUIMessage>({
+  streamOptions: {
+    sendReasoning: true,
+    onError: (error) =>
+      error instanceof Error ? error.message : "Something went wrong.",
+  },
+}).task({
+  id: "my-chat",
+  clientDataSchema: z.object({ userId: z.string() }),
+  onTurnStart: async ({ uiMessages, writer }) => {
+    // uiMessages is MyChatUIMessage[] — custom data parts are typed
+    writer.write({
+      type: "data-turn-status",
+      data: { status: "preparing" },
+    });
+  },
+  run: async ({ messages, signal }) => {
+    return streamText({
+      model: openai("gpt-4o"),
+      messages,
+      tools: myTools,
+      abortSignal: signal,
+    });
+  },
+});
+```
+
+### Default stream options
+
+The optional `streamOptions` object becomes the **default** [`uiMessageStreamOptions`](/ai-chat/reference#chat-task-options) for `toUIMessageStream()`.
+
+If you also set `uiMessageStreamOptions` on the inner `.task({ ... })`, the two objects are **shallow-merged** — keys on the **task** win on conflicts. Per-turn overrides via [`chat.setUIMessageStreamOptions()`](/ai-chat/backend#stream-options) still apply on top.
+
+### Frontend: `InferChatUIMessage`
+
+Import the helper type and pass it to `useChat` so `messages` and render logic match the backend:
+
+```tsx
+import { useChat } from "@ai-sdk/react";
+import { useTriggerChatTransport, type InferChatUIMessage } from "@trigger.dev/sdk/chat/react";
+import type { myChat } from "./myChat";
+
+type Msg = InferChatUIMessage<typeof myChat>;
+
+export function Chat() {
+  const transport = useTriggerChatTransport<typeof myChat>({
+    task: "my-chat",
+    accessToken: getChatToken,
+  });
+
+  const { messages } = useChat<Msg>({ transport });
+
+  return messages.map((m) => (
+    <div key={m.id}>{/* m.parts narrowed for your UIMessage subtype */}</div>
+  ));
+}
+```
+
+You can also import `InferChatUIMessage` from `@trigger.dev/sdk/ai` in non-React modules.
+
+### When plain `chat.task()` is enough
+
+If you do not rely on custom `UIMessage` generics (only default text, reasoning, and built-in tool UI types), **`chat.task()` alone is fine** — no need for `withUIMessage`.
+
+## See also
+
+- [Backend — `chat.task()`](/ai-chat/backend#chat-task)
+- [Frontend — transport & `useChat`](/ai-chat/frontend)
+- [API reference — `chat.withUIMessage`](/ai-chat/reference#chat-withuimessage)
+- [Task-backed AI tools — `ai.toolExecute`](/tasks/schemaTask#task-backed-ai-tools)

docs/docs.json — 1 addition, 0 deletions

@@ -91,6 +91,7 @@
       "ai-chat/quick-start",
       "ai-chat/backend",
       "ai-chat/frontend",
+      "ai-chat/types",
       "ai-chat/features",
       "ai-chat/compaction",
       "ai-chat/pending-messages",

docs/migrating-from-v3.mdx — 2 additions, 2 deletions

@@ -34,7 +34,7 @@ We're retiring Trigger.dev v3. **New v3 deploys will stop working from 1 April 2
 | [Hidden tasks](/hidden-tasks) | Create tasks that are not exported from your trigger files but can still be executed. |
 | [Middleware & locals](#middleware-and-locals) | The middleware system runs at the top level, executing before and after all lifecycle hooks. The locals API allows sharing data between middleware and hooks. |
 | [useWaitToken](/realtime/react-hooks/use-wait-token) | Use the useWaitToken hook to complete a wait token from a React component. |
-| [ai.tool](/tasks/schemaTask#ai-tool) | Create an AI tool from an existing `schemaTask` to use with the Vercel [AI SDK](https://vercel.com/docs/ai-sdk). |
+| [Task-backed AI tools](/tasks/schemaTask#task-backed-ai-tools) | Use `schemaTask` with AI SDK `tool()` and `ai.toolExecute()` (legacy `ai.tool` is deprecated). |
 
 ## Node.js support
 
@@ -165,7 +165,7 @@ export const myAiTask = schemaTask({
 });
 ```
 
-We've replaced the `toolTask` function with the `ai.tool` function, which creates an AI tool from an existing `schemaTask`. See the [ai.tool](/tasks/schemaTask#ai-tool) page for more details.
+We've replaced the `toolTask` function with `schemaTask` plus AI SDK `tool()` and `ai.toolExecute()` (the older `ai.tool()` wrapper is deprecated). See [Task-backed AI tools](/tasks/schemaTask#task-backed-ai-tools).
 
 ## Breaking changes
 

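The shape of the `ai.tool()` to `ai.toolExecute()` migration described in this file can be sketched abstractly. `TaskLike` and `toolExecuteLike` are hypothetical stand-ins that show why `toolExecute(task)` slots into `tool()`'s `execute` option; they are not `@trigger.dev/sdk` code:

```typescript
// Hypothetical stand-ins, not @trigger.dev/sdk code.
type TaskLike<In, Out> = { id: string; run: (input: In) => Promise<Out> };

// Conceptually, ai.toolExecute(task) is a factory that returns the
// execute callback the AI SDK's tool() expects.
function toolExecuteLike<In, Out>(task: TaskLike<In, Out>) {
  return async (input: In): Promise<Out> => {
    // The real SDK triggers the task and subscribes to its run here.
    return task.run(input);
  };
}

const analyzeTask: TaskLike<{ query: string }, string> = {
  id: "analyze-data",
  run: async ({ query }) => `analyzed: ${query}`,
};

// Deprecated shape: ai.tool(analyzeTask) built the whole tool object for you.
// New shape: you build the tool yourself and supply only the execute function.
const analyzeTool = {
  description: "Analyze data",
  execute: toolExecuteLike(analyzeTask),
};
```

Calling `analyzeTool.execute(...)` runs the underlying task, which is the behavior the deprecated wrapper bundled together with tool construction.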