Commit 34ddec8

patterns and the ctx thing
1 parent 03f3920 commit 34ddec8

5 files changed: +53 -5 lines changed

docs/ai-chat/backend.mdx

Lines changed: 14 additions & 1 deletion
@@ -67,6 +67,12 @@ async function runAgentLoop(messages: ModelMessage[]) {

### Lifecycle hooks

#### Task context (`ctx`)

Every chat lifecycle callback and the **`run`** payload include **`ctx`**: the same run context object as `task({ run: (payload, { ctx }) => ... })`. Import the type with **`import type { TaskRunContext } from "@trigger.dev/sdk"`** (the **`Context`** export is the same type). Use **`ctx`** for tags, metadata, or any API that needs the full run record. The string **`runId`** on chat events is always **`ctx.run.id`** (both are provided for convenience). See [Task context (`ctx`)](/ai-chat/reference#task-context-ctx) in the API reference.
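The `runId` / `ctx.run.id` relationship stated above can be sketched with locally stubbed shapes (the `RunCtx` and `ChatEvent` types and the `describeEvent` helper are illustrative stand-ins, not the SDK's real types):

```typescript
// Illustrative stand-ins for the shapes described above -- the real
// TaskRunContext comes from @trigger.dev/sdk.
type RunCtx = { run: { id: string } };
type ChatEvent = { chatId: string; runId: string; ctx: RunCtx };

// A hook can read either field; the docs guarantee runId mirrors ctx.run.id.
function describeEvent(event: ChatEvent): string {
  if (event.runId !== event.ctx.run.id) {
    throw new Error("invariant violated: runId must equal ctx.run.id");
  }
  return `run ${event.ctx.run.id} for chat ${event.chatId}`;
}

const label = describeEvent({
  chatId: "chat_1",
  runId: "run_abc",
  ctx: { run: { id: "run_abc" } },
});
console.log(label); // "run run_abc for chat chat_1"
```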
Standard **[task lifecycle hooks](/tasks/overview)** — **`onWait`**, **`onResume`**, **`onComplete`**, **`onFailure`**, etc. — are also available on **`chat.task()`** with the same shapes as on a normal `task()`. For example, tear down an external sandbox **right before the run suspends** waiting for the next message using **`onWait`** when **`wait.type === "token"`**. See the [Code execution sandbox](/ai-chat/patterns/code-sandbox) pattern.
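A minimal sketch of that teardown shape, with a local stub standing in for the sandbox client (`sandbox.kill` is a hypothetical name, not the E2B API):

```typescript
// Shape of a wait notification, per the docs: "token" waits suspend the
// run until the next message arrives.
type WaitInfo = { type: "token" | "duration" };

// Hypothetical external sandbox handle -- stands in for an E2B-style client.
const sandbox = {
  alive: true,
  async kill(): Promise<void> {
    this.alive = false;
  },
};

// Same shape as a standard task() onWait hook: release external resources
// right before the run suspends waiting for the next message.
async function onWait({ wait }: { wait: WaitInfo }): Promise<void> {
  if (wait.type === "token") {
    await sandbox.kill();
  }
}

void onWait({ wait: { type: "token" } });
console.log(sandbox.alive ? "sandbox still warm" : "sandbox torn down"); // "sandbox torn down"
```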
#### onPreload

Fires when a preloaded run starts — before any messages arrive. Use it to eagerly initialize state (DB records, user context) while the user is still typing.
@@ -77,7 +83,7 @@ Preloaded runs are triggered by calling `transport.preload(chatId)` on the frontend
export const myChat = chat.task({
  id: "my-chat",
  clientDataSchema: z.object({ userId: z.string() }),
-  onPreload: async ({ chatId, clientData, runId, chatAccessToken }) => {
+  onPreload: async ({ ctx, chatId, clientData, runId, chatAccessToken }) => {
    // Initialize early — before the first message arrives
    const user = await db.user.findUnique({ where: { id: clientData.userId } });
    userContext.init({ name: user.name, plan: user.plan });
@@ -101,6 +107,7 @@ export const myChat = chat.task({

| Field | Type | Description |
| ----------------- | --------------------------------------------- | -------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — [reference](/ai-chat/reference#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `runId` | `string` | The Trigger.dev run ID |
| `chatAccessToken` | `string` | Scoped access token for this run |
@@ -145,6 +152,7 @@ Fires at the start of every turn, after message accumulation and `onChatStart` (

| Field | Type | Description |
| ----------------- | --------------------------------------------- | ----------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — [reference](/ai-chat/reference#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Full accumulated conversation (model format) |
| `uiMessages` | `UIMessage[]` | Full accumulated conversation (UI format) |
@@ -219,6 +227,7 @@ Fires after each turn completes — after the response is captured and the strea

| Field | Type | Description |
| -------------------- | ------------------------ | -------------------------------------------------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — [reference](/ai-chat/reference#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Full accumulated conversation (model format) |
| `uiMessages` | `UIMessage[]` | Full accumulated conversation (UI format) |
@@ -263,6 +272,10 @@ export const myChat = chat.task({
it uses this to skip past already-seen events — preventing duplicate messages.
</Tip>

<Tip>
For a full **conversation + session** persistence pattern (including preload, continuation, and token renewal), see [Database persistence](/ai-chat/patterns/database-persistence).
</Tip>

### Using prompts

Use [AI Prompts](/ai/prompts) to manage your system prompt as versioned, overridable config. Store the resolved prompt in a lifecycle hook with `chat.prompt.set()`, then spread `chat.toStreamTextOptions()` into `streamText` — it includes the system prompt, model, config, and telemetry automatically.
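The spread pattern can be sketched with local stand-ins (`toStreamTextOptions` and `streamText` below are stubs modeling only the merge semantics, not the real Trigger.dev / AI SDK functions):

```typescript
// Local stand-ins modeling the merge: the real functions are
// chat.toStreamTextOptions() (Trigger.dev) and streamText() (AI SDK).
type StreamTextOptions = { system?: string; model?: string; messages: string[] };

// Pretend a lifecycle hook already resolved the prompt via chat.prompt.set().
const resolved = { system: "You are a release-notes bot.", model: "stub-model" };
function toStreamTextOptions(): { system: string; model: string } {
  return resolved;
}

// Stub that just returns the options it received.
function streamText(opts: StreamTextOptions): StreamTextOptions {
  return opts;
}

// Spreading merges the resolved system prompt and model into the call,
// so per-turn code only supplies the messages.
const call = streamText({ ...toStreamTextOptions(), messages: ["Summarize v2.1"] });
console.log(call.system); // "You are a release-notes bot."
```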

docs/ai-chat/features.mdx

Lines changed: 3 additions & 1 deletion

@@ -8,6 +8,8 @@ description: "Per-run data, deferred work, custom streaming, subtask integration

Use `chat.local` to create typed, run-scoped data that persists across turns and is accessible from anywhere — the run function, tools, nested helpers. Each run gets its own isolated copy, and locals are automatically cleared between runs.

Lifecycle hooks and **`run`** also receive **`ctx`** ([`TaskRunContext`](/ai-chat/reference#task-context-ctx)) — the same object as on a standard `task()` — for tags, metadata, and cleanup that needs the full run record.

When a subtask is invoked via `ai.toolExecute()` (or the deprecated `ai.tool()`), initialized locals are automatically serialized into the subtask's metadata and hydrated on first access — no extra code needed. Subtask changes to hydrated locals are local to the subtask and don't propagate back to the parent.

### Declaring and initializing
@@ -156,7 +158,7 @@ onTurnComplete: async ({ chatId }) => {

---

-## chat.defer()
+## chat.defer() {#chat-defer}

Use `chat.defer()` to run background work in parallel with streaming. The deferred promise runs alongside the LLM response and is awaited (with a 5s timeout) before `onTurnComplete` fires.
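The awaiting behavior described here can be modeled in a few lines (a sketch of the semantics only; the `defer` and `awaitDeferred` helpers are illustrative, the real logic lives inside `chat.defer()`):

```typescript
// A sketch of the semantics only: deferred work starts immediately and is
// awaited with a timeout before the turn completes.
const deferred: Promise<unknown>[] = [];

// Stand-in for chat.defer(fn): kick off the work and remember the promise.
function defer(fn: () => Promise<unknown>): void {
  deferred.push(fn());
}

// Await all deferred work, but give up after timeoutMs (5s in the docs).
async function awaitDeferred(timeoutMs: number): Promise<"done" | "timed-out"> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<"timed-out">((resolve) => {
    timer = setTimeout(() => resolve("timed-out"), timeoutMs);
  });
  const outcome = await Promise.race([
    Promise.all(deferred).then(() => "done" as const),
    timeout,
  ]);
  if (timer !== undefined) clearTimeout(timer); // don't keep the process alive
  return outcome;
}

defer(async () => "audit log written"); // runs in parallel with streaming
awaitDeferred(5_000).then((outcome) => console.log(outcome)); // "done"
```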

docs/ai-chat/overview.mdx

Lines changed: 2 additions & 0 deletions

@@ -155,6 +155,8 @@ There are three ways to build the backend, from most opinionated to most flexible
## Related

- [Quick Start](/ai-chat/quick-start) — Get a working chat in 3 steps
- [Database persistence](/ai-chat/patterns/database-persistence) — Conversation + session state across hooks (ORM-agnostic)
- [Code execution sandbox](/ai-chat/patterns/code-sandbox) — Warm/teardown pattern for E2B (or similar) with `onWait` / `chat.local`
- [Backend](/ai-chat/backend) — Backend approaches in detail
- [Frontend](/ai-chat/frontend) — Transport setup, sessions, client data
- [Types](/ai-chat/types) — TypeScript patterns, including custom `UIMessage` with `chat.withUIMessage`

docs/ai-chat/reference.mdx

Lines changed: 27 additions & 3 deletions

@@ -30,14 +30,31 @@ Options for `chat.task()`.
| `preloadTimeout` | `string` | Same as `turnTimeout` | Suspend timeout for preloaded runs |
| `uiMessageStreamOptions` | `ChatUIMessageStreamOptions` || Default options for `toUIMessageStream()`. Per-turn override via `chat.setUIMessageStreamOptions()` |

-Plus all standard [TaskOptions](/tasks/overview) — `retry`, `queue`, `machine`, `maxDuration`, etc.
+Plus all standard [TaskOptions](/tasks/overview) — `retry`, `queue`, `machine`, `maxDuration`, **`onWait`**, **`onResume`**, **`onComplete`**, and other lifecycle hooks. Those hooks use the same parameter shapes as on a normal `task()` (including `ctx`).

## Task context (`ctx`)

All **`chat.task`** lifecycle events (**`onPreload`**, **`onChatStart`**, **`onTurnStart`**, **`onBeforeTurnComplete`**, **`onTurnComplete`**, **`onCompacted`**) and the object passed to **`run`** include **`ctx`**: the same **`TaskRunContext`** shape as the `ctx` in `task({ run: (payload, { ctx }) => ... })`.

Use **`ctx`** for run metadata, tags, parent links, or any API that needs the full run record. The chat-specific string **`runId`** on events is always **`ctx.run.id`**; both are provided for convenience.

```ts
import type { TaskRunContext } from "@trigger.dev/sdk";
// Equivalent alias (same type):
import type { Context } from "@trigger.dev/sdk";
```

<Note>
Prefer `import type { TaskRunContext } from "@trigger.dev/sdk"` in application code. Do not depend on `@trigger.dev/core` directly.
</Note>

## ChatTaskRunPayload

The payload passed to the `run` function.

| Field | Type | Description |
| -------------- | ------------------------------------------ | -------------------------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — same as `task` `run`’s `{ ctx }` |
| `messages` | `ModelMessage[]` | Model-ready messages — pass directly to `streamText` |
| `chatId` | `string` | Unique chat session ID |
| `trigger` | `"submit-message" \| "regenerate-message"` | What triggered the request |
@@ -47,13 +64,16 @@ The payload passed to the `run` function.
| `signal` | `AbortSignal` | Combined stop + cancel signal |
| `cancelSignal` | `AbortSignal` | Cancel-only signal |
| `stopSignal` | `AbortSignal` | Stop-only signal (per-turn) |
| `previousTurnUsage` | `LanguageModelUsage \| undefined` | Token usage from the previous turn (undefined on turn 0) |
| `totalUsage` | `LanguageModelUsage` | Cumulative token usage across completed turns so far |

## PreloadEvent

Passed to the `onPreload` callback.

| Field | Type | Description |
| ----------------- | --------------------------- | -------------------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — see [Task context](#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `runId` | `string` | The Trigger.dev run ID |
| `chatAccessToken` | `string` | Scoped access token for this run |
@@ -66,6 +86,7 @@ Passed to the `onChatStart` callback.

| Field | Type | Description |
| ----------------- | --------------------------- | -------------------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — see [Task context](#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Initial model-ready messages |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
@@ -82,6 +103,7 @@ Passed to the `onTurnStart` callback.

| Field | Type | Description |
| ----------------- | --------------------------- | -------------------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — see [Task context](#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Full accumulated conversation (model format) |
| `uiMessages` | `UIMessage[]` | Full accumulated conversation (UI format) |
@@ -100,6 +122,7 @@ Passed to the `onTurnComplete` callback.

| Field | Type | Description |
| -------------------- | --------------------------------- | ---------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — see [Task context](#task-context-ctx) |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Full accumulated conversation (model format) |
| `uiMessages` | `UIMessage[]` | Full accumulated conversation (UI format) |
@@ -118,11 +141,11 @@ Passed to the `onTurnComplete` callback.

## BeforeTurnCompleteEvent

-Passed to the `onBeforeTurnComplete` callback. Same fields as `TurnCompleteEvent` plus a `writer`.
+Passed to the `onBeforeTurnComplete` callback. Same fields as `TurnCompleteEvent` (including **`ctx`**) plus a `writer`.

| Field | Type | Description |
| -------------------------------- | --------------------------- | ----------------------------------------------------------------------------- |
-| _(all TurnCompleteEvent fields)_ | | See [TurnCompleteEvent](#turncompleteevent) |
+| _(all TurnCompleteEvent fields)_ | | See [TurnCompleteEvent](#turncompleteevent) (includes `ctx`) |
| `writer` | [`ChatWriter`](#chatwriter) | Stream writer — the stream is still open so chunks appear in the current turn |

## ChatWriter
@@ -178,6 +201,7 @@ Passed to the `onCompacted` callback.

| Field | Type | Description |
| -------------- | --------------------------- | ------------------------------------------------- |
| `ctx` | `TaskRunContext` | Full task run context — see [Task context](#task-context-ctx) |
| `summary` | `string` | The generated summary text |
| `messages` | `ModelMessage[]` | Messages that were compacted (pre-compaction) |
| `messageCount` | `number` | Number of messages before compaction |

docs/docs.json

Lines changed: 7 additions & 0 deletions

@@ -96,6 +96,13 @@
        "ai-chat/compaction",
        "ai-chat/pending-messages",
        "ai-chat/background-injection",
        {
          "group": "Patterns",
          "pages": [
            "ai-chat/patterns/database-persistence",
            "ai-chat/patterns/code-sandbox"
          ]
        },
        "ai-chat/reference"
      ]
    }
