**apps/www/app/docs/design-guidelines/content.mdx**
Use stable IDs from your backend — database IDs, canonical URLs — not array indexes or render-time UUIDs. Anything the user can act on should have an ID the assistant can cite.
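The ID rule above can be sketched in TypeScript. This is an illustrative shape, not part of the Tool UI API; `FlightOption` and `hasStableId` are hypothetical names.

```typescript
// Hypothetical flight-search result shape. The key point: `id` comes
// from the backend (a database ID or canonical URL), so the assistant
// can cite it in later turns.
interface FlightOption {
  id: string;     // stable backend ID, not an array index
  route: string;
  price: number;
}

// A simple guard that rejects blank or index-like "IDs".
// Heuristic only, for illustration.
function hasStableId(item: FlightOption): boolean {
  return item.id.trim().length > 0 && !/^\d+$/.test(item.id);
}

const good: FlightOption = { id: "flt_8f3a", route: "SFO→NRT", price: 820 };
const bad: FlightOption = { id: "0", route: "SFO→NRT", price: 910 };
```

A render-time UUID would also pass this guard but fail the real requirement: it changes on every render, so the assistant's citation goes stale.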
## Receipts
When a user takes an action with consequences, the tool UI should transition to a receipt — a read-only confirmation of what happened. This gives the user proof and the assistant something to cite later.
See the [Receipts](/docs/receipts) page for patterns and copy guidance.
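The pending-to-receipt transition can be sketched as a small state machine. The type and field names below are assumptions for illustration, not the shipped component's API.

```typescript
// Illustrative tool-UI state: interactive while pending, read-only
// once the action has consequences and a receipt is shown.
type ToolUIState =
  | { kind: "pending"; options: string[] }
  | { kind: "receipt"; chosen: string; at: string };

// Transition to a receipt exactly once; receipts never go back.
function toReceipt(state: ToolUIState, chosen: string, at: string): ToolUIState {
  if (state.kind === "receipt") return state; // read-only: ignore further actions
  return { kind: "receipt", chosen, at };
}

const pending: ToolUIState = { kind: "pending", options: ["Confirm", "Cancel"] };
const done = toReceipt(pending, "Confirm", "2024-01-01T00:00:00Z");
```

Making the receipt state structurally distinct (a discriminated union rather than a `disabled` flag) is one way to guarantee the confirmation can't be re-submitted.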
## Anti-Patterns
**Input fields:** Input fields compete with the main chat composer. Ask whether the assistant could gather that information through conversation, or link to a form outside the chat.
How the carousel handles common interaction patterns:
**Responsive button layout:** Each card is a container (`@container/card`). Narrow cards stack buttons vertically with the primary on top. Above 176px, buttons flow horizontally with the primary on the right. `flex-col-reverse` keeps DOM order consistent while adapting visual hierarchy.
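A sketch of the class strings that pattern implies, using Tailwind container-query syntax. The exact class names in the shipped component may differ.

```typescript
// Each card is its own container, so the button layout responds to
// the card's width, not the viewport's.
const cardClasses = "@container/card";

// Below 176px the buttons stack; `flex-col-reverse` renders the last
// DOM element (the primary action) on top. Above 176px the same DOM
// order flows as a row with the primary on the right.
const buttonRowClasses =
  "flex flex-col-reverse gap-2 @[176px]/card:flex-row @[176px]/card:justify-end";
```

Keeping the primary action last in the DOM means tab order and screen-reader order stay identical in both layouts; only the visual arrangement changes.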
**Equal card heights:** Cards use CSS Grid (`grid-flow-col` with `auto-cols-max`). Grid cells match heights automatically, so a two-line title aligns with one-line neighbors.
**Smooth scroll animation:** Custom JavaScript scrolling with a cubic-bezier ease-out curve. Respects `prefers-reduced-motion` with an instant-scroll fallback. Snap positions are calculated dynamically from card offsets.
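A minimal sketch of that animation logic, assuming a cubic ease-out curve and a boolean for the `prefers-reduced-motion` media query (in the browser you would read it via `window.matchMedia("(prefers-reduced-motion: reduce)")`):

```typescript
// Cubic ease-out: fast start, gentle landing. The component's actual
// bezier parameters may differ.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Scroll toward a snap target, falling back to an instant jump when
// the user prefers reduced motion.
function scrollToSnap(
  el: { scrollLeft: number },
  target: number,
  reducedMotion: boolean,
): void {
  if (reducedMotion) {
    el.scrollLeft = target; // instant-scroll fallback
    return;
  }
  const start = el.scrollLeft;
  // The real component steps via requestAnimationFrame; a fixed
  // 10-step loop stands in here for illustration.
  for (let i = 1; i <= 10; i++) {
    el.scrollLeft = start + (target - start) * easeOutCubic(i / 10);
  }
}
```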
**Image handling:** Accepts image URLs or hex color fallbacks. Images use `loading="lazy"` and `decoding="async"`. On hover, images scale up 5% with a `duration-200` transform transition.
**Overlay click target:** Interactive cards render a full-card overlay button (`absolute inset-0`, `z-10`) for `onItemClick`, while action buttons sit above it (`z-20`) so primary actions stay independently clickable.
**Touch optimization:** `touch-manipulation` eliminates the 300ms tap delay on iOS. Navigation buttons appear only on hover/focus (desktop) with backdrop blur.
**apps/www/app/docs/overview/content.mdx**
Same data, different experience. The left side either dumps JSON to the user or, with a markdown renderer like MDX, gives you a plain text link. The right side renders a clickable card that looks and behaves like a native part of the conversation.
## What is tool calling?
Tool calling happens when the assistant _does_ something instead of _saying_ something. The user asks "find me flights to Tokyo," and instead of describing options in a paragraph, the assistant calls a search tool and returns structured results.
You define the functions the model can invoke: searching a database, fetching a URL, running a calculation. The model decides when to call them based on the conversation. It sends structured arguments, your server executes the function, and a result comes back.
Usually that result is plain text or JSON dumped into the chat. Tool UI handles what comes after: **rendering those results as real UI**.
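The loop described above can be sketched in a few lines. The `Tool` shape and the stubbed `execute` below are illustrative, not a specific SDK's API; a real `previewLink` tool would fetch metadata server-side.

```typescript
// A tool pairs a name the model can invoke with a server-side
// function that receives structured arguments and returns JSON.
interface Tool<Args, Result> {
  name: string;
  execute: (args: Args) => Result;
}

const previewLink: Tool<{ url: string }, { url: string; title: string }> = {
  name: "previewLink",
  // Stubbed result; a real implementation would fetch and parse the page.
  execute: ({ url }) => ({ url, title: "Example Domain" }),
};

// The model sends structured arguments; the server executes and
// returns JSON that a client-side renderer can turn into UI.
const result = previewLink.execute({ url: "https://example.com" });
```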
## What if tool results could render UI?
1. **The assistant calls a tool.** Based on the conversation, the model invokes a tool you've defined (e.g., `previewLink`, `searchFlights`).
2. **The tool returns JSON.** Your server-side function executes and returns structured data matching an `outputSchema`.
3. **The schema matches. A component renders.** On the client, a registered renderer parses the JSON against the component's schema. If it matches, the component renders inline in the chat message.
4. **The user interacts.** For display components, this is the end. For interactive components (decisions, approvals), the user takes an action.
5. **The result returns.** The user's choice is sent back to the assistant as a tool result via `addResult`, continuing the conversation.
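Step 3 is the hinge of the flow: a registered renderer only claims a tool result if the JSON matches the component's schema. A hand-rolled sketch, with a type guard standing in for a real schema library and illustrative component names:

```typescript
// Expected output shape for the previewLink tool (illustrative).
interface LinkPreviewOutput {
  url: string;
  title: string;
}

// Runtime check standing in for schema validation (e.g., zod's parse).
function isLinkPreviewOutput(x: unknown): x is LinkPreviewOutput {
  const o = x as LinkPreviewOutput;
  return (
    typeof o === "object" && o !== null &&
    typeof o.url === "string" && typeof o.title === "string"
  );
}

// Map a tool result to a component; fall back to raw JSON when
// the name or shape doesn't match.
function pickRenderer(toolName: string, output: unknown): string {
  if (toolName === "previewLink" && isLinkPreviewOutput(output)) {
    return "LinkPreview";
  }
  return "RawJson";
}
```

The fallback branch matters: a malformed or unrecognized result degrades to the plain JSON dump rather than a broken component.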
### Minimal example
The server defines a tool with a typed output schema. The client registers a renderer that maps that output to a component.
**Server:** define a tool that returns structured data.
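A hedged sketch of the server side. A real project would use its SDK's tool helper with a schema library for `outputSchema`; the hand-rolled `validate` below just shows where that check sits. All names here are illustrative.

```typescript
// Output shape the frontend renderer will expect (illustrative).
interface LinkMetadata {
  url: string;
  title: string;
  description: string;
}

const fetchLinkPreview = {
  description: "Fetch metadata for a URL",
  // Stand-in for a real outputSchema: validate before the result
  // leaves the server, so the client renderer can trust the shape.
  validate(out: LinkMetadata): LinkMetadata {
    if (!out.url.startsWith("http")) throw new Error("invalid url");
    return out;
  },
  execute(url: string): LinkMetadata {
    // A real tool would fetch the page and parse its meta tags here.
    return this.validate({ url, title: "Example", description: "A stub" });
  },
};
```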
These snippets use [assistant-ui](https://assistant-ui.com) for end-to-end wiring, but the Plan component itself is framework-agnostic at the UI layer: you can use it in any React codebase with any LLM SDK or tool-calling interface that can provide compatible props.
**apps/www/app/docs/quick-start/content.mdx**
Add a Tool UI component to a chat app. By the end you'll have an assistant that renders a rich link preview card instead of raw JSON.
**What you'll build:** A backend tool that fetches link metadata, and a frontend renderer that turns the JSON response into an interactive LinkPreview card, all wired together through [assistant-ui](https://www.assistant-ui.com/).
## Install
## Wire it up
Two pieces connect the component to the conversation: a **backend tool** that returns structured data when the model calls it, and a **frontend renderer** that turns that data into the LinkPreview component.
<Steps>
<Step title="Define a backend tool">
For the full implementation pattern, including how to forward frontend tools through your API route and enable auto-continue after user decisions, see the [Advanced](/docs/advanced) page.
## Other frameworks
Tool UI components work with any React app. Without assistant-ui, you parse tool outputs and render the components yourself; assistant-ui handles that wiring for you and is the recommended path.
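The manual path looks roughly like this. The message-part shape below is an assumption modeled loosely on SDK-style messages, not a real API:

```typescript
// Assumed message parts: plain text interleaved with tool results.
interface ToolPart {
  type: "tool-result";
  toolName: string;
  output: unknown;
}
interface TextPart {
  type: "text";
  text: string;
}
type MessagePart = ToolPart | TextPart;

// Pull out the outputs for one tool so they can be passed to a
// Tool UI component as props.
function extractToolResults(parts: MessagePart[], toolName: string): unknown[] {
  return parts
    .filter(
      (p): p is ToolPart => p.type === "tool-result" && p.toolName === toolName,
    )
    .map((p) => p.output);
}

const parts: MessagePart[] = [
  { type: "text", text: "Here you go:" },
  {
    type: "tool-result",
    toolName: "previewLink",
    output: { url: "https://example.com" },
  },
];
```

You would then validate each extracted output against the component's schema before rendering, exactly as assistant-ui's renderer registration does for you.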
Tool UI components are installed from registry entries, and each entry includes `components/tool-ui/shared` automatically.
## Runtime options
[assistant-ui](https://www.assistant-ui.com/) supports multiple runtimes: [AI SDK](https://ai-sdk.dev/), [LangGraph](https://langchain-ai.github.io/langgraphjs/), LangServe, [Mastra](https://mastra.dev/), or custom backends. The examples above use AI SDK v6.
## Next steps
- [**Gallery**](/docs/gallery): Browse all available components