This project was bootstrapped with AI-assisted code generation to quickly evaluate generative UI frameworks. The experiment focused on architecture: testing different approaches and drawing insights from each framework's behavior.
Compare generative UI libraries side by side. Send the same prompt to three different approaches and see how they render, how fast they respond, and what they produce.
| Library | Approach |
|---|---|
| Thesys C1 | API middleware generates UI markup, SDK renders it |
| Tambo | Register React components with Zod schemas, AI picks and streams props |
| Vercel AI SDK | `streamText` with tool calls, client renders components from tool output |
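The "client renders components from tool output" approach can be sketched as a simple registry that maps a tool name to a renderer. This is a hypothetical, simplified sketch (plain functions returning strings instead of React components; the tool names and arg shapes here are assumptions, not this repo's actual API):

```typescript
// Hypothetical sketch: map a tool-call part from the model's stream
// to a renderer. In the real app the renderers would be the shared
// React components (Chart, DataTable, InfoCard); strings stand in here.
type ToolCall = { toolName: string; args: any };

const registry: Record<string, (args: any) => string> = {
  chart: ({ kind, points }) => `Chart(${kind}, ${points.length} points)`,
  dataTable: ({ rows }) => `DataTable(${rows.length} rows)`,
  infoCard: ({ title }) => `InfoCard(${title})`,
};

function renderToolCall(call: ToolCall): string {
  const render = registry[call.toolName];
  // Unknown tool names fall through to a visible placeholder rather
  // than throwing, so one bad tool call doesn't break the panel.
  return render ? render(call.args) : `Unknown tool: ${call.toolName}`;
}
```

The upside of this pattern is that the model never emits markup, only structured arguments; the client stays in full control of what actually renders.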
```
npm install
cp .env.local.example .env.local
```

Add your API keys to `.env.local`:

- `THESYS_API_KEY` (from thesys.dev)
- `NEXT_PUBLIC_TAMBO_API_KEY` (from tambo.co)
- `OPENAI_API_KEY` (from platform.openai.com)
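For reference, a filled-in `.env.local` looks like this (the values below are placeholders, not real key formats):

```shell
# .env.local — placeholder values, replace with your own keys
THESYS_API_KEY=your-thesys-key
NEXT_PUBLIC_TAMBO_API_KEY=your-tambo-key
OPENAI_API_KEY=your-openai-key
```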
```
npm run dev
```

Open http://localhost:3000.
- Type a prompt or click a preset (sales dashboard, comparison table, pie chart, etc.)
- All three panels receive the same prompt simultaneously
- Each panel shows time-to-first-render (TTFR) and total render time
- Thesys generates its own UI from scratch; Tambo and the Vercel AI SDK render the same shared React components (Chart, DataTable, InfoCard, Dashboard), so those two can be compared apples to apples
- Next.js 15 (App Router)
- Tailwind CSS v4 + shadcn/ui
- Recharts
- TypeScript