Free Agent is an autonomous AI mode in the Agent Builder Console that allows an AI agent to independently complete complex tasks using a suite of tools. Unlike the Workflow mode (which executes predefined node-based workflows), Free Agent operates in an iterative loop where the LLM reasons about the task, executes tools, and tracks progress until completion.
| Aspect | Workflow Mode | Free Agent Mode |
|---|---|---|
| Execution | Pre-defined node graph | Autonomous iteration loop |
| Control | User defines each step | AI decides next actions |
| Memory | Per-node state | Blackboard + Scratchpad + Attributes |
| Flexibility | Fixed flow | Dynamic, goal-oriented |
```
src/components/freeAgent/
├── FreeAgentView.tsx      # Main container, switches between canvas/panel views
├── FreeAgentPanel.tsx     # Control panel: prompt input, model selection, run controls
├── FreeAgentCanvas.tsx    # Visual node graph showing agent, tools, memory
├── BlackboardViewer.tsx   # Displays planning journal entries
├── ArtifactsPanel.tsx     # Shows generated artifacts (documents, data)
├── RawViewer.tsx          # Debug view: Input/Output/Tools tabs
├── AssistanceModal.tsx    # Modal for user input when agent needs help
├── FinalReportModal.tsx   # Shows task completion summary
└── [Node Components]      # FreeAgentNode, ToolNode, PromptNode, etc.
```
src/hooks/useFreeAgentSession.ts
Manages the entire session lifecycle:
- Session state (idle, running, paused, completed, error, needs_assistance)
- Iteration loop execution
- Tool result caching
- Memory synchronization via refs
- Assistance request handling
src/lib/freeAgentToolExecutor.ts
Handles tool execution with two categories:
- Frontend Tools: Executed locally (memory read/write, file operations, exports)
- Edge Function Tools: Dispatched to Supabase functions (search, scrape, API calls)
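The two-tier dispatch can be sketched as follows. This is an illustrative approximation, not the actual `freeAgentToolExecutor.ts` code; the names `FRONTEND_TOOLS`, `executeTool`, and the injected handlers are assumptions.

```typescript
type ToolCall = { tool: string; params: Record<string, unknown> };
type ToolResult = { tool: string; success: boolean; result: unknown };

// Tools handled locally in the browser (memory ops, exports); everything
// else is assumed to be dispatched to a Supabase edge function.
const FRONTEND_TOOLS = new Set([
  "read_blackboard", "write_blackboard", "read_scratchpad",
  "write_scratchpad", "read_attribute", "export_word", "export_pdf",
]);

async function executeTool(
  call: ToolCall,
  runLocal: (c: ToolCall) => Promise<unknown>,
  invokeEdge: (c: ToolCall) => Promise<unknown>,
): Promise<ToolResult> {
  try {
    const result = FRONTEND_TOOLS.has(call.tool)
      ? await runLocal(call)   // no network round-trip
      : await invokeEdge(call); // search, scrape, API calls, etc.
    return { tool: call.tool, success: true, result };
  } catch (err) {
    // Failures are captured per-call so one bad tool doesn't kill the iteration
    return { tool: call.tool, success: false, result: String(err) };
  }
}
```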
supabase/functions/free-agent/index.ts
Main orchestration function:
- Builds system prompt with memory state
- Calls LLM (Gemini, Claude, or Grok)
- Parses structured JSON response
- Executes backend tools
- Returns results for next iteration
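The control flow of that orchestration can be sketched as a simple bounded loop. `runLoop` and the injected `step` are illustrative names; only the iteration cap (default 20) and the status values come from the docs.

```typescript
type StepResult = {
  status: "in_progress" | "completed" | "needs_assistance" | "error";
};

// Each step stands in for: build prompt -> call LLM -> parse -> execute
// tools -> update memory. Injecting it keeps the loop testable.
async function runLoop(
  step: (iteration: number) => Promise<StepResult>,
  maxIterations = 20, // hard limit against runaway loops
): Promise<string> {
  for (let i = 1; i <= maxIterations; i++) {
    const { status } = await step(i);
    if (status !== "in_progress") return status; // completed / error / needs_assistance
  }
  return "max_iterations"; // caller auto-completes with a summary
}
```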
Free Agent uses a three-tier memory architecture:
Purpose: Track progress, prevent loops, maintain context
- Always visible in the system prompt every iteration
- Stores: current step, completed items, next actions, observations, decisions
- Agent MUST write to blackboard every iteration
Categories:
- `plan` - Current step and progress tracking
- `observation` - What the agent found or learned
- `insight` - Conclusions drawn from observations
- `decision` - Choices made and reasoning
- `error` - Problems encountered
Purpose: Store YOUR SUMMARIES and notes, not raw data dumps
- Contains your summaries, analysis, and extracted insights
- May contain `{{attribute_name}}` references (placeholders, NOT auto-expanded)
- Handlebars are just placeholders - use `read_attribute` to fetch full data
- Read on-demand to preserve context window
- Persists across iterations
Important: `read_scratchpad` does NOT auto-expand handlebar references. It returns:
- Your scratchpad content as-is
- A list of available attributes for `read_attribute`
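Because the handlebars stay unexpanded, a client can extract them to know which names are valid for `read_attribute`. A minimal sketch (the helper name is an assumption, not a real API):

```typescript
// Collect unique {{name}} placeholders from scratchpad text.
function listHandlebarRefs(scratchpad: string): string[] {
  const refs = new Set<string>();
  for (const m of scratchpad.matchAll(/\{\{(\w+)\}\}/g)) {
    refs.add(m[1]); // capture group = attribute name
  }
  return [...refs];
}
```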
Purpose: Token-efficient storage of large tool results
When a tool uses the `saveAs` parameter:

```json
{ "tool": "web_scrape", "params": { "url": "...", "saveAs": "weather_data" } }
```

- The result is stored as an independent attribute
- The agent receives a small confirmation (not the full data)
- A `{{weather_data}}` reference is auto-added to the scratchpad as a placeholder
- Use `read_attribute(["weather_data"])` to retrieve the full content
- After reading, the agent must SUMMARIZE key findings to the scratchpad

Example workflow:

1. Fetch with `saveAs`:
   `{ "tool": "brave_search", "params": { "query": "...", "saveAs": "search_results" } }`
2. Receive confirmation: "Saved to 'search_results'. Use read_attribute..."
3. Read the attribute:
   `{ "tool": "read_attribute", "params": { "names": ["search_results"] } }`
4. SUMMARIZE to the scratchpad:
   `{ "tool": "write_scratchpad", "params": { "content": "## Search Summary\\n- Key finding 1\\n- Key finding 2" } }`
5. Continue working from your summary - don't re-read raw data!
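On the executor side, the `saveAs` path can be sketched like this. The store shape and `handleSaveAs` helper are assumptions for illustration; only the behavior (store raw result, return a short confirmation) comes from the docs.

```typescript
type AttributeStore = Record<string, unknown>;

// Stash the raw tool result as an attribute and hand the agent only a
// short confirmation string, keeping the full payload out of the prompt.
function handleSaveAs(store: AttributeStore, name: string, rawResult: unknown): string {
  store[name] = rawResult;
  const size = JSON.stringify(rawResult).length;
  return `Saved to '${name}' (${size} chars). Use read_attribute to retrieve.`;
}
```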
| Tool | Description |
|---|---|
| `read_blackboard` | Read planning journal entries |
| `write_blackboard` | Add entry to planning journal |
| `read_scratchpad` | Read data storage (NO handlebar expansion - use `read_attribute`) |
| `write_scratchpad` | Save data to persistent storage |
| `read_file` | Read content of session file |
| `read_prompt` | Get original user prompt |
| `read_prompt_files` | List available session files |
| `read_attribute` | Access saved tool results |
| Tool | Edge Function | Description |
|---|---|---|
| `brave_search` | brave-search | Web search via Brave API |
| `google_search` | google-search | Web search via Google API |
| `web_scrape` | web-scrape | Extract content from webpage |
| Tool | Edge Function | Description |
|---|---|---|
| `read_github_repo` | github-fetch | Get repository file tree |
| `read_github_file` | github-fetch | Read specific files from repo |
| Tool | Edge Function | Description |
|---|---|---|
| `pdf_info` | tool_pdf-handler | Get PDF metadata and page count |
| `pdf_extract_text` | tool_pdf-handler | Extract text from PDF |
| `ocr_image` | tool_ocr-handler | OCR text extraction from image |
| `read_zip_contents` | tool_zip-handler | List files in ZIP archive |
| `read_zip_file` | tool_zip-handler | Read specific file from ZIP |
| `extract_zip_files` | tool_zip-handler | Extract files from ZIP |
| Tool | Edge Function | Description |
|---|---|---|
| `send_email` | send-email | Send email via Resend |
| `request_assistance` | (frontend) | Ask user for input |
| Tool | Edge Function | Description |
|---|---|---|
| `image_generation` | run-nano | Generate image from prompt |
| `elevenlabs_tts` | elevenlabs-tts | Text-to-speech synthesis |
| Tool | Edge Function | Description |
|---|---|---|
| `get_call_api` | api-call | HTTP GET request |
| `post_call_api` | api-call | HTTP POST request |
| `execute_sql` | external-db | Execute SQL on external database |
| Tool | Edge Function | Description |
|---|---|---|
| `get_time` | time | Get current date/time |
| `get_weather` | tool_weather | Get weather for location |
| Tool | Description |
|---|---|
| `export_word` | Create Word document artifact |
| `export_pdf` | Create PDF document artifact |
┌─────────────────────────────────────────────────────────────────┐
│ USER ENTERS PROMPT │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ SESSION INITIALIZED (iteration = 1) │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌────────────────────────────────────────────┐
│ ITERATION LOOP START │
└────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ 1. BUILD SYSTEM PROMPT │
│ - Include blackboard (always) │
│ - Include scratchpad preview │
│ - Include previous tool results │
│ - Include session files │
│ - Include assistance response (if any) │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ 2. CALL LLM (Gemini/Claude/Grok) │
│ - Provider-specific formatting │
│ - JSON mode enforcement │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ 3. PARSE RESPONSE │
│ - Extract: reasoning, tool_calls, blackboard_entry, │
│ status, message_to_user, artifacts, final_report │
│ - Handle parsing errors gracefully │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ 4. EXECUTE TOOLS (parallel within iteration) │
│ - Backend tools → Edge functions │
│ - Frontend tools → Local handlers │
│ - Handle saveAs for auto-attribute creation │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ 5. UPDATE MEMORY │
│ - Add blackboard entry │
│ - Update scratchpad │
│ - Store tool result attributes │
│ - Create artifacts │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ 6. CHECK STATUS │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ in_progress │ │ completed │ │ needs_assistance │ │
│ │ → continue │ │ → show report│ │ → show modal │ │
│ └──────────────┘ └──────────────┘ └──────────────────┘ │
│ │
│ ┌──────────────┐ ┌───────────────────────────────────┐ │
│ │ error │ │ max_iterations reached │ │
│ │ → show error │ │ → auto-complete with summary │ │
│ └──────────────┘ └───────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌────────────────────────────────────────────┐
│ INCREMENT ITERATION, LOOP BACK │
└────────────────────────────────────────────┘
Without safeguards, the agent might:
- Re-execute the same search repeatedly
- Forget what it already found
- Never progress to completion
1. **Blackboard Mandatory & Verbose**: Every response MUST include a detailed `blackboard_entry`
   - Tracks completed steps AND key findings/extracted data
   - Agent reads this every iteration
   - Must include: COMPLETED, EXTRACTED/FOUND, NEXT
2. **Tool Results Visibility**: Results are only visible for ONE iteration
   - Must save to scratchpad or use `saveAs`
   - Clear warning in prompt about disappearing results
3. **saveAs Auto-Save**: Data-fetching tools can auto-save results
   - Agent receives confirmation, not full data
   - Reduces token waste and re-fetching
4. **Frontend Tool Cache**: Expensive operations cached for 5 minutes
   - Applies to `read_github_repo`, `read_github_file`, `web_scrape`
   - Identical requests served from cache
5. **Max Iterations**: Hard limit (default 20) prevents runaway loops
   - Auto-generates summary when limit reached
| State | Description | User Actions |
|---|---|---|
| `idle` | Ready for new task | Enter prompt, Start |
| `running` | Executing iteration loop | Pause, Stop |
| `paused` | Temporarily halted | Resume, Stop |
| `needs_assistance` | Waiting for user input | Provide response |
| `completed` | Task finished | View report, Continue, Reset |
| `error` | Execution failed | View error, Reset |
- Continue: Preserves blackboard, scratchpad, artifacts. Allows new task building on previous work.
- Reset: Clears all memory. Fresh start.
The Enhance Prompt feature uses AI to transform a vague user request into a detailed, structured execution plan before the agent starts running.
- Convert informal requests into actionable step-by-step plans
- Identify which tools the agent should use at each phase
- Define clear success criteria and checkpoints
- Anticipate potential challenges
- Estimate the number of iterations needed
1. Enter your task description in the prompt textarea
2. Click "Enhance Prompt" (the wand icon button) above "Start Agent"
3. Review the generated plan in the modal that opens
4. Choose a view:
   - Preview: Rendered markdown view of the plan
   - Edit: Raw text editor for manual modifications
5. Refine (optional): Provide feedback and click "Refine" for AI improvement
6. Accept: Choose between:
   - Accept: Replace prompt and return to panel
   - Accept & Start: Replace prompt and immediately start the agent
| Feature | Description |
|---|---|
| Original Prompt | Read-only display of your initial request |
| Model Indicator | Shows which AI model will generate the plan |
| Preview Tab | Formatted markdown rendering of the plan |
| Edit Tab | Raw text editor for manual changes |
| Feedback Input | Optional field to provide refinement instructions |
| Refine Button | Re-generates plan incorporating your feedback |
| Start Over | Regenerates from scratch |
The enhanced prompt follows a consistent structure:
```markdown
## Goal
Clear restatement of what needs to be accomplished.

## Strategy
High-level approach to solving the problem.

## Execution Plan

### Phase 1: [Name]
- **Tools**: [which tools to use]
- **Actions**: [specific steps]
- **Store**: [what to save to blackboard/scratchpad]
- **Expected Output**: [what this phase produces]

### Phase 2: [Name]
...

## Success Criteria
- [How to know the task is complete]
- [Quality checks to perform]

## Potential Challenges
- [Possible issues and how to handle them]

## Estimated Iterations: [number]
```

The enhancement AI receives:
- Your original prompt
- Complete list of available tools with descriptions and parameters
- Metadata about uploaded files (names, types, sizes)
- The selected model's capabilities
You can refine the plan multiple times:
1. Review the generated plan
2. Identify areas that need improvement
3. Enter feedback like:
   - "Focus more on error handling"
   - "Add a verification step after data collection"
   - "Use brave_search instead of google_search"
   - "Include a step to save intermediate results"
4. Click "Refine" to regenerate with your feedback incorporated
Recommended for:
- Complex multi-step tasks
- Research and analysis projects
- Tasks requiring multiple tool integrations
- When you're unsure of the best approach
- Long-running autonomous sessions
Skip for:
- Simple single-tool operations
- Tasks you've done before with known steps
- Quick queries or lookups
The FreeAgentCanvas provides visual feedback of the agent's state:
┌──────────────┐
│ Read Tools │ (brave_search, web_scrape, etc.)
│ ┌──┐ ┌──┐ │
│ │ │ │ │ │
└───┴──┴─┴──┴──┘
│
▼ (blue connection)
┌─────────┐ ┌─────────────┐ ┌────────────┐
│ Prompt │────────▶│ AGENT │────────▶│ Scratchpad │
│ Node │ │ (Center) │ │ Node │──────▶ Attribute Nodes
└─────────┘ └─────────────┘ └────────────┘
│ │ │
File Nodes │ Artifact Nodes
▼ (amber connection)
┌──────────────┐
│ Write Tools │ (send_email, export_pdf, etc.)
│ ┌──┐ ┌──┐ │
│ │ │ │ │ │
└───┴──┴─┴──┴──┘
- Agent Node: Center, shows current status with pulsing animation when active
- Tool Nodes: Above (read) and below (write), color-coded by status
- Prompt Node: Left side, shows user's task
- File Nodes: Below prompt, session files
- Scratchpad Node: Right side, shows memory content
- Attribute Nodes: Far right, individual saved tool results
- Artifact Nodes: Below scratchpad, generated outputs
The Raw tab provides complete visibility into the iteration:
- System Prompt: Full prompt sent to LLM including memory state
- User Prompt: Original task description
- Full Prompt: Combined system + user prompt
- Raw Response: Exact LLM output (JSON)
- Parse Errors: If parsing failed, shows error details and problematic text
- Tool Results: Each tool call with success/failure status and result data
Free Agent supports three LLM providers:
**Gemini**
- Models: `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-3-pro-preview`, `gemini-3-flash-preview`
- JSON Mode: `responseMimeType: "application/json"`
- API Key: `GEMINI_API_KEY`

**Claude**
- Models: `claude-sonnet-4-5`, `claude-haiku-4-5`, `claude-opus-4-5`
- JSON Mode: Forced tool use with `respond_with_actions` tool
- API Key: `ANTHROPIC_API_KEY`

**Grok**
- Models: `grok-4-1-fast-reasoning`, `grok-4-1-fast-non-reasoning`, `grok-code-fast-1`
- JSON Mode: OpenAI-compatible `response_format` with JSON schema
- API Key: `XAI_API_KEY`
All models use the same response schema for consistency.
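The three JSON-mode mechanisms could be selected per provider roughly as below. The exact request shapes vary by SDK version, so treat these as illustrative approximations rather than verified payloads.

```typescript
type Provider = "gemini" | "claude" | "grok";

function jsonModeConfig(provider: Provider, schema: object): object {
  switch (provider) {
    case "gemini":
      // Gemini: responseMimeType asks for JSON output
      return { generationConfig: { responseMimeType: "application/json" } };
    case "claude":
      // Claude: force a specific tool call so arguments arrive as structured JSON
      return { tool_choice: { type: "tool", name: "respond_with_actions" } };
    case "grok":
      // Grok: OpenAI-compatible response_format with a JSON schema
      return {
        response_format: {
          type: "json_schema",
          json_schema: { name: "agent_response", schema },
        },
      };
  }
}
```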
Every LLM response must conform to:
```json
{
  "reasoning": "Agent's thought process (required)",
  "tool_calls": [
    {
      "tool": "tool_name",
      "params": { ... }
    }
  ],
  "blackboard_entry": {
    "category": "plan|observation|insight|decision|error",
    "content": "What happened this iteration (required)"
  },
  "status": "in_progress|completed|needs_assistance|error",
  "message_to_user": "Optional progress update",
  "artifacts": [
    {
      "type": "text|file|image|data",
      "title": "Artifact title",
      "content": "Artifact content",
      "description": "Optional description"
    }
  ],
  "final_report": {
    "summary": "Task completion summary",
    "tools_used": ["tool1", "tool2"],
    "artifacts_created": ["artifact1"],
    "key_findings": ["finding1", "finding2"]
  }
}
```

Parsing fallback chain:

1. Attempt direct JSON parse
2. Sanitize control characters and retry
3. Extract JSON object via regex
4. Salvage the `reasoning` field for user feedback
5. Return raw response for debugging
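That salvage chain might be sketched as follows; `parseAgentResponse` and its return shape are illustrative names, not the actual implementation.

```typescript
function parseAgentResponse(raw: string): { ok: boolean; data?: any; salvaged?: string } {
  // 1. Attempt direct JSON parse
  try { return { ok: true, data: JSON.parse(raw) }; } catch {}
  // 2. Strip control characters and retry
  const cleaned = raw.replace(/[\u0000-\u001f]+/g, " ");
  try { return { ok: true, data: JSON.parse(cleaned) }; } catch {}
  // 3. Extract the outermost {...} object via regex
  const m = cleaned.match(/\{[\s\S]*\}/);
  if (m) {
    try { return { ok: true, data: JSON.parse(m[0]) }; } catch {}
  }
  // 4. Salvage the reasoning field so the user still sees something
  const r = cleaned.match(/"reasoning"\s*:\s*"((?:[^"\\]|\\.)*)"/);
  // 5. Caller gets the raw response back for debugging either way
  return { ok: false, salvaged: r?.[1] };
}
```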
Tool execution errors are handled per-call:
- Logged to console
- Returned in the toolResults array
- Agent can react in the next iteration

Rate-limit and billing errors:
- 429 errors surface to the user
- 402 (payment required) handled gracefully
To prevent race conditions between iterations:
```typescript
// In useFreeAgentSession.ts
const blackboardRef = useRef<BlackboardEntry[]>([]);
const scratchpadRef = useRef<string>("");
const toolResultAttributesRef = useRef<Record<string, ToolResultAttribute>>({});
```

These refs are updated immediately on tool execution, ensuring the next iteration's prompt always has current data (bypassing React state update delays).
- `GEMINI_API_KEY` - For Gemini models
- `ANTHROPIC_API_KEY` - For Claude models
- `XAI_API_KEY` - For Grok models
- `BRAVE_API_KEY` - For Brave Search
- `GOOGLE_SEARCH_API` / `GOOGLE_SEARCH_ENGINE` - For Google Search
- `GITHUB_TOKEN` - For GitHub operations
- `RESEND_API_KEY` - For email sending
- `ELEVENLABS_API_KEY` - For text-to-speech
public/data/toolsManifest.json
Defines all available tools with:
- Display name and description
- Parameter schema
- Category grouping
- Icon assignment
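A hypothetical manifest entry is shown below; the field names are illustrative guesses at the schema, not the actual format of `toolsManifest.json`.

```json
{
  "name": "brave_search",
  "displayName": "Brave Search",
  "description": "Web search via Brave API",
  "category": "Web",
  "icon": "search",
  "parameters": {
    "query": { "type": "string", "required": true },
    "saveAs": { "type": "string", "required": false }
  }
}
```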
Free Agent supports full customization of the system prompt, allowing users to modify agent behavior, test different configurations, and create reusable prompt templates.
Access the Prompt tab in Free Agent view to see and modify:
- System Sections - Core agent instructions (read-only)
- Customizable Sections - Editable parts of the prompt
- Runtime Sections - Dynamic placeholders populated at execution
- Tools - Tool definitions with editable descriptions
- Response Schemas - Provider-specific JSON schemas (view-only)
| Type | Badge Color | Editable | Description |
|---|---|---|---|
| System | Red | No | Core agent identity and critical rules |
| Customizable | Blue | Yes | Workflow, anti-loop rules, data handling |
| Runtime | Purple | No | Dynamic placeholders ({{TOOLS_LIST}}, etc.) |
| Custom | Green | Yes | User-added sections |
For sections marked as "Customizable":
- Click the Edit button on any customizable section
- Modify the content in the textarea
- Click Save to persist changes
- Click Reset to restore original content
Changes are automatically saved to localStorage and persist across sessions.
Visual Indicators:
- Modified badge appears on edited sections
- Reordered badge shows sections moved from default position
- Custom badge identifies user-created sections
- Click Add Section at the top of the Prompt tab
- Enter a unique title and description
- Write your custom content
- Click Add Section to save
Custom sections can be:
- Edited at any time
- Reordered using up/down arrows
- Deleted when no longer needed
Use the up/down arrows on any section to change its position in the prompt. This affects the order in which instructions appear to the LLM.
- System sections can be moved relative to each other
- Custom sections can be placed anywhere in the order
- Original order can be restored with Reset All
These read-only sections show where dynamic content is injected at execution time:
| Placeholder | Description |
|---|---|
| `{{TOOLS_LIST}}` | Available tools formatted for the LLM |
| `{{SESSION_FILES}}` | List of attached files |
| `{{BLACKBOARD_CONTENT}}` | Current planning journal entries |
| `{{SCRATCHPAD_CONTENT}}` | Current scratchpad data |
| `{{PREVIOUS_RESULTS}}` | Tool results from last iteration |
| `{{ASSISTANCE_RESPONSE}}` | User response to assistance request |
These placeholders help you understand where runtime data appears in the final prompt sent to the LLM.
The Tools tab displays all available tools organized by category:
Features:
- Search/filter tools by name
- View tool parameters, types, and requirements
- See edge function or frontend handler mapping
- Edit tool descriptions to customize LLM behavior
Editing Tool Descriptions:
- Click Edit on any tool
- Modify the description
- Click Save to persist
Custom descriptions appear in {{TOOLS_LIST}} and help guide the LLM's understanding of when and how to use each tool.
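Applying description overrides when rendering `{{TOOLS_LIST}}` might look like this; the `ToolDef`/`ToolOverrides` shapes and `renderToolsList` name are assumptions for illustration.

```typescript
type ToolDef = { name: string; description: string };
type ToolOverrides = Record<string, { description?: string }>;

// Prefer a user-edited description; fall back to the manifest default.
function renderToolsList(tools: ToolDef[], overrides: ToolOverrides): string {
  return tools
    .map((t) => `- ${t.name}: ${overrides[t.name]?.description ?? t.description}`)
    .join("\n");
}
```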
Tool Categories:
- Memory (read/write blackboard, scratchpad, attributes)
- Web (search, scrape)
- Code (GitHub operations)
- File (read files, ZIP handling)
- Document (PDF, OCR)
- Utility (time, weather)
- Communication (email, assistance)
- Export (Word, PDF generation)
- API (HTTP calls, SQL)
Tools can belong to multiple categories for easier discovery.
View the JSON schemas used to enforce structured responses from each LLM provider:
- Gemini: Uses `responseMimeType: "application/json"` with schema
- Claude: Uses forced tool call with `respond_with_actions` tool
- Grok: Uses OpenAI-compatible `response_format`
These schemas ensure consistent response structure across providers and cannot be edited.
Exporting:
- Click the Export button in the Prompt tab header
- A JSON file downloads containing:
- All section customizations
- Custom sections you've added
- Order overrides
- Tool description overrides
Importing:
- Click the Import button
- Select a previously exported JSON file
- All customizations are restored
Export Format (v1.0):
```json
{
  "formatVersion": "1.0",
  "exportedAt": "2026-01-04T...",
  "template": {
    "id": "freeagent-default",
    "name": "FreeAgent Default",
    "sections": [...],
    "responseSchemas": [...],
    "tools": [...]
  },
  "customizations": {
    "sectionOverrides": {
      "section_id": "custom content..."
    },
    "additionalSections": [
      {
        "id": "custom_1",
        "title": "My Custom Rules",
        "content": "...",
        "order": 5.5
      }
    ],
    "orderOverrides": {
      "section_id": 3.5
    },
    "toolOverrides": {
      "brave_search": {
        "description": "Custom search behavior..."
      }
    }
  }
}
```

- Reset Section: Restore individual section to default
- Reset All: Clear all customizations and restore factory defaults
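A sketch of how the customizations in the export format might be merged back into a final section order. The field names mirror the export JSON; the merge logic itself is an assumption about how the console applies it.

```typescript
type Section = { id: string; title: string; content: string; order: number };
type Customizations = {
  sectionOverrides?: Record<string, string>;
  additionalSections?: Section[];
  orderOverrides?: Record<string, number>;
};

function assembleSections(base: Section[], c: Customizations): Section[] {
  // Apply content and order overrides to base sections, then mix in
  // user-added sections and sort by (possibly fractional) order.
  const merged = base.map((s) => ({
    ...s,
    content: c.sectionOverrides?.[s.id] ?? s.content,
    order: c.orderOverrides?.[s.id] ?? s.order,
  }));
  return [...merged, ...(c.additionalSections ?? [])].sort((a, b) => a.order - b.order);
}
```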
Testing Different Prompting Strategies:
- Modify anti-loop rules to test different behaviors
- Add custom sections with specific constraints
- Change tool descriptions to encourage certain patterns
Creating Specialized Agents:
- Export a base configuration
- Create variations for different tasks (research, coding, writing)
- Import the appropriate template before running
Debugging Agent Behavior:
- View the Raw tab to see the assembled prompt
- Compare against your customizations
- Identify which instructions affect behavior
Sharing Configurations:
- Export your optimized template
- Share JSON file with team members
- Import to replicate exact agent behavior
- Be specific in your prompt
- Attach relevant files upfront
- Use Continue to build on previous work
- Check Raw tab when debugging issues
- Export templates before making major changes
- Use custom sections for task-specific instructions
- Always use `saveAs` for data-fetching tools
- Write to blackboard every iteration
- Summarize data, don't copy raw JSON
- Check blackboard before re-executing tools
- Use `read_attribute` to access saved results
- Test prompt changes with small iterations first
- Start with small edits to customizable sections
- Use the Raw tab to verify changes appear correctly
- Export working configurations before experimenting
- Add custom sections for new behaviors rather than modifying core sections
- Use descriptive titles for custom sections
- Document your changes in section descriptions