Merged
24 changes: 12 additions & 12 deletions README.md
Original file line number Diff line number Diff line change
@@ -465,7 +465,9 @@ firecrawl agent abc123-def456-... --wait --poll-interval 10

---

### `browser` - Browser sandbox sessions (Beta)
### `browser` - Browser sandbox sessions (Deprecated)

> **Deprecated:** Use `scrape` + `interact` instead. `interact` lets you scrape a page and then click, fill forms, and navigate without managing sessions manually. See the `interact` command.

Launch and control cloud browser sessions. By default, commands are sent to agent-browser (pre-installed in every sandbox). Use `--python` or `--node` to run Playwright code directly instead.

@@ -866,24 +868,22 @@ Add `-y` to any command to auto-approve tool permissions (maps to `--dangerously

### Live View

Use `firecrawl browser launch --json` to get a live view URL, then pass it to your agent so you can watch it work in real-time:
Use `firecrawl scrape <url>` + `firecrawl interact` to interact with pages. For advanced use cases requiring a raw CDP session, you can still use `firecrawl browser launch --json` to get a live view URL:

```bash
# Launch a browser session and grab the live view URL
# Preferred: scrape + interact workflow
firecrawl scrape https://myapp.com
firecrawl interact --prompt "Click on the login button and fill in the form"

# Advanced: Launch a browser session and grab the live view URL
LIVE_URL=$(firecrawl browser launch --json | jq -r '.liveViewUrl')

# Pass it to Claude Code
claude --append-system-prompt "A cloud browser session is running. Live view: $LIVE_URL -- use \`firecrawl browser\` commands to interact." \
claude --append-system-prompt "A cloud browser session is running. Live view: $LIVE_URL -- use \`firecrawl interact\` to interact with scraped pages." \
--dangerously-skip-permissions \
"QA test https://myapp.com using the cloud browser"

# Pass it to Codex
codex --full-auto \
--config "instructions=A cloud browser session is running. Live view: $LIVE_URL -- use \`firecrawl browser\` commands to interact." \
"walk through the signup flow on https://example.com"
"QA test https://myapp.com"

# Or use the built-in workflow commands (session is auto-saved for firecrawl browser)
firecrawl browser launch --json | jq -r '.liveViewUrl'
# Or use the built-in workflow commands
firecrawl claude demo https://resend.com
```
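The `jq` filter in the launch step can be sanity-checked against a sample payload. The field names below are an assumption inferred from the command shown, not confirmed `launch --json` output:

```shell
# Hypothetical payload shaped like the assumed `launch --json` output.
sample='{"sessionId":"abc123","liveViewUrl":"https://live.example/abc123"}'

# -r emits the raw string (no JSON quotes), safe to splice into a prompt.
LIVE_URL=$(printf '%s' "$sample" | jq -r '.liveViewUrl')
echo "$LIVE_URL"
# prints https://live.example/abc123
```

The same one-liner works on whatever JSON the real command emits, as long as the key matches.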

2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "firecrawl-cli",
"version": "1.11.2",
"version": "1.12.0",
"description": "Command-line interface for Firecrawl. Scrape, crawl, and extract data from any website directly from your terminal.",
"main": "dist/index.js",
"bin": {
2 changes: 1 addition & 1 deletion skills/firecrawl-agent/SKILL.md
@@ -53,5 +53,5 @@ firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.
## See also

- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — simpler single-page extraction
- [firecrawl-browser](../firecrawl-browser/SKILL.md) — manual browser automation (more control)
- [firecrawl-browser](../firecrawl-browser/SKILL.md) — scrape + interact for manual page interaction (more control)
- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extraction without AI
102 changes: 40 additions & 62 deletions skills/firecrawl-browser/SKILL.md
@@ -1,107 +1,85 @@
---
name: firecrawl-browser
description: |
Cloud browser automation for pages requiring interaction — clicks, form fills, login, pagination, infinite scroll. Use this skill when the user needs to interact with a webpage, log into a site, click buttons, fill forms, navigate multi-step flows, handle pagination, or when regular scraping fails because content requires JavaScript interaction. Triggers on "click", "fill out the form", "log in to", "paginated", "infinite scroll", "interact with the page", or "scrape failed". Provides remote Chromium sessions with persistent profiles.
DEPRECATED — use scrape + interact instead. Interact lets you scrape a page and then click, fill forms, and navigate without managing sessions manually. Use this skill when the user needs to interact with a webpage, log into a site, click buttons, fill forms, navigate multi-step flows, handle pagination, or when regular scraping fails because content requires JavaScript interaction. Triggers on "click", "fill out the form", "log in to", "paginated", "infinite scroll", "interact with the page", or "scrape failed".
allowed-tools:
- Bash(firecrawl *)
- Bash(npx firecrawl *)
---

# firecrawl browser
# firecrawl interact (formerly browser)

Cloud Chromium sessions in Firecrawl's remote sandboxed environment. Interact with pages that require clicks, form fills, pagination, or login.
> **The `browser` command is deprecated.** Use `scrape` + `interact` instead. Interact lets you scrape a page and then click, fill forms, and navigate without managing sessions manually.

Interact with scraped pages in a live browser session. Scrape a page first, then use natural language prompts or code to click, fill forms, navigate, and extract data.

## When to use

- Content requires interaction: clicks, form fills, pagination, login
- `scrape` failed because content is behind JavaScript interaction
- You need to navigate a multi-step flow
- Last resort in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → crawl → **browser**
- **Never use browser for web searches** — use `search` instead
- Last resort in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → crawl → **interact**
- **Never use interact for web searches** — use `search` instead

## Quick start

```bash
# Typical browser workflow
firecrawl browser "open <url>"
firecrawl browser "snapshot -i" # see interactive elements with @ref IDs
firecrawl browser "click @e5" # interact with elements
firecrawl browser "fill @e3 'search query'" # fill form fields
firecrawl browser "scrape" -o .firecrawl/page.md # extract content
firecrawl browser close
```
# 1. Scrape a page (scrape ID is saved automatically)
firecrawl scrape "<url>"

Shorthand auto-launches a session if none exists — no setup required.
# 2. Interact with the page using natural language
firecrawl interact --prompt "Click the login button"
firecrawl interact --prompt "Fill in the email field with test@example.com"
firecrawl interact --prompt "Extract the pricing table"

## Commands
# 3. Or use code for precise control
firecrawl interact --code "agent-browser click @e5" --language bash
firecrawl interact --code "agent-browser snapshot -i" --language bash

| Command | Description |
| -------------------- | ---------------------------------------- |
| `open <url>` | Navigate to a URL |
| `snapshot -i` | Get interactive elements with `@ref` IDs |
| `screenshot` | Capture a PNG screenshot |
| `click <@ref>` | Click an element by ref |
| `type <@ref> <text>` | Type into an element |
| `fill <@ref> <text>` | Fill a form field (clears first) |
| `scrape` | Extract page content as markdown |
| `scroll <direction>` | Scroll up/down/left/right |
| `wait <seconds>` | Wait for a duration |
| `eval <js>` | Evaluate JavaScript on the page |

Session management: `launch-session --ttl 600`, `list`, `close`
# 4. Stop the session when done
firecrawl interact stop
```

## Options

| Option | Description |
| ---------------------------- | -------------------------------------------------- |
| `--ttl <seconds>` | Session time-to-live |
| `--ttl-inactivity <seconds>` | Inactivity timeout |
| `--session <id>` | Use a specific session ID |
| `--profile <name>` | Use a named profile (persists state) |
| `--no-save-changes` | Read-only reconnect (don't write to session state) |
| `-o, --output <path>` | Output file path |
| Option | Description |
| --------------------- | ------------------------------------------------- |
| `--prompt <text>` | Natural language instruction (use this OR --code) |
| `--code <code>` | Code to execute in the browser session |
| `--language <lang>` | Language for code: bash, python, node |
| `--timeout <seconds>` | Execution timeout (default: 30, max: 300) |
| `--scrape-id <id>` | Target a specific scrape (default: last scrape) |
| `-o, --output <path>` | Output file path |

## Profiles

Profiles survive close and can be reconnected by name. Use them for login-then-work flows:
Use `--profile` on the scrape to persist browser state (cookies, localStorage) across scrapes:

```bash
# Session 1: Login and save state
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/login"
firecrawl browser "snapshot -i"
firecrawl browser "fill @e3 'user@example.com'"
firecrawl browser "click @e7"
firecrawl browser "wait 2"
firecrawl browser close
firecrawl scrape "https://app.example.com/login" --profile my-app
firecrawl interact --prompt "Fill in email with user@example.com and click login"

# Session 2: Come back authenticated
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/dashboard"
firecrawl browser "scrape" -o .firecrawl/dashboard.md
firecrawl browser close
```

Read-only reconnect (no writes to session state):

```bash
firecrawl browser launch-session --profile my-app --no-save-changes
firecrawl scrape "https://app.example.com/dashboard" --profile my-app
firecrawl interact --prompt "Extract the dashboard data"
```

Shorthand with profile:
Read-only reconnect (no writes to profile state):

```bash
firecrawl browser --profile my-app "open https://example.com"
firecrawl scrape "https://app.example.com" --profile my-app --no-save-changes
```

## Tips

- If you get forbidden errors, the session may have expired — create a new one.
- For parallel browser work, launch separate sessions and operate them via `--session <id>`.
- Always `close` sessions when done to free resources.
- Always scrape first — `interact` requires a scrape ID from a previous `firecrawl scrape` call
- The scrape ID is saved automatically, so you don't need `--scrape-id` for subsequent interact calls
- Use `firecrawl interact stop` to free resources when done
- For parallel work, scrape multiple pages and interact with each using `--scrape-id`

## See also

- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — try scrape first, escalate to browser only when needed
- [firecrawl-search](../firecrawl-search/SKILL.md) — for web searches (never use browser for searching)
- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — try scrape first, escalate to interact only when needed
- [firecrawl-search](../firecrawl-search/SKILL.md) — for web searches (never use interact for searching)
- [firecrawl-agent](../firecrawl-agent/SKILL.md) — AI-powered extraction (less manual control)
34 changes: 17 additions & 17 deletions skills/firecrawl-cli/SKILL.md
@@ -1,15 +1,15 @@
---
name: firecrawl
description: |
Web scraping, search, crawling, and browser automation via the Firecrawl CLI. Use this skill whenever the user wants to search the web, find articles, research a topic, look something up online, scrape a webpage, grab content from a URL, extract data from a website, crawl documentation, download a site, or interact with pages that need clicks or logins. Also use when they say "fetch this page", "pull the content from", "get the page at https://", or reference scraping external websites. This provides real-time web search with full page content extraction and cloud browser automation — capabilities beyond what Claude can do natively with built-in tools. Do NOT trigger for local file operations, git commands, deployments, or code editing tasks.
Web scraping, search, crawling, and page interaction via the Firecrawl CLI. Use this skill whenever the user wants to search the web, find articles, research a topic, look something up online, scrape a webpage, grab content from a URL, extract data from a website, crawl documentation, download a site, or interact with pages that need clicks or logins. Also use when they say "fetch this page", "pull the content from", "get the page at https://", or reference scraping external websites. This provides real-time web search with full page content extraction and interact capabilities — beyond what Claude can do natively with built-in tools. Do NOT trigger for local file operations, git commands, deployments, or code editing tasks.
allowed-tools:
- Bash(firecrawl *)
- Bash(npx firecrawl *)
---

# Firecrawl CLI

Web scraping, search, and browser automation CLI. Returns clean markdown optimized for LLM context windows.
Web scraping, search, and page interaction CLI. Returns clean markdown optimized for LLM context windows.

Run `firecrawl --help` or `firecrawl <command> --help` for full option details.

@@ -42,25 +42,25 @@ Follow this escalation pattern:
2. **Scrape** - Have a URL. Extract its content directly.
3. **Map + Scrape** - Large site or need a specific subpage. Use `map --search` to find the right URL, then scrape it.
4. **Crawl** - Need bulk content from an entire site section (e.g., all /docs/).
5. **Browser** - Scrape failed because content is behind interaction (pagination, modals, form submissions, multi-step navigation).
5. **Interact** - Scrape first, then interact with the page (pagination, modals, form submissions, multi-step navigation).

| Need | Command | When |
| --------------------------- | ---------- | --------------------------------------------------------- |
| Find pages on a topic | `search` | No specific URL yet |
| Get a page's content | `scrape` | Have a URL, page is static or JS-rendered |
| Find URLs within a site | `map` | Need to locate a specific subpage |
| Bulk extract a site section | `crawl` | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | `agent` | Need structured data from complex sites |
| Interact with a page | `browser` | Content requires clicks, form fills, pagination, or login |
| Download a site to files | `download` | Save an entire site as local files |
| Need | Command | When |
| --------------------------- | --------------------- | --------------------------------------------------------- |
| Find pages on a topic | `search` | No specific URL yet |
| Get a page's content | `scrape` | Have a URL, page is static or JS-rendered |
| Find URLs within a site | `map` | Need to locate a specific subpage |
| Bulk extract a site section | `crawl` | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | `agent` | Need structured data from complex sites |
| Interact with a page | `scrape` + `interact` | Content requires clicks, form fills, pagination, or login |
| Download a site to files | `download` | Save an entire site as local files |

For detailed command reference, use the individual skill for each command (e.g., `firecrawl-search`, `firecrawl-browser`) or run `firecrawl <command> --help`.
For detailed command reference, run `firecrawl <command> --help`.

**Scrape vs browser:**
**Scrape vs interact:**

- Use `scrape` first. It handles static pages and JS-rendered SPAs.
- Use `browser` when you need to interact with a page, such as clicking buttons, filling out forms, navigating through a complex site, infinite scroll, or when scrape fails to grab all the content you need.
- Never use browser for web searches - use `search` instead.
- Use `scrape` + `interact` when you need to interact with a page: clicking buttons, filling out forms, navigating a complex site, handling infinite scroll, or when scrape fails to grab all the content you need.
- Never use interact for web searches - use `search` instead.

**Avoid redundant fetches:**

@@ -116,7 +116,7 @@ firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait
```
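The `&` / `wait` pattern above is plain shell fan-out/fan-in. With `sleep` standing in for the scrape calls, the concurrency is easy to see:

```shell
start=$SECONDS
# Launch three jobs in the background, then block until all finish.
sleep 1 &
sleep 1 &
sleep 1 &
wait
# Wall time is ~1s rather than 3s because the jobs ran concurrently.
echo "elapsed: $((SECONDS - start))s"
```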

For browser, launch separate sessions for independent tasks and operate them in parallel via `--session <id>`.
For interact, scrape multiple pages and interact with each independently using their scrape IDs.

## Credit Usage

2 changes: 1 addition & 1 deletion skills/firecrawl-crawl/SKILL.md
@@ -15,7 +15,7 @@ Bulk extract content from a website. Crawls pages following links up to a depth/

- You need content from many pages on a site (e.g., all `/docs/`)
- You want to extract an entire site section
- Step 4 in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → **crawl** → browser
- Step 4 in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → **crawl** → interact

## Quick start

2 changes: 1 addition & 1 deletion skills/firecrawl-map/SKILL.md
@@ -15,7 +15,7 @@ Discover URLs on a site. Use `--search` to find a specific page within a large s

- You need to find a specific subpage on a large site
- You want a list of all URLs on a site before scraping or crawling
- Step 3 in the [workflow escalation pattern](firecrawl-cli): search → scrape → **map** → crawl → browser
- Step 3 in the [workflow escalation pattern](firecrawl-cli): search → scrape → **map** → crawl → interact

## Quick start

6 changes: 3 additions & 3 deletions skills/firecrawl-scrape/SKILL.md
@@ -15,7 +15,7 @@ Scrape one or more URLs. Returns clean, LLM-optimized markdown. Multiple URLs ar

- You have a specific URL and want its content
- The page is static or JS-rendered (SPA)
- Step 2 in the [workflow escalation pattern](firecrawl-cli): search → **scrape** → map → crawl → browser
- Step 2 in the [workflow escalation pattern](firecrawl-cli): search → **scrape** → map → crawl → interact

## Quick start

@@ -55,7 +55,7 @@ firecrawl scrape "https://example.com/pricing" --query "What is the enterprise p
## Tips

- **Prefer plain scrape over `--query`.** Scrape to a file, then use `grep`, `head`, or read the markdown directly — you can search and reason over the full content yourself. Use `--query` only when you want a single targeted answer without saving the page (costs 5 extra credits).
- **Try scrape before browser.** Scrape handles static pages and JS-rendered SPAs. Only escalate to browser when you need interaction (clicks, form fills, pagination).
- **Try scrape before interact.** Scrape handles static pages and JS-rendered SPAs. Only escalate to `interact` when you need interaction (clicks, form fills, pagination).
- Multiple URLs are scraped concurrently — check `firecrawl --status` for your concurrency limit.
- Single format outputs raw content. Multiple formats (e.g., `--format markdown,links`) output JSON.
- Always quote URLs — shell interprets `?` and `&` as special characters.
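To see why the quoting tip matters, count how many arguments the shell actually produces. Quoted, the URL stays one argument; unquoted, `&` and `?` would be interpreted by the shell before the CLI ever saw them:

```shell
url='https://example.com/search?q=a&page=2'

# Quoted: the full URL survives as a single argument.
set -- "$url"
echo "argc=$# first=$1"
# prints argc=1 first=https://example.com/search?q=a&page=2
```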
@@ -64,5 +64,5 @@ firecrawl scrape "https://example.com/pricing" --query "What is the enterprise p
## See also

- [firecrawl-search](../firecrawl-search/SKILL.md) — find pages when you don't have a URL
- [firecrawl-browser](../firecrawl-browser/SKILL.md) — when scrape can't get the content (interaction needed)
- [firecrawl-browser](../firecrawl-browser/SKILL.md) — when scrape can't get the content, use `interact` to click, fill forms, etc.
- [firecrawl-download](../firecrawl-download/SKILL.md) — bulk download an entire site to local files