2 changes: 1 addition & 1 deletion package.json
Original file line number Diff line number Diff line change
@@ -72,7 +72,7 @@
},
"dependencies": {
"@inquirer/prompts": "^8.2.1",
"@mendable/firecrawl-js": "4.17.0",
"@mendable/firecrawl-js": "4.22.2",
"commander": "^14.0.2"
}
}
29 changes: 15 additions & 14 deletions pnpm-lock.yaml

Some generated files are not rendered by default.

24 changes: 24 additions & 0 deletions skills/firecrawl-cli/SKILL.md
@@ -63,6 +63,7 @@ Follow this escalation pattern:
| Interact with a page | `scrape` + `interact` | Content requires clicks, form fills, pagination, or login |
| Download a site to files | `download` | Save an entire site as local files |
| Parse a local file | `parse` | File on disk (PDF, DOCX, XLSX, etc.) — not a URL |
| Watch pages for changes | `monitor` | Schedule recurring scrapes/crawls, diff against snapshots |

For detailed command reference, run `firecrawl <command> --help`.

@@ -72,6 +73,29 @@ For detailed command reference, run `firecrawl <command> --help`.
- Use `scrape` + `interact` when you need to interact with a page: clicking buttons, filling out forms, handling infinite scroll, navigating a complex site, or when `scrape` alone fails to capture all the content you need.
- Never use `interact` for web searches; use `search` instead.

**Monitor:** Schedule recurring scrapes or crawls and diff each result against the last retained snapshot. Use for product pages, docs, blogs, changelogs, competitor sites — any page where changes matter. Each check labels pages as `same`, `new`, `changed`, `removed`, or `error`, with webhook and email notification options.

Subcommands: `create | list | get | update | delete | run | checks | check`.

```bash
# create from flags
firecrawl monitor create --name "Blog" --schedule "every 30 minutes" \
--scrape-urls https://example.com/blog --email alerts@example.com

# or from JSON (positional file, or piped stdin)
firecrawl monitor create monitor.json
cat monitor.json | firecrawl monitor create

firecrawl monitor list --limit 20
firecrawl monitor run <monitorId> # trigger a check now
firecrawl monitor checks <monitorId> # list checks
firecrawl monitor check <monitorId> <checkId> --page-status changed
firecrawl monitor update <monitorId> --state paused
firecrawl monitor delete <monitorId>
```

- Schedules accept cron (`--cron "*/30 * * * *"`) or natural language (`--schedule "every 30 minutes"`); the minimum interval is 15 minutes.
- Targets are either `--scrape-urls a,b,c` (scrape specific pages) or `--crawl-url <url>` (crawl the whole site each check).
- Use `--state` (not `--status`) to set active/paused, and `--page-status` (not `--status`) to filter page results on `check`; this avoids a collision with the global `--status` flag.
- Monitoring is not available for zero-data-retention teams.
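
The schedule and target flags described above can be combined; a minimal sketch using the cron and crawl-target forms (the URL, name, and email address are placeholders, not real endpoints):

```shell
# cron schedule instead of natural language, whole-site crawl instead of
# individual --scrape-urls; every flag here is documented above
firecrawl monitor create --name "Docs site" --cron "*/30 * * * *" \
  --crawl-url https://example.com/docs --email alerts@example.com

# later: pause it without deleting it (--state, not --status)
firecrawl monitor update <monitorId> --state paused
```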

**Avoid redundant fetches:**

- `search --scrape` already fetches full page content. Don't re-scrape those URLs.