# firecrawl-crawl

## Install

Clone the upstream repo:

```sh
git clone https://github.com/firecrawl/cli
```

Claude Code: install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/firecrawl/cli "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/firecrawl-crawl" ~/.claude/skills/firecrawl-cli-firecrawl-crawl && rm -rf "$T"
```

Manifest: `skills/firecrawl-crawl/SKILL.md`
# firecrawl crawl

Bulk extract content from a website. Crawls pages following links up to a depth/limit.
## When to use

- You need content from many pages on a site (e.g., all `/docs/` pages)
- You want to extract an entire site section
- Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact
## Quick start

```sh
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```
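The crawl output written by `-o` is JSON, so it can be post-processed with standard tools. A minimal sketch with `jq`, assuming the output holds a top-level `data` array whose entries carry a `metadata.sourceURL` field (an assumption about the output shape, not documented here):

```shell
# List the URLs captured in a finished crawl.
# Assumed output shape: {"data": [{"metadata": {"sourceURL": ...}}, ...]}
list_crawled_urls() {
  jq -r '.data[].metadata.sourceURL' "${1:-.firecrawl/crawl.json}"
}
```

For example, `list_crawled_urls .firecrawl/crawl.json` would print one captured URL per line, ready to pipe into `grep` or `wc -l`.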
## Options

| Option | Description |
|---|---|
| `--wait` | Wait for crawl to complete before returning |
| `--progress` | Show progress while waiting |
| `--limit` | Max pages to crawl |
| `--max-depth` | Max link depth to follow |
| `--include-paths` | Only crawl URLs matching these paths |
| `--exclude-paths` | Skip URLs matching these paths |
| | Delay between requests |
| | Max parallel crawl workers |
| | Pretty print JSON output |
| `-o` | Output file path |
## Tips

- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
- Use `--include-paths` to scope the crawl; don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.
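The async path in the first tip can be sketched as a small polling helper, assuming that `firecrawl crawl <job-id>` reports status and that the status text contains "completed" or "failed"; both the job-ID handoff and the exact status wording are assumptions beyond what the tips state.

```shell
# Poll a running crawl until it finishes (sketch, assumed status wording).
# $1: job ID printed by an async `firecrawl crawl` run; $2: poll interval (s).
poll_crawl() {
  job_id=$1
  interval=${2:-5}
  while :; do
    status=$(firecrawl crawl "$job_id")
    case $status in
      *completed*) echo "crawl $job_id finished"; return 0 ;;
      *failed*)    echo "crawl $job_id failed" >&2; return 1 ;;
      *)           sleep "$interval" ;;       # still running; wait and retry
    esac
  done
}
```

A longer interval (e.g., `poll_crawl "$JOB_ID" 30`) keeps large crawls from being polled more often than they can make progress.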
## See also
- firecrawl-scrape — scrape individual pages
- firecrawl-map — discover URLs before deciding to crawl
- firecrawl-download — download site to local files (uses map + scrape)