# firecrawl-crawl

## Install

Source: clone the upstream repo

```shell
git clone https://github.com/firecrawl/cli
```

Claude Code: install into `~/.claude/skills/`

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/firecrawl/cli "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/firecrawl-crawl" ~/.claude/skills/firecrawl-cli-firecrawl-crawl && rm -rf "$T"
```

Manifest: `skills/firecrawl-crawl/SKILL.md`

## Source content

## firecrawl crawl

Bulk-extract content from a website. Crawls pages by following links, up to a configurable depth and page limit.

### When to use

- You need content from many pages on a site (e.g., everything under `/docs/`)
- You want to extract an entire site section
- Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact
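The escalation pattern can be sketched end to end. This is a sketch under assumptions: only `firecrawl crawl` is documented here, so the sibling subcommands (`search`, `scrape`, `map`) and the example URL are inferred from the step names, not confirmed.

```shell
# Escalation sketch: start cheap, escalate only when a step falls short.
# Assumption: `firecrawl search`, `firecrawl scrape`, and `firecrawl map`
# exist as sibling subcommands, as the pattern's step names suggest.
mkdir -p .firecrawl

# Guard so the sketch is a no-op where the CLI is not installed.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl search "rate limiting docs"                    # 1. search the web
  firecrawl scrape "https://example.com/docs/rate-limits"  # 2. scrape one page
  firecrawl map "https://example.com"                      # 3. map the site's URLs
  firecrawl crawl "https://example.com" \
    --include-paths /docs --limit 50 --wait \
    -o .firecrawl/crawl.json                               # 4. crawl the section
fi
```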

### Quick start

```shell
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

### Options

| Option | Description |
| --- | --- |
| `--wait` | Wait for the crawl to complete before returning |
| `--progress` | Show progress while waiting |
| `--limit <n>` | Maximum number of pages to crawl |
| `--max-depth <n>` | Maximum link depth to follow |
| `--include-paths <paths>` | Only crawl URLs matching these paths |
| `--exclude-paths <paths>` | Skip URLs matching these paths |
| `--delay <ms>` | Delay between requests, in milliseconds |
| `--max-concurrency <n>` | Maximum number of parallel crawl workers |
| `--pretty` | Pretty-print JSON output |
| `-o, --output <path>` | Output file path |
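The scoping and throttling flags combine naturally. A sketch of a polite, tightly scoped crawl, using only flags from the table above; the URL and path values are placeholders:

```shell
# Scoped, polite crawl: everything under /docs except the changelog,
# throttled so the target site isn't hammered.
mkdir -p .firecrawl

# No-op where the CLI is not installed.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl crawl "https://example.com" \
    --include-paths /docs \
    --exclude-paths /docs/changelog \
    --limit 100 \
    --delay 500 \
    --max-concurrency 2 \
    --wait --progress --pretty \
    -o .firecrawl/docs-crawl.json
fi
```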

### Tips

- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
- Use `--include-paths` to scope the crawl; don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.
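The async pattern from the first tip can be sketched as follows. The job-ID extraction step is an assumption: the response field name (`id`) and shape are not documented here, so adjust the `jq` filter to match your CLI version's actual output.

```shell
# Async crawl: start without --wait, then poll the job by ID.
mkdir -p .firecrawl

# No-op where the CLI is not installed.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl crawl "https://example.com" --include-paths /docs --limit 50 \
    -o .firecrawl/job.json

  # Assumption: the immediate response contains the job ID in an `id` field.
  JOB_ID=$(jq -r '.id' .firecrawl/job.json)

  # Poll status with the documented `firecrawl crawl <job-id>` form.
  firecrawl crawl "$JOB_ID"
fi
```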

### See also