CLI · firecrawl-agent

Install

Source · Clone the upstream repo:
git clone https://github.com/firecrawl/cli

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/firecrawl/cli "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/firecrawl-agent" ~/.claude/skills/firecrawl-cli-firecrawl-agent && rm -rf "$T"

Manifest: skills/firecrawl-agent/SKILL.md

Source content

firecrawl agent

AI-powered autonomous extraction: the agent navigates sites and extracts structured data. Runs typically take 2-5 minutes.

When to use

  • You need structured data from complex multi-page sites
  • Manual scraping would require navigating many pages
  • You want the AI to figure out where the data lives

Quick start

# Extract structured data
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json

# With a JSON schema for structured output
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json

# Focus on specific pages
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json

Options

Option                  Description
--urls <urls>           Starting URLs for the agent
--model <model>         Model to use: spark-1-mini or spark-1-pro
--schema <json>         JSON schema for structured output
--schema-file <path>    Path to a JSON schema file
--max-credits <n>       Credit limit for this agent run
--wait                  Wait for the agent to complete
--pretty                Pretty-print JSON output
-o, --output <path>     Output file path
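
The options above compose: a schema kept in a file can be reused across runs, and --max-credits bounds what a single run can spend. A minimal sketch, assuming only the flags documented here (the file path, prompt, and credit limit are illustrative; the actual firecrawl invocation is shown commented out since it requires the CLI and an API key):

```shell
# Keep a reusable schema in a file instead of inlining JSON on the command line.
mkdir -p .firecrawl
cat > .firecrawl/product-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "name":  { "type": "string" },
    "price": { "type": "number" }
  }
}
EOF

# Combine the documented flags: schema file, credit cap, inline wait, pretty output.
# firecrawl agent "extract products" \
#   --schema-file .firecrawl/product-schema.json \
#   --max-credits 50 \
#   --wait --pretty \
#   -o .firecrawl/products.json
```

Using --schema-file also avoids shell-quoting headaches that come with passing nested JSON to --schema.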

Tips

  • Always use --wait to get results inline. Without it, the command returns a job ID.
  • Use --schema for predictable, structured output — otherwise the agent returns freeform data.
  • Agent runs consume more credits than simple scrapes. Use --max-credits to cap spending.
  • For simple single-page extraction, prefer scrape — it's faster and cheaper.
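
Once a --wait run has written its output file, the result is ordinary JSON that standard tools can post-process. A hedged sketch: the sample file below stands in for real agent output (its shape depends entirely on the schema you supplied; this one mirrors the Quick start schema):

```shell
# Stand-in for a completed agent run's output file.
mkdir -p .firecrawl
echo '{"name": "Pro", "price": 49}' > .firecrawl/products.json

# Pull one field out of the structured result (jq works equally well here).
python3 -c 'import json; print(json.load(open(".firecrawl/products.json"))["price"])'
# → 49
```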

See also