# firecrawl-agent
## Install

Source · clone the upstream repo:

```sh
git clone https://github.com/firecrawl/cli
```

Claude Code · install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/firecrawl/cli "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/firecrawl-agent" ~/.claude/skills/firecrawl-cli-firecrawl-agent \
  && rm -rf "$T"
```
Manifest: `skills/firecrawl-agent/SKILL.md`
## firecrawl agent
AI-powered autonomous extraction. The agent navigates sites and extracts structured data (takes 2-5 minutes).
## When to use
- You need structured data from complex multi-page sites
- Manual scraping would require navigating many pages
- You want the AI to figure out where the data lives
## Quick start

```sh
# Extract structured data
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json

# With a JSON schema for structured output
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json

# Focus on specific pages
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
```
## Options

| Option | Description |
|---|---|
| `--urls` | Starting URLs for the agent |
| | Model to use: `spark-1-mini` or `spark-1-pro` |
| `--schema` | JSON schema for structured output |
| | Path to a JSON schema file |
| `--max-credits` | Credit limit for this agent run |
| `--wait` | Wait for the agent to complete |
| | Pretty-print JSON output |
| `-o` | Output file path |
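A minimal sketch combining the options above. It keeps the JSON schema in a file and passes it inline via command substitution; the file name `product-schema.json` and the credit cap of 50 are illustrative choices, and the `firecrawl` call is guarded so the sketch degrades gracefully when the binary isn't on `PATH`.

```sh
# Keep the schema in a file so it can be versioned and reused.
cat > product-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "name":  {"type": "string"},
    "price": {"type": "number"}
  }
}
EOF

# Sanity-check the schema is valid JSON before spending credits on a run.
python3 -c 'import json,sys; json.load(open(sys.argv[1])); print("schema ok")' product-schema.json
# prints "schema ok"

# Cap spending with --max-credits; skip quietly if firecrawl isn't installed.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl agent "extract products" \
    --schema "$(cat product-schema.json)" \
    --max-credits 50 --wait -o .firecrawl/products.json
fi
```

Validating the schema locally first is cheap insurance: a malformed schema would otherwise only surface after the agent run has already consumed credits.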
## Tips

- Always use `--wait` to get results inline; without it, the command returns a job ID.
- Use `--schema` for predictable, structured output; otherwise the agent returns freeform data.
- Agent runs consume more credits than simple scrapes. Use `--max-credits` to cap spending.
- For simple single-page extraction, prefer `scrape`; it's faster and cheaper.
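Since structured runs land in a JSON file under `.firecrawl/`, a short post-processing step is often the next move. The sample file below is a hand-written stand-in for real agent output (the actual shape depends on your `--schema`); only the parsing pattern is the point.

```sh
# Stand-in for what an agent run with the name/price schema might save.
# Real output comes from `firecrawl agent ... -o .firecrawl/products.json`.
mkdir -p .firecrawl
cat > .firecrawl/products.json <<'EOF'
[
  {"name": "Starter", "price": 16},
  {"name": "Growth",  "price": 83}
]
EOF

# Pull one field per record with Python's stdlib (no jq dependency).
python3 - <<'EOF'
import json

with open(".firecrawl/products.json") as f:
    products = json.load(f)

for p in products:
    print(f'{p["name"]}: {p["price"]}')
EOF
# prints:
#   Starter: 16
#   Growth: 83
```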
## See also
- firecrawl-scrape — simpler single-page extraction
- firecrawl-interact — scrape + interact for manual page interaction (more control)
- firecrawl-crawl — bulk extraction without AI