# firecrawl-download
## Install

**Source** · Clone the upstream repo:

```shell
git clone https://github.com/firecrawl/cli
```

**Claude Code** · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/firecrawl/cli "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/firecrawl-download" ~/.claude/skills/firecrawl-cli-firecrawl-download && rm -rf "$T"
```
Manifest: `skills/firecrawl-download/SKILL.md`

# firecrawl download
Experimental. Convenience command that combines `map` + `scrape` to save an entire site as local files. It maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All scrape options work with `download`. Always pass `-y` to skip the confirmation prompt.
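The result is one directory per page, nested to mirror the site's URL paths. A hypothetical layout for a small docs site (the exact host directory and file names depend on the site and the formats you request):

```
.firecrawl/
└── docs.example.com/
    ├── index.md
    ├── features/
    │   ├── index.md
    │   └── screenshot.png
    └── sdks/
        └── index.md
```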
## When to use
- You want to save an entire site (or section) to local files
- You need offline access to documentation or content
- You want bulk content extraction with an organized file structure
## Quick start
```shell
# Interactive wizard (picks format, screenshots, paths for you)
firecrawl download https://docs.example.com

# With screenshots
firecrawl download https://docs.example.com --screenshot --limit 20 -y

# Multiple formats (each saved as its own file per page)
firecrawl download https://docs.example.com --format markdown,links --screenshot --limit 20 -y
# Creates per page: index.md + links.txt + screenshot.png

# Filter to specific sections
firecrawl download https://docs.example.com --include-paths "/features,/sdks"

# Skip translations
firecrawl download https://docs.example.com --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"

# Full combo
firecrawl download https://docs.example.com \
  --include-paths "/features,/sdks" \
  --exclude-paths "/zh,/ja" \
  --only-main-content \
  --screenshot \
  -y
```
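Because every page lands as a plain file, ordinary shell tooling works on the output. A sketch of stitching all downloaded pages into a single document; the `.firecrawl/` tree created here is a mock stand-in for a real download, not output from the CLI itself:

```shell
# Mock a tree shaped like a download result (paths are illustrative only)
mkdir -p .firecrawl/docs.example.com/features
printf '# Home\n'     > .firecrawl/docs.example.com/index.md
printf '# Features\n' > .firecrawl/docs.example.com/features/index.md

# Concatenate every page into one markdown file, in path order
find .firecrawl -name 'index.md' | sort | xargs cat > combined.md

wc -l combined.md   # combined.md now holds both pages
```

The same `find` pattern works for pulling out `links.txt` files or screenshots after a multi-format download.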
## Download options

| Option | Description |
|---|---|
| `--limit` | Max pages to download |
| … | Filter URLs by search query |
| `--include-paths` | Only download matching paths |
| `--exclude-paths` | Skip matching paths |
| … | Include subdomain pages |
| `-y` | Skip confirmation prompt (always use in automated flows) |
## Scrape options (all work with download)

`-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`
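These scrape flags combine freely with the download flags above. A sketch that only assembles and prints one such mixed invocation, so the flag spelling is easy to copy; the URL and flag values are placeholders, not recommendations:

```shell
# Build a download command mixing scrape options with download options.
# Every flag below appears in this skill's option lists; values are examples.
args=(download https://docs.example.com
      --limit 20
      -f markdown,links
      --only-main-content
      --wait-for 2000
      -y)
echo "firecrawl ${args[*]}"
```

Drop the `echo` and run `firecrawl "${args[@]}"` directly once the values suit your site.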
## See also
- firecrawl-map — just discover URLs without downloading
- firecrawl-scrape — scrape individual pages
- firecrawl-crawl — bulk extract as JSON (not local files)