Claude-code-plugins firecrawl-hello-world
install

source · Clone the upstream repo

```shell
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) && \
  git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && \
  mkdir -p ~/.claude/skills && \
  cp -r "$T/plugins/saas-packs/firecrawl-pack/skills/firecrawl-hello-world" \
    ~/.claude/skills/jeremylongshore-claude-code-plugins-firecrawl-hello-world && \
  rm -rf "$T"
```
manifest: plugins/saas-packs/firecrawl-pack/skills/firecrawl-hello-world/SKILL.md
Firecrawl Hello World
Overview
Four minimal examples covering Firecrawl's core endpoints: scrape (single page), crawl (multi-page), map (URL discovery), and extract (LLM structured data). Each is a standalone snippet you can run immediately.
Prerequisites
- @mendable/firecrawl-js SDK installed (`npm install @mendable/firecrawl-js`)
- FIRECRAWL_API_KEY environment variable set
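Before running the snippets, make the key available in your shell session; the value below is a placeholder (get a real key from firecrawl.dev/app):

```shell
# Export the API key so the Node snippets can read it via process.env
# "fc-YOUR_API_KEY" is a placeholder, not a real key
export FIRECRAWL_API_KEY="fc-YOUR_API_KEY"
```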
Instructions
Step 1: Single-Page Scrape
```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

// Scrape one page — returns markdown, HTML, metadata, links
const result = await firecrawl.scrapeUrl("https://docs.firecrawl.dev", {
  formats: ["markdown"],
});

console.log("Title:", result.metadata?.title);
console.log("Markdown:", result.markdown?.substring(0, 500));
```
Step 2: Multi-Page Crawl
```typescript
// Crawl a site recursively — follows links, respects robots.txt
const crawlResult = await firecrawl.crawlUrl("https://docs.firecrawl.dev", {
  limit: 10, // max 10 pages (saves credits)
  scrapeOptions: {
    formats: ["markdown"],
  },
});

console.log(`Crawled ${crawlResult.data?.length} pages`);
for (const page of crawlResult.data || []) {
  console.log(`  ${page.metadata?.title} — ${page.metadata?.sourceURL}`);
}
```
Step 3: Map a Site (URL Discovery)
```typescript
// Discover all URLs on a site in ~2-3 seconds (uses sitemap + SERP)
const mapResult = await firecrawl.mapUrl("https://docs.firecrawl.dev");

console.log(`Found ${mapResult.links?.length} URLs`);
mapResult.links?.slice(0, 10).forEach((url) => console.log(`  ${url}`));
```
Step 4: LLM Extract (Structured Data)
```typescript
// Extract structured data from a page using an LLM + JSON schema
const extracted = await firecrawl.scrapeUrl("https://firecrawl.dev/pricing", {
  formats: ["extract"],
  extract: {
    schema: {
      type: "object",
      properties: {
        plans: {
          type: "array",
          items: {
            type: "object",
            properties: {
              name: { type: "string" },
              price: { type: "string" },
              credits: { type: "number" },
            },
          },
        },
      },
    },
  },
});

console.log("Pricing plans:", JSON.stringify(extracted.extract, null, 2));
```
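Because the extract payload comes back untyped, it can help to narrow it before use. The interfaces and guard below are a hand-rolled sketch matching the JSON schema above; they are not part of the Firecrawl SDK:

```typescript
// Shape implied by the JSON schema in Step 4 (hypothetical local types)
interface PricingPlan {
  name: string;
  price: string;
  credits: number;
}

interface PricingExtract {
  plans: PricingPlan[];
}

// Narrow an unknown extract payload to PricingExtract before using it
function isPricingExtract(value: unknown): value is PricingExtract {
  if (typeof value !== "object" || value === null) return false;
  const plans = (value as { plans?: unknown }).plans;
  return (
    Array.isArray(plans) &&
    plans.every(
      (p) =>
        typeof p === "object" &&
        p !== null &&
        typeof (p as PricingPlan).name === "string" &&
        typeof (p as PricingPlan).price === "string" &&
        typeof (p as PricingPlan).credits === "number",
    )
  );
}
```

Call `isPricingExtract(extracted.extract)` before reading `plans`, so a malformed LLM response fails loudly instead of throwing deep in your own code.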
Output
- Single-page markdown scraped from a live URL
- Multi-page crawl results with titles and source URLs
- Site map with all discovered URLs
- Structured JSON extracted by LLM from page content
Error Handling
| Error | Cause | Solution |
|---|---|---|
| Module not found | SDK not installed | Run `npm install @mendable/firecrawl-js` |
| Auth error | Missing or invalid API key | Check the `FIRECRAWL_API_KEY` env var |
| Rate limit error | Rate limit exceeded | Wait and retry with backoff |
| Empty content | JS-heavy page not rendered | Add a wait/render option to scrape options |
| Insufficient credits | Credits exhausted | Check balance at firecrawl.dev/app |
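The "wait and retry with backoff" advice can be sketched as a small generic wrapper. This helper is illustrative, not part of the Firecrawl SDK:

```typescript
// Retry an async call with exponential backoff (500ms, 1s, 2s, ...).
// Generic sketch — works for any promise-returning function.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt === retries) break;
      // Double the delay after each failed attempt
      await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
  throw lastErr;
}
```

Usage: `const result = await withBackoff(() => firecrawl.scrapeUrl("https://docs.firecrawl.dev", { formats: ["markdown"] }));`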
Examples
Python Hello World
```python
from firecrawl import FirecrawlApp

firecrawl = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Scrape
result = firecrawl.scrape_url("https://example.com", params={
    "formats": ["markdown"]
})
print(result["markdown"][:500])

# Map
urls = firecrawl.map_url("https://example.com")
print(f"Found {len(urls.get('links', []))} URLs")
```
Batch Scrape Multiple URLs
```typescript
// Scrape many URLs at once — more efficient than individual scrapes
const batchResult = await firecrawl.batchScrapeUrls(
  [
    "https://docs.firecrawl.dev/features/scrape",
    "https://docs.firecrawl.dev/features/crawl",
    "https://docs.firecrawl.dev/features/extract",
  ],
  { formats: ["markdown"] },
);

for (const page of batchResult.data || []) {
  console.log(`${page.metadata?.title}: ${page.markdown?.length} chars`);
}
```
Next Steps
Proceed to firecrawl-local-dev-loop for development workflow setup.