# claude-code-plugins-plus · firecrawl-cost-tuning

## Install

Clone the upstream repo:

```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Or install the skill directly into `~/.claude/skills/` for Claude Code:

```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/plugins/saas-packs/firecrawl-pack/skills/firecrawl-cost-tuning" \
       ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-firecrawl-cost-tuning \
  && rm -rf "$T"
```

Manifest: `plugins/saas-packs/firecrawl-pack/skills/firecrawl-cost-tuning/SKILL.md`
# Firecrawl Cost Tuning

## Overview
Firecrawl charges credits per operation: 1 credit per scrape, 1 per crawled page, 1 per map call, and variable credits for extract (LLM usage). An unbounded crawl on a large site can consume thousands of credits in minutes. This skill covers concrete techniques to reduce credit consumption by 50-80%.
## Credit Cost Table

| Operation | Credits | Notes |
|---|---|---|
| Scrape | 1 | Per page, any format |
| Crawl | 1 per page | Each discovered page costs 1 credit |
| Map | 1 | Regardless of URLs returned |
| Batch scrape | 1 per URL | Same as individual scrape |
| Extract | 5+ | LLM processing adds cost |
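The table above translates directly into a back-of-the-envelope estimator you can run before launching a job. A minimal sketch (the `estimateCredits` helper is illustrative and not part of the Firecrawl SDK; the extract figure is a floor, since actual LLM cost varies):

```typescript
// Rough credit estimator based on the cost table above.
type Operation = "scrape" | "crawl" | "map" | "batchScrape" | "extract";

function estimateCredits(op: Operation, pages = 1): number {
  switch (op) {
    case "scrape":      return 1;         // 1 credit per page
    case "crawl":       return pages;     // 1 credit per crawled page
    case "map":         return 1;         // flat fee, regardless of URLs returned
    case "batchScrape": return pages;     // same as individual scrapes
    case "extract":     return 5 * pages; // 5+ credits; LLM usage varies
  }
}

// Example: one map call plus a 50-page bounded crawl
console.log(estimateCredits("map") + estimateCredits("crawl", 50)); // 51
```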
## Instructions

### Step 1: Always Set Crawl Limits

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

// BAD: no limit — could crawl 100K pages
await firecrawl.crawlUrl("https://docs.large-project.org");
// Cost: potentially 100,000+ credits

// GOOD: bounded crawl
await firecrawl.crawlUrl("https://docs.large-project.org", {
  limit: 50, // max 50 pages
  maxDepth: 2, // only 2 levels deep
  includePaths: ["/api/*"], // only API docs
  excludePaths: ["/blog/*", "/changelog/*"],
  scrapeOptions: { formats: ["markdown"] },
});
// Cost: max 50 credits
```
### Step 2: Use Scrape for Known URLs Instead of Crawl

```typescript
// If you know which pages you need, don't crawl — scrape them directly
const targetUrls = [
  "https://docs.example.com/api/auth",
  "https://docs.example.com/api/users",
  "https://docs.example.com/api/billing",
];

// Cost: 3 credits (one per page)
const results = await firecrawl.batchScrapeUrls(targetUrls, {
  formats: ["markdown"],
});
// vs crawling the whole docs site: potentially 500+ credits
```
### Step 3: Map First, Then Selective Scrape

```typescript
// Map costs 1 credit and returns up to 30K URLs
const map = await firecrawl.mapUrl("https://docs.example.com");
// Cost: 1 credit

// Filter to only what you need
const apiDocs = (map.links || []).filter(url => url.includes("/api/"));
console.log(`${map.links?.length} total URLs, only ${apiDocs.length} are API docs`);

// Scrape only relevant pages
const results = await firecrawl.batchScrapeUrls(apiDocs.slice(0, 20), {
  formats: ["markdown"],
});
// Cost: 1 (map) + 20 (scrape) = 21 credits
// vs blind crawl: could be 500+ credits
```
### Step 4: Cache to Prevent Re-Scraping

```typescript
import { createHash } from "crypto";

const cache = new Map<string, { content: string; timestamp: number }>();
const CACHE_TTL = 24 * 3600 * 1000; // 24 hours

async function cachedScrape(url: string): Promise<string> {
  const key = createHash("md5").update(url).digest("hex");
  const cached = cache.get(key);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.content; // Free — no API call
  }
  const result = await firecrawl.scrapeUrl(url, { formats: ["markdown"] });
  if (result.markdown) {
    cache.set(key, { content: result.markdown, timestamp: Date.now() });
  }
  return result.markdown || "";
}

// Typical savings: 50-80% credit reduction for recurring scrapes
```
### Step 5: Monitor Credit Consumption

```bash
set -euo pipefail

# Check current credit balance
curl -s https://api.firecrawl.dev/v1/team/credits \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" | jq .
```

```typescript
// Daily credit tracker
class CreditBudget {
  private dailyLimit: number;
  private usage = new Map<string, number>();

  constructor(dailyLimit = 1000) {
    this.dailyLimit = dailyLimit;
  }

  canAfford(estimatedCredits: number): boolean {
    const today = new Date().toISOString().split("T")[0];
    const used = this.usage.get(today) || 0;
    return used + estimatedCredits <= this.dailyLimit;
  }

  record(credits: number) {
    const today = new Date().toISOString().split("T")[0];
    this.usage.set(today, (this.usage.get(today) || 0) + credits);
  }

  remaining(): number {
    const today = new Date().toISOString().split("T")[0];
    return this.dailyLimit - (this.usage.get(today) || 0);
  }
}

const budget = new CreditBudget(1000);

// Before each crawl
if (!budget.canAfford(50)) {
  throw new Error(`Daily credit budget exceeded. ${budget.remaining()} credits left`);
}
await firecrawl.crawlUrl(url, { limit: 50 });
budget.record(50);
```
### Step 6: Choose Minimal Formats

```bash
set -euo pipefail

# Cheapest: markdown only (1 credit, fastest)
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","formats":["markdown"]}'

# Avoid requesting screenshots, rawHtml, or extract unless needed
# Extract uses LLM calls — significantly more credits
```
## Error Handling

| Issue | Cause | Solution |
|---|---|---|
| `402 Payment Required` | Credits exhausted | Check balance, upgrade plan, or wait for reset |
| Credits drained by one crawl | No `limit` set | Always set `limit` and `maxDepth` |
| Duplicate scraping costs | Same URLs scraped daily | Implement URL-keyed caching |
| High per-page cost | Requesting all formats + extract | Use `formats: ["markdown"]` only |
| Budget overrun | No daily cap | Implement credit budget tracker |
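When credits run out mid-job, it is better to fail fast than to retry blindly and burn more requests. A minimal sketch, assuming the SDK surfaces exhaustion as a thrown error whose message mentions the status code or credits (the exact error shape is not guaranteed; adjust the check to what your SDK version actually throws):

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY! });

async function scrapeOrBail(url: string) {
  try {
    return await firecrawl.scrapeUrl(url, { formats: ["markdown"] });
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    // Heuristic: treat payment/credit errors as non-retryable
    if (/402|credit|payment/i.test(msg)) {
      throw new Error(`Credits exhausted; stopping instead of retrying: ${msg}`);
    }
    throw err; // other failures (timeouts, 5xx) may be worth retrying
  }
}
```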
## Cost Optimization Summary

| Technique | Credit Savings |
|---|---|
| Set crawl `limit` | Prevents 100x overages |
| Map + selective scrape | 50-90% vs blind crawl |
| Cache repeated scrapes | 50-80% reduction |
| Markdown-only format | Fastest, no extras |
| Batch scrape vs individual | Same cost, less overhead |
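These techniques compose. A minimal end-to-end sketch combining a 1-credit map, client-side filtering, the `CreditBudget` class from Step 5, and markdown-only batch scraping (it reuses the `firecrawl` client and `CreditBudget` defined in the snippets above; `cheapApiDocsScrape` is an illustrative name, not part of any SDK):

```typescript
const budget = new CreditBudget(1000);

async function cheapApiDocsScrape(site: string) {
  // 1 credit: enumerate URLs without crawling
  const map = await firecrawl.mapUrl(site);
  budget.record(1);

  // Filter client-side (free), cap the batch size
  const targets = (map.links || []).filter((u) => u.includes("/api/")).slice(0, 20);

  // Guard against budget overrun before spending
  if (!budget.canAfford(targets.length)) {
    throw new Error(`Budget exceeded: need ${targets.length}, have ${budget.remaining()}`);
  }

  // Cheapest format, 1 credit per URL
  const results = await firecrawl.batchScrapeUrls(targets, { formats: ["markdown"] });
  budget.record(targets.length);
  return results;
}
```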
## Next Steps

For reference architecture, see `firecrawl-reference-architecture`.