Claude-code-plugins-plus-skills algolia-rate-limits
Install
Source · Clone the upstream repo
```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```
Claude Code · Install into ~/.claude/skills/
```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/algolia-pack/skills/algolia-rate-limits" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-algolia-rate-limits && rm -rf "$T"
```
manifest: plugins/saas-packs/algolia-pack/skills/algolia-rate-limits/SKILL.md
Algolia Rate Limits
Overview
Algolia has two distinct rate-limiting mechanisms: per-API-key limits (configurable; exceeded requests return HTTP 429) and server-side indexing limits (which protect cluster stability and also return HTTP 429, with specific messages). The algoliasearch v5 client has built-in retry with backoff, but you need to handle sustained rate limiting yourself.
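Both mechanisms surface in the v5 client the same way: a thrown `ApiError` with status 429, where only the message text tells you which limit was hit. A minimal detection sketch (the index name and query are placeholders):

```typescript
import { algoliasearch, ApiError } from 'algoliasearch';

const client = algoliasearch(process.env.ALGOLIA_APP_ID!, process.env.ALGOLIA_ADMIN_KEY!);

try {
  await client.searchSingleIndex({
    indexName: 'products',
    searchParams: { query: 'laptop' },
  });
} catch (error) {
  // Per-key and indexing limits both return 429; log the raw message
  // to see which one you hit.
  if (error instanceof ApiError && error.status === 429) {
    console.warn(`Rate limited: ${error.message}`);
  } else {
    throw error;
  }
}
```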
How Algolia Rate Limiting Works
Per-API-Key Rate Limits
| Setting | Default | Where to Change |
|---|---|---|
| maxQueriesPerIPPerHour | 0 (unlimited) | Dashboard > API Keys > Edit |
| maxHitsPerQuery | 1000 | Dashboard > API Keys > Edit |
| Search requests | Plan-dependent | Upgrade plan |
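The per-key limits can also be changed programmatically instead of in the dashboard, via the v5 client's `updateApiKey`. A sketch, using the admin client from the snippet above; the key value and the new limits are placeholders:

```typescript
// Tighten the hourly per-IP ceiling on an existing search key.
// 'YOUR_SEARCH_KEY' is a placeholder for a real key value.
await client.updateApiKey({
  key: 'YOUR_SEARCH_KEY',
  apiKey: {
    acl: ['search'],
    maxQueriesPerIPPerHour: 500,
    maxHitsPerQuery: 20,
  },
});
```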
Server-Side Indexing Limits
When the indexing queue is overloaded, Algolia returns HTTP 429 for one of these reasons:
| Cause | Action |
|---|---|
| Queue full | Reduce batch frequency |
| Too much pending work | Wait for queue to drain |
| Stuck tasks | Check dashboard > Indices > Operations |
| Record quota near limit | Delete unused records or upgrade |
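To branch on these cases in code, you can inspect the error message. A sketch; the substrings matched below are illustrative assumptions, since Algolia does not publish the exact strings as stable constants:

```typescript
import { ApiError } from 'algoliasearch';

// Rough classification of an indexing 429. The substrings are guesses
// for illustration; always log the raw message as well.
function classifyIndexing429(error: unknown): 'queue' | 'quota' | 'other' | null {
  if (!(error instanceof ApiError) || error.status !== 429) return null;
  const msg = error.message.toLowerCase();
  if (msg.includes('queue') || msg.includes('pending')) return 'queue';
  if (msg.includes('quota') || msg.includes('record')) return 'quota';
  return 'other';
}
```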
Instructions
Step 1: Configure Per-Key Rate Limits
```typescript
import { algoliasearch } from 'algoliasearch';

const client = algoliasearch(process.env.ALGOLIA_APP_ID!, process.env.ALGOLIA_ADMIN_KEY!);

// Create a rate-limited API key for frontend use
const { key } = await client.addApiKey({
  apiKey: {
    acl: ['search'],
    description: 'Frontend search key — rate limited',
    maxQueriesPerIPPerHour: 1000, // Per user IP
    maxHitsPerQuery: 20,
    indexes: ['products'], // Restrict to specific indices
    validity: 0, // 0 = never expires
  },
});

console.log(`Created rate-limited key: ${key}`);
```
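Because maxQueriesPerIPPerHour counts requests per IP, users behind a shared NAT or corporate proxy can exhaust each other's quota. One mitigation (a sketch, not part of the original steps) is to derive per-user secured keys with the v5 client's generateSecuredApiKey helper, so the limit is tracked by userToken instead of IP:

```typescript
// Derive a user-scoped search key from the rate-limited key above.
// 'user-42' is a hypothetical user identifier from your auth system.
const securedKey = client.generateSecuredApiKey({
  parentApiKey: key,
  restrictions: {
    userToken: 'user-42',                             // count limits per user, not per IP
    restrictIndices: ['products'],
    validUntil: Math.floor(Date.now() / 1000) + 3600, // expires in 1 hour
  },
});
```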
Step 2: Implement Backoff for Sustained 429s
```typescript
import { ApiError } from 'algoliasearch';

async function withBackoff<T>(
  operation: () => Promise<T>,
  config = { maxRetries: 5, baseDelayMs: 1000, maxDelayMs: 30000 }
): Promise<T> {
  for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === config.maxRetries) throw error;
      // Only retry on 429 or 5xx
      if (error instanceof ApiError) {
        if (error.status !== 429 && error.status < 500) throw error;
      }
      // Exponential backoff with jitter
      const delay = Math.min(
        config.baseDelayMs * Math.pow(2, attempt) + Math.random() * 500,
        config.maxDelayMs
      );
      console.warn(`Rate limited (attempt ${attempt + 1}). Retrying in ${delay.toFixed(0)}ms`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Unreachable');
}

// Usage
const { hits } = await withBackoff(() =>
  client.searchSingleIndex({
    indexName: 'products',
    searchParams: { query: 'laptop' },
  })
);
```
Step 3: Throttled Batch Indexing
```typescript
import PQueue from 'p-queue';

// Limit concurrent indexing operations to avoid overloading the queue
const indexingQueue = new PQueue({
  concurrency: 1,  // One batch at a time
  interval: 1000,  // Per second
  intervalCap: 2,  // Max 2 operations per second
});

async function throttledBulkIndex(records: Record<string, any>[]) {
  const BATCH_SIZE = 500;
  const chunks: Record<string, any>[][] = [];
  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    chunks.push(records.slice(i, i + BATCH_SIZE));
  }

  let indexed = 0;
  await Promise.all(
    chunks.map(chunk =>
      indexingQueue.add(async () => {
        // saveObjects returns one BatchResponse per internal batch,
        // so wait on every returned taskID.
        const responses = await client.saveObjects({
          indexName: 'products',
          objects: chunk,
        });
        await Promise.all(
          responses.map(({ taskID }) =>
            client.waitForTask({ indexName: 'products', taskID })
          )
        );
        indexed += chunk.length;
        console.log(`Indexed ${indexed}/${records.length}`);
      })
    )
  );
}
```
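A usage sketch, with a hypothetical loadProducts() standing in for your data source:

```typescript
// loadProducts is a placeholder; substitute your own record source.
declare function loadProducts(): Promise<Record<string, any>[]>;

const records = await loadProducts();
await throttledBulkIndex(records); // paced to ~2 batches/second by the queue
```

Awaiting waitForTask inside each queued job also acts as backpressure: when the indexing queue is saturated, tasks acknowledge slowly, which slows your submissions accordingly.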
Step 4: Monitor Usage Approaching Limits
```typescript
// Check current API key usage via the dashboard or programmatically
async function checkKeyUsage(apiKey: string) {
  const keyInfo = await client.getApiKey({ key: apiKey });
  console.log({
    description: keyInfo.description,
    maxQueriesPerIPPerHour: keyInfo.maxQueriesPerIPPerHour,
    acl: keyInfo.acl,
    indexes: keyInfo.indexes,
  });
}

// Check record count vs plan limit
async function checkRecordUsage() {
  const { items } = await client.listIndices();
  const totalRecords = items.reduce((sum, idx) => sum + (idx.entries || 0), 0);
  console.log(`Total records across all indices: ${totalRecords.toLocaleString()}`);
}
```
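To turn checkRecordUsage into an actual alert, compare the total against your plan's record limit. A sketch; ALGOLIA_PLAN_RECORD_LIMIT is a hypothetical env var you set yourself, and the 80% threshold is an arbitrary default:

```typescript
// Warn when record usage crosses a fraction of the plan limit.
async function alertIfNearRecordLimit(threshold = 0.8) {
  const limit = Number(process.env.ALGOLIA_PLAN_RECORD_LIMIT ?? 0);
  if (!limit) return; // no limit configured, nothing to compare against
  const { items } = await client.listIndices();
  const total = items.reduce((sum, idx) => sum + (idx.entries || 0), 0);
  if (total >= limit * threshold) {
    console.warn(`Records at ${((total / limit) * 100).toFixed(1)}% of plan limit`);
  }
}
```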
Error Handling
| Scenario | Detection | Response |
|---|---|---|
| Burst spike (429) | `ApiError` with status 429 | Built-in retry handles it; add backoff for persistent cases |
| Sustained overload | Repeated 429s across minutes | Reduce batch size and frequency |
| Indexing queue full | 429 with "Too many jobs" | Pause indexing, wait for queue drain |
| Plan limit reached | 429 with quota message | Upgrade plan or reduce record count |
Next Steps
For security configuration, see algolia-security-basics.