Claude-code-plugins-plus-skills notion-rate-limits
install
source · Clone the upstream repo

```shell
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/notion-pack/skills/notion-rate-limits" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-notion-rate-limits && rm -rf "$T"
```
manifest: plugins/saas-packs/notion-pack/skills/notion-rate-limits/SKILL.md
Notion Rate Limits
Overview
The Notion API enforces 3 requests per second per integration token across all endpoints and tiers. Exceeding this returns HTTP 429 with a `Retry-After` header. Detect with `isNotionClientError()` + `APIErrorCode.RateLimited`, implement exponential backoff with jitter, and use queue-based throttling for high-throughput workloads.
Prerequisites
- `@notionhq/client` v2.x (TypeScript) or `notion-client` (Python)
- Integration token in `NOTION_TOKEN` from notion.so/my-integrations
- For queue patterns: `p-queue` v8+ (`npm install p-queue`)
Instructions
Step 1 — Detect Rate Limits and Apply Exponential Backoff
| Aspect | Value |
|---|---|
| Rate limit | 3 req/s per integration token (all tiers) |
| Throttle response | HTTP 429 + `Retry-After` header (seconds) |
| Scope | Per token, not per user or workspace |
| Max block children | 1,000 per request |
| Max page size | 100 results per paginated request |
The SDK retries 429 automatically (2 retries, 3 total attempts). For heavier workloads, use custom backoff that honors `Retry-After` and adds jitter to prevent a thundering herd:
```typescript
import { Client, isNotionClientError, APIErrorCode } from '@notionhq/client';

const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseMs = 1000,
  maxMs = 32_000
): Promise<T> {
  for (let i = 0; i <= maxRetries; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === maxRetries) throw err;
      if (isNotionClientError(err) && err.code === APIErrorCode.RateLimited) {
        // Honor the Retry-After header (seconds) before retrying
        const wait = parseInt((err as any).headers?.['retry-after'] ?? '1', 10);
        await new Promise(r => setTimeout(r, wait * 1000));
        continue;
      }
      // Other 4xx errors will not succeed on retry
      if (isNotionClientError(err) && err.status && err.status < 500) throw err;
      // Exponential backoff with jitter for 5xx / network errors
      const delay = Math.min(baseMs * 2 ** i + Math.random() * 500, maxMs);
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Exhausted retries');
}
```
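The delay schedule above can be isolated as a pure function (a hypothetical helper, not part of the SDK), which makes the growth and cap easy to verify:

```typescript
// Hypothetical helper mirroring the delay computation in withBackoff:
// exponential growth from baseMs, plus up to 500 ms of random jitter,
// capped at maxMs.
function backoffDelay(
  attempt: number,
  baseMs = 1000,
  maxMs = 32_000,
  jitter = Math.random() * 500
): number {
  return Math.min(baseMs * 2 ** attempt + jitter, maxMs);
}
```

With zero jitter, attempts 0 through 5 yield 1 s, 2 s, 4 s, 8 s, 16 s, 32 s, then stay pinned at the 32 s cap.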
Step 2 — Throttle with Queue-Based Request Management
Enforce the 3 req/s limit at the application level instead of relying on 429 responses:
```typescript
import PQueue from 'p-queue';

const queue = new PQueue({
  concurrency: 3,
  interval: 1000,
  intervalCap: 3, // at most 3 requests started per second
  carryoverConcurrencyCount: true,
});

async function throttled<T>(fn: () => Promise<T>): Promise<T> {
  return queue.add(fn, { throwOnTimeout: true }) as Promise<T>;
}

// Fetch 50 pages — automatically throttled to 3/s
const pages = await Promise.all(
  pageIds.map(id => throttled(() => notion.pages.retrieve({ page_id: id })))
);
```
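The `interval`/`intervalCap` pair is what enforces the rate: p-queue starts at most `intervalCap` tasks per `interval` window. A hypothetical sliding-window check (`canStart` is not a p-queue API) sketches the same rule:

```typescript
// A request may start only if fewer than `cap` requests started
// within the last `intervalMs` milliseconds.
function canStart(
  startTimes: number[],
  now: number,
  cap = 3,
  intervalMs = 1000
): boolean {
  const recent = startTimes.filter(t => now - t < intervalMs);
  return recent.length < cap;
}
```

With three starts in the current one-second window the fourth must wait; once the oldest start ages out of the window, capacity frees up again.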
Step 3 — Optimize Batch Operations to Minimize API Calls
Set `page_size: 100` on every paginated query. Batch block appends into chunks of 100 instead of one-per-block. See batch patterns for full implementations with progress tracking.
```typescript
// Paginate with max page size
async function queryAll(dbId: string, filter?: any) {
  const results = [];
  let cursor: string | undefined;
  do {
    const resp = await throttled(() =>
      notion.databases.query({
        database_id: dbId,
        page_size: 100,
        start_cursor: cursor,
        filter,
      })
    );
    results.push(...resp.results);
    cursor = resp.has_more ? resp.next_cursor ?? undefined : undefined;
  } while (cursor);
  return results;
}
```
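The chunked-append side of this step can be sketched the same way. `chunk` and `appendInChunks` are hypothetical helpers; in practice `send` would wrap `notion.blocks.children.append` in the throttled queue:

```typescript
// Split an array into batches of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// e.g. send = batch => throttled(() =>
//   notion.blocks.children.append({ block_id, children: batch }))
async function appendInChunks<T>(
  blocks: T[],
  send: (batch: T[]) => Promise<void>
): Promise<number> {
  let requests = 0;
  for (const batch of chunk(blocks, 100)) {
    await send(batch);
    requests++;
  }
  return requests; // number of API calls made
}
```

Appending 250 blocks this way costs 3 requests (100 + 100 + 50) instead of 250.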
Output
- 429 errors retried automatically using `Retry-After` headers with jitter
- Queue-based throttling keeps requests at 3/s proactively
- Batch operations reduce total API calls via chunking and max `page_size`
Error Handling
| Scenario | Strategy |
|---|---|
| Single 429 | Honor `Retry-After`, retry once |
| Repeated 429s | Exponential backoff + reduce concurrency |
| Bulk ops (50+ items) | Queue with `p-queue` at 3 req/s |
| Server error (5xx) | Backoff + retry up to 5 attempts |
| Client error (4xx) | Do not retry — fix the request |
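The table reduces to a small decision function over the HTTP status. This is a sketch over status codes alone; the SDK's `APIErrorCode` values give finer-grained detection:

```typescript
type RetryDecision = 'honor-retry-after' | 'backoff-retry' | 'no-retry';

function classify(status: number): RetryDecision {
  if (status === 429) return 'honor-retry-after'; // wait out Retry-After
  if (status >= 500) return 'backoff-retry';      // transient server errors
  return 'no-retry';                              // other 4xx: fix the request
}
```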
Examples
See full TypeScript and Python examples for database sync, bulk export, and rate limit monitoring patterns.
Resources
- Notion Request Limits — Official rate limit docs
- Notion Status Codes — 429 and error responses
- @notionhq/client — SDK with built-in retry
- p-queue — Promise queue with rate limiting
Next Steps
- See `notion-common-errors` for 401/403/404 troubleshooting alongside rate limits
- See `notion-sdk-patterns` for query patterns that work with these strategies
- See `notion-search-retrieve` for optimizing search to reduce API call volume