Claude-code-plugins-plus glean-rate-limits

install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/glean-pack/skills/glean-rate-limits" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-glean-rate-limits && rm -rf "$T"
manifest: plugins/saas-packs/glean-pack/skills/glean-rate-limits/SKILL.md
source content

Glean Rate Limits

Overview

Glean's APIs split into two tiers: the Indexing API for pushing documents into the search corpus, and the Client API for executing searches. The Indexing API handles bulk document ingestion at approximately 100 requests per minute, while search queries are capped at 60 per minute per token. Organizations indexing large knowledge bases (100K+ documents from Confluence, Notion, or internal wikis) must implement careful batching to avoid 429 responses that can stall multi-hour ingestion pipelines.

Rate Limit Reference

| Endpoint | Limit | Window | Scope |
| --- | --- | --- | --- |
| Indexing, single document | 100 req | 1 minute | Per API token |
| Indexing, bulk (100 docs/req) | 20 req | 1 minute | Per API token |
| Search queries | 60 req | 1 minute | Per API token |
| People search | 30 req | 1 minute | Per API token |
| Entity extraction | 40 req | 1 minute | Per API token |
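For pipeline pacing it helps to turn these per-minute limits into a minimum inter-request spacing. A minimal sketch (the constants mirror the table above; treat them as per-deployment defaults, not guarantees):

```typescript
// Per-token limits from the table above (requests per minute).
const GLEAN_LIMITS = {
  indexSingle: 100,
  indexBulk: 20,
  search: 60,
  peopleSearch: 30,
  entityExtraction: 40,
} as const;

// Minimum milliseconds between requests to stay under a per-minute limit.
function minSpacingMs(limitPerMinute: number): number {
  return Math.ceil(60_000 / limitPerMinute);
}
```

For example, `minSpacingMs(GLEAN_LIMITS.indexBulk)` yields 3000 ms, which is why the batch loop below spaces bulk requests about 4 s apart: 3 s is the floor, and the extra second is buffer.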

Rate Limiter Implementation

class GleanRateLimiter {
  private tokens: number;
  private lastRefill: number;
  private readonly max: number;
  private readonly refillRate: number;
  private queue: Array<{ resolve: () => void }> = [];

  constructor(maxPerMinute: number) {
    this.max = maxPerMinute;
    this.tokens = maxPerMinute;
    this.lastRefill = Date.now();
    this.refillRate = maxPerMinute / 60_000;
  }

  async acquire(): Promise<void> {
    this.refill();
    if (this.tokens >= 1) { this.tokens -= 1; return; }
    // No token available: queue the caller and schedule a refill pass.
    // Without the timer, queued waiters would only resolve when another
    // acquire() call happens to trigger refill().
    return new Promise(resolve => {
      this.queue.push({ resolve });
      const waitMs = Math.ceil((1 - this.tokens) / this.refillRate);
      setTimeout(() => this.refill(), waitMs);
    });
  }

  private refill() {
    const now = Date.now();
    this.tokens = Math.min(this.max, this.tokens + (now - this.lastRefill) * this.refillRate);
    this.lastRefill = now;
    while (this.tokens >= 1 && this.queue.length) {
      this.tokens -= 1;
      this.queue.shift()!.resolve();
    }
  }
}

const indexLimiter = new GleanRateLimiter(18);  // buffer under 20 bulk/min
const searchLimiter = new GleanRateLimiter(50); // buffer under 60 search/min

Retry Strategy

async function gleanRetry<T>(
  limiter: GleanRateLimiter, fn: () => Promise<Response>, maxRetries = 4
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    await limiter.acquire();
    const res = await fn();
    if (res.ok) return res.json();
    if (res.status === 429) {
      const retryAfter = parseInt(res.headers.get("Retry-After") || "30", 10);
      const jitter = Math.random() * 5000;
      await new Promise(r => setTimeout(r, retryAfter * 1000 + jitter));
      continue;
    }
    if (res.status >= 500 && attempt < maxRetries) {
      await new Promise(r => setTimeout(r, Math.pow(2, attempt) * 2000));
      continue;
    }
    throw new Error(`Glean API ${res.status}: ${await res.text()}`);
  }
  throw new Error("Max retries exceeded");
}
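One caveat: per RFC 9110, `Retry-After` may carry either delta-seconds or an HTTP-date, while the loop above only parses the seconds form. A hedged helper covering both (the name and fallback are illustrative, not part of any Glean SDK):

```typescript
// Retry-After may be delta-seconds ("30") or an HTTP-date (RFC 9110).
// Returns a wait in milliseconds; falls back when absent or unparseable.
function retryAfterMs(header: string | null, fallbackMs = 30_000): number {
  if (!header) return fallbackMs;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(header);
  return Number.isNaN(date) ? fallbackMs : Math.max(0, date - Date.now());
}
```

In the 429 branch above, `retryAfter * 1000` would then become `retryAfterMs(res.headers.get("Retry-After"))`.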

Batch Processing

// NOTE: assumes GLEAN_BASE and an auth-bearing `headers` object are defined in scope.
async function bulkIndexDocuments(docs: any[], batchSize = 100) {
  const results: any[] = [];
  for (let i = 0; i < docs.length; i += batchSize) {
    const batch = docs.slice(i, i + batchSize);
    const result = await gleanRetry(indexLimiter, () =>
      fetch(`${GLEAN_BASE}/api/index/v1/bulkindexdocuments`, {
        method: "POST", headers,
        body: JSON.stringify({ uploadId: `batch-${i}`, documents: batch }),
      })
    );
    results.push(result);
    if (i + batchSize < docs.length) await new Promise(r => setTimeout(r, 4000));
  }
  return results;
}
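Fixed-count batching can still trip payload-size limits when individual documents are large. A sketch of size-aware chunking, assuming the roughly 10 MB bulk payload cap noted in the error-handling table and using serialized JSON length as a byte approximation:

```typescript
// Split documents into batches of at most maxDocs items and roughly
// maxBytes of serialized JSON each. A single oversized document still
// gets its own batch rather than being dropped.
function chunkDocuments<T>(docs: T[], maxDocs = 100, maxBytes = 10_000_000): T[][] {
  const batches: T[][] = [];
  let current: T[] = [];
  let currentBytes = 0;
  for (const doc of docs) {
    const size = JSON.stringify(doc).length; // byte-count approximation
    if (current.length >= maxDocs || (current.length > 0 && currentBytes + size > maxBytes)) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(doc);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

`bulkIndexDocuments` could iterate `chunkDocuments(docs)` instead of slicing by count alone.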

Error Handling

| Issue | Cause | Fix |
| --- | --- | --- |
| 429 on bulk index | Exceeded 20 bulk req/min | Space batches 4s apart, use Retry-After |
| 413 Payload Too Large | Batch > 100 docs or > 10 MB | Split into smaller batches |
| Search 429 | Monitoring dashboard polling too fast | Cache search results for 30s |
| Partial index failure | Some docs rejected in bulk | Check response failedDocuments array, retry those |
| 401 token expired | Rotating API credentials | Refresh token before long ingestion runs |
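For the search-side 429s, a small TTL cache in front of the search call keeps dashboard polling from burning the 60 req/min budget. A minimal sketch with the 30 s TTL from the fix above (class name is illustrative):

```typescript
// 30-second TTL cache keyed by query string. Expired entries are
// evicted lazily on read.
class SearchCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs = 30_000) {}

  get(query: string): V | undefined {
    const entry = this.store.get(query);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(query);
      return undefined;
    }
    return entry.value;
  }

  set(query: string, value: V): void {
    this.store.set(query, { value, expires: Date.now() + this.ttlMs });
  }
}
```

Callers check the cache before `searchLimiter.acquire()`, so a dashboard repeating the same query every few seconds consumes at most two tokens per minute per query.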

Next Steps

See glean-performance-tuning.