clickhouse-rate-limits (from claude-code-plugins-plus-skills)

Install

Clone the upstream repo:

```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Or install the skill into ~/.claude/skills/ for Claude Code:

```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/plugins/saas-packs/clickhouse-pack/skills/clickhouse-rate-limits" \
       ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-clickhouse-rate-limits \
  && rm -rf "$T"
```

Manifest: plugins/saas-packs/clickhouse-pack/skills/clickhouse-rate-limits/SKILL.md
ClickHouse Rate Limits & Concurrency
Overview
ClickHouse does not have REST API rate limits like a SaaS product. Instead, it has server-side concurrency limits, memory quotas, and per-user settings that control resource usage. This skill covers how to configure and work within those limits.
Prerequisites
- ClickHouse admin access (or Cloud console)
- Understanding of your concurrency requirements
Instructions
Step 1: Understand Server-Side Limits
| Setting | Default | Description |
|---|---|---|
| `max_concurrent_queries` | 100 | Max queries running simultaneously |
| `max_connections` | 4096 | Max TCP/HTTP connections |
| `max_memory_usage` | ~10GB | Per-query memory limit |
| `max_execution_time` | 0 (unlimited) | Per-query timeout in seconds |
| `max_threads` | CPU cores | Threads per query |
ClickHouse Cloud API limit: The Cloud management API (not the query interface) is limited to 10 requests per 10 seconds.
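If you script against the Cloud management API, a small client-side limiter keeps you under that 10-requests-per-10-seconds cap. A minimal sketch (the `SlidingWindowLimiter` class and its parameters are illustrative, not part of any ClickHouse SDK):

```typescript
// Sliding-window limiter: allow at most `limit` calls per `windowMs`.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  // Returns true and records the call if under the limit, false otherwise.
  tryAcquire(now = Date.now()): boolean {
    // Drop timestamps that have left the window
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}

// For the Cloud management API: 10 requests per 10 seconds
const cloudApiLimiter = new SlidingWindowLimiter(10, 10_000);
```

When `tryAcquire()` returns false, either sleep until the oldest timestamp ages out or queue the request.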
Step 2: Configure Per-User Quotas
```sql
-- Create a quota that limits query resources per user
CREATE QUOTA IF NOT EXISTS app_quota
FOR INTERVAL 1 HOUR
  MAX queries = 10000,
      result_rows = 100000000,
      read_rows = 1000000000,
      execution_time = 3600
TO app_user;

-- Create a profile with resource limits
CREATE SETTINGS PROFILE IF NOT EXISTS app_profile
SETTINGS
  max_memory_usage = 5000000000,         -- 5GB per query
  max_execution_time = 30,               -- 30s timeout
  max_threads = 4,                       -- 4 threads per query
  max_concurrent_queries_for_user = 10   -- 10 parallel queries
TO app_user;
```
Step 3: Client-Side Connection Pooling
```typescript
import { createClient } from '@clickhouse/client';

// The @clickhouse/client manages HTTP keep-alive connections internally
const client = createClient({
  url: process.env.CLICKHOUSE_HOST!,
  username: process.env.CLICKHOUSE_USER!,
  password: process.env.CLICKHOUSE_PASSWORD!,
  max_open_connections: 10,   // Connection pool size
  request_timeout: 30_000,    // 30s per request
  compression: {
    request: true,            // Compress request bodies (saves bandwidth)
    response: true,           // Decompress responses
  },
});
```
Step 4: Application-Level Concurrency Control
```typescript
import PQueue from 'p-queue';

// Limit concurrent ClickHouse queries from your app
const queryQueue = new PQueue({
  concurrency: 5,        // Max 5 concurrent queries
  timeout: 30_000,       // 30s timeout per query
  throwOnTimeout: true,
});

async function rateLimitedQuery<T>(sql: string): Promise<T[]> {
  return queryQueue.add(async () => {
    const rs = await client.query({ query: sql, format: 'JSONEachRow' });
    return rs.json<T>();
  });
}
```
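p-queue is one option; if you prefer to avoid the dependency, the same cap can be enforced with a hand-rolled counting semaphore. A minimal sketch (`Semaphore` and `withLimit` are hypothetical helpers, not part of @clickhouse/client):

```typescript
// Counting semaphore: at most `limit` tasks hold a slot concurrently.
class Semaphore {
  private running = 0;
  private waiters: Array<() => void> = [];

  constructor(private limit: number) {}

  async acquire(): Promise<void> {
    if (this.running < this.limit) {
      this.running++;
      return;
    }
    // Wait until release() hands us a slot directly
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(); // hand the slot to the next waiter; running count unchanged
    } else {
      this.running--;
    }
  }
}

// Wrap any async task so at most `limit` run at once
async function withLimit<T>(sem: Semaphore, task: () => Promise<T>): Promise<T> {
  await sem.acquire();
  try {
    return await task();
  } finally {
    sem.release();
  }
}
```

The direct slot handoff in `release()` avoids a window where a fresh caller and a woken waiter could both grab the same slot.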
Step 5: Retry on Concurrency Errors
```typescript
async function queryWithRetry<T>(
  sql: string,
  maxRetries = 3,
): Promise<T[]> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const rs = await client.query({ query: sql, format: 'JSONEachRow' });
      return await rs.json<T>();
    } catch (err: any) {
      const msg = err.message ?? '';
      const isRetryable =
        msg.includes('TOO_MANY_SIMULTANEOUS_QUERIES') ||
        msg.includes('TIMEOUT_EXCEEDED') ||
        msg.includes('NETWORK_ERROR');
      if (!isRetryable || attempt === maxRetries) throw err;
      const delay = 1000 * Math.pow(2, attempt) + Math.random() * 500;
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw new Error('Unreachable');
}
```
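The delay schedule above can be factored into a small pure function, which makes the retry timing easy to unit-test in isolation (the helper name and defaults are illustrative):

```typescript
// Exponential backoff with jitter: baseMs * 2^attempt plus up to jitterMs of noise
function backoffDelay(attempt: number, baseMs = 1000, jitterMs = 500): number {
  return baseMs * Math.pow(2, attempt) + Math.random() * jitterMs;
}

// attempt 0 -> 1000-1500ms, attempt 1 -> 2000-2500ms, attempt 2 -> 4000-4500ms
```

The jitter spreads out retries from many clients so they don't all hit the server again in the same instant.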
Step 6: Monitor Concurrency
```sql
-- Currently running queries
SELECT
  user,
  count() AS running_queries,
  sum(memory_usage) AS total_memory
FROM system.processes
GROUP BY user;

-- Current query and connection gauges
SELECT metric, value
FROM system.metrics
WHERE metric IN ('Query', 'TCPConnection', 'HTTPConnection');

-- Historical peak concurrency (requires metric_log collection to be enabled)
SELECT
  toStartOfMinute(event_time) AS minute,
  max(CurrentMetric_Query) AS peak_concurrent
FROM system.metric_log
WHERE event_time >= now() - INTERVAL 1 HOUR
GROUP BY minute
ORDER BY minute;
```
Step 7: Insert Throttling
```typescript
// Buffer inserts to avoid "too many parts"
class InsertBuffer<T extends Record<string, unknown>> {
  private buffer: T[] = [];
  private timer: NodeJS.Timeout | null = null;

  constructor(
    private client: ReturnType<typeof import('@clickhouse/client').createClient>,
    private table: string,
    private batchSize = 10_000,
    private flushIntervalMs = 5_000,
  ) {}

  async add(row: T) {
    this.buffer.push(row);
    if (this.buffer.length >= this.batchSize) {
      await this.flush();
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushIntervalMs);
    }
  }

  async flush() {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0);
    await this.client.insert({
      table: this.table,
      values: batch,
      format: 'JSONEachRow',
    });
  }
}
```
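For one-off backfills that bypass the buffer, splitting rows into fixed-size batches before inserting keeps part creation under control. A small sketch (`chunkRows` is a hypothetical helper, not part of @clickhouse/client):

```typescript
// Split rows into batches of at most `size` for sequential inserts
function chunkRows<T>(rows: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// e.g. for (const batch of chunkRows(rows, 10_000)) {
//   await client.insert({ table: 'events', values: batch, format: 'JSONEachRow' });
// }
```

Inserting batches sequentially (rather than in parallel) also gives background merges time to keep up.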
Error Handling
| Error | Code | Solution |
|---|---|---|
| TOO_MANY_SIMULTANEOUS_QUERIES | 202 | Reduce concurrency or increase `max_concurrent_queries` |
| MEMORY_LIMIT_EXCEEDED | 241 | Lower the query's memory footprint (add filters) or raise `max_memory_usage` |
| TIMEOUT_EXCEEDED | 159 | Increase `max_execution_time` or optimize the query |
| TOO_MANY_PARTS | 252 | Batch inserts, wait for merges |
Next Steps
For security hardening, see clickhouse-security-basics.