# qdrant-scaling-qps

Guides Qdrant query throughput (QPS) scaling. Use when someone asks "how to increase QPS", "need more throughput", "queries per second too low", "batch search", "read replicas", or "how to handle more concurrent queries".

Install by cloning the repository:

```shell
git clone https://github.com/qdrant/skills
```

Or copy just this skill into your local skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/qdrant/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-scaling/scaling-qps" ~/.claude/skills/qdrant-skills-qdrant-scaling-qps && rm -rf "$T"
```

`skills/qdrant-scaling/scaling-qps/SKILL.md`

# Scaling for Query Throughput (QPS)
Throughput scaling means handling more parallel queries per second. This is different from latency tuning: throughput and latency are opposite tuning directions and cannot be optimized simultaneously on the same node.
High throughput favors fewer, larger segments, so each query incurs less per-segment overhead.
## Performance Tuning for Higher RPS
- Use fewer, larger segments (`default_segment_number: 2`); see Maximizing throughput
- Enable quantization with `always_ram=true` to reduce disk IO; see Quantization
- Use the batch search API to amortize per-request overhead; see Batch search
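The first two tunings are set at collection creation, and batching is a single request carrying many queries. A minimal sketch of the corresponding REST payloads, with the collection name, vector size, and query vectors as placeholder values:

```python
# PUT /collections/my_collection  (sketch; names and sizes are placeholders)
collection_config = {
    "vectors": {"size": 768, "distance": "Cosine"},
    # Fewer, larger segments: less per-query overhead at high QPS.
    "optimizers_config": {"default_segment_number": 2},
    # Scalar quantization kept in RAM to cut disk IO on the hot query path.
    "quantization_config": {"scalar": {"type": "int8", "always_ram": True}},
}

# POST /collections/my_collection/points/search/batch
# One round trip amortizes network and scheduling overhead across queries.
batch_search = {
    "searches": [
        {"vector": [0.1] * 768, "limit": 10},
        {"vector": [0.2] * 768, "limit": 10},
    ]
}
```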
## Minimize Impact of Update Workloads
- Configure update throughput control (v1.17+) to prevent unoptimized searches from degrading reads; see Low latency search
- Set `optimizer_cpu_budget` to limit indexing CPUs (e.g. `2` on an 8-CPU node reserves 6 for queries)
- Configure delayed read fan-out (v1.17+) to reduce tail latency; see Delayed fan-outs
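The CPU budget is a node-level setting rather than a per-collection one. A sketch of the relevant `config.yaml` fragment, with key names as documented in the configuration reference:

```yaml
# qdrant config.yaml (sketch; verify key names against the config reference)
storage:
  performance:
    # On an 8-CPU node, cap background optimization/indexing at 2 CPUs,
    # reserving 6 for serving queries.
    optimizer_cpu_budget: 2
```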
## Horizontal Scaling for Throughput
If a single node is saturated on CPU after applying the tuning above, scale horizontally with read replicas.
- Shard replicas serve queries from replicated shards, distributing read load across nodes
- Each replica adds independent query capacity without re-sharding
- Use `replication_factor: 2` or higher and route reads to replicas; see Distributed deployment
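Replication is also set at collection creation on a multi-node cluster. A minimal sketch of the REST payload, with the shard count and sizes as illustrative placeholders:

```python
# PUT /collections/my_collection  (sketch; run against a multi-node cluster)
collection_config = {
    "vectors": {"size": 768, "distance": "Cosine"},
    "shard_number": 3,        # split the data across nodes
    "replication_factor": 2,  # each shard lives on 2 nodes: 2x read capacity
}
```

With these values the cluster holds 6 shard replicas in total, and any replica of a shard can serve reads for it.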
See also Horizontal Scaling for general horizontal scaling guidance.
## Disk I/O Bottlenecks
If it is not possible to keep all vectors in RAM, disk I/O can become the bottleneck for throughput. In this case:
- Upgrade to provisioned IOPS or local NVMe first; see the impact of disk performance on vector search in the Disk performance article
- Use `io_uring` on Linux (kernel 5.11+); see the io_uring article
- With quantized vectors, prefer global rescoring over per-segment rescoring to reduce disk reads; an example is in the tutorial
- Configure a higher number of search threads to parallelize disk reads. The default is `cpu_count - 1`, which is optimal for RAM-based search but may be too low for disk-based search; see the configuration reference
- If still saturated, scale out horizontally (each node adds independent IOPS)
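Both the `io_uring` scorer and the search-thread count live in the node config. A sketch of the `config.yaml` fragment; the thread count here is an illustrative value to be sized against your disk, not a recommendation:

```yaml
# qdrant config.yaml (sketch; verify against the configuration reference)
storage:
  performance:
    # io_uring-based async scoring for disk-resident vectors (Linux 5.11+)
    async_scorer: true
    # Raise above the default when disk reads, not CPU, are the bottleneck
    max_search_threads: 16
```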
## What NOT to Do
- Do not expect to optimize throughput and latency simultaneously on the same node
- Do not use many small segments for throughput workloads (increases per-query overhead)
- Do not scale horizontally when IOPS-bound without also upgrading disk tier
- Do not run at >90% RAM usage (OS page cache eviction causes severe performance degradation)