qdrant-minimize-latency
Guides Qdrant query latency optimization. Use when someone asks 'search is slow', 'how to reduce latency', 'p99 is too high', 'tail latency', 'single query too slow', 'how to make search faster', or 'latency spikes'.
git clone https://github.com/qdrant/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/qdrant/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-scaling/minimize-latency" ~/.claude/skills/qdrant-skills-qdrant-minimize-latency && rm -rf "$T"
skills/qdrant-scaling/minimize-latency/SKILL.md

Scaling for Query Latency
The latency of a single query is determined by the slowest component in its execution path. Latency is sometimes correlated with throughput, but not always: throughput and latency pull tuning in opposite directions. Low-latency optimization aims to saturate all available resources on a single query, while throughput optimization aims to minimize per-query resource usage so that more queries can run in parallel.
Performance Tuning for Lower Latency
- Increase segment count to match CPU cores (`default_segment_number: 16`)
- Keep quantized vectors and HNSW in RAM (`always_ram=true`)
- Reduce `hnsw_ef` at query time (trade recall for speed)
- Use local NVMe; avoid network-attached storage
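The settings above can be sketched as Qdrant HTTP API request bodies. This is a minimal illustration: field names follow the Qdrant REST schema, but the vector size, segment count, and `hnsw_ef` values are example choices, not recommendations for every workload.

```python
import json

# Collection settings biased toward low latency.
# More, smaller segments let a single query fan out across CPU cores.
create_collection_body = {
    "vectors": {"size": 768, "distance": "Cosine"},
    "optimizers_config": {"default_segment_number": 16},
}

# Per-query override: a smaller hnsw_ef means fewer graph hops,
# so lower latency at the cost of some recall.
query_body = {
    "query": [0.0] * 768,        # placeholder query vector
    "limit": 10,
    "params": {"hnsw_ef": 64},
}

print(json.dumps(create_collection_body["optimizers_config"]))
```

Send the first body with `PUT /collections/{name}` and the second with `POST /collections/{name}/points/query`; `hnsw_ef` can be tuned per request without touching the collection config.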
Memory Pressure and Latency
RAM is the most critical resource for latency. If working set exceeds available RAM, OS cache eviction causes severe, sustained latency degradation.
- Scale RAM vertically first; this is critical once the working set exceeds 80% of available RAM
- Use quantization: scalar (4x memory reduction) or binary (16x reduction)
- Move payload indexes to disk if filtering is infrequent
- Set `optimizer_cpu_budget` to limit the CPUs used for background optimization
- Schedule indexing: set a high `indexing_threshold` during peak hours
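The quantization options above map onto the collection's `quantization_config`. A minimal sketch of the two variants as REST bodies, assuming the standard Qdrant schema (the `quantile` value is an illustrative choice):

```python
# Scalar int8 quantization: ~4x less memory, kept resident in RAM
# so queries never touch disk for the quantized vectors.
scalar_quant = {
    "quantization_config": {
        "scalar": {
            "type": "int8",
            "quantile": 0.99,     # clip outliers before quantizing
            "always_ram": True,
        }
    }
}

# Binary quantization: ~16x less memory; best suited to
# high-dimensional vectors that tolerate the coarser encoding.
binary_quant = {
    "quantization_config": {
        "binary": {"always_ram": True}
    }
}
```

Either body can be merged into the `PUT /collections/{name}` request; `always_ram: true` is what keeps the quantized index out of the OS page cache lottery.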
Vertical Scaling for Latency
More RAM and faster CPU directly reduce latency. See Vertical Scaling for node sizing guidelines.
What NOT to Do
- Do not expect to optimize latency and throughput simultaneously on the same node
- Do not use few large segments for latency-sensitive workloads (each segment takes longer to search)
- Do not run at >90% RAM (cache eviction causes severe latency degradation that can last days)
- Do not ignore optimizer status during performance debugging
- Do not scale down RAM without load testing (cache eviction causes days-long latency incidents)
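To act on the optimizer-status point, the collection info returned by `GET /collections/{name}` carries an `optimizer_status` field that is `"ok"` when healthy or an error object otherwise. A small sketch of a health check over that response shape (the helper name and example payloads are mine):

```python
def optimizer_ok(collection_info: dict) -> bool:
    """True if the collection's optimizer reports no errors.

    Expects the `result` object from GET /collections/{name}:
    `optimizer_status` is the string "ok" when healthy, or an
    object like {"error": "..."} when optimization has failed.
    """
    return collection_info.get("optimizer_status") == "ok"


# Example payloads; shapes follow the Qdrant HTTP API, values invented.
healthy = {"status": "green", "optimizer_status": "ok"}
degraded = {"status": "yellow", "optimizer_status": {"error": "out of disk"}}

print(optimizer_ok(healthy), optimizer_ok(degraded))  # True False
```

Polling this during a latency investigation quickly distinguishes "queries are slow" from "the optimizer is stuck and segments are degrading".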