qdrant-search-speed-optimization
Diagnoses and fixes slow Qdrant search. Use when someone reports 'search is slow', 'high latency', 'queries take too long', 'low QPS', 'throughput too low', 'filtered search is slow', or 'search was fast but now it's slow'. Also use when search performance degrades after config changes or data growth.
install
source · Clone the upstream repo
git clone https://github.com/qdrant/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/qdrant/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-performance-optimization/search-speed-optimization" ~/.claude/skills/qdrant-skills-qdrant-search-speed-optimization && rm -rf "$T"
manifest:
skills/qdrant-performance-optimization/search-speed-optimization/SKILL.md
Diagnose a problem
There are multiple possible reasons for search performance degradation. The most common ones are:
- Memory pressure: if the working set exceeds available RAM
- Complex requests (e.g. high `hnsw_ef`, complex filters without a payload index)
- Competing background processes (e.g. optimizer still running after a bulk upload)
- Problem with the cluster (e.g. network issues, hardware degradation)
Single Query Too Slow (Latency)
Use when: individual queries take too long regardless of load.
Diagnostic steps:
- Check whether a second run of the same request is significantly faster (indicates memory pressure)
- Try the same query with `with_payload: false` and `with_vectors: false` to see if payload retrieval is the bottleneck
- If the request uses filters, remove them one by one to identify whether a specific filter condition is the bottleneck
Common fixes:
- Tune HNSW parameters: Fine-tuning search
- Enable in-memory quantization: Scalar quantization
- Reduce vector dimensionality with Matryoshka models: Matryoshka Models
- Use oversampling + rescore for high-dimensional vectors: Search with quantization
- Enable io_uring for disk-heavy workloads on Linux: io_uring
Can't Handle Enough QPS (Throughput)
Use when: system can't serve enough queries per second under load.
- Reduce segment count (`default_segment_number` to 2): Maximizing throughput
- Use the batch search API instead of single queries: Batch search
- Enable quantization to reduce CPU cost: Scalar quantization
- Add replicas to distribute read load: Replication
Filtered Search Is Slow
Use when: filtered search is significantly slower than unfiltered search. The most common SA complaint after memory pressure.
- Create payload index on the filtered field Payload index
- Use `is_tenant=true` for the primary filtering condition: Tenant index
- Try the ACORN algorithm for complex filters: ACORN
- Avoid using `nested` filtering conditions as a primary filter; it might force Qdrant to read raw payload values instead of using the index
- If the payload index was added after the HNSW build, trigger a re-index to create filterable subgraph links
Optimize search performance with parallel updates
Diagnostic steps
- Run the same query with the `indexed_only=true` parameter; if it is significantly faster, the optimizer is still running and has not yet indexed all segments
- If CPU or IO usage is high even with no queries, that also indicates the optimizer is still running
Recommended configuration changes
- Reduce `optimizer_cpu_budget` to reserve more CPU for queries
- Use `prevent_unoptimized=true` to prevent creating segments with a large amount of unindexed data for searches. Instead, once a segment reaches the `indexing_threshold`, all additional points will be added in a 'deferred state'
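The CPU budget is a server-side setting. A sketch of the relevant fragment of Qdrant's `config.yaml`, assuming the documented `storage.performance` layout; verify key names and semantics against your server version:

```yaml
storage:
  performance:
    # 0 = auto; a positive N lets the optimizer use at most N CPUs;
    # a negative N keeps N CPUs free for search.
    optimizer_cpu_budget: -2
```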
What NOT to Do
- Set `always_ram=false` on quantization (disk thrashing on every search)
- Put HNSW on disk for latency-sensitive production (only for cold storage)
- Increase segment count for throughput (opposite: fewer = better)
- Create payload indexes on every field (wastes memory)
- Blame Qdrant before checking optimizer status