Awesome-copilot qdrant-indexing-performance-optimization
Diagnoses and fixes slow Qdrant indexing and data ingestion. Use when someone reports 'uploads are slow', 'indexing takes forever', 'optimizer is stuck', 'HNSW build time too long', or 'data uploaded but search is bad'. Also use when optimizer status shows errors, segments won't merge, or indexing threshold questions arise.
git clone https://github.com/github/awesome-copilot
T=$(mktemp -d) && git clone --depth=1 https://github.com/github/awesome-copilot "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-performance-optimization/indexing-performance-optimization" ~/.claude/skills/github-awesome-copilot-qdrant-indexing-performance-optimization && rm -rf "$T"
skills/qdrant-performance-optimization/indexing-performance-optimization/SKILL.md

What to Do When Qdrant Indexing Is Too Slow
Qdrant does NOT build HNSW indexes immediately. Small segments are searched by brute force until they exceed
`indexing_threshold_kb` (default: 20 MB). Slower search during this window is by design, not a bug.
- Understand the indexing optimizer (see the Indexing optimizer docs)
Uploads/Ingestion Too Slow
Use when: upload or upsert API calls are slow. First identify the bottleneck: client-side (network, batching) vs. server-side (CPU, disk I/O).
For client-side, optimize batching and parallelism:
- Use batch upserts (64-256 points per request; see the Points API)
- Use 2-4 parallel upload streams
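The two client-side tips above can be sketched as follows. This is a minimal illustration of chunking and parallel streams, not the exact qdrant-client API: the `upload_batch` stub stands in for a real upsert call.

```python
# Sketch: split points into batches of 128 (64-256 is the recommended range)
# and upload them over a few parallel streams. The HTTP call is stubbed out;
# swap in a real client upsert for your collection.
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size=128):
    """Yield successive batches of at most `size` points."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def upload_batch(batch):
    # Replace with e.g. client.upsert(collection_name=..., points=batch)
    return len(batch)

points = [{"id": i, "vector": [0.0, 0.1]} for i in range(1000)]
batches = list(chunked(points, size=128))

# 2-4 parallel streams keep the server busy without overwhelming it
with ThreadPoolExecutor(max_workers=4) as pool:
    uploaded = sum(pool.map(upload_batch, batches))
```

Per-request overhead is amortized across the batch, which is also why single-point uploads (see "What NOT to Do" below) are so slow.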
For server-side, optimize Qdrant configuration and indexing strategy:
- Create more shards (3-12); each shard has an independent update worker (see Sharding)
- Create payload indexes before HNSW builds; they are needed for the filterable vector index (see Payload index)
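As a concrete illustration of the server-side setup, the REST bodies below create a collection with several shards and a payload index before any data is loaded. Field names follow Qdrant's REST schema; the collection name, vector size, and `category` field are illustrative.

```python
# Sketch of REST bodies for a bulk-ready collection: several shards for
# parallel update workers, and a payload index created BEFORE the HNSW build.
create_collection = {                 # PUT /collections/docs
    "vectors": {"size": 768, "distance": "Cosine"},
    "shard_number": 6,                # 3-12: one update worker per shard
}

create_payload_index = {              # PUT /collections/docs/index
    "field_name": "category",         # hypothetical filterable field
    "field_schema": "keyword",
}
```

Creating the payload index first lets the optimizer build the extra HNSW links for filtered search in the same pass, instead of rebuilding later.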
Suitable for initial bulk load of large datasets:
- Disable HNSW during bulk load: set `indexing_threshold_kb` very high, then restore it after the load (see Collection params)
- Setting `m=0` to disable HNSW is legacy; use a high `indexing_threshold_kb` instead
Careful: fast unindexed upload might temporarily use more RAM and degrade search performance until the optimizer catches up.
See https://search.qdrant.tech/md/documentation/tutorials-develop/bulk-upload/
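The defer-then-restore pattern above can be sketched as two PATCH bodies. Note a naming wrinkle: the REST field under `optimizers_config` is `indexing_threshold` (in KB), while the server config file uses `indexing_threshold_kb`; the exact values here are illustrative.

```python
# Sketch: temporarily defer HNSW by raising the indexing threshold, then
# restore the default once the bulk load is done.
defer_indexing = {                    # PATCH /collections/docs
    "optimizers_config": {
        "indexing_threshold": 1_000_000_000   # effectively "never index"
    }
}

# ... run the bulk upload here ...

restore_indexing = {                  # PATCH /collections/docs
    "optimizers_config": {
        "indexing_threshold": 20_000          # back to the ~20 MB default
    }
}
```

After restoring, the optimizer will index the accumulated segments in the background; expect elevated CPU until it finishes.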
Optimizer Stuck or Taking Too Long
Use when: optimizer running for hours, not finishing.
- Check actual progress via the optimizations endpoint (v1.17+; see Optimization monitoring)
- Large merges and HNSW rebuilds legitimately take hours on big datasets
- Check CPU and disk I/O (HNSW building is CPU-bound, merging is I/O-bound; HDDs are not viable)
- If `optimizer_status` shows an error, check logs for disk full or corrupted segments
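The `optimizer_status` check can be sketched as a small parser over the collection-info response (`GET /collections/{name}`). The response payloads below are hand-written examples following Qdrant's schema, not live output.

```python
# Sketch: interpret the `optimizer_status` field from a collection-info
# response. It is either the string "ok" or an object with an "error" key.
def optimizer_error(collection_info):
    """Return the optimizer's error string if it reported one, else None."""
    status = collection_info["result"]["optimizer_status"]
    if status == "ok":
        return None
    return status.get("error")        # e.g. disk full, corrupted segment

healthy = {"result": {"optimizer_status": "ok"}}
broken = {"result": {"optimizer_status": {"error": "No space left on device"}}}
```

Polling this alongside the optimizations endpoint distinguishes "legitimately slow" from "stuck on an error".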
HNSW Build Time Too High
Use when: HNSW index build dominates total indexing time.
- Reduce `m` (default 16 is good for most cases; 32+ is rarely needed) (see HNSW params)
- Reduce `ef_construct` (100-200 is sufficient) (see HNSW config)
- Keep `max_indexing_threads` proportional to CPU cores (see Configuration)
- Use GPU for indexing (see GPU indexing)
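The three parameter tips above fit in one PATCH body. Field names follow Qdrant's `hnsw_config` schema; the values are illustrative starting points, not tuned recommendations.

```python
# Sketch: PATCH body trading some graph quality for build time, using the
# parameters discussed above.
tune_hnsw = {                         # PATCH /collections/docs
    "hnsw_config": {
        "m": 16,                      # the default; 32+ rarely pays off
        "ef_construct": 128,          # 100-200 is usually sufficient
        "max_indexing_threads": 8,    # keep proportional to CPU cores
    }
}
```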
HNSW index for multi-tenant collections
If you have a multi-tenant use case where all data is split by some payload field (e.g.
`tenant_id`), you can avoid building a global HNSW index and instead rely on `payload_m` to build HNSW links only for subsets of the data.
Skipping global HNSW index can significantly reduce indexing time.
See Multi-tenant collections for details.
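The multi-tenant setup above can be sketched as two REST bodies: `m=0` skips the global graph while `payload_m` keeps per-tenant links, and the tenant field gets a tenant-aware keyword index. Field names follow Qdrant's REST schema; `tenant_id` and the values are illustrative.

```python
# Sketch: no global HNSW graph (m=0), per-tenant HNSW links via payload_m,
# plus a tenant-aware keyword index on the partitioning field.
create_collection = {                 # PUT /collections/docs
    "vectors": {"size": 768, "distance": "Cosine"},
    "hnsw_config": {"m": 0, "payload_m": 16},
}

tenant_index = {                      # PUT /collections/docs/index
    "field_name": "tenant_id",
    "field_schema": {"type": "keyword", "is_tenant": True},
}
```

Note this is the one sanctioned use of `m=0`: it is set at collection creation for tenant-partitioned data, not applied to an existing collection (see "What NOT to Do").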
Additional Payload Indexes Are Too Slow
Qdrant builds extra HNSW links for all payload indexes so that the quality of filtered vector search does not degrade. Some payload indexes (e.g.
text fields with long texts) can have a very high number of unique values per point, which can lead to long HNSW build times.
You can disable building extra HNSW links for a specific payload index and instead rely on slightly slower query-time strategies like ACORN.
- Read more about disabling extra HNSW links in the documentation
- Read more about ACORN in the documentation
What NOT to Do
- Do not create payload indexes AFTER HNSW is built (breaks filterable vector index)
- Do not use `m=0` for bulk uploads into an existing collection; it might drop the existing HNSW index and cause long reindexing
- Do not upload one point at a time (per-request overhead dominates)