Awesome-copilot qdrant-monitoring-debugging
Diagnoses Qdrant production issues using metrics and observability tools. Use when someone reports 'optimizer stuck', 'indexing too slow', 'memory too high', 'OOM crash', 'queries are slow', 'latency spike', or 'search was fast now it's slow'. Also use when performance degrades without obvious config changes.
install
source · Clone the upstream repo
git clone https://github.com/github/awesome-copilot
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/github/awesome-copilot "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-monitoring/debugging" ~/.claude/skills/github-awesome-copilot-qdrant-monitoring-debugging && rm -rf "$T"
manifest: skills/qdrant-monitoring/debugging/SKILL.md
Source content:
How to Debug Qdrant with Metrics
First, check optimizer status. Most production issues trace back to active optimizations competing for resources. If the optimizer is clean, check memory next, then request metrics.
Optimizer Stuck or Too Slow
Use when: optimizer running for hours, not finishing, or showing errors.
- Use the `/collections/{collection_name}/optimizations` endpoint (v1.17+) to check status (Optimization monitoring docs)
- Query with optional detail flags: `?with=queued,completed,idle_segments`
- Returns: queued optimization count, active optimizer type, involved segments, progress tracking
- Web UI has an Optimizations tab with a timeline view and per-task duration metrics (Web UI docs)
- If `optimizer_status` shows an error in collection info, check logs for disk full or corrupted segments
- Large merges and HNSW rebuilds legitimately take hours on big datasets. Check progress before assuming it's stuck.
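As a minimal sketch, the status check above can be scripted; the collection name and `localhost:6333` are assumptions for a default local deployment:

```shell
# Build the optimizations status URL (endpoint available in v1.17+).
# "my_collection" and localhost:6333 are assumed example values.
COLLECTION="my_collection"
URL="http://localhost:6333/collections/${COLLECTION}/optimizations?with=queued,completed,idle_segments"
echo "$URL"
# Against a live cluster:
#   curl -s "$URL" | jq '.result'
```

If the same optimizer type stays active across repeated polls with no progress change, that is the point to start suspecting a genuinely stuck task rather than a slow one.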
Memory Seems Too High
Use when: memory exceeds expectations, node crashes with OOM, or memory keeps growing.
- Process memory metrics available via `/metrics` (RSS, allocated bytes, page faults)
- Qdrant uses two types of RAM: resident memory (data structures, quantized vectors) and OS page cache (cached disk reads). Page cache filling available RAM is normal. (Memory docs)
- If resident memory (RSSAnon) exceeds 80% of total RAM, investigate
- Check `/telemetry` for a per-collection breakdown of point counts and vector configurations
- Estimate expected memory: `num_vectors * dimensions * 4 bytes * 1.5` for vectors, plus payload and index overhead (Capacity planning docs)
- Common causes of unexpected growth: quantized vectors with `always_ram=true`, too many payload indexes, large `max_segment_size` during optimization
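The estimation rule above works out with shell arithmetic; the vector count and dimensionality below are made-up example values:

```shell
# Back-of-envelope RAM estimate: num_vectors * dimensions * 4 bytes * 1.5
# (the 1.5 factor covers payload and index overhead).
# NUM_VECTORS and DIM are example values, not defaults.
NUM_VECTORS=1000000
DIM=768
EST_BYTES=$(( NUM_VECTORS * DIM * 4 * 3 / 2 ))   # *3/2 is *1.5 in integer math
echo "Expected vector memory: ${EST_BYTES} bytes (~$(( EST_BYTES / 1024 / 1024 )) MiB)"
```

Compare the result against resident memory (RSSAnon) from `/metrics`, not against total process memory, since page cache inflates the latter.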
Queries Are Slow
Use when: queries slower than expected and you need to identify the cause.
- Track `rest_responses_avg_duration_seconds` and `rest_responses_max_duration_seconds` per endpoint
- Use the histogram metric `rest_responses_duration_seconds` (v1.8+) for percentile analysis in Grafana
- Equivalent gRPC metrics use the `grpc_responses_` prefix
- Check optimizer status first. Active optimizations compete for CPU and I/O, degrading search latency.
- Check segment count via collection info. Too many unmerged segments after a bulk upload slow search down.
- Compare filtered vs unfiltered query times. A large gap means a missing payload index. (Payload index docs)
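A quick way to eyeball the latency metrics named above is to grep the Prometheus exposition text. The snippet filters an illustrative sample (the metric names are from the list above; the values and endpoint label are made up):

```shell
# Illustrative /metrics lines; against a live node, replace the echo with:
#   curl -s http://localhost:6333/metrics
SAMPLE='rest_responses_avg_duration_seconds{endpoint="/collections/test/points/search"} 0.012
rest_responses_max_duration_seconds{endpoint="/collections/test/points/search"} 0.350'
echo "$SAMPLE" | grep 'max_duration_seconds'
```

A max far above the average, as in this sample, usually points to intermittent contention (e.g. a running optimization) rather than uniformly slow queries.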
What NOT to Do
- Ignore optimizer status when debugging slow queries (most common root cause)
- Assume memory leak when page cache fills RAM (normal OS behavior)
- Make config changes while optimizer is running (causes cascading re-optimizations)
- Blame Qdrant before checking if bulk upload just finished (unmerged segments)