Skills qdrant-search-strategies

Guides Qdrant search strategy selection. Use when someone asks 'should I use hybrid search?', 'how to rerank?', 'results are not relevant', 'I don't get needed results from my dataset but they're there', 'retrieval quality is not good enough', 'results too similar', 'need diversity', 'MMR', 'relevance feedback', 'recommendation API', 'discovery API', or 'missing keyword matches'

install
source · Clone the upstream repo
git clone https://github.com/qdrant/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/qdrant/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-search-quality/search-strategies" ~/.claude/skills/qdrant-skills-qdrant-search-strategies && rm -rf "$T"
manifest: skills/qdrant-search-quality/search-strategies/SKILL.md
source content

How to Improve Search Results with Advanced Strategies

These strategies complement basic vector search. Use them only after confirming the embedding model fits the task and the HNSW config is correct. If exact (non-approximate) search already returns poor results, revisit the choice of embedding model (retriever) first. If the user prefers a weaker embedding model because it is small, fast, and cheap, use reranking or relevance feedback to recover search quality.

Missing Keyword Matches or Need to Combine Multiple Search Signals

Use when: pure vector search misses keyword/domain term matches, or the use case benefits from combining searches over multiple representations of the same item (e.g., different languages or modalities).

See how to use hybrid search
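A minimal hybrid-search sketch with the Python client: dense and sparse prefetches fused with Reciprocal Rank Fusion via the Query API. The collection name "docs", the named vectors "dense" and "sparse", and the query vectors are illustrative placeholders, not part of this skill.

from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

dense_query = [0.12, 0.08, 0.33, 0.91]          # output of your dense embedding model
sparse_query = models.SparseVector(             # e.g. BM25/SPLADE-style term weights
    indices=[102, 7734, 20001],
    values=[0.7, 0.4, 0.9],
)

results = client.query_points(
    collection_name="docs",
    prefetch=[
        models.Prefetch(query=dense_query, using="dense", limit=50),
        models.Prefetch(query=sparse_query, using="sparse", limit=50),
    ],
    query=models.FusionQuery(fusion=models.Fusion.RRF),  # merge both candidate lists
    limit=10,
)
for point in results.points:
    print(point.id, point.score)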

Right Documents Found But Not in the Top Results

Use when: good recall but poor precision (right docs in top-100, not top-10).
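A minimal reranking sketch for this situation: oversample candidates from Qdrant, then reorder them with a cross-encoder. The collection name "docs", the payload field "text", and the model choice are illustrative assumptions.

from qdrant_client import QdrantClient
from sentence_transformers import CrossEncoder

client = QdrantClient(url="http://localhost:6333")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query_text = "how to configure snapshots"
query_vector = [0.12, 0.08, 0.33, 0.91]  # embed query_text with your retriever model

# 1. Retrieve a generous candidate set (the right docs are somewhere in the top-100)
candidates = client.query_points(
    collection_name="docs",
    query=query_vector,
    limit=100,
    with_payload=True,
).points

# 2. Score each (query, document) pair with the cross-encoder
scores = reranker.predict([(query_text, p.payload["text"]) for p in candidates])

# 3. Return the top-10 by reranker score instead of retriever score
reranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)[:10]
for point, score in reranked:
    print(round(float(score), 3), point.payload["text"][:80])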

Right Documents Not Found But They Are There

Use when: basic retrieval is in place but the retriever misses relevant items you know exist in the dataset. Works on any embeddable data (text, images, etc.).

Relevance Feedback (RF) Query uses a feedback model's scores on retrieved results to steer the retriever through the full vector space on subsequent iterations, which is akin to reranking the entire collection through the retriever. It is complementary to reranking: a reranker sees only a limited subset, while RF leverages feedback signals collection-wide. Even 3–5 feedback scores are enough, and multiple iterations can be run.

A feedback model is anything that produces a relevance score per document: a bi-encoder, cross-encoder, late-interaction model, or LLM-as-judge. Because feedback is expressed as a graded relevance score (higher = more relevant), fuzzy scores work, not just binary labels (good/bad, relevant/irrelevant).
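A minimal sketch of collecting graded feedback scores, using a cross-encoder as the feedback model; these (id, score) pairs are the input an RF Query consumes. The collection and payload names are illustrative, and wiring the scores into the Relevance Feedback Query itself is covered by the API docs and the RF tutorial below.

from qdrant_client import QdrantClient
from sentence_transformers import CrossEncoder

client = QdrantClient(url="http://localhost:6333")
feedback_model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query_text = "rotate TLS certificates without downtime"
query_vector = [0.12, 0.08, 0.33, 0.91]  # embed query_text with your retriever

# A handful of initial hits is enough: 3-5 graded scores already carry signal
hits = client.query_points(
    collection_name="docs",
    query=query_vector,
    limit=5,
    with_payload=True,
).points

# Graded (not binary) relevance per retrieved point, higher = more relevant
scores = feedback_model.predict([(query_text, h.payload["text"]) for h in hits])
feedback = [(hit.id, float(score)) for hit, score in zip(hits, scores)]
print(feedback)  # pass these (id, score) pairs to the Relevance Feedback Query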

Skip when: the retriever already has strong recall, or the retriever and feedback model strongly agree on relevance.

  • RF Query is currently based on a naive three-parameter formula with no universal defaults, so it must be tuned per dataset, retriever, and feedback model
  • Use qdrant-relevance-feedback to tune parameters, evaluate impact with Evaluator, and check retriever-feedback agreement. See README for setup instructions. No GPUs are needed, and the framework also provides predefined retriever and feedback model options.
  • Check the configuration of the Relevance Feedback Query API
  • Use this end-to-end text retrieval example, with parameter tuning and evals, as a helper to understand how to use the API and run the qdrant-relevance-feedback framework: RF tutorial

Results Too Similar

Use when: top results are redundant, near-duplicates, or lack diversity. Common in dense content domains (academic papers, product catalogs).

  • Use MMR (v1.15+) as a query parameter with diversity to balance relevance and diversity: MMR. See the sketch after this list.
  • Start with diversity=0.5, lower for more precision, higher for more exploration
  • MMR is slower than standard search. Only use when redundancy is an actual problem.
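A minimal MMR sketch, assuming the Python client mirrors the REST schema with models.NearestQuery and models.Mmr; the diversity value follows the guidance above, while the exact field names should be verified against your client version. Collection name and query vector are placeholders.

from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

results = client.query_points(
    collection_name="docs",
    query=models.NearestQuery(
        nearest=[0.12, 0.08, 0.33, 0.91],
        mmr=models.Mmr(
            diversity=0.5,         # 0 = pure relevance, 1 = maximum diversity
            candidates_limit=100,  # candidate pool MMR reselects from (assumed name)
        ),
    ),
    limit=10,
)
for point in results.points:
    print(point.id, point.score)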

Want to Improve Search Results Based on Examples (Positive and Negative)

Use when: you can provide positive and negative example points to steer search closer to positive and further from negative.

  • Recommendation API: positive/negative example points to recommend fitting vectors: Recommendation API. See the sketch after this list.
    • Best score strategy: better for diverse examples, supports negative-only: Best score
  • Discovery API: context pairs (positive/negative) to constrain search regions, even without a target in the request: Discovery
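A minimal sketch of both example-driven queries with the Python client. The point IDs, the collection name, and the single context pair are illustrative; the Discovery call is shown with a target here, while context-only search (no target) is the variant described above.

from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Recommendation: pull results toward liked points, away from disliked ones.
# best_score copes with diverse examples and also works with negatives only.
recommended = client.query_points(
    collection_name="docs",
    query=models.RecommendQuery(
        recommend=models.RecommendInput(
            positive=[42, 105],   # point IDs the user liked
            negative=[7],         # point IDs the user disliked
            strategy=models.RecommendStrategy.BEST_SCORE,
        )
    ),
    limit=10,
)

# Discovery: context pairs constrain the region the search is allowed to explore
discovered = client.query_points(
    collection_name="docs",
    query=models.DiscoverQuery(
        discover=models.DiscoverInput(
            target=42,
            context=[models.ContextPair(positive=105, negative=7)],
        )
    ),
    limit=10,
)
print([p.id for p in recommended.points], [p.id for p in discovered.points])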

Have Business Logic Behind Result Relevance

Use when: results should be additionally ranked according to business logic derived from stored data, such as recency or distance.

Check how to set this up in the Score Boosting docs
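A minimal score-boosting sketch against the raw REST Query API: a formula query rescoring prefetched results by combining the similarity score with a payload condition. The body shape (sum, mult, $score) is an assumption to verify against the Score Boosting docs; the collection name, payload field "tag", and the 0.5 weight are illustrative.

import requests

body = {
    "prefetch": {"query": [0.12, 0.08, 0.33, 0.91], "limit": 50},  # similarity stage
    "query": {
        "formula": {
            "sum": [
                "$score",  # vector similarity score from the prefetch
                # business logic: add 0.5 whenever the payload condition matches
                {"mult": [0.5, {"key": "tag", "match": {"value": "featured"}}]},
            ]
        }
    },
    "limit": 10,
}
resp = requests.post("http://localhost:6333/collections/docs/points/query", json=body)
print(resp.json()["result"]["points"])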

What NOT to Do

  • Use hybrid search before verifying pure vector search quality (adds complexity, may mask model issues)
  • Skip evaluation when adding relevance feedback (check on real queries that it actually helps)