qdrant-hybrid-search-prefetches

Use when someone asks 'how to combine lexical and semantic retrieval', 'dense and sparse in one search?', 'how to combine multiple fields for retrieval?', 'payloads or sparse vectors for lexical?', 'which sparse embedding model to use?', 'BM25 vs SPLADE?'

Install
source · Clone the upstream repo
git clone https://github.com/qdrant/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/qdrant/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/qdrant-search-quality/search-strategies/hybrid-search/search-types" ~/.claude/skills/qdrant-skills-qdrant-hybrid-search-prefetches && rm -rf "$T"
manifest: skills/qdrant-search-quality/search-strategies/hybrid-search/search-types/SKILL.md
source content

Different Searches in One Query API Request

Each prefetch runs exactly one search for one query.

Understand whether the user wants to run several parallel searches on:

  1. The same vector representations but different queries or filters.
  2. Different vector representations but the same raw query.

If the first, help the user design the logic for constructing the query and/or filters on the application side, then check Combining Searches. Don't forget to create indices on filterable payload fields immediately after collection creation, prior to building HNSW, so that a filterable HNSW graph can be constructed.
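A minimal sketch of that ordering with the Python client, assuming a local Qdrant instance and a hypothetical "docs" collection with a filterable "category" field:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="docs",  # hypothetical collection name
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

# Index the filterable payload field right away, before any points are
# ingested, so the filterable HNSW graph is built with the index in place.
client.create_payload_index(
    collection_name="docs",
    field_name="category",
    field_schema=models.PayloadSchemaType.KEYWORD,
)
```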

If the second, use named vectors, which allow storing multiple vector types per point in one collection. Beware that named vectors can currently be configured only at collection creation. To choose vectors, check the following recommendations.
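A minimal sketch of a collection with one dense and one sparse named vector; the collection name, vector names, and sizes are illustrative:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="docs-hybrid",  # hypothetical collection name
    vectors_config={
        # dense semantic representation; size must match your embedding model
        "dense": models.VectorParams(size=384, distance=models.Distance.COSINE),
    },
    sparse_vectors_config={
        # sparse lexical representation; the IDF modifier enables BM25-style scoring
        "bm25": models.SparseVectorParams(modifier=models.Modifier.IDF),
    },
)
```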

Missed Keyword Matches

Use when: pure vector search misses exact term or keyword matches and you need lexical retrieval alongside semantic search.

Most likely you need a sparse vector for exact text search alongside the dense one. Qdrant uses sparse vectors for lexical search, since payload filtering doesn't produce any ranking score.
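A minimal sketch of running both searches in one Query API request and fusing the results with Reciprocal Rank Fusion, assuming the hypothetical "docs-hybrid" collection above and precomputed query vectors:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

results = client.query_points(
    collection_name="docs-hybrid",
    prefetch=[
        # lexical candidates from the sparse named vector
        models.Prefetch(
            query=models.SparseVector(indices=[17, 42], values=[0.8, 0.3]),
            using="bm25",
            limit=50,
        ),
        # semantic candidates from the dense named vector
        models.Prefetch(
            query=[0.01] * 384,  # placeholder dense query embedding
            using="dense",
            limit=50,
        ),
    ],
    # fuse both candidate lists by rank
    query=models.FusionQuery(fusion=models.Fusion.RRF),
    limit=10,
)
```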

Choose a Sparse Vector for Text

  • BM25: statistical representations, built into Qdrant core (computed server-side). A good baseline that works out-of-domain, usually for long texts. Can be used for non-English content, but needs to be configured per language (tokenization, stemming, stopwords, etc.) at indexing and retrieval time; see the sketch after this list for one way to produce these vectors. More in the Text Search Guide.
  • BM42: learned sparse representations based on BM25, better for small chunks of text and with some understanding of meaning. Works only on English. Requires fine-tuning for domain-specific retrieval. Requires FastEmbed (Python/REST only, not available in all SDKs). Not maintained.
  • miniCOIL: learned sparse representations, BM25 with additional understanding of word meaning in context. Works only on English. Requires fine-tuning for domain-specific retrieval. Requires FastEmbed. Usage shown in the FastEmbed miniCOIL documentation.
  • SPLADE++: learned sparse representations with term expansion. Heavier inference and resource usage, but better quality thanks to term expansion. Requires fine-tuning for domain-specific retrieval. Provided in Qdrant Cloud Inference; the FastEmbed versions work only on English. To use with FastEmbed, check the FastEmbed SPLADE documentation.
  • External learned sparse embeddings, for example BAAI/bge-m3.
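One way to produce BM25 sparse vectors is the Python client's FastEmbed integration (requires qdrant-client[fastembed]; inference runs client-side). A minimal sketch, assuming the hypothetical "docs-hybrid" collection above; "Qdrant/bm25" is the FastEmbed model name:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.upsert(
    collection_name="docs-hybrid",
    points=[
        models.PointStruct(
            id=1,
            vector={
                # BM25 sparse vector computed by FastEmbed from raw text
                "bm25": models.Document(
                    text="Qdrant combines dense and sparse search via prefetches",
                    model="Qdrant/bm25",
                ),
                "dense": [0.01] * 384,  # placeholder dense embedding
            },
            payload={"category": "docs"},
        ),
    ],
)
```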

What to remember when using sparse vectors for lexical search:

  • tokenization and stemming affect exact matches, especially for custom codes, terms, etc.

What to remember when using Qdrant BM25 and miniCOIL (based on BM25):

  • avg_len in the BM25 formula (see below) is not computed server-side; computing it is the user's responsibility, and it is passed as a parameter
  • BM25 may perform poorly on small chunks of text, as the algorithm was originally designed for search over long documents; consider adjusting document statistics in sparse vectors (TF & IDF, k, b)
  • Qdrant BM25 vectors are configured per language, so consider customizing stop words, stemming & tokenization when users' documents mix several languages, or carefully configure vectors per point when they are monolingual
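For reference, the standard BM25 scoring formula, in which avg_len, k (often written k_1), and b appear:

$$
\text{score}(D, Q) = \sum_{t \in Q} \mathrm{IDF}(t) \cdot \frac{f(t, D)\,(k + 1)}{f(t, D) + k \left(1 - b + b \cdot \frac{|D|}{\text{avg\_len}}\right)}
$$

where f(t, D) is the term frequency of t in document D and |D| is the document length in tokens.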

More on Sparse Vectors for Text Search

Need to Combine Multiple Representations of the Same Item

Use when: the same item is embedded in multiple ways (e.g. different models, languages, or modalities) and you want to search across several of its representations in one request (not necessarily all of them; even a single one works).

Use multiple named-vector prefetches; each prefetch covers one representation.

  • If you have groups and subgroups of representations (document -> chunk, image -> patch), you could use searching in groups. To avoid storing identical payloads several times, check Lookup in Groups; see the sketch after this list.
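A minimal sketch of grouped search with a payload lookup, assuming hypothetical "chunks" and "documents" collections where each chunk carries a "document_id" payload field pointing at its parent document:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

results = client.query_points_groups(
    collection_name="chunks",
    query=[0.01] * 384,      # placeholder dense query embedding
    group_by="document_id",  # group chunk hits by parent document
    limit=5,                 # number of groups to return
    group_size=2,            # best hits kept per group
    with_lookup=models.WithLookup(
        collection="documents",  # fetch the shared payload from the parent
        with_payload=["title"],  # instead of duplicating it on every chunk
    ),
)
```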

You can also search directly on multivectors, a matrix of dense vectors, in a prefetch.

However, this comes with several considerations: multivectors were designed to support late interaction models using the max similarity (MaxSim) metric, so it's impossible to retrieve the list of individual max similarity scores for each query vector.

Moreover, multivectors are rarely a good pick for a prefetch: comparing matrices of vectors is expensive, so they are usually better suited for reranking a small candidate set than for first-stage retrieval.

There are ways to make multivector retrieval cheaper (MUVERA, pooling); see more in "Evaluating Tradeoffs of Multi-stage Multi-vector Search".
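A minimal sketch of configuring a named multivector for late interaction models; the collection name, vector name, and size are illustrative:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="docs-multivector",  # hypothetical collection name
    vectors_config={
        "colbert": models.VectorParams(
            size=128,  # per-token vector size of the late interaction model
            distance=models.Distance.COSINE,
            # each point stores a matrix of vectors, compared with MaxSim
            multivector_config=models.MultiVectorConfig(
                comparator=models.MultiVectorComparator.MAX_SIM,
            ),
        ),
    },
)
```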

What NOT to Do

  • Choose any search method (for example, BM25) without evaluating its quality and the resources it uses.
  • Use any search method (for example, BM25) without paying attention to the specifics of its configuration and its applicability to the use case.