Awesome-omni-skill qmd
Fast local search for markdown files, notes, and docs using qmd CLI. Combines BM25 full-text search, vector semantic search, and LLM reranking — all running locally. No API keys needed.
install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/qmd" ~/.claude/skills/diegosouzapw-awesome-omni-skill-qmd-b26484 && rm -rf "$T"
manifest:
skills/data-ai/qmd/SKILL.md
qmd - Local Markdown Search
Local search engine for Markdown notes, docs, and knowledge bases. Index once, search fast. Use instead of `find` for file discovery across large directories.
Installation
bun install -g https://github.com/tobi/qmd
Setup
```bash
# Add a collection
qmd collection add /path/to/your/notes --name notes --mask "**/*.md"

# Generate embeddings (required for vsearch/query)
qmd embed

# List your collections
qmd collection list
```
When to Use
- "search my notes / docs / knowledge base"
- "find related notes"
- "find files matching [pattern]" — use instead of `find` to avoid hangs on large directories
- "what did we decide about X?"
Default Behavior (important)
- Prefer `qmd search` (BM25) — it's instant and should be the default.
- Use `qmd vsearch` only when keyword search fails and you need semantic similarity.
- Avoid `qmd query` unless the user explicitly wants the highest quality hybrid results and can tolerate long runtimes.
- Always use the `--json` flag for structured output when invoking from an agent.
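A minimal agent-side sketch of the guidance above. `build_args`, `qmd_json`, and `search_with_fallback` are illustrative helper names, not part of qmd itself; the sketch assumes `qmd` is on `PATH` and that its `--json` output parses to a list of results:

```python
import json
import subprocess

def build_args(cmd, query, collection=None):
    """Assemble a qmd invocation, always requesting --json output."""
    args = ["qmd", cmd, query, "--json"]
    if collection:
        args += ["-c", collection]
    return args

def qmd_json(cmd, query, collection=None):
    """Run a qmd subcommand and parse its JSON output."""
    out = subprocess.run(build_args(cmd, query, collection),
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def search_with_fallback(query, collection=None):
    """Prefer fast BM25 search; fall back to semantic vsearch
    only when keyword search returns no hits."""
    hits = qmd_json("search", query, collection)
    return hits if hits else qmd_json("vsearch", query, collection)
```

The expensive `qmd query` mode is deliberately left out of the fallback chain, matching the advice to reserve it for explicit user requests.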
Search Commands
```bash
# Fast keyword search (default)
qmd search "authentication flow" --json
qmd search "config" --json -c notes

# Semantic search (slower, for conceptual queries)
qmd vsearch "how does login work" --json
qmd vsearch "best practices for error handling" --json -n 20

# Combined with reranking (best quality, slowest)
qmd query "implementing user auth" --json
qmd query "deployment process" --json --min-score 0.5
```
Search Mode Selection
| Mode | Speed | Quality | Best For |
|---|---|---|---|
| `qmd search` | Fast | Good | Exact keywords, known terms |
| `qmd vsearch` | Medium | Better | Conceptual queries, synonyms |
| `qmd query` | Slow | Best | Complex questions, uncertain terms |
Search Options
| Option | Description |
|---|---|
| `-n` | Number of results (default: 5, 20 with `--json`) |
| `-c` | Scope to specific collection |
| `--min-score` | Minimum score threshold |
| | Return complete document content |
| `--json` | Structured JSON output (agent-friendly) |
| `--files` | File paths only (fast discovery) |
| | Return all matches |
Retrieve Documents
```bash
# Get full file
qmd get docs/guide.md --json

# Get by document hash ID
qmd get "#a1b2c3" --json

# Get specific lines
qmd get notes/meeting.md:50 -l 30 --json

# Get multiple files by glob
qmd multi-get "docs/*.md" --json
qmd multi-get "*.yaml" -l 50 --max-bytes 10240
```
Output Formats
- `--files` — paths + scores (for file discovery)
- `--json` — structured with snippets
- `--md` — markdown formatted
- `-n 10` — limit results
Maintenance
```bash
qmd update           # Re-index changed files
qmd status           # Check index health
qmd collection list  # List all collections
```
Keeping Index Fresh
```bash
# Hourly incremental updates (BM25)
0 * * * * export PATH="$HOME/.bun/bin:$PATH" && qmd update

# Optional: nightly embedding refresh
0 5 * * * export PATH="$HOME/.bun/bin:$PATH" && qmd embed
```
MCP Server
qmd can run as an MCP server for direct agent integration:
qmd mcp
Exposes tools:
qmd_search, qmd_vsearch, qmd_query, qmd_get, qmd_multi_get, qmd_status
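For Claude Code, a minimal sketch of a project-level `.mcp.json` entry registering the server (assuming the standard `mcpServers` config shape; adjust the command path if `qmd` is not on `PATH`):

```json
{
  "mcpServers": {
    "qmd": {
      "command": "qmd",
      "args": ["mcp"]
    }
  }
}
```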
Performance
- `qmd search` is typically instant.
- `qmd vsearch` can take ~1 minute on first run (loads local LLM).
- `qmd query` adds LLM reranking — can be slow; avoid for interactive use.
Models (auto-downloaded)
All run locally — no API keys needed.
- Embedding: embeddinggemma-300M
- Reranking: qwen3-reranker-0.6b
- Generation: Qwen3-0.6B
- Cache location: `~/.cache/qmd/models/`
- Override with the `XDG_CACHE_HOME` environment variable.
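As a sketch of that lookup convention (`qmd_model_cache` is an illustrative helper, not a qmd API — it just applies the XDG rule described above):

```python
import os
from pathlib import Path

def qmd_model_cache(env=os.environ):
    """Resolve the model cache directory: use $XDG_CACHE_HOME when set,
    otherwise fall back to the ~/.cache default."""
    base = env.get("XDG_CACHE_HOME") or str(Path.home() / ".cache")
    return str(Path(base) / "qmd" / "models")
```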