install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/consult-llm" ~/.claude/skills/majiayu000-claude-skill-registry-consult-llm && rm -rf "$T"
manifest:
skills/data/consult-llm/SKILL.md · source content
When consulting with external LLMs:
1. Gather Context First:
- Use Glob/Grep to find relevant files
- Read key files to understand their relevance
- Select files directly related to the question
2. Determine Mode and Model:
- Web mode: Use if user says "ask in browser" or "consult in browser"
- Codex mode: Use if user says "ask codex" → use model "gpt-5.1-codex-max"
- Gemini mode: Default for "ask gemini" → use model "gemini-2.5-pro"
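The mode/model dispatch in step 2 can be sketched as a small lookup. The trigger phrases and model names are taken from this document; the function name and the returned dict shape are illustrative, not part of the skill:

```python
def pick_mode_and_model(request: str) -> dict:
    """Map the user's phrasing to a consultation mode and model.

    Trigger phrases and model names come from the skill text above;
    the dict shape itself is a hypothetical illustration.
    """
    text = request.lower()
    if "ask in browser" in text or "consult in browser" in text:
        return {"mode": "web"}  # model is ignored in web mode
    if "ask codex" in text:
        return {"mode": "api", "model": "gpt-5.1-codex-max"}
    # Default (including "ask gemini"): Gemini via the API
    return {"mode": "api", "model": "gemini-2.5-pro"}
```

For example, `pick_mode_and_model("please ask codex about this bug")` yields `{"mode": "api", "model": "gpt-5.1-codex-max"}`.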
3. Call the MCP Tool: Use mcp__consult-llm__consult_llm with:
- For API mode (Gemini):
  - model: "gemini-2.5-pro"
  - prompt: Clear, neutral question without suggesting solutions
  - files: Array of relevant file paths
- For API mode (Codex):
  - model: "gpt-5.1-codex-max"
  - prompt: Clear, neutral question without suggesting solutions
  - files: Array of relevant file paths
- For web mode:
  - web_mode: true
  - prompt: Clear, neutral question without suggesting solutions
  - files: Array of relevant file paths (the model parameter is ignored in web mode)
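Put together, the three argument shapes above might look like the following payloads. The tool and parameter names are as given in this document; the prompts and file paths are hypothetical examples:

```python
# Hypothetical argument payloads for mcp__consult-llm__consult_llm,
# one per mode described above. Prompts and file paths are illustrative.
gemini_args = {
    "model": "gemini-2.5-pro",
    "prompt": "What trade-offs should we weigh for the retry logic in this module?",
    "files": ["src/retry.py", "src/config.py"],
}

codex_args = {
    "model": "gpt-5.1-codex-max",
    "prompt": "What edge cases does this parser miss?",
    "files": ["src/parser.py"],
}

web_args = {
    "web_mode": True,  # the model parameter is ignored in web mode
    "prompt": "How could the caching layer in these files be simplified?",
    "files": ["src/cache.py", "src/store.py"],
}
```

Note that all three shapes share `prompt` and `files`; only the mode selector differs.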
4. Present Results:
- API mode: Summarize key insights, recommendations, and considerations from the response
- Web mode: Inform user the prompt was copied to clipboard and ask them to paste it into their browser-based LLM and share the response back
Critical Rules:
- ALWAYS gather file context before consulting
- Ask neutral, open-ended questions to avoid bias
- Provide focused, relevant files (quality over quantity)
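The "gather file context first" rule could be sketched with stdlib globbing as a rough stand-in for the Glob/Grep step. The helper name and its keyword-based relevance filter are illustrative assumptions, not the skill's actual implementation:

```python
from pathlib import Path

def gather_context(root: str, pattern: str, keyword: str) -> list[str]:
    """Collect files under `root` matching `pattern` whose text mentions
    `keyword` -- a minimal stand-in for the Glob/Grep context step."""
    hits = []
    for path in sorted(Path(root).rglob(pattern)):
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        if keyword in text:
            hits.append(str(path))
    return hits
```

Per the "quality over quantity" rule, cap or rank the hits before passing them as `files` rather than forwarding every match.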