# claude-skill-registry: llm-call

External LLM invocation. Triggered ONLY by @council, @probe, @crossref, @gpt, @gemini, @grok, @qwen.

Install the full registry:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Or copy just this skill into `~/.claude/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/llm-call" ~/.claude/skills/majiayu000-claude-skill-registry-llm-call && rm -rf "$T"
```

## skills/data/llm-call/SKILL.md: LLM Call

External LLM access. Only activates on explicit triggers.
## Triggers

| Trigger | Action |
|---|---|
| @council | Query all 4 models in parallel |
| @gpt | GPT-5.1 only |
| @gemini | Gemini 3 Pro only |
| @grok | Grok 4.1 Fast only |
| @qwen | Qwen3 Max only |
| @probe | Follow-up question with auto-context from session history |
| @crossref | Models comment on each other's previous responses |
No trigger → Claude handles alone.
Auto-save: every request is automatically saved to the session in step-based folders for history tracking.
Why This Pattern Exists
The value is independent perspective, not review.
If Claude shows its draft to external models, they anchor on it. Instead:
- Claude forms complete answer first
- External models answer the same question independently (they never see Claude's draft)
- Claude compares all answers afterward
External models cannot: search web, use tools, see files, or access conversation history. Claude must include all relevant context in the query.
## Workflow - Follow STRICTLY

Input sections:

===QUERY=== (required), ===DRAFT=== (optional), ===PROBE=== (probe only)

Draft: if Claude has already answered the question, Claude SHOULD NOT include the ===DRAFT=== section in the council phase, to save context window. Claude can still pass the draft if the user invokes @crossref afterward.
### Single Model (@gpt, @gemini, @grok, @qwen)

```shell
cli.py -m single -M gpt << 'EOF'
===QUERY===
Question + context
===DRAFT===
Claude's answer
EOF
```
### Council (@council)

```shell
cli.py -m council << 'EOF'
===QUERY===
Question + context
===DRAFT===
Claude's answer
EOF
```

Add `-c` for confidence ratings.
### Probe (@probe)

Follow-up with auto-context from ALL previous steps.

```shell
cli.py -m probe << 'EOF'
===QUERY===
Explain more about [point]
===PROBE===
@gpt
EOF
```

Auto-gathers history → sends to model → saves to a new step.
### Crossref (@crossref)

Purpose: each model sees what ALL others said and comments on their responses.

Crossref requires Claude's draft. Two ways to provide it:

- If you already included ===DRAFT=== in the council step:

```shell
cli.py -m crossref
```

- If you only sent the query in council (no draft):

```shell
cli.py -m crossref << 'EOF'
===DRAFT===
Claude's answer here
EOF
```
What each model receives:
- Original question
- Their own previous answer (if they had one)
- Claude's draft (if available)
- All OTHER models' responses
ALL 4 models are always invoked, even if one failed in the council step. A model that failed earlier can still comment on others' responses.
Session: auto-saves to `/tmp/sessions/s_TIMESTAMP/1/`, `/2/`, etc. Each step folder contains .md files (query, draft, gpt, gemini, grok).
## Script Reference

| Mode | Usage |
|---|---|
| `cli.py -m council` | All 4 models (auto-save) |
| `cli.py -m single -M <model>` | One model (auto-save) |
| `cli.py -m probe` | Follow-up with auto-context |
| `cli.py -m crossref` | Models critique each other |
| | Show session |
| | Delete session |
Flags:

- `-M`: model (gpt/gemini/grok/qwen)
- `-c`: confidence mode
- `-S`: session ID or `new` (optional)
## The -c Flag (Confidence)

What it does: asks each model to rate its confidence and explain what would change its answer.

When to use it:
- Factual/analytical questions where certainty matters
- To surface what evidence each model is relying on
## The -S Flag (Session)

Options:
- `-S new`: force create a new session (useful when starting a new topic)
- `-S <session_id>`: use a specific session (e.g., `-S s_20250101_120000_1234`)
- (omit): auto-use the current session, or create one if none exists

When to use `-S new`:
- Starting a completely new topic/question
- Want to keep the previous session separate