Awesome-omni-skill ai-consultants
Consult Gemini CLI, Codex CLI, Mistral Vibe, Kilo CLI, Cursor, Claude, Amp, Kimi, Qwen, and Ollama as external experts for coding questions. Automatically excludes the invoking agent from the panel to avoid self-consultation. Use when you have doubts about implementations, want a second opinion, need to choose between different approaches, or when explicitly requested with phrases like "ask the consultants", "what do the other models think", "compare solutions".
```shell
# Clone the repository
git clone https://github.com/diegosouzapw/awesome-omni-skill

# Or copy the skill directly into Claude Code's skills directory
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/ai-consultants" ~/.claude/skills/diegosouzapw-awesome-omni-skill-ai-consultants && rm -rf "$T"
```
skills/data-ai/ai-consultants/SKILL.md

AI Consultants v2.9.1 - AI Expert Panel
Simultaneously consult multiple AIs as "consultants" for coding questions. Each consultant has a configurable persona that influences their response style.
Quick Start
```shell
/ai-consultants:config-wizard                 # Initial setup
/ai-consultants:consult "Your question here"
```
What's New in v2.9
- Kimi CLI Consultant: New "The Eastern Sage" persona for holistic understanding (v2.9)
- Amp CLI Consultant: "The Systems Thinker" persona for system design (v2.8)
- Qwen CLI Support: CLI/API mode switching for Qwen3 (v2.7)
- CLI/API Mode Switching: Gemini, Codex, Claude, Mistral, Qwen3 can use CLI or API (v2.6)
- Model Quality Tiers: premium, standard, economy with `apply_model_tier()` (v2.5)
- Budget Enforcement: Configurable cost limits with `ENABLE_BUDGET_LIMIT` (v2.4)
- Premium Model Defaults: All consultants now use flagship models by default
- 14 Consultants: Gemini, Codex, Mistral, Kilo, Cursor, Aider, Amp, Kimi, Claude, Qwen3, GLM, Grok, DeepSeek, Ollama
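The tier mechanism above can be pictured as a simple lookup. A minimal sketch, assuming `apply_model_tier()` maps a tier name to a model id; the model names below are placeholders, not the skill's actual defaults:

```shell
# Hypothetical sketch of apply_model_tier(); model names are placeholders.
apply_model_tier() {
  case "$1" in
    premium)  echo "flagship-model" ;;
    standard) echo "standard-model" ;;
    economy)  echo "small-model" ;;
    *)        echo "standard-model" ;;  # unknown tier: fall back to standard
  esac
}
```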
Slash Commands
Consultation Commands
| Command | Description |
|---|---|
| `/ai-consultants:consult` | Main consultation - ask AI consultants a coding question |
| | Quick query alias for consult |
| `/ai-consultants:debate` | Run consultation with multi-round debate |
| | Show all commands and usage |
Configuration Commands
| Command | Description |
|---|---|
| `/ai-consultants:config-wizard` | Full interactive setup (CLI detection, API keys, personas) |
| | Verify CLI agents are installed and authenticated |
| `/ai-consultants:config-status` | View current configuration |
| `/ai-consultants:config-preset` | Set default preset (minimal, balanced, high-stakes, local) |
| `/ai-consultants:config-strategy` | Set default synthesis strategy |
| `/ai-consultants:config-features` | Toggle features (Debate, Synthesis, Peer Review, etc.) |
| | Change consultant personas |
| | Configure API-based consultants (Qwen3, GLM, Grok, DeepSeek) |
Configuration Workflow
Set your preferences using slash commands:
```shell
/ai-consultants:config-preset     # Choose default preset
/ai-consultants:config-strategy   # Choose synthesis strategy
/ai-consultants:config-features   # Enable/disable features
/ai-consultants:config-status     # View current settings
```
Consultants and Personas
| Consultant | CLI | Persona | Focus |
|---|---|---|---|
| Google Gemini | | The Architect | Design patterns, scalability |
| OpenAI Codex | | The Pragmatist | Simplicity, proven solutions |
| Mistral Vibe | | The Devil's Advocate | Edge cases, vulnerabilities |
| Kilo Code | | The Innovator | Creativity, unconventional |
| Cursor | | The Integrator | Full-stack perspective |
| Aider | | The Pair Programmer | Collaborative coding |
| Amp | | The Systems Thinker | System design, interactions |
| Kimi | | The Eastern Sage | Holistic, balanced perspectives |
| Claude | | The Synthesizer | Big picture, synthesis |
| Qwen | | The Analyst | Data-driven, metrics |
| Ollama | | The Local Expert | Privacy-first, zero cost |
API-only consultants: GLM (The Methodologist), Grok (The Provocateur), DeepSeek (The Code Specialist)
CLI/API Mode: Gemini, Codex, Claude, Mistral, and Qwen can switch between CLI and API mode via `*_USE_API` environment variables.
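The switch can be pictured as a per-consultant dispatch on the environment variable. A sketch under assumed names (`query_gemini` and the two `call_*` stand-ins are illustrations, not the skill's real functions):

```shell
# Sketch of *_USE_API dispatch; both helpers are stand-ins, not real calls.
call_gemini_api() { echo "api:$1"; }   # stand-in for an HTTP request to the API
call_gemini_cli() { echo "cli:$1"; }   # stand-in for invoking the gemini CLI
query_gemini() {
  if [ "${GEMINI_USE_API:-false}" = "true" ]; then
    call_gemini_api "$1"
  else
    call_gemini_cli "$1"
  fi
}
```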
Self-Exclusion: The invoking agent is automatically excluded from the panel. When invoked from Claude Code, Claude is excluded; when invoked from Codex CLI, Codex is excluded, etc.
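A minimal sketch of that exclusion step, with hypothetical variable names (`INVOKER`, `PANEL`); how the real skill detects the invoking agent is environment-specific:

```shell
# Sketch of self-exclusion: drop the invoking agent from the panel.
INVOKER="claude"                       # agent running the skill (example value)
PANEL="gemini codex mistral claude"    # configured consultants
ACTIVE=""
for c in $PANEL; do
  [ "$c" = "$INVOKER" ] || ACTIVE="$ACTIVE $c"
done
ACTIVE="${ACTIVE# }"                   # trim the leading space
```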
Requirements
- At least 2 consultant CLIs installed and authenticated
- jq for JSON processing
Quick Install
```shell
curl -fsSL https://raw.githubusercontent.com/matteoscurati/ai-consultants/main/scripts/install.sh | bash
~/.claude/skills/ai-consultants/scripts/doctor.sh --fix
```
CLI Installation
```shell
npm install -g @google/gemini-cli                  # Gemini
npm install -g @openai/codex                       # Codex
pip install mistral-vibe                           # Mistral
npm install -g @kilocode/cli                       # Kilo
npm install -g @qwen-code/qwen-code@latest         # Qwen
curl -fsSL https://ampcode.com/install.sh | bash   # Amp
pip install kimi-cli && kimi login                 # Kimi
brew install jq                                    # Required

# For local inference (optional)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
```
Configuration Presets
| Preset | Consultants | Use Case |
|---|---|---|
| minimal | 2 (Gemini + Codex) | Quick questions |
| balanced | 4 (+Mistral +Kilo) | Standard use |
| | 5 (+Cursor) | Comprehensive |
| high-stakes | All + debate | Critical decisions |
| local | Ollama only | Full privacy |
| | Security-focused | +Debate |
| | Budget-friendly | Low cost |
Synthesis Strategies
| Strategy | Description |
|---|---|
| majority | Most common answer wins (default) |
| risk_averse | Weight conservative responses |
| | Prioritize security |
| | Prefer cheaper solutions |
| | No recommendation |
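The default strategy can be illustrated with a toy vote counter. This sketch operates on plain strings; the real pipeline votes over structured JSON responses:

```shell
# Toy "most common answer wins" counter, not the skill's actual voting code.
majority_vote() {
  printf '%s\n' "$@" | sort | uniq -c | sort -rn | head -n 1 | sed 's/^ *[0-9]* //'
}
```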
Usage Examples
Basic Consultation
/ai-consultants:consult "How to optimize this SQL query?"
With File Context
/ai-consultants:consult "Review this authentication flow" src/auth.ts
With Debate
/ai-consultants:debate "Microservices or monolith for our new service?"
Bash Usage
cd ~/.claude/skills/ai-consultants # With preset ./scripts/consult_all.sh --preset balanced "Best approach for caching?" # With strategy ./scripts/consult_all.sh --strategy risk_averse "Security question" # With local model ./scripts/consult_all.sh --preset local "Private question"
Workflow
```
Query -> Classify -> Parallel Queries -> Voting     -> Synthesis      -> Report
                          |                 |              |
                     Gemini (8)         Consensus      Recommendation
                     Codex (7)          Analysis       Comparison
                     Mistral (6)        Risk Assessment
```
With debate:
Round 1 -> Cross-Critique -> Round 2 -> Final Synthesis
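The parallel-queries step can be sketched with background jobs and `wait`; the `ask_*` functions here are stand-ins for the real consultant calls:

```shell
# Sketch of the parallel fan-out step; ask_* are stand-ins, not real CLIs.
OUT=$(mktemp -d)
ask_gemini() { echo '{"consultant":"gemini"}'; }
ask_codex()  { echo '{"consultant":"codex"}'; }
for c in gemini codex; do
  "ask_$c" > "$OUT/$c.json" &    # each consultant runs in the background
done
wait                             # block until every response has been written
```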
Usage Triggers
Automatic
- Doubts about implementation approach
- Validating complex solutions
- Exploring architectural alternatives
Explicit
- "Ask the consultants..."
- "What do the other models think?"
- "Compare solutions"
- "I want a second opinion"
Features
| Feature | Description | Toggle |
|---|---|---|
| Personas | Each consultant has a role that shapes responses | |
| Synthesis | Auto-combine responses into recommendation | `ENABLE_SYNTHESIS` |
| Debate | Consultants critique each other's answers | `ENABLE_DEBATE` |
| Peer Review | Consultants anonymously rank each other | `ENABLE_PEER_REVIEW` |
| Smart Routing | Auto-select best consultants per question type | |
| Cost Tracking | Track API usage costs | |
| Panic Mode | Auto-add rigor when uncertainty detected | `ENABLE_PANIC_MODE` |
Configuration
```shell
# Defaults (v2.9)
DEFAULT_PRESET=balanced        # Preset when --preset not given
DEFAULT_STRATEGY=majority      # Strategy when --strategy not given

# Core features
ENABLE_DEBATE=true             # Multi-agent debate
ENABLE_SYNTHESIS=true          # Automatic synthesis
ENABLE_PEER_REVIEW=false       # Anonymous peer review
ENABLE_PANIC_MODE=auto         # Auto-rigor for uncertainty

# CLI/API Mode Switching (v2.6+)
GEMINI_USE_API=false           # Use Google AI API instead of CLI
CODEX_USE_API=false            # Use OpenAI API instead of CLI
CLAUDE_USE_API=false           # Use Anthropic API instead of CLI
MISTRAL_USE_API=false          # Use Mistral API instead of CLI
QWEN3_USE_API=true             # Use DashScope API (default) or CLI

# New consultants (v2.7-2.9)
ENABLE_AMP=false               # Amp CLI - The Systems Thinker
AMP_MODEL=amp
ENABLE_KIMI=false              # Kimi CLI - The Eastern Sage
KIMI_MODEL=kimi-code/kimi-for-coding
ENABLE_QWEN3=false             # Qwen CLI/API - The Analyst
QWEN3_MODEL=qwen3-max

# Ollama (local models)
ENABLE_OLLAMA=true
OLLAMA_MODEL=qwen2.5-coder:32b

# Budget management (v2.4)
ENABLE_BUDGET_LIMIT=false
MAX_SESSION_COST=1.00
BUDGET_ACTION=warn             # warn or stop
```
Output
```
/tmp/ai_consultations/TIMESTAMP/
├── gemini.json      # Individual responses
├── codex.json
├── voting.json      # Consensus
├── synthesis.json   # Recommendation
└── report.md        # Human-readable
```
Doctor Command
Diagnose and fix issues:
```shell
./scripts/doctor.sh          # Full check
./scripts/doctor.sh --fix    # Auto-fix
./scripts/doctor.sh --json   # JSON output
```
Interpreting Results
| Scenario | Recommendation |
|---|---|
| High confidence + High consensus | Proceed confidently |
| Low confidence OR Low consensus | Consider more options |
| Mistral disagrees | Investigate risks |
| Panic mode triggered | Add debate rounds |
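The first two rows of the table reduce to a simple rule; a sketch with a hypothetical `recommend` helper:

```shell
# Sketch of the decision rule above; inputs are "high" or "low".
recommend() {
  confidence=$1; consensus=$2
  if [ "$confidence" = "high" ] && [ "$consensus" = "high" ]; then
    echo "proceed confidently"
  else
    echo "consider more options"
  fi
}
```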
Best Practices
Security
- Never include credentials in queries
- Use `--preset local` for sensitive code
Effective Queries
- Be specific about the question
- Include constraints (performance, etc.)
- Use debate for controversial decisions
Troubleshooting
| Issue | Solution |
|---|---|
| "Unknown skill" | Run the install script or check `~/.claude/skills/` |
| "Exit code 1" | Run `./scripts/doctor.sh` to diagnose |
| No consultants | Install at least 2 consultant CLIs (see CLI Installation) |
| API errors | Check your API key configuration |
| CLI not found | Run `./scripts/doctor.sh --fix` |
Extended Documentation
- Setup Guide - Installation, authentication, Claude Code setup
- Cost Rates - Model pricing
- Smart Routing - Category routing
- JSON Schema - Output format
Known Limitations
- Minimum 2 consultants required
- Smart Routing off by default
- Synthesis requires Claude CLI (fallback available)
- Estimated costs (heuristic token counting)