# awesome-omni-skill: ai-pair-programmer
Get second opinions from AI providers (Grok, ChatGPT, Gemini) on implementation plans, code, or architecture decisions. Use when the user asks to "review with [AI name]", "get [AI]'s opinion", "pair program with [AI]", or wants a second perspective on their solution. Supports multiple providers simultaneously for comparative feedback. (Triggers: review with grok, review with gemini, review with chatgpt, pair program, second opinion, ai review)
```bash
# Clone the full repository
git clone https://github.com/diegosouzapw/awesome-omni-skill

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/ai-pair-programmer" ~/.claude/skills/diegosouzapw-awesome-omni-skill-ai-pair-programmer && rm -rf "$T"
```
`skills/data-ai/ai-pair-programmer/SKILL.md`

# AI Pair Programmer
You are the lead architect. AI pair programmers (Grok, ChatGPT, Gemini) provide second opinions.
## Supported Providers

| Provider | Aliases | API Key Env Var | Model Override Env Var | Default Model |
|---|---|---|---|---|
| Grok (xAI) | `grok`, `xai` | `XAI_API_KEY` | `GROK_MODEL` | `grok-4-1-fast-reasoning` |
| ChatGPT (OpenAI) | `chatgpt`, `openai`, `gpt` | `OPENAI_API_KEY` | `OPENAI_MODEL` | `gpt-5.1` |
| Gemini (Google) | `gemini`, `google` | `GEMINI_API_KEY` | `GEMINI_MODEL` | `gemini-3-pro-preview` |
## Choosing a Provider

When the user mentions a specific AI, use that provider:

- "review with Grok" → `--provider grok`
- "get ChatGPT's opinion" → `--provider chatgpt`
- "ask Gemini" → `--provider gemini`
- "review with Grok and Gemini" → `--provider grok,gemini`
- "get multiple opinions" / "ask all AIs" → `--provider all`

If no specific AI is mentioned, prefer multiple providers for better coverage: `--provider all` uses all configured providers in parallel.
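The routing above can be sketched in Python. This is an illustrative sketch only — `resolve_providers` is a hypothetical helper, not code from `pair_review.py`:

```python
# Hypothetical sketch of --provider parsing; not the actual pair_review.py code.
# The provider names match the Supported Providers table; "all" expands to every one.
KNOWN = ["grok", "chatgpt", "gemini"]

def resolve_providers(flag_value):
    """Turn a --provider value like 'grok,gemini' or 'all' into a provider list."""
    names = [p.strip().lower() for p in flag_value.split(",") if p.strip()]
    if "all" in names:
        return list(KNOWN)
    unknown = [n for n in names if n not in KNOWN]
    if unknown:
        raise ValueError("Unknown provider(s): " + ", ".join(unknown))
    return names
```

Note that a comma-separated value and `all` both yield a list, so downstream code can always fan out requests the same way.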
## CRITICAL: Content Requirement

Every call MUST have content to review. The script will fail with "No content to review" if you don't provide one of these:

| Content Source | When to Use |
|---|---|
| `--files` | Code review with multiple files |
| `--file` | Code review with a single file |
| `--proposal` | Architecture/plan reviews WITHOUT code files |
| (positional) | Quick reviews with inline content |
| stdin (piped) | Git diffs via `git diff` piped in, with `--diff` |

Common mistake: calling the script with only `--context` and `--app-context` but no content source. This will fail!
```bash
# WRONG - No content source, will fail!
python3 pair_review.py --provider grok \
  --app-context "React app" \
  --context "User wants dark mode"

# CORRECT - Use --proposal for architecture reviews without files
python3 pair_review.py --provider grok \
  --app-context "React app" \
  --context "User wants dark mode" \
  --proposal "Add ThemeContext provider, store preference in localStorage, use CSS variables"
```
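The content requirement can be thought of as a resolution order over the possible sources. The sketch below is hypothetical logic (the real script's precedence may differ), but it captures why a call with only context flags fails:

```python
# Hypothetical sketch of content-source resolution; not the actual
# pair_review.py implementation. Checks each documented source in turn.
def resolve_content(files=None, file=None, proposal=None, positional=None, stdin=None):
    """Return the text to review, or exit with the documented error."""
    if files:  # --files: concatenate multiple files
        return "\n\n".join(open(p).read() for p in files)
    if file:   # --file: a single file
        return open(file).read()
    if proposal:    # --proposal doubles as content when no files are given
        return proposal
    if positional:  # inline positional content
        return positional
    if stdin:       # piped input, e.g. a git diff
        return stdin
    raise SystemExit("No content to review")
```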
## REQUIRED: Always Provide Full Context

When calling any AI pair programmer, you MUST provide:

1. **App Context** (`--app-context`): Summarize the tech stack
   - Language/runtime (TypeScript, Python, Go, C#, etc.)
   - Framework (React, Next.js, Django, Express, .NET, etc.)
   - Architecture patterns (REST API, GraphQL, microservices, monolith, MVVM, etc.)
   - Key libraries or infrastructure (Redis, PostgreSQL, Docker, etc.)
2. **Problem Context** (`--context`): What is the user trying to accomplish?
   - The user's original request or the problem being solved
   - Any constraints or requirements mentioned
3. **Your Proposal** (`--proposal`): What is YOUR proposed solution?
   - Your approach to solving the problem
   - Key decisions you've made and why
   - This is what the AI(s) will evaluate
   - IMPORTANT: if you are not providing `--files` or `--file`, `--proposal` becomes the content to review!
4. **Already Considered** (`--considered`, optional): What you already tried or rejected
   - Approaches that didn't work and why
   - Ideas you ruled out so reviewers don't suggest them again
   - Example: "Tried using localStorage but it's not persistent enough; considered Redux but overkill for this use case"
5. **Files/Code** (`--files` or `--file`): The relevant code
   - Only include files directly relevant to the review
   - For large changes, use the `--summary` flag instead
   - If omitted, you MUST provide `--proposal`, which will be used as the content
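How these pieces might fit together into a single review prompt can be sketched as follows. The prompt format here is an assumption for illustration — the real prompt is internal to the script:

```python
# Hypothetical sketch of prompt assembly from the documented context flags;
# the actual format used by pair_review.py is not specified here.
def build_review_prompt(app_context, context, proposal, considered=None, code=None):
    """Combine the five context pieces into one reviewer-facing prompt."""
    parts = [
        "Application context:\n" + app_context,
        "Problem:\n" + context,
        "Proposed solution:\n" + proposal,
    ]
    if considered:
        parts.append("Already considered/rejected:\n" + considered)
    if code:
        parts.append("Relevant code:\n" + code)
    return "\n\n".join(parts)
```

The point of the sketch: a reviewer that never sees the app context or the rejected alternatives will waste its feedback re-suggesting them, which is why all of the above is required.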
## Workflow

1. **Understand the request** - Clarify what the user wants
2. **Analyze the codebase** - Read relevant files, understand patterns
3. **Formulate YOUR solution** - Decide on an approach as lead architect
4. **Call AI pair programmer(s)** - Provide full context
5. **Synthesize feedback** - Evaluate points critically, note agreements/disagreements
6. **Present to user** - Share your final recommendation
## When NOT to Call AI Pair Programmers
Skip to save API costs:
- Trivial syntax fixes or one-liners
- Pure UI/styling tweaks
- When you're 100% confident
- User says "no second opinion" or "just implement"
- Questions better answered by docs/search
Reserve for substantive decisions: plans, refactors, architecture trade-offs.
## Script Usage

```bash
# Single provider - Plan review (with files)
python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider grok \
  --app-context "React/TypeScript frontend with Redux, Node.js/Express backend, PostgreSQL" \
  --context "User wants to add real-time notifications" \
  --proposal "Use WebSockets with Socket.io, store notification state in Redux, persist to DB" \
  --type plan \
  --files src/services/NotificationService.ts src/store/notificationSlice.ts

# Architecture decision WITHOUT files (--proposal becomes the content)
python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider grok \
  --app-context ".NET 9 MvvmCross app for iOS/Android" \
  --context "Auto-scroll to first validation error field after form submit fails" \
  --proposal "Add ScrollToField observable property to ViewModel. iOS: TableViewSource calls ScrollToRow(). Android: Find View Y position and scroll NestedScrollView." \
  --type architecture

# Multiple providers - Architecture decision (with already-considered alternatives)
python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider grok,gemini \
  --app-context "Python FastAPI microservices, Docker/Kubernetes, Redis for caching" \
  --context "Need to decide on inter-service communication pattern" \
  --proposal "Use async message queue (RabbitMQ) instead of synchronous HTTP calls" \
  --considered "Tried gRPC but adds complexity; considered Redis pub/sub but need persistence" \
  --type architecture

# All configured providers - Code review
python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider all \
  --app-context "Go REST API with Chi router, PostgreSQL, clean architecture" \
  --context "Refactoring authentication to support OAuth2" \
  --proposal "Add OAuth2 middleware, separate auth logic into domain service" \
  --type code \
  --file internal/auth/service.go

# ChatGPT only - API design
python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider chatgpt \
  --app-context "Django REST Framework backend, React frontend" \
  --context "Designing new API endpoints for user management" \
  --proposal "RESTful endpoints with versioning, pagination, and rate limiting" \
  --type architecture

# Gemini only - Performance review
python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider gemini \
  --app-context "Next.js app with server components, Prisma ORM, Vercel deployment" \
  --context "Page load times are slow on the dashboard" \
  --proposal "Add Redis caching layer, optimize database queries, use React Suspense" \
  --type code \
  --files src/app/dashboard/page.tsx src/lib/queries.ts

# Git diff review with multiple providers
git diff | python3 {{SKILL_DIR}}/scripts/pair_review.py \
  --provider grok,chatgpt \
  --app-context "Ruby on Rails monolith, Sidekiq for background jobs" \
  --context "Bug fix for race condition in order processing" \
  --proposal "Added database-level locking and idempotency checks" \
  --diff

# List available providers and their status
python3 {{SKILL_DIR}}/scripts/pair_review.py --list-providers
```
## Review Types

| Type / Flag | Use For |
|---|---|
| `plan` | Implementation plans, step-by-step approaches |
| `code` | Code review, bug fixes, refactoring |
| `architecture` | System design, technology choices, patterns |
| `general` | Anything else |
| `--summary` | High-level descriptions of large changes |
| `--diff` | Git diffs, focused on what changed |
| `--files` | Multiple related files as a unit |
## Multi-Provider Output

When using multiple providers, the output includes:

- Each provider's feedback with clear attribution
- A synthesis guidance section highlighting:
  - Points of agreement (high confidence)
  - Points of disagreement (needs your judgment)
  - Unique insights from each AI
## After Receiving Feedback

As lead architect, YOU make the final decision:

1. **Evaluate each point** - Is the concern valid for this specific context?
2. **Note agreements** - Where AIs agree, confidence increases
3. **Consider adjustments** - If valid issues are raised, incorporate them
4. **Disagree when appropriate** - If you have good reasons, explain them to the user
## Response Format (Single Provider)

```
I consulted with [AI Name] on this approach. Here's the synthesis:

**The Problem:** [what we're solving]

**My Approach:** [your proposed solution]

**[AI Name]'s Feedback:**
- [Key point 1 - whether you agree/disagree and why]
- [Key point 2]

**Final Recommendation:** [your decision, incorporating valid feedback]
```
## Response Format (Multiple Providers)

```
I consulted with [AI 1] and [AI 2] on this approach. Here's the synthesis:

**The Problem:** [what we're solving]

**My Approach:** [your proposed solution]

**Points of Agreement:**
- [Both/All AIs agreed on X]
- [This gives high confidence in Y]

**Differing Perspectives:**
- [AI 1] suggested Z, while [AI 2] preferred W
- My take: [your evaluation of these perspectives]

**Final Recommendation:** [your decision, synthesizing the best insights]
```
## Environment Setup

API keys can be configured in two ways (environment variables take priority):

### Option 1: Environment Variables

```bash
# Grok (xAI)
export XAI_API_KEY="xai-your-api-key-here"

# ChatGPT (OpenAI)
export OPENAI_API_KEY="sk-your-api-key-here"

# Gemini (Google)
export GEMINI_API_KEY="your-api-key-here"
```
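To reason about which providers will be detected from the environment, here is a small sketch. The env var names come from the provider table; `configured_providers` itself is a hypothetical helper (use `--list-providers` for the real check):

```python
import os

# Env var names as documented above; this helper is illustrative only and
# is not part of pair_review.py.
KEYS = {"grok": "XAI_API_KEY", "chatgpt": "OPENAI_API_KEY", "gemini": "GEMINI_API_KEY"}

def configured_providers(env=None):
    """Return the providers whose API key env vars are non-empty."""
    env = os.environ if env is None else env
    return [name for name, var in KEYS.items() if env.get(var)]
```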
### Option 2: config.json (Persistent)

Add an `api_key` to each provider in `config.json`:

```json
{
  "providers": {
    "chatgpt": {
      "model": "gpt-5.1",
      "api_key": "sk-your-api-key-here"
    }
  }
}
```
Only configure the providers you plan to use. The skill will automatically detect which are available.
## Model Configuration

Model selection priority (highest to lowest):

1. CLI `--model` flag
2. Environment variable (`GROK_MODEL`, `OPENAI_MODEL`, `GEMINI_MODEL`)
3. `config.json` file
4. Built-in defaults
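The four-level priority can be sketched as follows. This is illustrative logic, not the script's actual code; the env var names and defaults match the documentation above:

```python
import os

# Defaults and override env vars as documented; resolve_model() is a
# hypothetical sketch of the priority chain, not pair_review.py itself.
DEFAULTS = {"grok": "grok-4-1-fast-reasoning", "chatgpt": "gpt-5.1",
            "gemini": "gemini-3-pro-preview"}
ENV_VARS = {"grok": "GROK_MODEL", "chatgpt": "OPENAI_MODEL", "gemini": "GEMINI_MODEL"}

def resolve_model(provider, cli_model=None, config=None, env=None):
    """CLI flag > env var > config.json > built-in default."""
    env = os.environ if env is None else env
    if cli_model:
        return cli_model
    env_value = env.get(ENV_VARS[provider])
    if env_value:
        return env_value
    config_value = (config or {}).get("providers", {}).get(provider, {}).get("model")
    if config_value:
        return config_value
    return DEFAULTS[provider]
```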
### Using Environment Variables (Easiest)

Set a model override via environment variable:

```bash
# Override Grok model
export GROK_MODEL="grok-3-mini"

# Override ChatGPT model
export OPENAI_MODEL="gpt-4-turbo"

# Override Gemini model
export GEMINI_MODEL="gemini-2.0-flash"
```

Run `--list-providers` to see current model sources:

```bash
python3 {{SKILL_DIR}}/scripts/pair_review.py --list-providers
```
### Using config.json

Models and API keys can be configured in `config.json` in the skill directory:

```json
{
  "providers": {
    "grok": {
      "model": "grok-4-1-fast-reasoning",
      "api_key": "",
      "description": "Grok (xAI) - Set api_key here or XAI_API_KEY env var"
    },
    "chatgpt": {
      "model": "gpt-5.1",
      "api_key": "",
      "description": "ChatGPT (OpenAI) - Set api_key here or OPENAI_API_KEY env var"
    },
    "gemini": {
      "model": "gemini-3-pro-preview",
      "api_key": "",
      "description": "Gemini (Google) - Set api_key here or GEMINI_API_KEY env var"
    }
  },
  "defaults": {
    "provider": "grok",
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```

Priority order: environment variables > `config.json` `api_key`.

You can still override any model at runtime with `--model`.
## Debugging

Use `--debug` to see the full prompt sent to providers:

```bash
python3 {{SKILL_DIR}}/scripts/pair_review.py --debug --provider grok ...
```

Use `--list-providers` to check which providers are configured:

```bash
python3 {{SKILL_DIR}}/scripts/pair_review.py --list-providers
```