Auto-claude-code-research-in-sleep research-review
Get a deep critical review of research from Gemini via the gemini-review MCP. Use when the user says "review my research", "help me review", "get external review", or wants critical feedback on research ideas, papers, or experimental results.
git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep
T=$(mktemp -d) && git clone --depth=1 https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/skills-codex-gemini-review/research-review" ~/.claude/skills/wanshuiyin-auto-claude-code-research-in-sleep-research-review-867877 && rm -rf "$T"
skills/skills-codex-gemini-review/research-review/SKILL.md
Override for Codex users who want Gemini, not a second Codex agent, to act as the reviewer. Install this package after skills/skills-codex/*.
Research Review via gemini-review MCP (high-rigor review)
Get a multi-round critical review of research work from an external LLM with maximum reasoning depth.
Constants
- REVIEWER_MODEL = gemini-review — the Gemini reviewer invoked through the local gemini-review MCP bridge. Set GEMINI_REVIEW_MODEL if you need a specific Gemini model override.
Context: $ARGUMENTS
Prerequisites
- Install the base Codex-native skills first: copy skills/skills-codex/* into ~/.codex/skills/.
- Then install this overlay package: copy skills/skills-codex-gemini-review/* into ~/.codex/skills/ and allow it to overwrite the same skill names.
- Register the local reviewer bridge: codex mcp add gemini-review -- python3 ~/.codex/mcp-servers/gemini-review/server.py
- This gives Codex access to mcp__gemini-review__review_start, mcp__gemini-review__review_reply_start, and mcp__gemini-review__review_status.
Workflow
Step 1: Gather Research Context
Before calling the external reviewer, compile a comprehensive briefing:
- Read project narrative documents (e.g., STORY.md, README.md, paper drafts)
- Read any memory/notes files for key findings and experiment history
- Identify: core claims, methodology, key results, known weaknesses
Step 2: Initial Review (Round 1)
Send a detailed prompt requesting a high-rigor review:

mcp__gemini-review__review_start:
  prompt: |
    [Full research context + specific questions]

    Please act as a senior ML reviewer (NeurIPS/ICML level). Identify:
    1. Logical gaps or unjustified claims
    2. Missing experiments that would strengthen the story
    3. Narrative weaknesses
    4. Whether the contribution is sufficient for a top venue

    Please be brutally honest.
After this start call, immediately save the returned jobId and poll mcp__gemini-review__review_status with a bounded waitSeconds until done=true. Treat the completed status payload's response as the reviewer output, and save the completed threadId for any follow-up round.
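The start-and-poll pattern above can be sketched in plain Python. The review_start and review_status callables are hypothetical stand-ins for the mcp__gemini-review__review_start and mcp__gemini-review__review_status tool calls; the field names (jobId, waitSeconds, done, response, threadId) follow the description above.

```python
def run_initial_review(review_start, review_status, prompt,
                       wait_seconds=60, max_polls=60):
    """Start a review job, then poll its status until done=true.

    review_start / review_status are stand-ins for the
    mcp__gemini-review__review_start and mcp__gemini-review__review_status
    tool calls; each returns a dict shaped like the tool payloads above.
    """
    job = review_start(prompt=prompt)
    job_id = job["jobId"]  # save immediately, per the workflow above
    for _ in range(max_polls):  # keep the polling loop bounded
        status = review_status(jobId=job_id, waitSeconds=wait_seconds)
        if status.get("done"):
            # The completed payload carries the reviewer output and the
            # threadId needed to resume the conversation in later rounds.
            return status["response"], status["threadId"]
    raise TimeoutError(f"review job {job_id} never reported done=true")
```

The bound on max_polls matters: an agent driving this loop should fail loudly rather than poll forever if the bridge hangs.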
Step 3: Iterative Dialogue (Rounds 2-N)
Use mcp__gemini-review__review_reply_start with the saved completed threadId, then poll mcp__gemini-review__review_status with the returned jobId until done=true to continue the conversation.
For each round:
- Respond to criticisms with evidence/counterarguments
- Ask targeted follow-ups on the most actionable points
- Request specific deliverables: experiment designs, paper outlines, claims matrices
Key follow-up patterns:
- "If we reframe X as Y, does that change your assessment?"
- "What's the minimum experiment to satisfy concern Z?"
- "Please design the minimal additional experiment package (highest acceptance lift per GPU week)"
- "Please write a mock NeurIPS/ICML review with scores"
- "Give me a results-to-claims matrix for possible experimental outcomes"
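Rounds 2-N reuse the same job pattern as round 1, threaded through the saved threadId. A minimal sketch, again with hypothetical stand-ins for the mcp__gemini-review__review_reply_start and mcp__gemini-review__review_status tool calls:

```python
def run_followup_rounds(review_reply_start, review_status, thread_id,
                        followups, wait_seconds=60, max_polls=60):
    """Send each follow-up prompt on the saved thread and collect replies.

    review_reply_start / review_status stand in for the
    mcp__gemini-review__review_reply_start and
    mcp__gemini-review__review_status tool calls.
    """
    transcript = []
    for prompt in followups:
        job = review_reply_start(threadId=thread_id, prompt=prompt)
        for _ in range(max_polls):  # bounded poll, exactly as in round 1
            status = review_status(jobId=job["jobId"], waitSeconds=wait_seconds)
            if status.get("done"):
                break
        else:
            raise TimeoutError(f"reply job {job['jobId']} never finished")
        thread_id = status["threadId"]  # keep the latest completed threadId
        transcript.append({"prompt": prompt, "response": status["response"]})
    return transcript, thread_id
```

Returning both the transcript and the final threadId makes it easy to document the round-by-round exchange in Step 5 and to resume the thread later.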
Step 4: Convergence
Stop iterating when:
- Both sides agree on the core claims and their evidence requirements
- A concrete experiment plan is established
- The narrative structure is settled
Step 5: Document Everything
Save the full interaction and conclusions to a review document in the project root:
- Round-by-round summary of criticisms and responses
- Final consensus on claims, narrative, and experiments
- Claims matrix (what claims are allowed under each possible outcome)
- Prioritized TODO list with estimated compute costs
- Paper outline if discussed
Update project memory/notes with key review conclusions.
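The claims matrix from Step 5 can be kept as a simple mapping from experimental outcome to the strongest claim that outcome licenses. The outcomes and claims below are hypothetical placeholders:

```python
# Hypothetical results-to-claims matrix: each possible outcome of the
# key experiment maps to the strongest claim the paper may then make.
CLAIMS_MATRIX = {
    "X beats baseline, p < 0.05": "Method X outperforms the baseline",
    "X matches baseline": "Method X is competitive at lower compute cost",
    "X trails baseline": "Analysis-only framing; no performance claim",
}

def allowed_claim(outcome):
    """Return the claim permitted under a given outcome, defaulting to none."""
    return CLAIMS_MATRIX.get(outcome, "No claim; gather more evidence")
```

Writing the matrix down before running the experiments keeps the eventual claims honest: the allowed claim is fixed by the outcome, not negotiated after the results arrive.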
Key Rules
- Always ask the Gemini reviewer for strict, high-rigor feedback.
- Send comprehensive context in Round 1 — the external model cannot read your files.
- Be honest about weaknesses — hiding them leads to worse feedback.
- Push back on criticisms you disagree with, but accept valid ones.
- Focus on ACTIONABLE feedback — "what experiment would fix this?"
- Document the completed threadId for potential future resumption.
- The review document should be self-contained (readable without the conversation).
Prompt Templates
For initial review:
"I'm going to present a complete ML research project for your critical review. Please act as a senior ML reviewer (NeurIPS/ICML level)..."
For experiment design:
"Please design the minimal additional experiment package that gives the highest acceptance lift per GPU week. Our compute: [describe]. Be very specific about configurations."
For paper structure:
"Please turn this into a concrete paper outline with section-by-section claims and figure plan."
For claims matrix:
"Please give me a results-to-claims matrix: what claim is allowed under each possible outcome of experiments X and Y?"
For mock review:
"Please write a mock NeurIPS review with: Summary, Strengths, Weaknesses, Questions for Authors, Score, Confidence, and What Would Move Toward Accept."