Auto-claude-code-research-in-sleep novelty-check
Verify research idea novelty against recent literature. Use when the user says "查新" (novelty check), "novelty check", "有没有人做过" (has anyone done this), "check novelty", or wants to verify a research idea is novel before implementing.
Clone the repository:

git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep

Or install just this skill into ~/.claude/skills:

T=$(mktemp -d) && git clone --depth=1 https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/skills-codex-claude-review/novelty-check" ~/.claude/skills/wanshuiyin-auto-claude-code-research-in-sleep-novelty-check-94bd93 && rm -rf "$T"
skills/skills-codex-claude-review/novelty-check/SKILL.md
Override for Codex users who want Claude Code, not a second Codex agent, to act as the reviewer. Install this package after `skills/skills-codex/*`.
Novelty Check Skill
Check whether a proposed method/idea has already been done in the literature: $ARGUMENTS
Constants
- REVIEWER_MODEL = `claude-review` — the Claude reviewer invoked through the local `claude-review` MCP bridge. Set `CLAUDE_REVIEW_MODEL` if you need a specific Claude model override.
Instructions
Given a method description, systematically verify its novelty:
Phase A: Extract Key Claims
- Read the user's method description
- Identify 3-5 core technical claims that would need to be novel:
- What is the method?
- What problem does it solve?
- What is the mechanism?
- What makes it different from obvious baselines?
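
For example, a hypothetical method description such as "compress the KV cache by evicting tokens with low attention entropy" might yield claims like:
- Method: entropy-guided token eviction for KV-cache compression
- Problem: memory cost of long-context LLM inference
- Mechanism: per-head attention entropy as the eviction score
- Delta over baselines: adaptive per-head eviction instead of a fixed sliding window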
Phase B: Multi-Source Literature Search
For EACH core claim, search using ALL available sources:
- Web Search (via `WebSearch`):
  - Search arXiv, Google Scholar, Semantic Scholar
  - Use specific technical terms from the claim
  - Try at least 3 different query formulations per claim
  - Include year filters for 2024-2026
- Known paper databases: check against:
  - ICLR 2025/2026, NeurIPS 2025, ICML 2025/2026
  - Recent arXiv preprints (2025-2026)
- Read abstracts: for each potentially overlapping paper, WebFetch its abstract and related work section
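
To make "3 different query formulations" concrete, the hypothetical KV-cache claim above might be searched as (illustrative queries only; adapt the terms to the actual claim):
- "KV cache eviction attention entropy long context"
- "attention entropy guided key-value cache compression 2025"
- "token eviction policy transformer inference memory"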
Phase C: Cross-Model Verification
Call REVIEWER_MODEL via `mcp__claude-review__review_start` with a high-rigor review:

mcp__claude-review__review_start:
  prompt: |
    [Full novelty briefing + prior work list + specific novelty questions]
After this start call, immediately save the returned `jobId` and poll `mcp__claude-review__review_status` with a bounded `waitSeconds` until `done=true`. Treat the completed status payload's `response` as the reviewer output, and save the completed `threadId` for any follow-up round (see the sketch below).
Prompt should include:
- The proposed method description
- All papers found in Phase B
- Ask: "Is this method novel? What is the closest prior work? What is the delta?"
Phase D: Novelty Report
Output a structured report:
## Novelty Check Report

### Proposed Method
[1-2 sentence description]

### Core Claims
1. [Claim 1] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
2. [Claim 2] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
...

### Closest Prior Work
| Paper | Year | Venue | Overlap | Key Difference |
|-------|------|-------|---------|----------------|

### Overall Novelty Assessment
- Score: X/10
- Recommendation: PROCEED / PROCEED WITH CAUTION / ABANDON
- Key differentiator: [what makes this unique, if anything]
- Risk: [what a reviewer would cite as prior work]

### Suggested Positioning
[How to frame the contribution to maximize novelty perception]
Important Rules
- Be BRUTALLY honest — false novelty claims waste months of research time
- "Applying X to Y" is NOT novel unless the application reveals surprising insights
- Check both the method AND the experimental setting for novelty
- If the method is not novel but the FINDING would be, say so explicitly
- Always check the most recent 6 months of arXiv — the field moves fast