Auto-claude-code-research-in-sleep novelty-check

Verify a research idea's novelty against recent literature. Use when the user says "查新", "novelty check", "有没有人做过", "check novelty", or wants to verify a research idea is novel before implementing.

install
source · Clone the upstream repo
git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/novelty-check" ~/.claude/skills/wanshuiyin-auto-claude-code-research-in-sleep-novelty-check && rm -rf "$T"
manifest: skills/novelty-check/SKILL.md
source content

Novelty Check Skill

Check whether a proposed method/idea has already been done in the literature: $ARGUMENTS

Constants

  • REVIEWER_MODEL = gpt-5.4 — Model used via Codex MCP. Must be an OpenAI model (e.g., gpt-5.4, o3, gpt-4o)

Instructions

Given a method description, systematically verify its novelty:

Phase A: Extract Key Claims

  1. Read the user's method description
  2. Identify 3-5 core technical claims that would need to be novel:
    • What is the method?
    • What problem does it solve?
    • What is the mechanism?
    • What makes it different from obvious baselines?

Phase B: Multi-Source Literature Search

For EACH core claim, search using ALL available sources:

  1. Web Search (via WebSearch):

    • Search arXiv, Google Scholar, Semantic Scholar
    • Use specific technical terms from the claim
    • Try at least 3 different query formulations per claim
    • Include year filters for 2024-2026
  2. Known paper databases: check against recent venues, including:

    • ICLR 2025/2026, NeurIPS 2025, ICML 2025/2026
    • Recent arXiv preprints (2025-2026)
  3. Read abstracts: For each potentially overlapping paper, WebFetch its abstract and related work section
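The arXiv leg of the search above can also be run directly against the public arXiv Atom API. A minimal bash sketch; the helper name, the example phrase, and the date window are illustrative, while the endpoint and the `submittedDate` filter syntax come from the arXiv API:

```shell
# Hypothetical helper: build an arXiv API query URL for one claim's key
# phrase, restricted to a submission-date window (Phase B, step 1).
arxiv_query_url() {
  local phrase="$1" from="$2" to="$3"
  # Replace spaces with '+' and wrap the phrase in %22 (URL-encoded quotes)
  # so arXiv matches it as an exact phrase.
  local q="all:%22${phrase// /+}%22+AND+submittedDate:[${from}+TO+${to}]"
  echo "http://export.arxiv.org/api/query?search_query=${q}&max_results=5"
}

# One of the "at least 3 query formulations" for a claim, limited to 2024-2026:
arxiv_query_url "test-time training" 202401010000 202612312359
```

Fetching the resulting URL with `curl -s` returns Atom XML; piping it through `grep '<title>'` gives a quick list of candidate titles to read in full during step 3.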

Phase C: Cross-Model Verification

Call REVIEWER_MODEL via Codex MCP (mcp__codex__codex) with xhigh reasoning:

config: {"model_reasoning_effort": "xhigh"}

Prompt should include:

  • The proposed method description
  • All papers found in Phase B
  • Ask: "Is this method novel? What is the closest prior work? What is the delta?"
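Assembling that prompt can be sketched in shell. This is illustrative only: the file names `method.md`, `papers_found.md`, and `reviewer_prompt.txt` are assumptions, not part of the skill; only the closing question comes from the spec above.

```shell
# Illustrative only: the method description and Phase B findings would
# normally already exist; tiny stand-ins are created here so the sketch runs.
printf 'Proposed: a self-distilling retrieval index.\n' > method.md
printf '1. Paper A (arXiv 2025) - overlapping mechanism\n' > papers_found.md

# Concatenate the pieces into the prompt passed to REVIEWER_MODEL.
{
  echo "Proposed method:"
  cat method.md
  echo
  echo "Candidate prior work found in Phase B:"
  cat papers_found.md
  echo
  echo "Is this method novel? What is the closest prior work? What is the delta?"
} > reviewer_prompt.txt
```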

Phase D: Novelty Report

Output a structured report:

## Novelty Check Report

### Proposed Method
[1-2 sentence description]

### Core Claims
1. [Claim 1] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
2. [Claim 2] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
...

### Closest Prior Work
| Paper | Year | Venue | Overlap | Key Difference |
|-------|------|-------|---------|----------------|

### Overall Novelty Assessment
- Score: X/10
- Recommendation: PROCEED / PROCEED WITH CAUTION / ABANDON
- Key differentiator: [what makes this unique, if anything]
- Risk: [what a reviewer would cite as prior work]

### Suggested Positioning
[How to frame the contribution to maximize novelty perception]

Important Rules

  • Be BRUTALLY honest — false novelty claims waste months of research time
  • "Applying X to Y" is NOT novel unless the application reveals surprising insights
  • Check both the method AND the experimental setting for novelty
  • If the method is not novel but the FINDING would be, say so explicitly
  • Always check the most recent 6 months of arXiv — the field moves fast

Review Tracing

After each mcp__codex__codex or mcp__codex__codex-reply reviewer call, save the trace following shared-references/review-tracing.md. Use tools/save_trace.sh or write files directly to .aris/traces/<skill>/<date>_run<NN>/. Respect the --- trace: parameter (default: full).