Claude-skill-registry claim-extraction
Extract structured claims, predictions, hints, and opinions from AI research content. Use when processing tweets, blog posts, substacks, or other content from AI researchers to identify substantive assertions about AI capabilities, limitations, and progress.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/claim-extraction" ~/.claude/skills/majiayu000-claude-skill-registry-claim-extraction && rm -rf "$T"
manifest:
skills/data/claim-extraction/SKILL.md
source content
Claim Extraction Skill
Extract all substantive claims from AI research content. A claim is any assertion that:
- States something as true about AI capabilities, limitations, or progress
- Predicts future developments
- Hints at unreleased work
- Expresses a positioned opinion on the field's direction
- Critiques others' claims or work
Extraction Schema
For each claim, extract:
1. claimText
The claim in clear, standalone form. Paraphrase if needed for clarity.
2. claimType
- fact: Assertion about current state ("GPT-4 can do X")
- prediction: Forward-looking ("By 2026, we'll have...")
- hint: Implies unreleased work ("We've been seeing interesting results with...")
- opinion: Positioned take ("I think scaling is/isn't sufficient")
- critique: Challenges others ("Marcus is wrong because...")
- question: Genuine uncertainty expressed ("I'm not sure if...")
3. topic
Primary topic category:
- scaling: Scaling laws, compute, training efficiency
- reasoning: LLM reasoning, chain-of-thought, planning
- agents: AI agents, tool use, autonomy
- safety: AI safety, alignment, control
- interpretability: Mechanistic interpretability
- multimodal: Vision, audio, video models
- rlhf: RLHF, preference learning, Constitutional AI
- benchmarks: Evals, benchmarks, capability measurement
- infrastructure: Training infra, chips, hardware
- policy: AI policy, regulation, governance
- general: General AI commentary
4. stance
- bullish: Optimistic about AI progress/capabilities
- bearish: Skeptical/pessimistic about AI progress
- neutral: Balanced or factual without clear stance
5. bullishness
Float from 0.0 (maximally bearish) to 1.0 (maximally bullish)
6. confidence
How confident does the author seem? (0.0-1.0)
- Hedging language: "might", "could", "I think", "possibly" → lower
- Certainty language: "will", "definitely", "it's clear that" → higher
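The hedging/certainty cues above can be sketched as a simple keyword heuristic. This is illustrative only; the function name, word lists, and scoring offsets are assumptions for this sketch, not part of the skill's specification:

```python
# Illustrative heuristic for the confidence score described above.
# The cue lists and the 0.1 step are assumptions, not part of the skill.
HEDGES = {"might", "could", "possibly", "perhaps", "maybe", "i think"}
CERTAIN = {"will", "definitely", "certainly", "it's clear that", "obviously"}

def estimate_confidence(text: str) -> float:
    """Start at a neutral 0.5, nudge down per hedge cue, up per certainty cue."""
    t = text.lower()
    score = 0.5
    score -= 0.1 * sum(cue in t for cue in HEDGES)
    score += 0.1 * sum(cue in t for cue in CERTAIN)
    return max(0.0, min(1.0, score))  # clamp to the [0.0, 1.0] schema range
```

For example, "This might possibly work" contains two hedge cues and scores 0.3, while "It will definitely ship" contains two certainty cues and scores 0.7. In practice the model judges confidence holistically; this sketch only mirrors the cue-based guidance above.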
7. timeframe (for predictions)
- near-term: < 1 year
- medium-term: 1-3 years
- long-term: 3-10 years
- unspecified: No clear timeframe
- null: Not a prediction
8. evidenceProvided
- strong: Cites data, papers, or detailed reasoning
- moderate: Some reasoning but not rigorous
- weak: Assertion without support
- appeal-to-authority: "Trust me, I work on this"
9. quoteworthiness
Is this claim notable enough to quote in a digest? (0.0-1.0)
Output Format
Return JSON:
{
  "claims": [
    {
      "claimText": "The claim in clear form",
      "claimType": "prediction",
      "topic": "reasoning",
      "stance": "bullish",
      "bullishness": 0.8,
      "confidence": 0.7,
      "timeframe": "medium-term",
      "evidenceProvided": "moderate",
      "quoteworthiness": 0.6,
      "relatedTo": ["o1", "chain-of-thought"],
      "originalQuote": "Brief relevant quote if notable"
    }
  ]
}
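A downstream consumer can check extracted claims against the schema above. The following Python sketch is illustrative; the enum sets restate the field values defined in this document, but the function name and error messages are assumptions:

```python
# Illustrative validator for one extracted claim against the schema above.
CLAIM_TYPES = {"fact", "prediction", "hint", "opinion", "critique", "question"}
TOPICS = {"scaling", "reasoning", "agents", "safety", "interpretability",
          "multimodal", "rlhf", "benchmarks", "infrastructure", "policy",
          "general"}
STANCES = {"bullish", "bearish", "neutral"}
TIMEFRAMES = {"near-term", "medium-term", "long-term", "unspecified", None}
EVIDENCE = {"strong", "moderate", "weak", "appeal-to-authority"}

def validate_claim(claim: dict) -> list:
    """Return a list of problems; an empty list means the claim passes."""
    errors = []
    if not claim.get("claimText"):
        errors.append("claimText is required")
    if claim.get("claimType") not in CLAIM_TYPES:
        errors.append("claimType must be one of the six claim types")
    if claim.get("topic") not in TOPICS:
        errors.append("unknown topic")
    if claim.get("stance") not in STANCES:
        errors.append("unknown stance")
    for field in ("bullishness", "confidence", "quoteworthiness"):
        v = claim.get(field)
        if not isinstance(v, (int, float)) or not 0.0 <= v <= 1.0:
            errors.append(f"{field} must be a float in [0.0, 1.0]")
    if claim.get("timeframe") not in TIMEFRAMES:
        errors.append("unknown timeframe")
    if claim.get("evidenceProvided") not in EVIDENCE:
        errors.append("unknown evidenceProvided")
    return errors
```

Note that the JSON value `null` for timeframe maps to Python's `None` here; the validator accepts it because non-predictions carry no timeframe.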
Guidelines
- Extract MULTIPLE claims from a single piece of content if present
- Don't over-extract; include only substantive, meaningful claims
- A tweet saying "Interesting paper" is NOT a claim
- Look for IMPLICIT claims ("We've made a lot of progress" implies capability gains)
- Pay attention to WHO is speaking - lab researchers hinting at their own work is high signal
- Critics often make claims by contradiction ("X is wrong, therefore Y")
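As a worked example of the implicit-claim guideline above, consider a hypothetical tweet from a lab researcher. The tweet and every field value below are invented for illustration, not drawn from real content:

```python
# Hypothetical input: a lab researcher tweets
#   "We've been seeing really promising results on long-horizon planning."
# Per the guidelines, this is a hint (implies unreleased work) carrying an
# implicit capability claim, and the speaker's affiliation makes it high signal.
extracted = {
    "claims": [
        {
            "claimText": "The author's lab has unreleased progress on "
                         "long-horizon planning",
            "claimType": "hint",
            "topic": "agents",
            "stance": "bullish",
            "bullishness": 0.75,
            "confidence": 0.6,   # "really promising" hedges slightly
            "timeframe": None,   # not a prediction
            "evidenceProvided": "weak",
            "quoteworthiness": 0.7,
            "relatedTo": ["planning", "agents"],
            "originalQuote": "really promising results on long-horizon planning",
        }
    ]
}
```

A tweet like "Interesting paper" would, by contrast, yield an empty `claims` array.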
Author Context Matters
Consider the author's affiliation when assessing:
- Lab researchers (Anthropic, OpenAI, DeepMind): May hint at unreleased work
- Critics (Marcus, Chollet, Mitchell): Often make claims through critique
- Independent (Simon Willison, Jim Fan): Provide practitioner perspectives