Claude-skill-registry create-research-brief

Two-phase research design and consolidation skill for multi-LLM optimized research

install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/create-research-brief" ~/.claude/skills/majiayu000-claude-skill-registry-create-research-brief && rm -rf "$T"
manifest: skills/data/create-research-brief/SKILL.md
source content

Create Research Brief

A comprehensive two-phase skill for designing multi-LLM research strategies (Phase 1) and consolidating multi-model outputs into actionable intelligence (Phase 2).


1. Purpose

This skill provides 9 core capabilities:

| # | Capability | Phase | Description |
|---|------------|-------|-------------|
| 1 | Decompose | 1 | Break research questions into MECE structures |
| 2 | Assign | 1 | Map question categories to optimal LLMs |
| 3 | Assess | 1 | Evaluate research risks at appropriate depth |
| 4 | Generate | 1 | Produce model-specific optimized prompts |
| 5 | Consolidate | 2 | Synthesize multi-model outputs into unified findings |
| 6 | Resolve | 2 | Handle conflicting information with WWHTBT protocol |
| 7 | Classify | 2 | Score evidence quality and tag uncertainty types |
| 8 | Detect | 2 | Identify coverage gaps and unknown unknowns |
| 9 | Produce | 2 | Generate tiered, decision-ready research reports |

Checkpoints

This skill uses interactive checkpoints (see references/checkpoints.yaml) to resolve ambiguity:

  • research_type_classification — When research type is ambiguous
  • risk_depth_selection — When risk assessment depth not specified
  • model_mode_selection — When model execution mode not specified
  • hypothesis_priors_required — When multi_hypothesis enabled but priors missing
  • conflict_resolution_approach — When model outputs have significant conflicts (Phase 2)

2. Two-Phase Workflow

Phase 1: Research Design (Before Research)

| Step | Action | Output |
|------|--------|--------|
| 1 | Validate Objective | Confirm research question is answerable |
| 2 | Classify Research Type | market \| competitive \| technology \| strategic |
|   | CHECKPOINT: research_type_classification | If type ambiguous: AskUserQuestion |
| 3 | Define Scope | In-scope, out-of-scope, boundaries |
| 4 | Select MECE Pattern | 5-category decomposition structure |
| 5 | Generate Sub-Questions | 3-4 questions per category |
| 6 | Assess Risks | Quick \| Standard \| Comprehensive |
|   | CHECKPOINT: risk_depth_selection | If depth not specified: AskUserQuestion |
| 7 | Assign Models | Map categories to Claude/Gemini/GPT |
|   | CHECKPOINT: model_mode_selection | If mode not specified: AskUserQuestion |
| 8 | Frame Hypotheses | If multi_hypothesis=true |
|   | CHECKPOINT: hypothesis_priors_required | If priors missing: AskUserQuestion |
| 9 | Recommend Expert Panel | If expert_panel=true |
| 10 | Produce Research Brief | XML-structured Phase 1 deliverable |

Phase 2: Consolidation (After Research)

| Step | Action | Output |
|------|--------|--------|
| 1 | Ingest Model Outputs | Parse all LLM research results |
| 2 | Score Evidence | Apply 5-point Evidence Strength Rubric |
| 3 | Detect Conflicts | Identify where models disagree |
| 4 | Resolve Conflicts | Apply WWHTBT for unresolved |
| 5 | Classify Uncertainty | Tag as epistemic/aleatory/model |
| 6 | Audit MECE Coverage | Check for coverage gaps |
| 7 | Probe Unknown Unknowns | Run 5 discovery probes |
| 8 | Tier Findings | Assign to Tier 1/2/3 by confidence |
| 9 | Build Decision Support | Create if-then decision tree |
| 10 | Define Kill Criteria | Conditions that invalidate research |
| 11 | Produce Report | XML-structured Phase 2 deliverable |

3. Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| research_objective | string | required | The core research question or goal |
| research_type | enum | market | market \| competitive \| technology \| strategic |
| model_mode | enum | parallel | parallel \| sequential \| convergent |
| openai_depth | enum | balanced | minimal \| balanced \| exhaustive |
| risk_depth | enum | standard | quick \| standard \| comprehensive |
| multi_hypothesis | bool | false | Enable hypothesis-driven framing |
| expert_panel | bool | false | Include expert panel recommendations |
| context | string | "" | Additional context for research |
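The parameter table above can be sketched as a small validation layer. This is an illustrative Python sketch only, not part of the skill's implementation; the class and constant names are hypothetical.

```python
from dataclasses import dataclass

# Allowed enum values, taken from the parameter table above.
RESEARCH_TYPES = {"market", "competitive", "technology", "strategic"}
MODEL_MODES = {"parallel", "sequential", "convergent"}
OPENAI_DEPTHS = {"minimal", "balanced", "exhaustive"}
RISK_DEPTHS = {"quick", "standard", "comprehensive"}

@dataclass
class BriefParams:
    research_objective: str          # required, no default
    research_type: str = "market"
    model_mode: str = "parallel"
    openai_depth: str = "balanced"
    risk_depth: str = "standard"
    multi_hypothesis: bool = False
    expert_panel: bool = False
    context: str = ""

    def validate(self) -> None:
        """Reject empty objectives and out-of-range enum values."""
        if not self.research_objective.strip():
            raise ValueError("research_objective is required")
        for name, value, allowed in [
            ("research_type", self.research_type, RESEARCH_TYPES),
            ("model_mode", self.model_mode, MODEL_MODES),
            ("openai_depth", self.openai_depth, OPENAI_DEPTHS),
            ("risk_depth", self.risk_depth, RISK_DEPTHS),
        ]:
            if value not in allowed:
                raise ValueError(f"{name} must be one of {sorted(allowed)}")
```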

4. Model Strengths & Assignment

Model Profiles

| Model | Primary Strength | Best For | Limitation |
|-------|------------------|----------|------------|
| Claude Opus 4.5 | Judgment, synthesis, nuance | Strategic questions, conflict resolution, synthesis | May not surface all sources |
| Gemini Pro 3 | Breadth, citations, grounding | Factual lookup, comprehensive sourcing, current data | Less depth on complex reasoning |
| GPT-5.2 Deep | Recency, depth, exhaustiveness | Technical details, narrow deep-dives, edge cases | Can miss broader context |

Default Category Assignments

| Research Type | Claude | Gemini | GPT |
|---------------|--------|--------|-----|
| Market | Demand, Trends | Size, Structure, Supply | |
| Competitive | Positioning, Strategy | Product, GTM, Org | Deep Dive |
| Technology | Fit, Risk | Maturity, Cost | Capability |
| Strategic | Options, Stakeholders | Environment | Implementation |
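The default assignments can be read as a two-level lookup. A minimal Python sketch of that lookup (the dictionary name and the shorthand category labels mirror the table above and are not a defined API):

```python
# Hypothetical lookup built from the default-assignment table; the category
# labels are shorthand for the full MECE categories in section 6.
DEFAULT_ASSIGNMENTS = {
    "market": {
        "Claude": ["Demand", "Trends"],
        "Gemini": ["Size", "Structure", "Supply"],
        "GPT": [],
    },
    "competitive": {
        "Claude": ["Positioning", "Strategy"],
        "Gemini": ["Product", "GTM", "Org"],
        "GPT": ["Deep Dive"],
    },
    "technology": {
        "Claude": ["Fit", "Risk"],
        "Gemini": ["Maturity", "Cost"],
        "GPT": ["Capability"],
    },
    "strategic": {
        "Claude": ["Options", "Stakeholders"],
        "Gemini": ["Environment"],
        "GPT": ["Implementation"],
    },
}

def models_for(research_type: str, category: str) -> list[str]:
    """Return the model(s) assigned to a category for a research type."""
    table = DEFAULT_ASSIGNMENTS[research_type]
    return [model for model, cats in table.items() if category in cats]
```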

5. Risk Assessment Depths

Quick (5 Factors)

Basic risk identification for time-sensitive research:

  • Top 3 risks with likelihood/impact
  • No mitigations or scenarios

Standard (+ Bias Audit)

Adds mitigation planning and cognitive bias check:

  • Mitigations and contingencies per risk
  • Early warning signals
  • Bias audit: confirmation, availability, anchoring

Comprehensive (+ Base Rates)

Full risk analysis with historical grounding:

  • Risk scenarios with trigger conditions
  • Risk dependencies and cascades
  • Base rate comparison from similar research
  • Pre-mortem analysis
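The "+" in the depth names indicates each level builds on the previous one. A sketch of the cumulative mapping (the feature keys are illustrative labels, not identifiers from the skill):

```python
# Cumulative analysis features per risk depth; keys are illustrative labels.
RISK_DEPTH_FEATURES = {
    "quick": [
        "top_risks_with_likelihood_impact",
    ],
    "standard": [
        "top_risks_with_likelihood_impact",
        "mitigations_and_contingencies",
        "early_warning_signals",
        "bias_audit",  # confirmation, availability, anchoring
    ],
    "comprehensive": [
        "top_risks_with_likelihood_impact",
        "mitigations_and_contingencies",
        "early_warning_signals",
        "bias_audit",
        "risk_scenarios_with_triggers",
        "risk_dependencies_and_cascades",
        "base_rate_comparison",
        "pre_mortem",
    ],
}

def features_for(depth: str) -> list[str]:
    """Return the analysis features performed at a given risk depth."""
    return RISK_DEPTH_FEATURES[depth]
```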

6. MECE Decomposition Patterns

Pattern 1: Market Research

| Category | Focus | Model |
|----------|-------|-------|
| Market Size & Dynamics | TAM/SAM/SOM, growth rates | Gemini |
| Market Structure | Segmentation, value chain | Gemini |
| Demand Characteristics | Buyers, use cases, criteria | Claude |
| Supply & Competition | Players, barriers, substitutes | Gemini |
| Market Evolution | Trends, regulatory, disruption | Claude |

Pattern 2: Competitive Intelligence

| Category | Focus | Model |
|----------|-------|-------|
| Product & Offering | Features, pricing, roadmap | GPT |
| Customers & Positioning | Segments, win/loss, messaging | Claude |
| Go-to-Market | Sales, marketing, partnerships | Gemini |
| Organization & Operations | Team, tech stack, cost structure | Gemini |
| Strategy & Trajectory | Direction, investments, SWOT | Claude |

Pattern 3: Technology Evaluation

| Category | Focus | Model |
|----------|-------|-------|
| Capability & Performance | Features, benchmarks, limits | GPT |
| Maturity & Ecosystem | Stability, community, tools | Gemini |
| Fit & Integration | Use case alignment, migration | Claude |
| Cost & Investment | TCO, licensing, infrastructure | Gemini |
| Risk & Governance | Technical, vendor, compliance | Claude |

Pattern 4: Strategic Research

| Category | Focus | Model |
|----------|-------|-------|
| Current State | Position, strengths, weaknesses | Claude |
| External Environment | Industry, macro, technology | Gemini |
| Strategic Options | Directions, trade-offs, requirements | Claude |
| Stakeholder Considerations | Customer, competitor, employee | Claude |
| Implementation Requirements | Capabilities, investments, timeline | GPT |

7. Multi-Hypothesis Framing

When to Enable

  • Testing predictions or forecasts
  • Evaluating competing theories
  • Decision involves binary or multi-way choice
  • Need to avoid confirmation bias

Process

  1. Define core question as testable prediction
  2. Generate 2-4 MECE hypotheses covering all outcomes
  3. Assign prior probabilities (must sum to 100%)
  4. Define supporting and refuting evidence for each
  5. Research gathers evidence against criteria
  6. Update posteriors based on evidence strength
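Step 6 is a standard Bayesian update. A minimal sketch, assuming evidence strength has already been converted into per-hypothesis likelihoods (that conversion is not specified by the skill, so the likelihood values here are hypothetical):

```python
def update_posteriors(priors: dict[str, float],
                      likelihoods: dict[str, float]) -> dict[str, float]:
    """Bayes rule: posterior is prior times likelihood, renormalized to sum to 1."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: weight / total for h, weight in unnormalized.items()}

# Priors matching the example below: H1=30%, H2=50%, H3=20%.
priors = {"H1": 0.30, "H2": 0.50, "H3": 0.20}
# Hypothetical likelihoods of the gathered evidence under each hypothesis.
likelihoods = {"H1": 0.2, "H2": 0.7, "H3": 0.1}
posteriors = update_posteriors(priors, likelihoods)
```

Evidence that is more probable under H2 than under the alternatives shifts probability mass toward H2 while the posteriors still sum to 100%.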

Example

<hypotheses question="Will enterprise adopt GenAI for customer service by 2027?">
  <hypothesis id="H1" position="broad" prior="30%">
    Over 50% enterprise adoption
  </hypothesis>
  <hypothesis id="H2" position="selective" prior="50%">
    10-50% adoption in specific use cases
  </hypothesis>
  <hypothesis id="H3" position="limited" prior="20%">
    Under 10% adoption due to barriers
  </hypothesis>
</hypotheses>

8. Evidence Strength Rubric

5-point scale for evaluating source quality:

| Score | Name | Definition | Examples |
|-------|------|------------|----------|
| 5 | Primary | Direct from entity being researched | SEC filings, earnings calls, official docs |
| 4 | Authoritative Secondary | Major analysts with citations | Gartner, Forrester, WSJ investigative |
| 3 | Credible Secondary | Reputable sources, some sourcing | TechCrunch, industry publications |
| 2 | Weak Secondary | Unsourced, outdated, anonymous | LinkedIn self-reports, old reports |
| 1 | Speculative | No verifiable basis | Rumors, predictions, fabrications |

Time Decay: Subtract 1 from the score for technology data older than 6 months and market data older than 1 year.
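The decay rule can be expressed as a small scoring helper. A sketch; the floor at 1 is an assumption, keeping results on the rubric's 1-5 scale:

```python
def effective_score(base_score: int, data_type: str, age_months: float) -> int:
    """Apply time decay: -1 for technology data older than 6 months,
    -1 for market data older than 12 months. Floor at 1 (assumption)."""
    decay_threshold_months = {"technology": 6, "market": 12}
    threshold = decay_threshold_months.get(data_type)
    if threshold is not None and age_months > threshold:
        return max(1, base_score - 1)
    return base_score
```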

Reference: See references/evidence-strength-rubric.md for full scoring guidelines.


9. Conflict Resolution: WWHTBT

When models or sources disagree and resolution isn't clear, apply What Would Have To Be True analysis:

<conflict claim="Market size for X">
  <position holder="Gartner" value="$50B">
    <evidence score="4">2024 market report with methodology</evidence>
  </position>
  <position holder="IDC" value="$35B">
    <evidence score="4">Different scope definition</evidence>
  </position>

  <wwhtbt>
    <for_gartner>
      <condition>Adjacent markets included in scope</condition>
      <condition>Projected vs. realized revenue counted</condition>
    </for_gartner>
    <for_idc>
      <condition>Only core product category</condition>
      <condition>Realized revenue only</condition>
    </for_idc>
  </wwhtbt>

  <recommendation>
    Report range ($35-50B) with scope dependency noted.
    For our purposes, IDC definition more aligned.
  </recommendation>
</conflict>

10. Uncertainty Decomposition

| Type | Definition | Can Reduce? | Action |
|------|------------|-------------|--------|
| Epistemic | Knowledge gaps that COULD be closed | YES | Research further |
| Aleatory | Inherent randomness that CANNOT be predicted | NO | Quantify range, build scenarios |
| Model | Framework/definition dependencies | DEPENDS | Make choices explicit |

Classification Questions

  • Epistemic: "Does someone, somewhere know this?"
  • Aleatory: "Even with perfect info, would this still be uncertain?"
  • Model: "Would a different definition change the answer?"
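The three screening questions map to a simple decision rule. A sketch; the precedence order (model before aleatory before epistemic) is an assumption, not something the taxonomy specifies:

```python
def classify_uncertainty(someone_knows: bool,
                         irreducible_with_perfect_info: bool,
                         definition_dependent: bool) -> str:
    """Map the three screening questions to an uncertainty type.
    Precedence is an assumption: model > aleatory > epistemic."""
    if definition_dependent:
        return "model"       # a different definition would change the answer
    if irreducible_with_perfect_info:
        return "aleatory"    # uncertain even with perfect information
    if someone_knows:
        return "epistemic"   # someone, somewhere knows this
    return "epistemic"       # default: treat unresolved gaps as researchable
```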

Reference: See references/uncertainty-taxonomy.md for full classification protocol.


11. Gap Analysis

Part 1: MECE Coverage Audit

Compare findings against the expected coverage matrix for the research type. Flag:

  • Critical gaps: core dimensions missing or scored ≤2
  • Significant gaps: supporting dimensions weak
  • Minor gaps: context items missing

Part 2: Unknown Unknowns Probes

| Probe | Question |
|-------|----------|
| Adjacent Domain | What lessons from related industries apply? |
| Stakeholder Blind Spot | Whose voice is missing from sources? |
| Time Horizon | What historical precedents or future implications are ignored? |
| Failure Mode | What would have to be true for conclusions to be wrong? |
| Second-Order Effects | If findings are true, what else must follow? |

Reference: See references/gap-analysis-protocol.md for full audit process.


12. Output Specifications

Phase 1 Deliverable: Research Brief

research-brief.xml
├── Header (ID, type, mode, parameters)
├── Section 1: Research Classification
├── Section 2: MECE Question Decomposition
├── Section 3: Multi-Hypothesis Framing (if enabled)
├── Section 4: Risk Assessment
├── Section 5: Expert Panel (if enabled)
├── Section 6: Model Role Assignments
├── Section 7: Ready-to-Execute Prompts
├── Section 8: Consolidation Strategy
├── Section 9: Verification Priorities
└── Section 10: Effort Estimates

Phase 2 Deliverable: Consolidated Report

consolidated-report.xml
├── Header (quality summary)
├── Part 1: Executive Summary (≤5 findings, bottom line)
├── Part 2: Tiered Findings (1: >75%, 2: 50-75%, 3: <50%)
├── Part 3: Evidence Quality Assessment
├── Part 4: Contested Claims & Conflict Resolution
├── Part 5: Uncertainty Analysis
├── Part 6: Gap Analysis
├── Part 7: Model Contribution Analysis
├── Part 8: Decision Support (if-then tree)
├── Part 9: Kill Criteria
├── Part 10: Methodology Transparency
├── Part 11: Appendices
└── CRITICAL CONSTRAINTS (at end for context retention)
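The confidence thresholds that drive Part 2 of the report can be sketched as a tiering function. Treatment of the exact 75% and 50% boundaries is an assumption, since the table gives open ranges:

```python
def assign_tier(confidence: float) -> int:
    """Tier findings by confidence: Tier 1 above 75%, Tier 2 at 50-75%,
    Tier 3 below 50%. Boundary handling is an assumption."""
    if confidence > 0.75:
        return 1
    if confidence >= 0.50:
        return 2
    return 3
```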

Templates: See templates/research-brief-template.md and templates/consolidated-report-template.md.


13. Expert Panel Integration

When to Enable

  • High-stakes decisions
  • Multi-disciplinary topics
  • Need for challenge/red-teaming
  • Regulatory or compliance implications

Process

  1. Identify panel size (3-8 experts) and balance
  2. Select domain-appropriate experts
  3. Define deliberation format (round-robin, debate, Delphi)
  4. Assign challenger role for assumption testing
  5. Synthesize panel perspectives into findings

Expert Selection by Domain

| Domain | Recommended Experts |
|--------|---------------------|
| Market | Market analyst, Customer representative, Industry veteran |
| Competitive | Competitive intel analyst, Former competitor employee, Sales leader |
| Technology | Technical architect, Security specialist, Operations lead |
| Strategic | Strategy consultant, Board member, Industry analyst |

14. Quality Gates

Phase 1 Gates (Research Design)

| # | Gate | Criterion |
|---|------|-----------|
| 1 | Objective Clarity | Single, answerable research question |
| 2 | MECE Validity | Categories non-overlapping and exhaustive |
| 3 | Question Quality | All sub-questions researchable |
| 4 | Model Fit | Assignments match model strengths |
| 5 | Prompt Executability | Prompts can run without modification |
| 6 | Completeness | All required sections populated |

Phase 2 Gates (Consolidation)

| # | Gate | Criterion |
|---|------|-----------|
| 1 | Evidence Scored | All findings have evidence scores |
| 2 | Conflicts Surfaced | No hidden disagreements |
| 3 | Uncertainty Classified | All gaps tagged by type |
| 4 | Coverage Audited | MECE matrix reviewed |
| 5 | Probes Executed | ≥3 of 5 unknown-unknowns probes run |
| 6 | Tiers Justified | Confidence matches evidence profile |
| 7 | Decision Support | Actionable if-then structure |
| 8 | Constraints Verified | All 7 critical constraints checked |

15. Use Cases

| Use Case | Type | Mode | Risk | Hypothesis | Panel |
|----------|------|------|------|------------|-------|
| Market sizing | market | parallel | quick | no | no |
| Competitor deep-dive | competitive | sequential | standard | no | no |
| Build vs buy | technology | convergent | comprehensive | yes | yes |
| Strategic planning | strategic | parallel | comprehensive | yes | yes |
| Trend monitoring | market | parallel | quick | no | no |
| Investment due diligence | competitive | convergent | comprehensive | yes | yes |

16. Workflow Integration

This skill integrates with the broader research workflow:

┌─────────────────────┐
│ research-interviewer│  Elicit research requirements
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│create-research-brief│  ◀── THIS SKILL (Phase 1)
│     (Phase 1)       │  Design multi-LLM research strategy
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│   Execute Research  │  Run prompts across models
│  (Manual or Agent)  │
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│create-research-brief│  ◀── THIS SKILL (Phase 2)
│     (Phase 2)       │  Consolidate into report
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│ consolidate-research│  Additional synthesis if needed
└─────────────────────┘

17. References and Templates

Reference Files

| File | Purpose |
|------|---------|
| references/evidence-strength-rubric.md | 5-point evidence scoring with special cases |
| references/uncertainty-taxonomy.md | 3 uncertainty types with classification protocol |
| references/gap-analysis-protocol.md | MECE audit + 5 unknown-unknowns probes |
| references/mece-decomposition-guide.md | Full decomposition patterns with examples |

Template Files

| File | Purpose |
|------|---------|
| templates/research-brief-template.md | Phase 1 output structure (XML) |
| templates/consolidated-report-template.md | Phase 2 output structure (XML) |

Quick Start

Phase 1: Create Research Brief

/create-research-brief
research_objective: "What is the market opportunity for AI legal research tools?"
research_type: market
risk_depth: standard

Phase 2: Consolidate Research

/create-research-brief --phase=2
input: [model outputs from Phase 1 execution]