Awesome-Agent-Skills-for-Empirical-Research b2

install
source · Clone the upstream repo
git clone https://github.com/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/25-HosungYou-Diverga/skills/b2" ~/.claude/skills/brycewang-stanford-awesome-agent-skills-for-empirical-research-b2 && rm -rf "$T"
manifest: skills/25-HosungYou-Diverga/skills/b2/SKILL.md
source content

⛔ Prerequisites (v8.2 — MCP Enforcement)

diverga_check_prerequisites("b2") → must return approved: true

If not approved → AskUserQuestion for each missing checkpoint (see .claude/references/checkpoint-templates.md).

Checkpoints During Execution

  • 🟠 CP_QUALITY_REVIEW →
    diverga_mark_checkpoint("CP_QUALITY_REVIEW", decision, rationale)

Fallback (MCP unavailable)

Read .research/decision-log.yaml directly to verify prerequisites. Conversation history is the last resort.
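
A minimal sketch of that fallback read, assuming the log keeps checkpoint entries under a top-level `checkpoints` list with `id` and `decision` fields (the actual Diverga schema may differ; the default `required` checkpoint name is illustrative):

```python
# Hypothetical sketch -- the decision-log.yaml schema is an assumption,
# not the documented Diverga format.
import yaml  # pip install pyyaml

def prerequisites_approved(log_path=".research/decision-log.yaml",
                           required=("CP_QUALITY_REVIEW",)):
    """Return True only if every required checkpoint was explicitly approved."""
    with open(log_path) as f:
        log = yaml.safe_load(f) or {}
    decided = {entry.get("id"): entry.get("decision")
               for entry in log.get("checkpoints", [])}
    return all(decided.get(cp) == "approved" for cp in required)
```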


Evidence Quality Appraiser

Agent ID: 06 · Category: B - Literature & Evidence · VS Level: Enhanced (3-Phase) · Tier: Core · Icon: 🔬

Overview

Systematically evaluates methodological quality and risk of bias in individual studies. Selects and applies appropriate assessment tools based on study design type.

Applies VS-Research methodology to go beyond mechanical tool application, providing differentiated quality evaluation strategies tailored to research context and purpose.

VS-Research 3-Phase Process (Enhanced)

Phase 1: Modal Quality Assessment Approach Identification

Purpose: Recognize limitations of mechanical tool application

⚠️ **Modal Warning**: The following are the most predictable quality assessment approaches:

| Modal Approach | T-Score | Limitation |
|----------------|---------|------------|
| "RCT → Apply RoB 2.0" | 0.90 | Automatic matching ignoring context |
| "Observational → Apply NOS" | 0.88 | Ignores tool limitations |
| "Report GRADE rating only" | 0.85 | Rating rationale unclear |

➡️ Tool application is the baseline. Proceeding with context-adaptive assessment.

Phase 2: Context-Adaptive Evaluation Strategy

Purpose: Present evaluation approaches suited to research purpose and context

**Direction A** (T ≈ 0.7): Standard tool + contextual interpretation
- Standard tool application + domain-specific weighting
- Suitable for: General systematic reviews

**Direction B** (T ≈ 0.4): Multi-tool triangulation
- Simultaneous application of multiple tools + discrepancy analysis
- Additional field-specific quality criteria
- Suitable for: Methodology papers, high-quality reviews

**Direction C** (T < 0.3): Purpose-specific evaluation
- Differentiated criteria by meta-analysis purpose
- Propose new evaluation dimensions (reproducibility, transparency)
- Suitable for: Methodological innovation, guideline development

Phase 3: Recommendation Execution

Based on selected evaluation strategy:

  1. State tool selection rationale
  2. Domain-specific detailed assessment + interpretive commentary
  3. Meta-analysis utilization recommendations
  4. Sensitivity analysis necessity determination

Quality Assessment Typicality Score Reference Table

T > 0.8 (Modal - Supplementation Required):
├── Study type → Standard tool automatic matching
├── Yes/No per checklist item
├── Report only total score or rating
└── Judgment rationale unclear

T 0.5-0.8 (Established - Add Interpretation):
├── Specific rationale per domain
├── Interpret meaning in research context
├── Meta-analysis inclusion/exclusion recommendation
└── Sensitivity analysis necessity determination

T 0.3-0.5 (In-depth - Recommended):
├── Multi-tool triangulation
├── Additional field-specific criteria
├── Quality-effect size relationship analysis
└── Rating uncertainty quantification

T < 0.3 (Innovative - For Leading Research):
├── Propose new evaluation dimensions
├── Critical discussion of tool limitations
├── Purpose-specific evaluation framework
└── Quality assessment uncertainty propagation
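
The banding above is mechanical enough to capture in code. A minimal sketch, assuming T arrives as a float in [0, 1]; the band edges at exactly 0.8, 0.5, and 0.3 are resolved downward here since the table leaves them open:

```python
def typicality_tier(t_score: float) -> str:
    """Map a quality-assessment typicality score onto the bands above."""
    if t_score > 0.8:
        return "Modal - supplementation required"
    if t_score >= 0.5:
        return "Established - add interpretation"
    if t_score >= 0.3:
        return "In-depth - recommended"
    return "Innovative - for leading research"
```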

When to Use

  • Evaluating included studies in systematic reviews
  • Verifying study quality before meta-analysis
  • Assessing evidence for evidence-based decision making
  • Judging reliability of research findings

Core Functions

  1. Study Type-Specific Tool Selection

    • RCT: Cochrane Risk of Bias 2.0
    • Observational studies: Newcastle-Ottawa Scale, ROBINS-I
    • Qualitative studies: CASP, JBI Critical Appraisal
    • Mixed methods: MMAT
  2. Risk of Bias Assessment

    • Domain-specific bias evaluation
    • Overall risk of bias judgment
    • Evidence-based determination
  3. GRADE Certainty Rating

    • Certainty of evidence assessment
    • Identify upgrade/downgrade factors
    • Support recommendation strength judgment
  4. Quality Summary Visualization

    • Traffic light plot
    • Summary of findings table

Assessment Tool Library

RCT: Cochrane Risk of Bias 2.0

| Domain | Assessment Content |
|--------|--------------------|
| D1 | Bias arising from randomization process |
| D2 | Bias due to deviations from intended interventions |
| D3 | Bias due to missing outcome data |
| D4 | Bias in measurement of outcome |
| D5 | Bias in selection of reported result |

Judgment: Low risk / Some concerns / High risk
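
A minimal sketch of how the overall call can be derived from the five domain judgments, following the reach-down logic of RoB 2. The threshold for how many some-concerns domains tip a study to high risk is an illustrative assumption, since RoB 2 leaves that to reviewer judgment:

```python
def rob2_overall(domains: dict[str, str], concerns_cap: int = 2) -> str:
    """Aggregate D1-D5 judgments ("low" / "some concerns" / "high") into an
    overall RoB 2 call. Any high-risk domain forces High risk; several
    some-concerns domains may too (concerns_cap is an illustrative cutoff)."""
    judgments = [j.lower() for j in domains.values()]
    if "high" in judgments or judgments.count("some concerns") > concerns_cap:
        return "High risk"
    if "some concerns" in judgments:
        return "Some concerns"
    return "Low risk"
```

For example, `rob2_overall({"D1": "low", "D2": "some concerns", "D3": "low", "D4": "low", "D5": "low"})` returns `"Some concerns"`.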

Observational Studies: Newcastle-Ottawa Scale

| Domain | Item | Points |
|--------|------|--------|
| Selection | Representativeness of exposed cohort | ★ |
| | Selection of non-exposed cohort | ★ |
| | Ascertainment of exposure | ★ |
| | Demonstration outcome not present at start | ★ |
| Comparability | Comparability of cohorts | ★★ |
| Outcome | Assessment of outcome | ★ |
| | Adequate follow-up length | ★ |
| | Adequacy of follow-up | ★ |

Total Score: /9 points
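
Scoring is a capped star count: one star per Selection and Outcome item, up to two for Comparability. A minimal sketch (parameter names are illustrative):

```python
def nos_total(selection: list[bool], comparability_stars: int,
              outcome: list[bool]) -> int:
    """Newcastle-Ottawa total out of 9: four Selection items and three
    Outcome items earn at most one star each; Comparability earns 0-2."""
    assert len(selection) == 4 and len(outcome) == 3
    assert 0 <= comparability_stars <= 2
    return sum(selection) + comparability_stars + sum(outcome)
```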

Qualitative Studies: CASP Checklist

  1. Was there a clear statement of aims?
  2. Is a qualitative methodology appropriate?
  3. Was the research design appropriate?
  4. Was the recruitment strategy appropriate?
  5. Was data collected in a way that addressed the research issue?
  6. Has the researcher-participant relationship been considered?
  7. Have ethical issues been considered?
  8. Was data analysis sufficiently rigorous?
  9. Is there a clear statement of findings?
  10. Is the research valuable?

Input Requirements

Required:
  - study_type: "RCT, cohort, case-control, qualitative, etc."
  - study_information: "Methods section or full paper"

Optional:
  - assessment_tool: "If specific tool preferred"
  - assessment_purpose: "Meta-analysis, guideline development, etc."

Output Format

## Study Quality Assessment Report

### 1. Study Information
- Authors: [Author names]
- Year: [Publication year]
- Study Type: [Design type]
- Applied Tool: [Assessment tool name]

### 2. Risk of Bias Assessment (RCT Example)

| Domain | Judgment | Rationale |
|--------|----------|-----------|
| D1: Randomization process | 🟢/🟡/🔴 | [Specific rationale] |
| D2: Deviations from interventions | 🟢/🟡/🔴 | [Specific rationale] |
| D3: Missing outcome data | 🟢/🟡/🔴 | [Specific rationale] |
| D4: Outcome measurement | 🟢/🟡/🔴 | [Specific rationale] |
| D5: Selection of reported result | 🟢/🟡/🔴 | [Specific rationale] |

**Overall Judgment**: [Low risk / Some concerns / High risk]

### 3. Quality Assessment Summary

**Key Strengths:**
1. [Strength 1]
2. [Strength 2]

**Key Weaknesses:**
1. [Weakness 1]
2. [Weakness 2]

### 4. Evidence Utilization Recommendations

- Meta-analysis inclusion: [Recommended/Caution needed/Exclude recommended]
- Sensitivity analysis: [Needed/Not needed]
- Interpretation caveats: [Specific cautions]

### 5. GRADE Assessment (If Applicable)

| Factor | Assessment | Impact |
|--------|------------|--------|
| Study design | | |
| Risk of bias | | ↓ |
| Inconsistency | | |
| Indirectness | | |
| Imprecision | | |
| Publication bias | | |

**Certainty Rating**: ⊕⊕⊕⊕ High / ⊕⊕⊕◯ Moderate / ⊕⊕◯◯ Low / ⊕◯◯◯ Very Low

Prompt Template

You are a research quality assessment expert.

Please evaluate the methodological quality of the following study:

[Study Type]: {study_type}
[Study Information]: {study_info}

Tasks to perform:

[For RCT - Cochrane RoB 2.0]
1. Bias arising from randomization process
2. Bias due to deviations from intended interventions
3. Bias due to missing outcome data
4. Bias in measurement of outcome
5. Bias in selection of reported result
→ Overall judgment: Low / Some concerns / High

[For Observational - Newcastle-Ottawa Scale]
1. Selection - 4 points
2. Comparability - 2 points
3. Outcome/Exposure - 3 points
→ Total: /9

[For Qualitative - CASP]
1. Clear research aim
2. Appropriate qualitative methodology
3. Appropriate research design
... (10 items)

Final output:
- Quality assessment summary table
- Key strengths and weaknesses
- Evidence utilization caveats

GRADE Rating Determination Guide

Downgrade Factors

| Factor | Criteria | Downgrade |
|--------|----------|-----------|
| Risk of bias | Serious limitations | -1 or -2 |
| Inconsistency | I² > 75%, CI non-overlap | -1 or -2 |
| Indirectness | PICO mismatch | -1 or -2 |
| Imprecision | OIS not met, wide CI | -1 or -2 |
| Publication bias | Funnel plot asymmetry | -1 |

Upgrade Factors (Observational Studies)

| Factor | Criteria | Upgrade |
|--------|----------|---------|
| Large effect size | RR > 2 or < 0.5 | +1 |
| Dose-response | Clear gradient | +1 |
| Confounding | Acts toward reducing effect | +1 |
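
Taken with the downgrade table above, the rating reduces to simple arithmetic: RCTs start at High, observational designs at Low, and the net of downgrades and upgrades is clamped to the four-level scale. A minimal sketch of that bookkeeping (the numeric encoding is an assumption):

```python
GRADE_LABELS = {4: "High ⊕⊕⊕⊕", 3: "Moderate ⊕⊕⊕◯",
                2: "Low ⊕⊕◯◯", 1: "Very Low ⊕◯◯◯"}

def grade_certainty(is_rct: bool, downgrades: int, upgrades: int = 0) -> str:
    """RCTs start at High (4), observational designs at Low (2); subtract the
    summed downgrade points, add the upgrade points, clamp to the 1-4 scale."""
    start = 4 if is_rct else 2
    return GRADE_LABELS[max(1, min(4, start - downgrades + upgrades))]

# e.g. an RCT downgraded twice for risk of bias and imprecision:
# grade_certainty(is_rct=True, downgrades=2)  ->  "Low ⊕⊕◯◯"
```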

Extraction Quality Validation (V7 Lesson)

Statistical Consistency Checks

| Check | Rule | Alert |
|-------|------|-------|
| F-to-t consistency | F(1, df) = t² | Error if >5% deviation |
| Standardization detection | "standardized" in measure | Critical flag |
| Pre-test as outcome | Pre-test used as ES | REJECT |
| Missing correlation | Gain score needs r_pre_post | Warning |
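
The F-to-t rule is directly checkable: with one numerator degree of freedom, F should equal t². A minimal sketch of the 5% deviation test above:

```python
def f_matches_t(f_value: float, t_value: float, tol: float = 0.05) -> bool:
    """With F(1, df), F should equal t squared; flag an extraction error
    when the relative deviation exceeds the stated 5% tolerance."""
    expected = t_value ** 2
    return abs(f_value - expected) <= tol * expected
```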

Effect Size Quality Rating

| Rating | Criteria |
|--------|----------|
| HIGH | Reported g with n, verified calculation |
| MEDIUM | Calculated from M/SD, needs verification |
| LOW | Estimated from t/F/p, high uncertainty |
| UNACCEPTABLE | Pre-test as outcome, missing key data |
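
A minimal sketch of how extraction provenance could map onto these tiers; the boolean flags are illustrative stand-ins for richer extraction metadata:

```python
def effect_size_rating(pre_test_as_outcome: bool, key_data_missing: bool,
                       reported_g_with_n: bool, from_mean_sd: bool) -> str:
    """Map extraction provenance onto the rating tiers above."""
    if pre_test_as_outcome or key_data_missing:
        return "UNACCEPTABLE"
    if reported_g_with_n:
        return "HIGH"
    if from_mean_sd:
        return "MEDIUM"  # computed from M/SD, still needs verification
    return "LOW"         # back-calculated from t/F/p, high uncertainty
```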

Quality Validation Checklist

extraction_quality_checklist:
  - item: "Source verification"
    check: "ES matches original paper values"
    required: true
  - item: "Calculation verification"
    check: "d-to-g conversion within tolerance"
    required: true
  - item: "Independence check"
    check: "No pre-test as outcome"
    required: true
  - item: "Design classification"
    check: "Between/within/mixed correctly identified"
    required: true
  - item: "Dependency documentation"
    check: "Multiple ES from same study flagged"
    required: true
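
Because every item above is `required: true`, a validation run passes only when all checks hold. A minimal sketch of a loader that enforces that, assuming the checklist is stored as a standalone YAML file; the pass/fail judgments themselves remain study-specific reviewer input:

```python
import yaml  # pip install pyyaml

def failed_required_items(checklist_path: str,
                          results: dict[str, bool]) -> list[str]:
    """Return required checklist items whose checks did not pass; `results`
    maps item names to pass/fail as judged during extraction review."""
    with open(checklist_path) as f:
        items = yaml.safe_load(f)["extraction_quality_checklist"]
    return [it["item"] for it in items
            if it.get("required") and not results.get(it["item"], False)]
```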

Related Agents

  • 05-systematic-literature-scout: Search for studies to evaluate
  • 07-effect-size-extractor: Extract effect sizes from quality-assessed studies
  • 14-checklist-manager: Checklist-based assessment support

v3.0 Creativity Mechanism Integration

Available Creativity Mechanisms (ENHANCED)

| Mechanism | Application Timing | Usage Example |
|-----------|--------------------|---------------|
| Forced Analogy | Phase 2 | Apply quality criteria from other fields by analogy |
| Iterative Loop | Phase 2 | 4-round divergence-convergence for strategy refinement |
| Semantic Distance | Phase 2 | Discover new evaluation dimensions beyond existing tools |

Checkpoint Integration

Applied Checkpoints:
  - CP-INIT-002: Select creativity level
  - CP-VS-001: Select quality assessment direction (multiple)
  - CP-VS-003: Final assessment strategy satisfaction confirmation
  - CP-SD-001: Concept combination distance threshold

Module References

../../research-coordinator/core/vs-engine.md
../../research-coordinator/core/t-score-dynamic.md
../../research-coordinator/creativity/forced-analogy.md
../../research-coordinator/creativity/iterative-loop.md
../../research-coordinator/creativity/semantic-distance.md
../../research-coordinator/interaction/user-checkpoints.md

References

  • VS Engine v3.0:
    ../../research-coordinator/core/vs-engine.md
  • Dynamic T-Score:
    ../../research-coordinator/core/t-score-dynamic.md
  • Creativity Mechanisms:
    ../../research-coordinator/references/creativity-mechanisms.md
  • Project State v4.0:
    ../../research-coordinator/core/project-state.md
  • Pipeline Templates v4.0:
    ../../research-coordinator/core/pipeline-templates.md
  • Integration Hub v4.0:
    ../../research-coordinator/core/integration-hub.md
  • Guided Wizard v4.0:
    ../../research-coordinator/core/guided-wizard.md
  • Auto-Documentation v4.0:
    ../../research-coordinator/core/auto-documentation.md
  • Cochrane Handbook Chapter 8: Risk of Bias
  • Sterne et al. (2019). RoB 2 Guidelines
  • Wells et al. Newcastle-Ottawa Scale
  • GRADE Handbook