Medical-research-skills forest-plot-styler

Analyze data with `forest-plot-styler` using a reproducible workflow, explicit validation, and structured outputs for review-ready interpretation.

install
source · Clone the upstream repo
git clone https://github.com/aipoch/medical-research-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/aipoch/medical-research-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/scientific-skills/Data Analysis/forest-plot-styler" ~/.claude/skills/aipoch-medical-research-skills-forest-plot-styler && rm -rf "$T"
manifest: scientific-skills/Data Analysis/forest-plot-styler/SKILL.md
source content

Source: https://github.com/aipoch/medical-research-skills

Forest Plot Styler

ID: 157

Styles meta-analysis and subgroup-analysis forest plots, with customizable Odds Ratio point sizes and confidence interval line styles.


When to Use

  • Use this skill when the task requires beautifying meta-analysis forest plots with customizable odds ratio points.
  • Use this skill for data analysis tasks that require explicit assumptions, bounded scope, and a reproducible output format.
  • Use this skill when you need a documented fallback path for missing inputs, execution errors, or partial evidence.

Key Features

See the Features section below for related details.

  • Scope-focused workflow aligned to the skill purpose: analyze data with `forest-plot-styler` using a reproducible workflow, explicit validation, and structured outputs for review-ready interpretation.
  • Packaged executable path: `scripts/main.py`.
  • Reference material available in `references/` for task-specific guidance.
  • Structured execution path designed to keep outputs consistent and reviewable.

Dependencies

  • Python >= 3.8
  • matplotlib >= 3.5.0
  • pandas >= 1.3.0
  • numpy >= 1.20.0
  • openpyxl >= 3.0.0 (for reading Excel)

Example Usage

See the Usage section below for related details.

cd "20260318/scientific-skills/Data Analytics/forest-plot-styler"
python -m py_compile scripts/main.py
python scripts/main.py --help

Example run plan:

  1. Confirm the user input, output path, and any required config values.
  2. Edit the in-file `CONFIG` block or documented parameters if the script uses fixed settings.
  3. Run `python scripts/main.py` with the validated inputs.
  4. Review the generated output and return the final artifact with any assumptions called out.

Implementation Details

See the Workflow section below for related details.

  • Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
  • Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
  • Primary implementation surface: `scripts/main.py`.
  • Reference guidance: `references/` contains supporting rules, prompts, or checklists.
  • Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
  • Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.

Quick Check

Use this command to verify that the packaged script entry point can be parsed before deeper execution.

python -m py_compile scripts/main.py

Audit-Ready Commands

Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.

python -m py_compile scripts/main.py
python scripts/main.py --help
python scripts/main.py --input "Audit validation sample with explicit symptoms, history, assessment, and next-step plan." --format json

Workflow

  1. Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
  2. Validate that the request matches the documented scope and stop early if the task would require unsupported assumptions.
  3. Use the packaged script path or the documented reasoning path with only the inputs that are actually available.
  4. Return a structured result that separates assumptions, deliverables, risks, and unresolved items.
  5. If execution fails or inputs are incomplete, switch to the fallback path and state exactly what blocked full completion.

Features

  • Reads meta-analysis data (CSV/Excel format)
  • Draws high-quality forest plots
  • Customizes Odds Ratio point sizes, colors, and shapes
  • Customizes confidence interval line styles (color, thickness, endpoint style)
  • Supports subgroup analysis display
  • Automatically calculates and displays pooled effect values
  • Outputs to PNG, PDF, or SVG format
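
The "pooled effect" feature presumably combines per-study odds ratios into a single summary estimate. The script's exact method is not documented, but the standard fixed-effect approach is inverse-variance weighting on the log-OR scale, recovering standard errors from the 95% CIs. A minimal sketch under that assumption:

```python
import numpy as np

def pooled_or(or_vals, ci_lower, ci_upper, z=1.96):
    """Fixed-effect inverse-variance pooling of odds ratios.

    Standard errors are recovered from the 95% CI on the log scale:
    SE = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    log_or = np.log(or_vals)
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * z)
    w = 1.0 / se**2                           # inverse-variance weights
    pooled_log = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return (np.exp(pooled_log),
            np.exp(pooled_log - z * pooled_se),
            np.exp(pooled_log + z * pooled_se))

# Values from the Sample Data section of this README
or_vals = [0.85, 0.72, 1.15, 0.95]
lo      = [0.65, 0.55, 0.88, 0.75]
hi      = [1.12, 0.94, 1.50, 1.20]
point, lcl, ucl = pooled_or(or_vals, lo, hi)
```

The pooled point estimate always falls between the smallest and largest study OR, since it is a weighted mean on the log scale.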

Usage

python -m py_compile scripts/main.py

# Example invocation: python scripts/main.py --input <data.csv> [options]

Parameters

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `--input`, `-i` | string | - | Yes | Input data file (CSV or Excel) |
| `--output`, `-o` | string | forest_plot.png | No | Output file path |
| `--format`, `-f` | string | png | No | Output format (png/pdf/svg) |
| `--point-size` | int | 8 | No | OR point size |
| `--point-color` | string | #2E86AB | No | OR point color |
| `--ci-color` | string | #2E86AB | No | Confidence interval line color |
| `--ci-linewidth` | int | 2 | No | Confidence interval line thickness |
| `--ci-capwidth` | int | 5 | No | Confidence interval endpoint width |
| `--summary-color` | string | #A23B72 | No | Pooled effect point color |
| `--summary-shape` | string | diamond | No | Pooled effect point shape |
| `--subgroup` | string | - | No | Subgroup analysis column name |
| `--title`, `-t` | string | Forest Plot | No | Chart title |
| `--xlabel`, `-x` | string | Odds Ratio (95% CI) | No | X-axis label |
| `--reference-line` | float | 1.0 | No | Reference line position |
| `--width`, `-W` | int | 12 | No | Image width (inches) |
| `--height`, `-H` | int | auto | No | Image height (inches) |
| `--dpi` | int | 300 | No | Image resolution |
| `--font-size` | int | 10 | No | Font size |
| `--style`, `-s` | string | default | No | Preset style (default/minimal/dark) |
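
An illustrative argparse sketch of how a subset of these flags might be declared; the packaged `scripts/main.py` is the authoritative definition and may differ in detail:

```python
import argparse

# Hypothetical declaration of the documented flags; defaults taken from
# the parameter table above.
parser = argparse.ArgumentParser(description="Forest plot styler")
parser.add_argument("--input", "-i", required=True,
                    help="Input data file (CSV or Excel)")
parser.add_argument("--output", "-o", default="forest_plot.png")
parser.add_argument("--format", "-f", default="png",
                    choices=["png", "pdf", "svg"])
parser.add_argument("--point-size", type=int, default=8)
parser.add_argument("--point-color", default="#2E86AB")
parser.add_argument("--reference-line", type=float, default=1.0)
parser.add_argument("--style", "-s", default="default",
                    choices=["default", "minimal", "dark"])

# argparse maps --point-size to args.point_size
args = parser.parse_args(["-i", "meta_data.csv", "--point-size", "10"])
```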

Input Data Format

CSV/Excel files must contain the following columns:

| Column Name | Description | Type |
|---|---|---|
| `study` | Study name | Text |
| `or` | Odds Ratio value | Numeric |
| `ci_lower` | Confidence interval lower bound | Numeric |
| `ci_upper` | Confidence interval upper bound | Numeric |
| `weight` | Weight (optional, for point size) | Numeric |
| `subgroup` | Subgroup label (optional) | Text |

Sample Data

study,or,ci_lower,ci_upper,weight,subgroup
Study A,0.85,0.65,1.12,15.2,Drug A
Study B,0.72,0.55,0.94,18.5,Drug A
Study C,1.15,0.88,1.50,12.3,Drug B
Study D,0.95,0.75,1.20,14.8,Drug B
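
The sample data can be loaded and checked against the required-column list above with pandas. A minimal validation sketch (the packaged script's own checks may differ):

```python
import io
import pandas as pd

SAMPLE = """\
study,or,ci_lower,ci_upper,weight,subgroup
Study A,0.85,0.65,1.12,15.2,Drug A
Study B,0.72,0.55,0.94,18.5,Drug A
Study C,1.15,0.88,1.50,12.3,Drug B
Study D,0.95,0.75,1.20,14.8,Drug B
"""

df = pd.read_csv(io.StringIO(SAMPLE))

# Required columns per the input format table; weight/subgroup are optional.
required = {"study", "or", "ci_lower", "ci_upper"}
missing = required - set(df.columns)
if missing:
    raise ValueError(f"Missing required columns: {sorted(missing)}")

# Sanity check: each CI must bracket its point estimate.
valid = (df["ci_lower"] <= df["or"]) & (df["or"] <= df["ci_upper"])
assert valid.all(), "CI bounds do not bracket the OR"
```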

Examples

Basic Usage

python scripts/main.py -i meta_data.csv

Custom Style

python scripts/main.py -i meta_data.csv \
    --point-color="#E63946" \
    --ci-color="#457B9D" \
    --point-size=10 \
    --ci-linewidth=3 \
    -t "Meta-Analysis of Treatment Effects"

Subgroup Analysis

python scripts/main.py -i meta_data.csv \
    --subgroup subgroup_column \
    --summary-color="#F4A261" \
    -o subgroup_forest.png

Output PDF Vector Graphic

python scripts/main.py -i meta_data.csv \
    -f pdf \
    -o forest_plot.pdf

Preset Styles

default

  • Blue color scheme
  • Standard font size
  • White background

minimal

  • Clean lines
  • Grayscale color scheme
  • No grid lines

dark

  • Dark background
  • Bright data points
  • Suitable for dark theme presentations
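
A preset lookup like the one below could back the three styles; the colors and settings here are hypothetical placeholders, not the script's real definitions:

```python
# Hypothetical mapping of the documented presets to plot settings.
PRESETS = {
    "default": {"point_color": "#2E86AB", "background": "white",   "grid": True},
    "minimal": {"point_color": "#555555", "background": "white",   "grid": False},
    "dark":    {"point_color": "#F4A261", "background": "#222222", "grid": True},
}

def get_preset(name: str) -> dict:
    """Return settings for a named preset, rejecting unknown names."""
    if name not in PRESETS:
        raise ValueError(f"Unknown style: {name}; choose from {sorted(PRESETS)}")
    return PRESETS[name]

dark = get_preset("dark")
```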

Output Example

Generated forest plot contains:

  • Left side: Study name list
  • Middle: OR values and confidence intervals
  • Right side: Weight percentage (if available)
  • Bottom: Pooled effect value (diamond marker)
  • Reference line (OR=1)
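
The layout above maps naturally onto a matplotlib `errorbar` call: one horizontal CI per study, capped endpoints, and a dashed reference line at OR=1. A self-contained sketch using the sample data (not the packaged implementation):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]
ors     = [0.85, 0.72, 1.15, 0.95]
lo      = [0.65, 0.55, 0.88, 0.75]
hi      = [1.12, 0.94, 1.50, 1.20]

y = range(len(studies))
# xerr takes distances from the point, not absolute CI bounds
xerr = [[o - l for o, l in zip(ors, lo)],
        [h - o for h, o in zip(hi, ors)]]

fig, ax = plt.subplots(figsize=(12, 4))
ax.errorbar(ors, y, xerr=xerr, fmt="o", color="#2E86AB",
            ecolor="#2E86AB", elinewidth=2, capsize=5)
ax.axvline(1.0, linestyle="--", color="grey")  # reference line OR=1
ax.set_yticks(list(y))
ax.set_yticklabels(studies)
ax.invert_yaxis()                              # first study at the top
ax.set_xlabel("Odds Ratio (95% CI)")
fig.savefig("forest_sketch.png", dpi=300, bbox_inches="tight")
```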

Notes

  1. Ensure input file encoding is UTF-8
  2. OR values are automatically log-transformed for plotting when a log scale is appropriate
  3. Studies with confidence intervals crossing 1 are not statistically significant
  4. Weight values are used to adjust point size, reflecting study contribution
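
One plausible way to realize note 4 is to scale marker sizes around the `--point-size` baseline in proportion to each study's weight, clamped to a sensible range. The function and bounds below are illustrative assumptions, not the script's actual scaling:

```python
# Hypothetical weight-to-size mapping: the average-weight study gets the
# base point size; heavier studies get proportionally larger markers.
def scaled_sizes(weights, base_size=8, min_size=4, max_size=16):
    mean_w = sum(weights) / len(weights)
    sizes = [base_size * w / mean_w for w in weights]
    return [min(max(s, min_size), max_size) for s in sizes]

# Weights from the sample data
sizes = scaled_sizes([15.2, 18.5, 12.3, 14.8])
```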

Risk Assessment

| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
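
The traversal-related checklist items ("no `../`", "output directory restricted to workspace") can be sketched with `pathlib` resolution; this is an illustrative check, not the script's actual validation:

```python
from pathlib import Path

def resolve_within(workspace: str, user_path: str) -> Path:
    """Resolve a user-supplied path and reject escapes from the workspace.

    Resolving first, then comparing ancestors, catches both literal `../`
    components and paths that normalize outside the workspace root.
    """
    root = Path(workspace).resolve()
    candidate = (root / user_path).resolve()
    if root != candidate and root not in candidate.parents:
        raise ValueError(f"Path escapes workspace: {user_path}")
    return candidate

safe = resolve_within("/tmp/ws", "out/forest_plot.png")
```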

Prerequisites


# Python dependencies
pip install -r requirements.txt

Evaluation Criteria

Success Metrics

  • Successfully executes main functionality
  • Output meets quality standards
  • Handles edge cases gracefully
  • Performance is acceptable

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support

Output Requirements

Every final response should make these items explicit when they are relevant:

  • Objective or requested deliverable
  • Inputs used and assumptions introduced
  • Workflow or decision path
  • Core result, recommendation, or artifact
  • Constraints, risks, caveats, or validation needs
  • Unresolved items and next-step checks

Error Handling

  • If required inputs are missing, state exactly which fields are missing and request only the minimum additional information.
  • If the task goes outside the documented scope, stop instead of guessing or silently widening the assignment.
  • If `scripts/main.py` fails, report the failure point, summarize what can still be completed safely, and provide a manual fallback.
  • Do not fabricate files, citations, data, search results, or execution outcomes.

Input Validation

This skill accepts requests that match the documented purpose of `forest-plot-styler` and include enough context to complete the workflow safely.

Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:

"`forest-plot-styler` only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill."

Response Template

Use the following fixed structure for non-trivial requests:

  1. Objective
  2. Inputs Received
  3. Assumptions
  4. Workflow
  5. Deliverable
  6. Risks and Limits
  7. Next Checks

If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.

Inputs to Collect

  • Required inputs: the user goal, the primary data or source file, and the requested output format.
  • Optional inputs: output directory, formatting preferences, and validation constraints.
  • If a required input is unavailable, return a short clarification request before continuing.

Output Contract

  • Return a short summary, the main deliverables, and any assumptions that materially affect interpretation.
  • If execution is partial, label what succeeded, what failed, and the next safe recovery step.
  • Keep the final answer within the documented scope of the skill.

Validation and Safety Rules

  • Validate identifiers, file paths, and user-provided parameters before execution.
  • Do not fabricate results, metrics, citations, or downstream conclusions.
  • Use safe fallback behavior when dependencies, credentials, or required inputs are missing.
  • Surface any execution failure with a concise diagnosis and recovery path.