Skills · comparison-table-gen

Auto-generates comparison tables for concepts, drugs, or study results

Install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/comparison-table-gen" ~/.claude/skills/openclaw-skills-comparison-table-gen && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/comparison-table-gen" ~/.openclaw/skills/openclaw-skills-comparison-table-gen && rm -rf "$T"
manifest: skills/aipoch-ai/comparison-table-gen/SKILL.md
source content

Comparison Table Gen

Generates comparison tables for medical content.

Features

  • Side-by-side comparisons
  • Markdown table output
  • Drug comparison templates
  • Study result comparisons

Parameters

| Parameter | Type | Default | Required | Description |
| --- | --- | --- | --- | --- |
| `--items`, `-i` | string | - | Yes | Items to compare (comma-separated) |
| `--attributes`, `-a` | string | - | Yes | Comparison attributes (comma-separated) |
| `--output`, `-o` | string | - | No | Output JSON file path |

Usage

# Compare two drugs
python scripts/main.py --items "Drug A,Drug B" --attributes "Mechanism,Dose,Side Effects"

# Save to file
python scripts/main.py --items "Surgery,Chemo,Radiation" --attributes "Cost,Efficacy" --output comparison.json
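The commands above presumably assemble a markdown table with one column per item and one row per attribute. A minimal sketch of that core step, assuming the same comma-separated inputs (`build_table` is a hypothetical helper for illustration, not the skill's actual source):

```python
def build_table(items, attributes):
    """Build a markdown comparison table: one column per item,
    one row per attribute, with empty cells to be filled in."""
    header = "| Attribute | " + " | ".join(items) + " |"
    divider = "|" + " --- |" * (len(items) + 1)
    rows = ["| " + attr + " |" + "  |" * len(items) for attr in attributes]
    return "\n".join([header, divider] + rows)

items = "Drug A,Drug B".split(",")
attributes = "Mechanism,Dose,Side Effects".split(",")
print(build_table(items, attributes))
```

The sketch only mirrors the documented interface (comma-separated items and attributes in, markdown table out); the real script may format cells differently.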

Input Format

  • items: Comma-separated list of items to compare (e.g., "Drug A,Drug B")
  • attributes: Comma-separated list of comparison attributes (e.g., "Mechanism,Dose")

Output Format

{
  "markdown_table": "string",
  "html_table": "string"
}
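When `--output` is passed, this JSON object is what downstream code would read back. A small sketch of round-tripping it, assuming the two documented keys (the field values here are placeholders):

```python
import json

# Placeholder values; only the two keys come from the documented schema.
result = {
    "markdown_table": "| Attribute | Drug A | Drug B |",
    "html_table": "<table>...</table>",
}

# Write the result the way --output presumably does, then read it back.
with open("comparison.json", "w") as f:
    json.dump(result, f)

with open("comparison.json") as f:
    loaded = json.load(f)

print(loaded["markdown_table"])
```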

Risk Assessment

| Risk Indicator | Assessment | Level |
| --- | --- | --- |
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
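Two of the checklist items, path validation and confining output to the workspace, can be sketched with standard-library calls alone. The `WORKSPACE` constant and `is_safe_path` helper below are illustrative assumptions, not part of the skill:

```python
import os

# Hypothetical workspace root; a real skill would take this from its config.
WORKSPACE = os.path.realpath(os.getcwd())

def is_safe_path(user_path, workspace=WORKSPACE):
    """Reject paths that resolve outside the workspace (e.g. ../ traversal).

    realpath resolves symlinks and ".." segments before the containment
    check, so "a/../../x" style escapes are caught too.
    """
    resolved = os.path.realpath(os.path.join(workspace, user_path))
    return os.path.commonpath([workspace, resolved]) == workspace

print(is_safe_path("comparison.json"))           # stays inside the workspace
print(is_safe_path("../../../../etc/passwd"))    # traversal attempt
```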

Prerequisites

No additional Python packages required.

Evaluation Criteria

Success Metrics

  • Successfully executes main functionality
  • Output meets quality standards
  • Handles edge cases gracefully
  • Performance is acceptable

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support