# comparison-table-gen

Auto-generates comparison tables for concepts, drugs, or study results.
## Install

**Source** · Clone the upstream repo:

```shell
git clone https://github.com/openclaw/skills
```

**Claude Code** · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/comparison-table-gen" ~/.claude/skills/openclaw-skills-comparison-table-gen && rm -rf "$T"
```

**OpenClaw** · Install into `~/.openclaw/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aipoch-ai/comparison-table-gen" ~/.openclaw/skills/openclaw-skills-comparison-table-gen && rm -rf "$T"
```
## Manifest

Source: `skills/aipoch-ai/comparison-table-gen/SKILL.md`

## Comparison Table Gen

Generates comparison tables for medical content.
## Features
- Side-by-side comparisons
- Markdown table output
- Drug comparison templates
- Study result comparisons
## Parameters

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `--items` | string | - | Yes | Items to compare (comma-separated) |
| `--attributes` | string | - | Yes | Comparison attributes (comma-separated) |
| `--output` | string | - | No | Output JSON file path |
## Usage

```shell
# Compare two drugs
python scripts/main.py --items "Drug A,Drug B" --attributes "Mechanism,Dose,Side Effects"

# Save to file
python scripts/main.py --items "Surgery,Chemo,Radiation" --attributes "Cost,Efficacy" --output comparison.json
```
## Input Format
- items: Comma-separated list of items to compare (e.g., "Drug A,Drug B")
- attributes: Comma-separated list of comparison attributes (e.g., "Mechanism,Dose")
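A sketch of how these flags could be parsed with `argparse` (the flag names match the Usage section above; everything else, including the splitting behavior, is an assumption rather than the skill's actual code):

```python
import argparse


def parse_args(argv=None):
    """Parse the CLI flags described above (hypothetical sketch)."""
    parser = argparse.ArgumentParser(description="Generate a comparison table")
    parser.add_argument("--items", required=True,
                        help='Comma-separated items, e.g. "Drug A,Drug B"')
    parser.add_argument("--attributes", required=True,
                        help='Comma-separated attributes, e.g. "Mechanism,Dose"')
    parser.add_argument("--output", default=None,
                        help="Optional output JSON file path")
    args = parser.parse_args(argv)
    # Split the comma-separated strings into clean lists.
    args.items = [i.strip() for i in args.items.split(",")]
    args.attributes = [a.strip() for a in args.attributes.split(",")]
    return args
```

Note that commas inside item names would break this simple split; a real implementation would need an escaping convention for that case.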
## Output Format

```json
{
  "markdown_table": "string",
  "html_table": "string"
}
```
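To make the output shape concrete, here is a minimal sketch of a generator that builds an empty comparison grid in both formats. This is an illustration of the interface, not the actual `scripts/main.py` implementation:

```python
import json


def build_comparison(items, attributes):
    """Build an empty comparison grid as markdown and HTML tables.

    Hypothetical sketch: the real scripts/main.py may differ.
    """
    item_list = [i.strip() for i in items.split(",")]
    attr_list = [a.strip() for a in attributes.split(",")]

    # Markdown: one row per attribute, one column per item.
    header = "| Attribute | " + " | ".join(item_list) + " |"
    divider = "|---" * (len(item_list) + 1) + "|"
    rows = ["| " + attr + " | " + " | ".join("" for _ in item_list) + " |"
            for attr in attr_list]
    markdown = "\n".join([header, divider] + rows)

    # HTML equivalent of the same grid.
    head_cells = "".join("<th>{}</th>".format(i) for i in item_list)
    body_rows = "".join(
        "<tr><td>{}</td>{}</tr>".format(attr, "<td></td>" * len(item_list))
        for attr in attr_list)
    html = "<table><tr><th>Attribute</th>{}</tr>{}</table>".format(
        head_cells, body_rows)

    return {"markdown_table": markdown, "html_table": html}


if __name__ == "__main__":
    print(json.dumps(build_comparison("Drug A,Drug B", "Mechanism,Dose"),
                     indent=2))
```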
## Risk Assessment
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
## Security Checklist
- No hardcoded credentials or API keys
- No unauthorized file system access (../)
- Output does not expose sensitive information
- Prompt injection protections in place
- Input file paths validated (no ../ traversal)
- Output directory restricted to workspace
- Script execution in sandboxed environment
- Error messages sanitized (no stack traces exposed)
- Dependencies audited
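The path-validation items in the checklist above could be implemented along these lines (a sketch under assumptions: `WORKSPACE` is a hypothetical sandbox root, and Python 3.9+ is available for `Path.is_relative_to`):

```python
from pathlib import Path

# Assumed sandbox root; the skill's real workspace location is not specified.
WORKSPACE = Path("/workspace")


def safe_output_path(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything that escapes
    the workspace, e.g. via ../ traversal, absolute paths, or symlinks."""
    candidate = (WORKSPACE / user_path).resolve()
    if not candidate.is_relative_to(WORKSPACE.resolve()):
        raise ValueError("output path escapes the workspace")
    return candidate
```

Resolving before checking is the important design choice: it normalizes `..` segments and symlinks, so the containment test runs against the real target rather than the literal string.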
## Prerequisites
No additional Python packages required.
## Evaluation Criteria

### Success Metrics
- Successfully executes main functionality
- Output meets quality standards
- Handles edge cases gracefully
- Performance is acceptable
### Test Cases
- Basic Functionality: Standard input → Expected output
- Edge Case: Invalid input → Graceful error handling
- Performance: Large dataset → Acceptable processing time
## Lifecycle Status
- Current Stage: Draft
- Next Review Date: 2026-03-06
- Known Issues: None
- Planned Improvements:
  - Performance optimization
  - Additional feature support