Skills › ehr-semantic-compressor

AI-powered EHR summarization using a Transformer architecture to extract key clinical information from lengthy medical records

Install

Source · Clone the upstream repo

  git clone https://github.com/openclaw/skills

Claude Code · Install into ~/.claude/skills/

  T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/ehr-semantic-compressor" ~/.claude/skills/clawdbot-skills-ehr-semantic-compressor && rm -rf "$T"

Manifest: skills/aipoch-ai/ehr-semantic-compressor/SKILL.md
source content

EHR Semantic Compressor

Overview

AI-powered EHR summarization using a Transformer architecture to extract key clinical information from medical records. This skill processes lengthy Electronic Health Record (EHR) documents and generates structured, clinically accurate summaries.

Technical Difficulty: High

When to Use

  • Input contains lengthy EHR documents (1600+ words) requiring summarization
  • Clinical records need structured extraction of key information
  • Quick review of patient history, medications, allergies, or diagnoses is needed
  • Medical documentation requires compression while maintaining accuracy

Core Features

  1. Fast Processing: Process lengthy EHR documents (1600+ words) in 10-20 seconds
  2. Structured Summaries: Generate bullet-point summaries (200-300 words)
  3. Critical Information Extraction:
    • Patient allergies and adverse reactions
    • Family medical history
    • Current and past medications
    • Diagnoses and conditions
    • Vital signs and lab results
    • Procedures and surgeries
  4. Clinical Accuracy: Maintains completeness of medical information

Usage

Basic Usage

python scripts/main.py --input ehr_document.txt --output summary.json

Input Format

{
  "ehr_text": "Full EHR document text...",
  "max_length": 300,
  "extract_sections": ["allergies", "medications", "diagnoses", "family_history"]
}
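The input payload above can be assembled programmatically. The sketch below is illustrative, not part of the skill itself; `KNOWN_SECTIONS` simply lists the section names this document mentions, and `build_request` is an assumed helper name.

```python
import json

# Section names documented by this skill (assumption: this is the full set).
KNOWN_SECTIONS = {"allergies", "medications", "diagnoses", "family_history"}

def build_request(ehr_text, max_length=300, extract_sections=None):
    """Assemble and serialize the input JSON payload shown above."""
    sections = list(extract_sections) if extract_sections else sorted(KNOWN_SECTIONS)
    unknown = set(sections) - KNOWN_SECTIONS
    if unknown:
        raise ValueError(f"Unknown sections: {sorted(unknown)}")
    return json.dumps({
        "ehr_text": ehr_text,
        "max_length": max_length,
        "extract_sections": sections,
    })
```

Validating section names up front keeps malformed requests from reaching the model at all.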

Output Format

{
  "status": "success",
  "data": {
    "summary": "Structured bullet-point summary...",
    "extracted_sections": {
      "allergies": [...],
      "medications": [...],
      "diagnoses": [...],
      "family_history": [...]
    },
    "metadata": {
      "original_length": 2500,
      "summary_length": 280,
      "compression_ratio": 0.89
    }
  }
}
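A downstream consumer only needs the field names shown in the example. The following minimal sketch parses a success payload and reports the compression achieved; the literal JSON is the example above, abbreviated.

```python
import json

# Parse a success response; field names follow the documented output format.
raw = """{
  "status": "success",
  "data": {
    "summary": "Structured bullet-point summary...",
    "extracted_sections": {"allergies": [], "medications": []},
    "metadata": {"original_length": 2500, "summary_length": 280, "compression_ratio": 0.89}
  }
}"""

result = json.loads(raw)
meta = result["data"]["metadata"]
report = (f"Compressed {meta['original_length']} -> {meta['summary_length']} words "
          f"({meta['compression_ratio']:.0%} reduction)")
```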

Parameters

Parameter           Type    Default  Required  Description
--input, -i         string  -        Yes       Input EHR document text file path
--output, -o        string  -        No        Output JSON file path
--max-length        int     300      No        Maximum summary length in words
--extract-sections  string  all      No        Comma-separated sections to extract
--format            string  json     No        Output format (json, markdown, text)
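For reference, the table above corresponds roughly to the following argparse setup. This is a hedged mirror of the documented flags, not the script's actual parser, which may differ.

```python
import argparse

# Illustrative parser mirroring the documented CLI flags.
parser = argparse.ArgumentParser(prog="main.py")
parser.add_argument("--input", "-i", required=True,
                    help="Input EHR document text file path")
parser.add_argument("--output", "-o",
                    help="Output JSON file path")
parser.add_argument("--max-length", type=int, default=300,
                    help="Maximum summary length in words")
parser.add_argument("--extract-sections", default="all",
                    help="Comma-separated sections to extract")
parser.add_argument("--format", default="json",
                    choices=["json", "markdown", "text"],
                    help="Output format")

args = parser.parse_args(["-i", "ehr_document.txt", "--max-length", "250"])
```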

Technical Details

Architecture

  • Base Model: Transformer-based encoder-decoder architecture
  • Medical Domain Adaptation: Fine-tuned on clinical text corpora
  • Section Extraction: Rule-based + ML hybrid approach for structured data
  • Processing Pipeline: Text segmentation -> Summarization -> Section extraction -> Output formatting
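The four pipeline stages above can be sketched schematically. Every function body below is a placeholder standing in for the real component (the actual summarizer is a Transformer model, not a first-sentence heuristic), and the function names are assumptions, not the module's API.

```python
# Schematic sketch of the documented pipeline; bodies are placeholders.

def segment(text, max_words=512):
    """Stage 1: split the document into word-bounded chunks for the encoder."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize(chunks):
    """Stage 2: placeholder for the Transformer summarizer
    (here: keep each chunk's first sentence)."""
    return " ".join(chunk.split(". ")[0].rstrip(".") + "." for chunk in chunks)

def extract_sections(summary, sections):
    """Stage 3: placeholder for the rule-based + ML section extractor."""
    return {name: [] for name in sections}

def run_pipeline(text, sections=("allergies", "medications")):
    """Stage 4: assemble the structured output."""
    summary = summarize(segment(text))
    return {"summary": summary, "extracted_sections": extract_sections(summary, sections)}
```

Chunking before summarization matters because Transformer encoders have a bounded input length; 512 words is an assumed limit here.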

Dependencies

See references/requirements.txt for the complete list.

Key dependencies:

  • transformers >= 4.30.0
  • torch >= 2.0.0
  • spacy >= 3.6.0
  • scispacy >= 0.5.3

Performance

  • Processing Time: 10-20 seconds for 1600+ word documents
  • Memory: Requires ~2GB RAM
  • Output Length: 200-300 words (configurable)
  • Compression Ratio: ~85-90%
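The compression ratio appears to be the fraction of words removed, which is consistent with the metadata example (2500 → 280 words ≈ 0.89):

```python
def compression_ratio(original_words, summary_words):
    """Fraction of words removed: 1 - summary/original."""
    return 1 - summary_words / original_words

# Matches the metadata example in the output format above.
ratio = round(compression_ratio(2500, 280), 2)  # 0.89
```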

References

  • references/requirements.txt - Python dependencies
  • references/guidelines.md - Clinical summarization guidelines
  • references/sample_input.json - Example input format
  • references/sample_output.json - Example output format

Safety & Compliance

  • No external API calls or service dependencies
  • All processing performed locally
  • No patient data transmitted outside the system
  • Error messages are semantic and do not expose technical details

Testing

Run unit tests:

cd scripts
python test_main.py

Error Handling

All errors return semantic messages:

{
  "status": "error",
  "error": {
    "type": "input_validation_error",
    "message": "EHR text is empty or too short",
    "suggestion": "Provide EHR text with at least 100 words"
  }
}
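A caller can dispatch on the `status` field to separate the two response shapes. The helper below is illustrative, assuming only the field names shown in the examples above.

```python
import json

# Dispatch on "status": return the summary on success, a readable message on error.
def handle_response(raw):
    resp = json.loads(raw)
    if resp["status"] == "error":
        err = resp["error"]
        return f"{err['type']}: {err['message']} ({err['suggestion']})"
    return resp["data"]["summary"]
```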

Risk Assessment

Risk Indicator         Assessment                            Level
Code Execution         Python/R scripts executed locally     Medium
Network Access         No external API calls                 Low
File System Access     Read input files, write output files  Medium
Instruction Tampering  Standard prompt guidelines            Low
Data Exposure          Output files saved to workspace       Low

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
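The path-traversal items in the checklist can be implemented with a resolve-then-compare check. This is an illustrative sketch: `WORKSPACE` is an assumed root, and `validate_path` is not a function the skill actually exposes.

```python
from pathlib import Path

# Assumed workspace root; the real skill's root may differ.
WORKSPACE = Path("/workspace")

def validate_path(user_path):
    """Reject any path (e.g. containing ../) that resolves outside the workspace."""
    resolved = (WORKSPACE / user_path).resolve()
    if WORKSPACE.resolve() not in resolved.parents:
        raise ValueError(f"Path escapes workspace: {user_path}")
    return resolved
```

Resolving before comparing is the key step: a naive substring check on `../` misses traversal hidden behind symlinks or redundant separators.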

Prerequisites

# Python dependencies
pip install -r requirements.txt

Evaluation Criteria

Success Metrics

  • Successfully executes main functionality
  • Output meets quality standards
  • Handles edge cases gracefully
  • Performance is acceptable

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support