# bio-ontology-mapper

Map unstructured biomedical text to standardized ontologies (SNOMED CT, MeSH, ICD-10, LOINC, RxNorm).
```shell
git clone https://github.com/openclaw/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aipoch-ai/bio-ontology-mapper-1" ~/.claude/skills/clawdbot-skills-bio-ontology-mapper && rm -rf "$T"
```

Skill definition: `skills/aipoch-ai/bio-ontology-mapper-1/SKILL.md`

# Bio-Ontology Mapper
## When to Use

- Use this skill when the task is to map unstructured biomedical text to standardized ontologies (SNOMED CT, MeSH, ICD-10, LOINC, RxNorm).
- Use this skill for evidence insight tasks that require explicit assumptions, bounded scope, and a reproducible output format.
- Use this skill when you need a documented fallback path for missing inputs, execution errors, or partial evidence.
## Key Features

- Scope-focused workflow aligned to: map unstructured biomedical text to standardized ontologies (SNOMED CT, MeSH, ICD-10, LOINC, RxNorm).
- Packaged executable path: `scripts/main.py`
- Reference material available in `references/` for task-specific guidance.
- Structured execution path designed to keep outputs consistent and reviewable.
## Dependencies

- Python 3.10+: repository baseline for current packaged skills.
- `dataclasses`: declared in `requirements.txt` (version unspecified).
- `difflib`: declared in `requirements.txt` (version unspecified).
## Example Usage

```shell
cd "20260318/scientific-skills/Evidence Insight/bio-ontology-mapper"
python -m py_compile scripts/main.py
python scripts/main.py --help
```
Example run plan:
- Confirm the user input, output path, and any required config values.
- Edit the in-file `CONFIG` block or documented parameters if the script uses fixed settings.
- Run `python scripts/main.py` with the validated inputs.
- Review the generated output and return the final artifact with any assumptions called out.
## Implementation Details

See the Workflow section below for related details.
- Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
- Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
- Primary implementation surface: `scripts/main.py`
- Reference guidance: `references/` contains supporting rules, prompts, or checklists.
- Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
- Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.
## Quick Check

Use this command to verify that the packaged script entry point can be parsed before deeper execution.

```shell
python -m py_compile scripts/main.py
```
## Audit-Ready Commands

Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.

```shell
python -m py_compile scripts/main.py
python scripts/main.py --help
```
## Workflow
- Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
- Validate that the request matches the documented scope and stop early if the task would require unsupported assumptions.
- Use the packaged script path or the documented reasoning path with only the inputs that are actually available.
- Return a structured result that separates assumptions, deliverables, risks, and unresolved items.
- If execution fails or inputs are incomplete, switch to the fallback path and state exactly what blocked full completion.
## Overview
Biomedical terminology normalization tool that maps free-text clinical and scientific concepts to standardized ontologies for semantic interoperability and data harmonization.
Key Capabilities:
- Multi-Ontology Support: SNOMED CT, MeSH, ICD-10, LOINC, RxNorm
- Entity Extraction: NER for diseases, symptoms, procedures, drugs
- Fuzzy Matching: Handle typos, abbreviations, and synonyms
- Confidence Scoring: Reliability metrics for each mapping
- Batch Processing: Normalize large datasets efficiently
- Cross-Mapping: Translate between ontology systems
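The fuzzy-matching capability above can be sketched with the stdlib `difflib` module listed under Dependencies. The `PREFERRED_TERMS` table and `fuzzy_match` helper below are illustrative assumptions, not the skill's actual data or API; the real skill loads its terminology from reference files.

```python
from difflib import SequenceMatcher

# Illustrative synonym table (term -> SNOMED CT concept ID); the packaged
# skill loads real terminology from references/, not a hard-coded dict.
PREFERRED_TERMS = {
    "myocardial infarction": "22298006",
    "hypertensive disorder": "38341003",
    "diabetes mellitus": "73211009",
}

def fuzzy_match(term, threshold=0.7):
    """Return the best (preferred_term, concept_id, score), or None below threshold."""
    best = None
    for preferred, concept_id in PREFERRED_TERMS.items():
        score = SequenceMatcher(None, term.lower(), preferred).ratio()
        if best is None or score > best[2]:
            best = (preferred, concept_id, score)
    return best if best and best[2] >= threshold else None

match = fuzzy_match("myocardal infraction")  # tolerates the typos
```

A real scorer would combine this string similarity with context and frequency signals, as described under Confidence Scoring below.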
## Core Capabilities

### 1. Entity Recognition and Mapping

Extract and map biomedical entities to ontologies:
```python
from scripts.mapper import BioOntologyMapper

mapper = BioOntologyMapper()

# Map clinical text
result = mapper.map_text(
    text="Patient has diabetes and hypertension, taking metformin",
    ontologies=["snomed", "mesh", "rxnorm"],
    confidence_threshold=0.7
)

for entity in result.entities:
    print(f"{entity.text} → {entity.concept_id} ({entity.ontology})")
    print(f"  Preferred: {entity.preferred_term}")
    print(f"  Confidence: {entity.confidence:.2f}")
```
Supported Ontologies:
| Ontology | Domain | Use Case |
|---|---|---|
| SNOMED CT | Clinical | EHR interoperability |
| MeSH | Literature | PubMed indexing |
| ICD-10 | Billing | Diagnosis codes |
| LOINC | Labs | Test result standardization |
| RxNorm | Drugs | Medication normalization |
| HGNC | Genes | Gene name standardization |
### 2. Cross-Ontology Translation
Map concepts between different ontologies:
```python
# Cross-map SNOMED to ICD-10
translation = mapper.cross_map(
    source_id="22298006",  # SNOMED: Myocardial infarction
    source_ontology="snomed",
    target_ontology="icd10"
)

print(f"ICD-10: {translation.target_id} - {translation.target_term}")
# Output: I21.9 - Acute myocardial infarction, unspecified
```
Cross-Mapping Coverage:
- SNOMED CT ↔ ICD-10-CM (clinical modifications)
- MeSH ↔ SNOMED CT (literature to clinical)
- RxNorm ↔ ATC (drug classifications)
- LOINC ↔ SNOMED (lab to clinical)
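A minimal sketch of how a crosswalk lookup of this kind might work, assuming a dict-backed mapping table. `CROSSWALK` and this standalone `cross_map` are hypothetical; the packaged `cross_mapper.py` is the real implementation, and real mappings come from the bundled reference crosswalks.

```python
# Hypothetical crosswalk keyed by (source_ontology, target_ontology);
# the single entry mirrors the SNOMED -> ICD-10 example above.
CROSSWALK = {
    ("snomed", "icd10"): {
        "22298006": ("I21.9", "Acute myocardial infarction, unspecified"),
    },
    ("loinc", "snomed"): {},  # populated from reference data in practice
}

def cross_map(source_id, source_ontology, target_ontology):
    """Look up source_id in the (source, target) crosswalk; None if unmapped."""
    table = CROSSWALK.get((source_ontology, target_ontology), {})
    return table.get(source_id)
```

Unmapped concepts return `None` rather than a guess, matching the skill's rule of logging unmapped terms instead of fabricating codes.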
### 3. Batch Normalization
Process large datasets:
```python
# Batch process CSV
results = mapper.batch_map(
    input_file="clinical_terms.csv",
    text_column="diagnosis_description",
    ontologies=["snomed", "icd10"],
    output_format="csv",
    max_workers=4
)

# Results include:
# - Original term
# - Mapped concept ID
# - Confidence score
# - Alternative mappings (if ambiguous)
```
Performance:
- ~100 terms/second (with caching)
- ~20 terms/second (API lookup)
- Parallel processing for large datasets
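The parallel path can be sketched with the stdlib `concurrent.futures`; `map_one` is a hypothetical stand-in for the real per-term lookup inside `batch_processor.py`, and the `max_workers` parameter mirrors the `batch_map` call above.

```python
from concurrent.futures import ThreadPoolExecutor

def map_one(term):
    # Stand-in for the real per-term ontology lookup; a real
    # implementation would return the mapped concept and score.
    return {"term": term, "concept_id": None, "confidence": 0.0}

def batch_map(terms, max_workers=4):
    """Map terms in parallel; pool.map preserves input order in the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(map_one, terms))

rows = batch_map(["diabetes", "hypertension", "metformin"])
```

Threads suit this workload because per-term API lookups are I/O-bound; a process pool would only help if scoring became CPU-bound.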
### 4. Confidence Scoring and Validation
Assess mapping reliability:
```python
scoring = mapper.score_mapping(
    term="heart attack",
    candidate="22298006",  # Myocardial infarction
    factors=["string_similarity", "context_match", "frequency"]
)

print(f"Overall confidence: {scoring.confidence:.2f}")
print(f"Breakdown: {scoring.factors}")
```
Scoring Factors:
- String similarity: Levenshtein distance, n-grams
- Context match: Surrounding words alignment
- Frequency: Common usage in corpus
- Semantic similarity: Vector embeddings
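One plausible way to combine these factors into a single confidence value is a weighted average. The weights below are assumptions for illustration only; the packaged `scorer.py` defines the actual scheme.

```python
# Hypothetical factor weights; scorer.py defines the real values.
WEIGHTS = {"string_similarity": 0.5, "context_match": 0.3, "frequency": 0.2}

def combine_confidence(factors):
    """Weighted average of factor scores, ignoring factors with no weight."""
    total = sum(WEIGHTS[name] * score
                for name, score in factors.items() if name in WEIGHTS)
    weight = sum(WEIGHTS[name] for name in factors if name in WEIGHTS)
    return total / weight if weight else 0.0

conf = combine_confidence(
    {"string_similarity": 0.92, "context_match": 0.80, "frequency": 0.60}
)
# 0.5*0.92 + 0.3*0.80 + 0.2*0.60 = 0.82
```

Normalizing by the sum of applied weights keeps the score in [0, 1] even when some factors (e.g. semantic similarity) are unavailable for a given term.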
## Quality Checklist
Pre-Mapping:
- Text preprocessed (lowercase, punctuation handled)
- Abbreviations expanded where possible
- Language identified (multilingual support)
During Mapping:
- Confidence threshold appropriate (>0.7 for clinical)
- Multiple candidates considered for ambiguous terms
- Context used for disambiguation
Post-Mapping:
- Low-confidence mappings flagged for review
- Unmapped terms logged
- CRITICAL: Clinical expert validation for high-stakes use
Before Production:
- Mapping accuracy validated on gold standard
- False positive rate acceptable (<5%)
- Recall acceptable for use case (>90%)
- API rate limits respected
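The accuracy and recall checks above can be sketched as a comparison against a gold-standard dict of term to concept ID. The function name and data shapes are assumed for illustration; the packaged `validator.py` is the real implementation.

```python
def validate_mappings(predictions, gold):
    """Score predicted concept IDs against a gold standard.

    Both arguments are dicts mapping term -> concept_id.
    """
    correct = sum(1 for term, cid in predictions.items()
                  if gold.get(term) == cid)
    precision = correct / len(predictions) if predictions else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

gold = {"heart attack": "22298006", "high blood pressure": "38341003"}
pred = {"heart attack": "22298006",
        "high blood pressure": "99999999"}  # deliberately wrong mapping
p, r = validate_mappings(pred, gold)  # p == 0.5, r == 0.5
```

With these numbers, meeting the targets above means precision must exceed 0.95 (false positive rate under 5%) and recall must exceed 0.90 on the gold standard before production use.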
## Common Pitfalls

Mapping Errors:
- ❌ Abbreviation ambiguity → "MI" = Myocardial infarction OR Michigan
  - ✅ Use context; flag for manual review
- ❌ Outdated terms → Old terminology not in current ontology
  - ✅ Use historical mappings; update terminology
- ❌ False confidence → High score for wrong concept
  - ✅ Always review top-3 candidates

Technical Issues:
- ❌ API failures → No local fallback
  - ✅ Implement caching; use local reference files
- ❌ Version mismatches → Different ontology versions
  - ✅ Track ontology version used
- ❌ PHI exposure → Sending patient data to external APIs
  - ✅ De-identify before API calls; use local processing when possible
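The caching fix for API failures can be sketched as a small read-through cache backed by a local JSON file. `CACHE_PATH`, `cached_lookup`, and the error handling are assumptions for illustration; the packaged `caching.py` is the real implementation.

```python
import json
from pathlib import Path

CACHE_PATH = Path("ontology_cache.json")  # illustrative location

def cached_lookup(term, remote_lookup):
    """Serve from the local cache when possible; otherwise call remote_lookup
    and persist the result. Returns None if the API fails with nothing cached."""
    cache = json.loads(CACHE_PATH.read_text()) if CACHE_PATH.exists() else {}
    if term in cache:
        return cache[term]
    try:
        result = remote_lookup(term)
    except OSError:
        return None  # API unreachable and no cached value: flag for review
    cache[term] = result
    CACHE_PATH.write_text(json.dumps(cache))
    return result
```

Beyond resilience, keeping lookups local also reduces how often terms leave the machine, which supports the PHI guidance above.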
## References

Available in the `references/` directory:
- `snomed_ct_guide.md`: SNOMED CT hierarchy and relationships
- `mesh_structure.md`: MeSH tree structure and qualifiers
- `ontology_mappings.md`: Crosswalks between systems
- `nlp_best_practices.md`: Biomedical text processing
- `api_documentation.md`: External service integration
- `validation_datasets.md`: Gold standard test sets
## Scripts

Located in the `scripts/` directory:
- `main.py`: CLI interface for mapping
- `mapper.py`: Core ontology mapping engine
- `extractor.py`: Named entity recognition
- `cross_mapper.py`: Ontology-to-ontology translation
- `scorer.py`: Confidence calculation
- `batch_processor.py`: Large dataset handling
- `validator.py`: Mapping quality checks
- `caching.py`: Local storage for frequent lookups
## Limitations
- Ambiguity: Many-to-many mappings common; context required
- Coverage: Rare diseases and new concepts may not be in ontologies
- Versioning: Ontology updates can change mappings over time
- Language: Best support for English; other languages limited
- Real-time: Not suitable for time-critical clinical applications
- API Dependency: Requires internet for most lookups (caching helps)
⚠️ Critical: Ontology mapping is for research and data integration, not clinical decision-making. Always validate mappings with domain experts before use in patient care contexts. Never process PHI without appropriate de-identification and compliance measures.
## Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| | str | Required | Single term to map |
| | str | Required | Input file path |
| | str | Required | Output file path |
| | str | 'both' | |
| | float | 0.7 | |
| | str | 'json' | |
| | str | Required | Use UMLS/MeSH APIs |
| | str | Required | |
## Output Requirements
Every final response should make these items explicit when they are relevant:
- Objective or requested deliverable
- Inputs used and assumptions introduced
- Workflow or decision path
- Core result, recommendation, or artifact
- Constraints, risks, caveats, or validation needs
- Unresolved items and next-step checks
## Error Handling

- If required inputs are missing, state exactly which fields are missing and request only the minimum additional information.
- If the task goes outside the documented scope, stop instead of guessing or silently widening the assignment.
- If `scripts/main.py` fails, report the failure point, summarize what still can be completed safely, and provide a manual fallback.
- Do not fabricate files, citations, data, search results, or execution outcomes.
## Input Validation

This skill accepts requests that match the documented purpose of `bio-ontology-mapper` and include enough context to complete the workflow safely.

Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond: "bio-ontology-mapper only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill."
## Response Template
Use the following fixed structure for non-trivial requests:
- Objective
- Inputs Received
- Assumptions
- Workflow
- Deliverable
- Risks and Limits
- Next Checks
If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.