OpenClaw-Medical-Skills radiomics-pathomics-fusion-agent


Install

Source · Clone the upstream repo:

    git clone https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills

Claude Code · Install into ~/.claude/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/radiomics-pathomics-fusion-agent" ~/.claude/skills/freedomintelligence-openclaw-medical-skills-radiomics-pathomics-fusion-agent && rm -rf "$T"

OpenClaw · Install into ~/.openclaw/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/radiomics-pathomics-fusion-agent" ~/.openclaw/skills/freedomintelligence-openclaw-medical-skills-radiomics-pathomics-fusion-agent && rm -rf "$T"

Manifest: skills/radiomics-pathomics-fusion-agent/SKILL.md

Source content
<!-- # COPYRIGHT NOTICE # This file is part of the "Universal Biomedical Skills" project. # Copyright (c) 2026 MD BABU MIA, PhD <md.babu.mia@mssm.edu> # All Rights Reserved. # # This code is proprietary and confidential. # Unauthorized copying of this file, via any medium is strictly prohibited. # # Provenance: Authenticated by MD BABU MIA -->

name: 'radiomics-pathomics-fusion-agent'
description: 'AI-powered multimodal fusion of radiology (CT/MRI/PET) and pathology (H&E/IHC) imaging with clinical and genomic data for comprehensive cancer diagnostics and treatment prediction.'
measurable_outcome: Execute skill workflow successfully with valid output within 15 minutes.
allowed-tools:

  • read_file
  • run_shell_command

Radiomics Pathomics Fusion Agent

The Radiomics Pathomics Fusion Agent integrates multimodal medical imaging data from radiology (CT, MRI, PET) and digital pathology (H&E, IHC whole slide images) with clinical and genomic data using deep learning fusion architectures. It enables comprehensive cancer phenotyping, treatment response prediction, and prognostic modeling.

When to Use This Skill

  • When predicting treatment response using multimodal imaging.
  • For comprehensive tumor phenotyping combining macro (radiology) and micro (pathology) views.
  • To identify imaging biomarkers correlated with genomic features.
  • When building prognostic models from combined radiology-pathology data.
  • For an AI-powered second opinion that integrates all available imaging modalities.

Core Capabilities

  1. Cross-Modal Fusion: Integrate radiology and pathology features using attention mechanisms.

  2. Radiomics Extraction: Compute 3D texture, shape, and intensity features from CT/MRI.

  3. Pathomics Extraction: Extract histopathological features from whole slide images (WSI).

  4. Clinical Integration: Combine imaging with clinical variables and genomics.

  5. Treatment Response Prediction: Predict response to chemotherapy and immunotherapy.

  6. Survival Prediction: Multimodal prognostic modeling.

Supported Imaging Modalities

| Modality | Features Extracted | Resolution |
|---|---|---|
| CT | Texture, shape, density | Volumetric 3D |
| MRI | Multi-sequence, perfusion | Volumetric 3D |
| PET | SUV, metabolic features | Volumetric 3D |
| H&E WSI | Nuclear, tissue architecture | 40x magnification |
| IHC WSI | Marker quantification | 20-40x |
| Multiplexed IF | Spatial protein patterns | Subcellular |

Fusion Architectures

| Architecture | Method | Strengths |
|---|---|---|
| Early Fusion | Concatenate features | Simple, baseline |
| Late Fusion | Combine predictions | Modular |
| Attention Fusion | Cross-modal attention | Interpretable |
| Multimodal Transformer | Self-attention across modalities | State of the art |
| Graph Fusion | GNN for relationships | Spatial awareness |
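To make the attention-fusion idea concrete, here is a minimal NumPy sketch of scaled dot-product cross-modal attention, in which radiology feature tokens attend over pathology patch embeddings. All shapes, names, and the random inputs are illustrative; this is not the skill's actual implementation.

```python
import numpy as np

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention: one modality's tokens attend over the other's."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over pathology patches
    return weights @ values, weights

rng = np.random.default_rng(0)
rad_tokens = rng.normal(size=(4, 8))     # 4 radiology feature tokens, dim 8
path_patches = rng.normal(size=(16, 8))  # 16 pathology patch embeddings, dim 8
fused, attn = cross_modal_attention(rad_tokens, path_patches, path_patches)
# fused: (4, 8) radiology tokens enriched with pathology context
# attn:  (4, 16) patch-importance weights per radiology token (interpretable)
```

The attention weights are what make this family of architectures interpretable: each row of `attn` shows which pathology patches drove a given radiology token's fused representation.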

Workflow

  1. Input: CT/MRI DICOM, pathology WSI, clinical data, optional genomics.

  2. Segmentation: Tumor ROI extraction from radiology.

  3. Radiomics: Extract 3D radiomic features.

  4. Pathomics: Extract histopathology features via foundation models.

  5. Fusion: Multimodal feature integration.

  6. Prediction: Treatment response, survival, biomarker prediction.

  7. Output: Integrated predictions, attention maps, explanations.

Example Usage

User: "Predict immunotherapy response for this lung cancer patient using their CT scan and biopsy pathology."

Agent Action:

python3 Skills/Oncology/Radiomics_Pathomics_Fusion_Agent/fusion_predict.py \
    --ct_dicom ct_scan/ \
    --wsi_path biopsy.svs \
    --clinical_data patient_clinical.json \
    --genomic_data tumor_wes.vcf \
    --task immunotherapy_response \
    --cancer_type nsclc \
    --fusion_method attention \
    --output fusion_prediction/

Radiomic Feature Categories

| Category | Features | Count |
|---|---|---|
| Shape | Volume, surface area, sphericity | 14 |
| First-Order | Mean, variance, skewness, entropy | 18 |
| GLCM | Contrast, correlation, homogeneity | 24 |
| GLRLM | Run length, gray level emphasis | 16 |
| GLSZM | Zone size, gray level variance | 16 |
| GLDM | Dependence features | 14 |
| NGTDM | Texture features | 5 |
| Total | | ~107 |
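For intuition, the first-order row can be computed directly from ROI intensities. The toy NumPy version below is illustrative only; it does not reproduce PyRadiomics' exact definitions, which depend on binning and extraction settings.

```python
import numpy as np

def first_order_features(roi):
    """Toy first-order radiomic features over an ROI intensity array."""
    flat = roi.ravel().astype(float)
    mean = flat.mean()
    variance = flat.var()
    # small epsilon guards against division by zero for constant ROIs
    skewness = ((flat - mean) ** 3).mean() / (variance ** 1.5 + 1e-12)
    hist, _ = np.histogram(flat, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()  # Shannon entropy in bits
    return {"mean": mean, "variance": variance,
            "skewness": skewness, "entropy": entropy}

# Synthetic CT-like ROI: 32x32x8 voxels around 100 HU
roi = np.random.default_rng(1).normal(loc=100, scale=15, size=(32, 32, 8))
feats = first_order_features(roi)
```

In practice these features come from PyRadiomics (listed under Prerequisites), which handles resampling, discretization, and IBSI-style conventions that this sketch omits.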

Pathomics Feature Categories

| Category | Source | Features |
|---|---|---|
| Nuclear | Segmentation | Size, shape, texture |
| Cellular | Detection | Density, clustering |
| Tissue | Architecture | Glandular, stromal ratios |
| Foundation Model | CONCH, TITAN, UNI | Deep embeddings |
| Spatial | Graph analysis | Neighborhood patterns |

Output Components

| Output | Description | Format |
|---|---|---|
| Prediction | Response/outcome probability | .json |
| Confidence | Prediction uncertainty | .json |
| Attention Maps | Cross-modal importance | .npy, .png |
| Feature Importance | Shapley values | .csv |
| ROI Highlights | Predictive regions | DICOM-SEG, GeoJSON |
| Report | Clinical summary | .pdf |

Clinical Applications

| Application | Modalities Used | Performance |
|---|---|---|
| NSCLC Immunotherapy | CT + H&E | AUC 0.82-0.88 |
| HCC Survival | MRI + H&E | C-index 0.78 |
| Breast Neoadjuvant | MRI + H&E | AUC 0.85 |
| HNSCC HPV/Response | CT + H&E | AUC 0.89 |
| CRC MSI Prediction | CT + H&E | AUC 0.86 |

AI/ML Components

Radiomics Pipeline:

  • PyRadiomics for feature extraction
  • 3D-CNN for learned features
  • Transformer for volumetric analysis

Pathomics Pipeline:

  • Foundation models (CONCH, UNI, TITAN)
  • MIL (Multiple Instance Learning) for WSI
  • Graph networks for spatial patterns
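The MIL step in the pathomics pipeline is often attention-based pooling: the model learns a scalar importance score per patch and aggregates patch embeddings into one slide-level embedding. A minimal NumPy sketch, with random stand-ins for the learned projections (shapes and names are illustrative, not the skill's actual code):

```python
import numpy as np

def attention_mil_pool(patches, v, w):
    """Attention-MIL pooling: weight WSI patch embeddings by learned importance."""
    scores = np.tanh(patches @ v) @ w  # (n_patches,) raw attention scores
    a = np.exp(scores - scores.max())
    a /= a.sum()                       # softmax -> per-patch importance weights
    slide_emb = a @ patches            # (dim,) slide-level embedding
    return slide_emb, a

rng = np.random.default_rng(2)
patches = rng.normal(size=(256, 64))  # 256 patch embeddings from a WSI encoder
v = rng.normal(size=(64, 32))         # "learned" projection (random stand-in)
w = rng.normal(size=32)               # "learned" score vector (random stand-in)
slide_emb, a = attention_mil_pool(patches, v, w)
```

The weights `a` double as a heatmap over the slide, which feeds directly into the ROI Highlights output described above.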

Fusion Models:

  • Cross-attention transformers
  • Multimodal variational autoencoders
  • Contrastive learning for alignment
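The contrastive-alignment idea can be sketched as a symmetric InfoNCE-style loss over paired radiology/pathology embeddings from the same patient: matched pairs are pulled together, mismatched pairs pushed apart. This is a toy NumPy version under that assumption; the temperature and shapes are illustrative.

```python
import numpy as np

def contrastive_alignment_loss(rad_emb, path_emb, temperature=0.1):
    """Symmetric InfoNCE: paired rad/path embeddings attract, mismatches repel."""
    rad = rad_emb / np.linalg.norm(rad_emb, axis=1, keepdims=True)
    path = path_emb / np.linalg.norm(path_emb, axis=1, keepdims=True)
    logits = rad @ path.T / temperature  # (batch, batch) cosine similarities
    idx = np.arange(len(logits))

    def xent(l):
        # cross-entropy with the diagonal (true pairs) as targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(3)
rad = rng.normal(size=(8, 16))
aligned_loss = contrastive_alignment_loss(rad, rad)        # perfectly paired
shuffled_loss = contrastive_alignment_loss(rad, rad[::-1]) # mismatched pairs
```

Correctly paired embeddings yield a much lower loss than shuffled ones, which is the training signal that aligns the two modality encoders.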

Prerequisites

  • Python 3.10+
  • PyRadiomics, SimpleITK
  • OpenSlide, HistoEncoder
  • PyTorch, transformers
  • CONCH/TITAN model weights
  • GPU with 16GB+ VRAM

Related Skills

  • Pathology_AI/CONCH_Agent - Pathology foundation model
  • Radiology_AI agents - Modality-specific analysis
  • Pan_Cancer_MultiOmics_Agent - Genomic integration
  • TMB_Estimation_Agent - Tumor mutational burden

Multimodal Integration Strategies

| Strategy | Description | Use Case |
|---|---|---|
| Feature-Level | Combine extracted features | Limited data |
| Embedding-Level | Fuse latent representations | Moderate data |
| Decision-Level | Ensemble predictions | Interpretability |
| End-to-End | Joint training | Large data |
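The decision-level strategy reduces to combining per-modality probabilities, which is why it stays interpretable even with little data. A minimal sketch (the example weights are made up for illustration):

```python
import numpy as np

def decision_level_fusion(modality_probs, weights=None):
    """Late fusion: weighted average of per-modality response probabilities."""
    probs = np.asarray(modality_probs, dtype=float)
    if weights is None:
        weights = np.ones_like(probs)  # default: equal trust in every modality
    weights = np.asarray(weights, dtype=float)
    return float(weights @ probs / weights.sum())

# CT model predicts 0.7, WSI model predicts 0.9; trust pathology slightly more:
fused = decision_level_fusion([0.7, 0.9], weights=[0.4, 0.6])  # 0.82
```

Because each modality's prediction survives intact, a clinician can always ask which model drove the fused score, at the cost of ignoring cross-modal interactions that embedding-level or end-to-end fusion could exploit.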

Special Considerations

  1. Data Alignment: Ensure imaging from same timepoint
  2. Missing Modalities: Handle incomplete multimodal data
  3. Class Imbalance: Balance training across outcomes
  4. Interpretability: Attention maps for clinical trust
  5. Validation: External multi-site validation essential

Quality Control

| QC Check | Threshold | Action |
|---|---|---|
| CT coverage | >90% tumor | Rescan if needed |
| WSI quality | Blur score <X | Re-scan slide |
| Segmentation | Dice >0.85 | Manual review |
| Feature stability | ICC >0.8 | Robust features only |
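The feature-stability check (ICC >0.8) is typically computed from test-retest or re-segmentation feature extractions. Below is a toy two-way ICC, roughly the single-measure consistency form ICC(3,1), on a subjects-by-repeats matrix; a production pipeline should use a vetted statistics package rather than this sketch.

```python
import numpy as np

def icc_consistency(x):
    """Two-way, single-measure consistency ICC for a (subjects x repeats) matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-subject
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-repeat
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Identical feature values across two extractions of 3 patients -> ICC of 1:
perfect = icc_consistency([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

Features whose ICC falls below the 0.8 threshold under re-extraction are dropped before model training, leaving only the "robust features" the table refers to.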

Regulatory Considerations

| Aspect | Status |
|---|---|
| FDA Clearance | Individual modality tools cleared |
| Multimodal Fusion | Research use only (RUO) |
| Clinical Integration | PACS/LIS integration pathways |
| Explainability | Required for clinical adoption |

Author

AI Group - Biomedical AI Platform

<!-- AUTHOR_SIGNATURE: 9a7f3c2e-MD-BABU-MIA-2026-MSSM-SECURE -->