OpenClaw-Medical-Skills multimodal-radpath-fusion-agent


Install

Source · Clone the upstream repo:

    git clone https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills

Claude Code · Install into ~/.claude/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/multimodal-radpath-fusion-agent" ~/.claude/skills/freedomintelligence-openclaw-medical-skills-multimodal-radpath-fusion-agent && rm -rf "$T"

OpenClaw · Install into ~/.openclaw/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/multimodal-radpath-fusion-agent" ~/.openclaw/skills/freedomintelligence-openclaw-medical-skills-multimodal-radpath-fusion-agent && rm -rf "$T"

Manifest: skills/multimodal-radpath-fusion-agent/SKILL.md

Source Content
<!--
# COPYRIGHT NOTICE
# This file is part of the "Universal Biomedical Skills" project.
# Copyright (c) 2026 MD BABU MIA, PhD <md.babu.mia@mssm.edu>
# All Rights Reserved.
#
# This code is proprietary and confidential.
# Unauthorized copying of this file, via any medium is strictly prohibited.
#
# Provenance: Authenticated by MD BABU MIA
-->

name: 'multimodal-radpath-fusion-agent'
description: 'AI-powered multimodal diagnostic fusion integrating radiology imaging (CT/MRI/PET), digital pathology (WSI), genomics, and clinical data for comprehensive cancer diagnosis and treatment planning.'
measurable_outcome: Execute the skill workflow successfully with valid output within 15 minutes.
allowed-tools:

  • read_file
  • run_shell_command

Multimodal Radiology-Pathology Fusion Agent

The Multimodal Radpath Fusion Agent integrates diverse clinical data sources, including radiology imaging (CT, MRI, PET), digital pathology whole-slide images, genomic profiling, and electronic health records, using state-of-the-art multimodal deep learning for comprehensive cancer diagnosis, treatment response prediction, and prognostic modeling.

When to Use This Skill

  • When integrating radiology and pathology for unified tumor assessment.
  • For treatment response prediction using multimodal imaging.
  • To predict molecular features from imaging (imaging genomics).
  • When building comprehensive prognostic models.
  • For tumor board decision support with AI second opinion.

Core Capabilities

  1. Radiology-Pathology Fusion: Integrate macro and microscopic views.

  2. Imaging-Genomics Correlation: Predict molecular features from imaging.

  3. Treatment Response Prediction: Multi-modal response modeling.

  4. Survival Prediction: Comprehensive prognostic models.

  5. Tumor Characterization: Integrate phenotype from all modalities.

  6. Clinical Decision Support: AI-assisted tumor board recommendations.

Supported Modalities

| Modality | Data Type | Features Extracted |
| --- | --- | --- |
| CT | DICOM volumes | Radiomics, deep features |
| MRI | Multi-sequence DICOM | Texture, perfusion, ADC |
| PET | SUV maps | Metabolic features |
| H&E WSI | SVS/NDPI images | Histology, spatial patterns |
| IHC | Stained slides | Biomarker quantification |
| WES/WGS | VCF | Mutations, TMB, signatures |
| RNA-seq | Expression matrix | Pathway signatures |
| Clinical | EHR data | Demographics, labs, history |
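Before fusion, each supported modality can be reduced to a uniform record. A minimal stdlib sketch; the `ModalitySample` type and its field names are hypothetical, not the skill's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical record tying one raw input to its extracted feature vector.
@dataclass
class ModalitySample:
    modality: str          # e.g. "CT", "H&E WSI", "RNA-seq"
    source_path: str       # DICOM dir, .svs file, VCF, expression matrix
    features: list = field(default_factory=list)  # modality-specific features

# One patient = a dict of available modalities; absent ones simply have no key.
patient = {
    "CT": ModalitySample("CT", "ct_chest/", features=[0.12, 0.80]),
    "H&E WSI": ModalitySample("H&E WSI", "biopsy.svs", features=[0.55, 0.31]),
}

available = sorted(patient)  # which modalities this case actually has
```

Keeping missing modalities as absent keys (rather than empty placeholders) lets downstream fusion code branch explicitly on availability.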

Fusion Architectures

| Architecture | Method | Best For |
| --- | --- | --- |
| AMRI-Net | Attention fusion | Radiology focus |
| PathOmCLIP | Contrastive learning | Path-omics alignment |
| SMuRF | Swin Transformer | Multi-region integration |
| MultiModal Transformer | Self-attention | All modalities |
| GNN Fusion | Graph networks | Spatial relationships |

Workflow

  1. Data Ingestion: Collect radiology, pathology, genomics, clinical.

  2. Preprocessing: Standardize each modality.

  3. Feature Extraction: Extract modality-specific features.

  4. Alignment: Temporal and spatial alignment of data.

  5. Fusion: Multi-modal deep learning integration.

  6. Prediction: Diagnosis, response, survival prediction.

  7. Output: Integrated report with explanations.
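The seven steps above can be sketched as a function pipeline over a plain case dictionary. Every function and feature here is a toy stand-in for the real modality-specific code, included only to show the shape of the flow:

```python
def ingest(case):
    # 1. Data Ingestion: attach a raw handle per modality path.
    case["raw"] = dict(case["inputs"])
    return case

def preprocess(case):
    # 2-3. Standardize and extract a toy scalar feature per modality.
    case["features"] = {m: [len(p) * 0.1] for m, p in case["raw"].items()}
    return case

def fuse(case):
    # 4-5. Align and fuse: here, a simple mean over modality features.
    vals = [v[0] for v in case["features"].values()]
    case["fused"] = sum(vals) / len(vals)
    return case

def predict(case):
    # 6-7. Threshold the fused score into a toy call plus a report dict.
    case["report"] = {"risk": "high" if case["fused"] > 1.0 else "low",
                      "score": round(case["fused"], 3)}
    return case

case = {"inputs": {"CT": "ct_chest/", "WSI": "biopsy.svs"}}
for step in (ingest, preprocess, fuse, predict):
    case = step(case)
```

Each stage reads and writes the same dictionary, so stages can be swapped or instrumented independently.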

Example Usage

User: "Integrate this lung cancer patient's CT scan, biopsy pathology, and genomic profiling for comprehensive assessment and treatment recommendation."

Agent Action:

python3 Skills/Clinical/Multimodal_Radpath_Fusion_Agent/multimodal_fusion.py \
    --ct_dicom ct_chest/ \
    --pet_dicom pet_scan/ \
    --wsi_path biopsy.svs \
    --genomic_vcf tumor_wes.vcf \
    --rna_expression expression.tsv \
    --clinical_ehr patient_data.json \
    --task treatment_recommendation \
    --cancer_type nsclc \
    --output integrated_assessment/

Output Components

| Output | Description | Format |
| --- | --- | --- |
| Integrated Diagnosis | Multi-modal classification | .json |
| Treatment Prediction | Response probabilities | .json |
| Survival Estimate | Prognostic curves | .json, .png |
| Feature Attribution | Modality importance | .json |
| Attention Maps | Visual explanations | .npy, .png |
| Clinical Report | Summary for tumor board | .pdf |
| Confidence Scores | Prediction uncertainty | .json |
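The JSON-format outputs above might be assembled into a single assessment document before being written out. A sketch with invented example values and key names, not the skill's actual output schema:

```python
import json

# Hypothetical assembly of the JSON-format outputs listed above.
assessment = {
    "integrated_diagnosis": {"label": "NSCLC, adenocarcinoma", "probability": 0.91},
    "treatment_prediction": {"immunotherapy_response": 0.72, "chemo_response": 0.44},
    "feature_attribution": {"CT": 0.35, "H&E WSI": 0.40, "genomics": 0.25},
    "confidence_scores": {"overall": 0.81},
}

# Serialize deterministically, then round-trip to confirm it is valid JSON.
report_json = json.dumps(assessment, indent=2, sort_keys=True)
parsed = json.loads(report_json)
```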

Clinical Applications

| Application | Modalities | Performance |
| --- | --- | --- |
| NSCLC IO Response | CT + H&E + PD-L1 | AUC 0.85 |
| HCC Treatment Selection | MRI + H&E + AFP | AUC 0.82 |
| Breast Neoadjuvant | MRI + H&E + HER2 | AUC 0.88 |
| HNSCC HPV/Prognosis | CT + H&E + p16 | AUC 0.89 |
| GBM Survival | MRI + H&E + MGMT | C-index 0.76 |
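The AUC figures above are rank statistics: the probability that a random positive case outscores a random negative one. For reference, a small stdlib function computing AUC directly from scores and binary labels (the data below is synthetic):

```python
def auc(scores, labels):
    """Probability that a random positive outranks a random negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three positives, one negative; one positive is misranked below the negative.
value = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 1])  # -> 2/3
```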

Imaging-Genomics Predictions

| Molecular Feature | Imaging Modality | Accuracy |
| --- | --- | --- |
| EGFR mutation | CT | 75-80% |
| KRAS mutation | CT | 70-75% |
| PD-L1 expression | CT + H&E | 80-85% |
| MSI status | H&E | 85-90% |
| TMB level | H&E | 75-80% |
| HRD status | H&E | 78-83% |

AI/ML Components

Feature Extraction:

  • 3D ResNet for CT/MRI volumes
  • Vision Transformers for WSI
  • Foundation models (CONCH, UNI)

Fusion Methods:

  • Cross-attention mechanisms
  • Multimodal transformers
  • Contrastive multimodal learning

Prediction Models:

  • Multi-task learning
  • Survival analysis (DeepSurv)
  • Uncertainty quantification
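As a concrete illustration of the cross-attention fusion method listed above, here is a minimal numpy sketch of one cross-attention step in which radiology tokens attend over pathology tokens. The token counts, embedding dimension, and the absence of learned query/key/value projections are all simplifications of a real model:

```python
import numpy as np

def cross_attention(query_feats, context_feats):
    """Scaled dot-product cross-attention: each query token (one modality)
    forms a softmax-weighted mixture of context tokens (another modality)."""
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)        # (nq, nc)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # row softmax
    return weights @ context_feats                             # (nq, d)

rng = np.random.default_rng(0)
rad = rng.standard_normal((4, 16))    # 4 radiology ROI tokens
path = rng.standard_normal((9, 16))   # 9 pathology patch tokens
fused = cross_attention(rad, path)    # radiology queries attend to pathology
```

In a full multimodal transformer this step would use learned projections and run in both directions (pathology attending to radiology as well).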

Prerequisites

  • Python 3.10+
  • PyTorch, transformers
  • SimpleITK, OpenSlide
  • Foundation model weights
  • GPU with 32GB+ VRAM (recommended)

Related Skills

  • Radiomics_Pathomics_Fusion_Agent - Imaging-specific fusion
  • Pathology_AI/CONCH_Agent - Pathology foundation model
  • Pan_Cancer_MultiOmics_Agent - Genomic integration
  • Virtual_Lab_Agent - AI research coordination

Integration with Clinical Workflow

| Integration Point | System | Purpose |
| --- | --- | --- |
| PACS | Radiology archive | Image retrieval |
| LIS | Pathology system | Slide access |
| EHR | Medical records | Clinical data |
| Tumor Board | MDT platform | Decision support |
| Reporting | Clinical reports | Documentation |

Special Considerations

  1. Data Alignment: Ensure temporal correspondence
  2. Missing Modalities: Handle incomplete multimodal data
  3. Privacy: HIPAA compliance for clinical integration
  4. Validation: Multi-site validation essential
  5. Explainability: Clinical trust requires interpretability
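One common way to handle point 2 (missing modalities) is to fuse only over the embeddings that are actually present. A stdlib sketch, with `masked_mean_fusion` as a hypothetical helper name:

```python
def masked_mean_fusion(embeddings):
    """Average per-modality embeddings of equal length, skipping missing
    (None) modalities instead of zero-filling them."""
    present = [e for e in embeddings.values() if e is not None]
    if not present:
        raise ValueError("no modalities available for this case")
    dim = len(present[0])
    return [sum(e[i] for e in present) / len(present) for i in range(dim)]

emb = {"CT": [1.0, 0.0], "WSI": [0.0, 1.0], "RNA": None}  # RNA-seq missing
fused = masked_mean_fusion(emb)  # -> [0.5, 0.5]
```

Masking matters because zero-filling a missing modality silently biases the fused vector toward zero, whereas averaging over present modalities keeps its scale comparable across complete and incomplete cases.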

Explainability Methods

| Method | Output | Purpose |
| --- | --- | --- |
| Attention Maps | Heatmaps | Important regions |
| SHAP Values | Feature importance | Modality contribution |
| GradCAM | Activation maps | Visual explanation |
| Counterfactuals | What-if analysis | Decision boundaries |
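A crude ablation-style attribution, related in spirit to the SHAP row above but far simpler, scores each modality by how much the fused prediction changes when that modality is removed. The linear scorer and its weights are toy values, not the skill's model:

```python
def score(features):
    # Toy fused scorer: weighted sum of per-modality scalar features.
    weights = {"CT": 0.5, "WSI": 0.3, "genomics": 0.2}
    return sum(weights[m] * v for m, v in features.items())

def leave_one_out_attribution(features):
    """Marginal contribution of each modality: full score minus the score
    with that single modality dropped."""
    full = score(features)
    return {m: full - score({k: v for k, v in features.items() if k != m})
            for m in features}

feats = {"CT": 1.0, "WSI": 1.0, "genomics": 1.0}
attr = leave_one_out_attribution(feats)  # per-modality contribution
```

For a linear scorer this recovers the weights exactly; for a real fusion network, leave-one-out deltas only approximate SHAP values, which average over all modality subsets.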

Quality Control

| QC Check | Threshold | Action |
| --- | --- | --- |
| Image Quality | Score >0.7 | Flag for review |
| Data Completeness | >80% fields | Proceed or wait |
| Prediction Confidence | >0.6 | Report with confidence |
| Calibration | ECE <0.1 | Trust probabilities |
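The thresholds above can be enforced as a gate before a prediction is released. A sketch using the table's values; `qc_gate`, the metric keys, and the action strings are hypothetical:

```python
def qc_gate(metrics):
    """Return the QC actions triggered by the threshold table above."""
    actions = []
    if metrics["image_quality"] <= 0.7:
        actions.append("flag for review")
    if metrics["data_completeness"] <= 0.80:
        actions.append("proceed or wait")        # incomplete multimodal record
    if metrics["prediction_confidence"] <= 0.6:
        actions.append("report with low-confidence warning")
    if metrics["ece"] >= 0.1:                    # expected calibration error
        actions.append("do not trust probabilities")
    return actions

actions = qc_gate({"image_quality": 0.9, "data_completeness": 0.75,
                   "prediction_confidence": 0.8, "ece": 0.05})
# -> ["proceed or wait"]
```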

Author

AI Group - Biomedical AI Platform

<!-- AUTHOR_SIGNATURE: 9a7f3c2e-MD-BABU-MIA-2026-MSSM-SECURE -->