# Install

## Source

Clone the upstream repo:

```shell
git clone https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills
```

## Claude Code

Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/multimodal-medical-imaging" \
       ~/.claude/skills/freedomintelligence-openclaw-medical-skills-multimodal-medical-imaging \
  && rm -rf "$T"
```

## OpenClaw

Install into `~/.openclaw/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/FreedomIntelligence/OpenClaw-Medical-Skills "$T" \
  && mkdir -p ~/.openclaw/skills \
  && cp -r "$T/skills/multimodal-medical-imaging" \
       ~/.openclaw/skills/freedomintelligence-openclaw-medical-skills-multimodal-medical-imaging \
  && rm -rf "$T"
```
## Manifest

`skills/multimodal-medical-imaging/SKILL.md` — source content:
<!--
# COPYRIGHT NOTICE
# This file is part of the "Universal Biomedical Skills" project.
# Copyright (c) 2026 MD BABU MIA, PhD <md.babu.mia@mssm.edu>
# All Rights Reserved.
#
# This code is proprietary and confidential.
# Unauthorized copying of this file, via any medium is strictly prohibited.
#
# Provenance: Authenticated by MD BABU MIA
-->
---
name: 'multimodal-medical-imaging'
description: 'Analyzes medical images (X-ray, MRI, CT) using multimodal LLMs to identify anomalies and generate reports.'
measurable_outcome: Execute skill workflow successfully with valid output within 15 minutes.
allowed-tools:
  - read_file
  - run_shell_command
---
# Multimodal Medical Imaging Analysis

The Multimodal Medical Imaging Analysis Skill leverages state-of-the-art Vision-Language Models (VLMs), such as Gemini 1.5 Pro and GPT-4o, to interpret medical imagery alongside clinical text.
## When to Use This Skill
- When you need a preliminary screening of medical images.
- When correlating visual findings with textual clinical notes.
- To generate structured reports (DICOM-SR-like) from raw images.
## Core Capabilities
- Anomaly Detection: Identify potential pathologies in X-rays, CTs, etc.
- Report Generation: Draft radiology reports in standard formats.
- VQA (Visual Question Answering): Answer specific questions about an image (e.g., "Is there a fracture in the left femur?").
## Workflow

1. Input: Provide an image file path (JPG, PNG) and a specific clinical question or a "generate report" instruction.
2. Analyze: The agent sends the image and prompt to the VLM.
3. Output: The agent returns a JSON object with findings, confidence scores, and reasoning.
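The workflow's output can be sketched concretely. Note that the exact field names below (`findings`, `confidence`, `reasoning`) are assumptions inferred from the description above, not a documented schema:

```python
import json

def make_report(findings, confidence, reasoning):
    """Assemble the JSON output shape described above (hypothetical field names)."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return {"findings": findings, "confidence": confidence, "reasoning": reasoning}

report = make_report(
    findings=["right lower lobe consolidation"],
    confidence=0.82,
    reasoning="Opacity with air bronchograms in the right lower lobe.",
)
print(json.dumps(report, indent=2))
```

Keeping the result a plain JSON-serializable dict lets downstream tooling validate or store the report without extra dependencies.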
## Example Usage

User: "Analyze this chest X-ray for pneumonia."

Agent Action:

<!-- AUTHOR_SIGNATURE: 9a7f3c2e-MD-BABU-MIA-2026-MSSM-SECURE -->

```shell
python3 Skills/Clinical/Medical_Imaging/Multimodal_Analysis/multimodal_agent.py \
  --image "/path/to/cxr.jpg" \
  --prompt "Check for signs of pneumonia and consolidation."
```
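For scripted use, the same invocation can be built as an argument list and handed to `subprocess.run`. A minimal sketch: the wrapper function name is ours, while the script path and flags come from the example above.

```python
import shlex

def build_command(image_path, prompt):
    """Build the CLI call from the example (script path and flags as shown above)."""
    return [
        "python3",
        "Skills/Clinical/Medical_Imaging/Multimodal_Analysis/multimodal_agent.py",
        "--image", image_path,
        "--prompt", prompt,
    ]

cmd = build_command("/path/to/cxr.jpg", "Check for signs of pneumonia and consolidation.")
print(shlex.join(cmd))  # the list form can be passed directly to subprocess.run(cmd)
```

Passing an argument list rather than a shell string avoids quoting issues when prompts contain spaces or special characters.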