Medical-research-skills cross-disciplinary-bridge-finder
Use when identifying collaboration opportunities across fields, finding experts in complementary disciplines, translating methodologies between scientific domains, or building interdisciplinary research teams. Identifies synergies between scientific disciplines, matches researchers with complementary expertise, and facilitates cross-domain collaborations. Supports interdisciplinary grant applications and innovative research team formation.
git clone https://github.com/aipoch/medical-research-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/aipoch/medical-research-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/scientific-skills/Evidence Insight/cross-disciplinary-bridge-finder" ~/.claude/skills/aipoch-medical-research-skills-cross-disciplinary-bridge-finder && rm -rf "$T"
scientific-skills/Evidence Insight/cross-disciplinary-bridge-finder/SKILL.md
Cross-Disciplinary Research Collaboration Finder
When to Use
- Use this skill when identifying collaboration opportunities across fields, finding experts in complementary disciplines, translating methodologies between scientific domains, or building interdisciplinary research teams.
- Use this skill for evidence insight tasks that require explicit assumptions, bounded scope, and a reproducible output format.
- Use this skill when you need a documented fallback path for missing inputs, execution errors, or partial evidence.
Key Features
- Scope-focused workflow aligned to the documented purpose: identifying collaboration opportunities across fields, finding complementary experts, translating methodologies between domains, and building interdisciplinary research teams.
- Packaged executable path(s): scripts/main.py
- Reference material available in references/ for task-specific guidance.
- Structured execution path designed to keep outputs consistent and reviewable.
Dependencies
- Python 3.10+. Repository baseline for current packaged skills.
- dataclasses (version unspecified). Declared in requirements.txt.
- networkx (version unspecified). Declared in requirements.txt.
- numpy (version unspecified). Declared in requirements.txt.
- sklearn (version unspecified). Declared in requirements.txt.
- networkx >=2.8. Declared in scripts/requirements.txt.
- numpy >=1.21. Declared in scripts/requirements.txt.
- pandas >=1.3. Declared in scripts/requirements.txt.
- scikit-learn >=1.0. Declared in scripts/requirements.txt.
- matplotlib >=3.5. Declared in scripts/requirements.txt.
- seaborn >=0.11. Declared in scripts/requirements.txt.
- openai >=1.0. Declared in scripts/requirements.txt.
Example Usage
cd "20260318/scientific-skills/Evidence Insight/cross-disciplinary-bridge-finder"
python -m py_compile scripts/main.py
python scripts/main.py --help
Example run plan:
- Confirm the user input, output path, and any required config values.
- Edit the in-file CONFIG block or documented parameters if the script uses fixed settings.
- Run python scripts/main.py with the validated inputs.
- Review the generated output and return the final artifact with any assumptions called out.
Implementation Details
See the Workflow section below for related details.
- Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
- Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
- Primary implementation surface: scripts/main.py
- Reference guidance: references/ contains supporting rules, prompts, or checklists.
- Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
- Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.
Quick Check
Use this command to verify that the packaged script entry point can be parsed before deeper execution.
python -m py_compile scripts/main.py
Audit-Ready Commands
Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.
python -m py_compile scripts/main.py
python scripts/main.py --help
Workflow
- Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
- Validate that the request matches the documented scope and stop early if the task would require unsupported assumptions.
- Use the packaged script path or the documented reasoning path with only the inputs that are actually available.
- Return a structured result that separates assumptions, deliverables, risks, and unresolved items.
- If execution fails or inputs are incomplete, switch to the fallback path and state exactly what blocked full completion.
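The structured result and fallback path described in the last two workflow steps can be sketched as a small container. The class and field names below are illustrative, not part of the packaged script's API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SkillResult:
    """Bounded deliverable separating assumptions, deliverables, risks, and unresolved items."""
    objective: str
    assumptions: List[str] = field(default_factory=list)
    deliverables: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    unresolved: List[str] = field(default_factory=list)
    blocked_by: str = ""  # set on the fallback path when full completion is not possible

result = SkillResult(
    objective="Match ML researchers with immunology collaborators",
    assumptions=["Co-authorship data covers 2015 onward"],
    deliverables=["Ranked list of 5 candidate collaborators"],
    unresolved=["Institutional conflict-of-interest checks"],
)
print(result.blocked_by == "")  # empty string means the primary path completed
```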
When to Use This Skill
- identifying collaboration opportunities across fields
- finding experts in complementary disciplines
- translating methodologies between scientific domains
- building interdisciplinary research teams
- discovering funding for interdisciplinary projects
- mapping knowledge transfer pathways
Quick Start
from scripts.interdisciplinary import CollaborationFinder

finder = CollaborationFinder()

# Find collaborators in a different field
collaborators = finder.find_experts(
    my_expertise="machine_learning",
    target_field="immunology",
    collaboration_type="co_authorship",
    min_publications=10,
    h_index_threshold=15,
)
if not collaborators:
    print("No collaborators found — try lowering min_publications or h_index_threshold.")
else:
    # Validate quality before proceeding: only consider complementarity_score > 0.7
    qualified = [e for e in collaborators if e.complementarity_score > 0.7]
    print(f"Found {len(collaborators)} candidates; {len(qualified)} meet quality threshold (score > 0.7):")
    for expert in qualified[:5]:
        print(f"  - {expert.name} ({expert.institution})")
        print(f"    Research: {expert.research_focus}")
        print(f"    Complementarity score: {expert.complementarity_score}")

# Identify transferable methods
methods = finder.identify_transferable_methods(
    from_field="physics",
    to_field="biology",
    application_area="systems_modeling",
)
if not methods:
    print("No transferable methods found — consider broadening the application_area.")
else:
    # Validate applicability before proceeding: review transfer_potential
    for method in methods:
        print(f"Method: {method.name}")
        print(f"  Success in source field: {method.success_rate}")
        print(f"  Application potential: {method.transfer_potential}")
        if method.transfer_potential < 0.6:
            print("  ⚠ Low transfer potential — consider a different application_area.")

# Find interdisciplinary funding
grants = finder.find_interdisciplinary_funding(
    fields=["AI", "medicine", "ethics"],
    funder_types=["NIH", "NSF", "private_foundation"],
    deadline_within_months=6,
)
if not grants:
    print("No grants found — try extending deadline_within_months or broadening funder_types.")

# Generate collaboration proposal outline
proposal_outline = finder.generate_collaboration_proposal(
    partner_expertise="clinical_trial_design",
    my_expertise="data_science",
    research_question="precision_medicine",
)
Command Line Usage
python scripts/main.py --my-field machine_learning --target-field immunology --find-collaborators --output matches.json
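The schema of the matches.json written by the command above is not documented here. Assuming it is a JSON list of expert records with fields like those printed in Quick Start (an assumption, not a documented contract), a post-run filter might look like this:

```python
import json

# Hypothetical matches.json content; the real field names may differ.
raw = '''[
  {"name": "A. Rivera", "institution": "Example University", "complementarity_score": 0.82},
  {"name": "B. Chen", "institution": "Sample Institute", "complementarity_score": 0.55}
]'''

experts = json.loads(raw)  # in practice: json.load(open("matches.json"))
# Apply the same quality threshold used in Quick Start
qualified = [e for e in experts if e["complementarity_score"] > 0.7]
for e in qualified:
    print(f'{e["name"]} ({e["institution"]}): {e["complementarity_score"]}')
```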
Handling Poor Results
- Empty collaborator list: Lower min_publications or h_index_threshold; broaden collaboration_type.
- No transferable methods: Widen application_area to a higher-level domain (e.g., "modeling" instead of "systems_modeling").
- No funding results: Extend deadline_within_months or add more entries to funder_types.
- Weak proposal outline: Ensure research_question is a descriptive string rather than a short keyword.
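The relaxation advice above can be automated as a progressive-retry loop. A stub stands in for finder.find_experts below, since the real CollaborationFinder API is only sketched in Quick Start:

```python
def find_with_relaxation(search, attempts):
    """Try progressively looser keyword arguments until the search returns results."""
    for kwargs in attempts:
        results = search(**kwargs)
        if results:
            return results, kwargs
    return [], attempts[-1]

# Stub standing in for finder.find_experts; matches only at loose thresholds.
def fake_find_experts(min_publications, h_index_threshold):
    return ["expert"] if min_publications <= 5 and h_index_threshold <= 10 else []

attempts = [
    {"min_publications": 10, "h_index_threshold": 15},  # strict first
    {"min_publications": 5, "h_index_threshold": 10},   # relaxed fallback
]
results, used = find_with_relaxation(fake_find_experts, attempts)
print(results, used)  # the strict attempt fails, the relaxed one succeeds
```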
References
- Comprehensive user guide: references/guide.md
- Working code examples: references/examples/
- Complete API documentation: references/api-docs/
Output Requirements
Every final response should make these items explicit when they are relevant:
- Objective or requested deliverable
- Inputs used and assumptions introduced
- Workflow or decision path
- Core result, recommendation, or artifact
- Constraints, risks, caveats, or validation needs
- Unresolved items and next-step checks
Error Handling
- If required inputs are missing, state exactly which fields are missing and request only the minimum additional information.
- If the task goes outside the documented scope, stop instead of guessing or silently widening the assignment.
- If scripts/main.py fails, report the failure point, summarize what still can be completed safely, and provide a manual fallback.
- Do not fabricate files, citations, data, search results, or execution outcomes.
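The first rule, stating exactly which fields are missing, can be sketched as a small validator. The required-field names here are illustrative, not taken from the packaged script:

```python
REQUIRED = ("my_field", "target_field", "output_path")  # illustrative required inputs

def missing_inputs(request: dict) -> list:
    """Return exactly the required fields absent from the request, per the rule above."""
    return [f for f in REQUIRED if not request.get(f)]

request = {"my_field": "machine_learning", "target_field": "immunology"}
gaps = missing_inputs(request)
if gaps:
    # Request only the minimum additional information
    print(f"Missing required inputs: {', '.join(gaps)}. Please provide only these fields.")
```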
Input Validation
This skill accepts requests that match the documented purpose of
cross-disciplinary-bridge-finder and include enough context to complete the workflow safely.
Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:
cross-disciplinary-bridge-finder only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.
References
- references/audit-reference.md - Supported scope, audit commands, and fallback boundaries
Response Template
Use the following fixed structure for non-trivial requests:
- Objective
- Inputs Received
- Assumptions
- Workflow
- Deliverable
- Risks and Limits
- Next Checks
If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.