install
source · Clone the upstream repo
git clone https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills-
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills- "$T" && mkdir -p ~/.claude/skills && cp -r "$T/Skills/Mathematics/Linear_Algebra" ~/.claude/skills/mdbabumiamssm-llms-universal-life-science-and-clinical-skills-linear-algebra && rm -rf "$T"
manifest:
Skills/Mathematics/Linear_Algebra/SKILL.md
<!--
# COPYRIGHT NOTICE
# This file is part of the "Universal Biomedical Skills" project.
# Copyright (c) 2026 MD BABU MIA, PhD <md.babu.mia@mssm.edu>
# All Rights Reserved.
#
# This code is proprietary and confidential.
# Unauthorized copying of this file, via any medium is strictly prohibited.
#
# Provenance: Authenticated by MD BABU MIA
-->
---
name: tensor-operations
description: Tensor Operations
measurable_outcome: Execute the skill workflow successfully, producing valid output within 15 minutes.
allowed-tools:
  - read_file
  - run_shell_command
---
Tensor Operations
Fundamental linear algebra operations for understanding Transformers and attention mechanisms.
When to Use This Skill
- When you need to compute attention mechanisms by hand.
- For educational purposes, to understand self-attention step by step.
- To create custom masks for sequence modeling.
Core Capabilities
- Scaled Dot-Product Attention: softmax(QK^T / sqrt(d_k)) V
- Causal Masking: Create lower-triangular masks for GPT-style generation.
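The scaled dot-product formula above can be sketched in NumPy. This is a minimal illustration, not the repository's actual implementation; the function name `scaled_dot_product_attention` and the example matrices are assumptions for demonstration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Compute softmax(Q K^T / sqrt(d_k)) V with an optional boolean mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (seq_q, seq_k) similarity scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # masked positions get ~0 weight
    # Numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy 2-token example (hypothetical inputs)
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0, 2.0], [3.0, 4.0]])
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over key positions, so the output is a convex combination of the value rows.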
Workflow
- Input: Query, Key, Value matrices.
- Execute: Run the script.
- Output: Attention scores and weighted values.
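The causal-masking step of the workflow can be sketched as follows, assuming a NumPy implementation (`np.tril` builds the lower-triangular mask; the repository's script may differ):

```python
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular boolean mask: position i may attend only to j <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Apply the mask to uniform (all-zero) scores for a 3-token sequence
scores = np.zeros((3, 3))
scores = np.where(causal_mask(3), scores, -np.inf)  # block future positions
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)
# Row i averages uniformly over positions 0..i:
# [[1, 0, 0], [1/2, 1/2, 0], [1/3, 1/3, 1/3]]
```

Because `exp(-inf)` is exactly 0, masked positions contribute nothing to the softmax, which is what enforces left-to-right (GPT-style) generation.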
Example Usage
User: "Compute attention for these matrices."
Agent Action:
<!-- AUTHOR_SIGNATURE: 9a7f3c2e-MD-BABU-MIA-2026-MSSM-SECURE -->
python3 Skills/Mathematics/Linear_Algebra/tensor_operations.py