LLMs-Universal-Life-Science-and-Clinical-Skills- · Linear_Algebra

Install

source · Clone the upstream repo:

git clone https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills-

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills- "$T" && mkdir -p ~/.claude/skills && cp -r "$T/Skills/Mathematics/Linear_Algebra" ~/.claude/skills/mdbabumiamssm-llms-universal-life-science-and-clinical-skills-linear-algebra && rm -rf "$T"

manifest: Skills/Mathematics/Linear_Algebra/SKILL.md

Source content
<!--
# COPYRIGHT NOTICE
# This file is part of the "Universal Biomedical Skills" project.
# Copyright (c) 2026 MD BABU MIA, PhD <md.babu.mia@mssm.edu>
# All Rights Reserved.
#
# This code is proprietary and confidential.
# Unauthorized copying of this file, via any medium, is strictly prohibited.
#
# Provenance: Authenticated by MD BABU MIA
-->

name: 'tensor-operations'
description: 'Tensor Operations'
measurable_outcome: Execute skill workflow successfully with valid output within 15 minutes.
allowed-tools:

  • read_file
  • run_shell_command

Tensor Operations

Fundamental linear algebra operations for understanding Transformers and attention mechanisms.

When to Use This Skill

  • When you need to manually compute attention mechanisms.
  • For educational purposes, to understand self-attention.
  • To create custom masks for sequence modeling.

Core Capabilities

  1. Scaled Dot-Product Attention:
     Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
  2. Causal Masking: Create lower-triangular masks for GPT-style autoregressive generation.
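
Both capabilities can be sketched in a few lines of NumPy. The sketch below is illustrative only: the function names (`scaled_dot_product_attention`, `causal_mask`) are assumptions of this document, not the API of the skill's tensor_operations.py.

```python
# Illustrative sketch of scaled dot-product attention with an optional
# causal mask. Function names are hypothetical, not the skill's own API.
import numpy as np

def causal_mask(n):
    """Lower-triangular boolean mask: position i may attend only to j <= i."""
    return np.tril(np.ones((n, n), dtype=bool))

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (n_q, n_k) similarity scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # masked positions -> ~zero weight
    # Numerically stable softmax over the key axis.
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d_k = 4, 8
    Q, K, V = (rng.standard_normal((n, d_k)) for _ in range(3))
    out, w = scaled_dot_product_attention(Q, K, V, mask=causal_mask(n))
    print("attention weights:\n", w.round(3))  # rows sum to 1; upper triangle ~0
    print("output shape:", out.shape)          # (4, 8)
```

The large negative fill value (-1e9) is a common device for driving masked positions to near-zero weight after the softmax, which is what makes the mask "causal": each position only attends to itself and earlier positions.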

Workflow

  1. Input: Query (Q), Key (K), and Value (V) matrices.
  2. Execute: Run the tensor_operations.py script.
  3. Output: Attention scores and the attention-weighted values.
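
For concreteness, here is a hand-checkable instance of these three steps. The input matrices are hypothetical examples chosen so the intermediate results can be verified on paper; they are not outputs of the skill itself.

```python
# Worked 2x2 example of the workflow: input Q, K, V -> attention
# scores -> weighted values. All numbers are easy to verify by hand.
import numpy as np

Q = np.array([[1.0, 0.0], [0.0, 1.0]])   # step 1: query matrix
K = np.array([[1.0, 0.0], [0.0, 1.0]])   #         key matrix
V = np.array([[1.0, 2.0], [3.0, 4.0]])   #         value matrix

d_k = Q.shape[-1]
scores = Q @ K.T / np.sqrt(d_k)           # step 2: scaled scores [[0.707, 0], [0, 0.707]]
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)  # softmax; each row sums to 1
output = weights @ V                      # step 3: attention-weighted values

print(weights.round(2))   # [[0.67, 0.33], [0.33, 0.67]]
print(output.round(2))    # [[1.66, 2.66], [2.34, 3.34]]
```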

Example Usage

User: "Compute attention for these matrices."

Agent Action:

python3 Skills/Mathematics/Linear_Algebra/tensor_operations.py
<!-- AUTHOR_SIGNATURE: 9a7f3c2e-MD-BABU-MIA-2026-MSSM-SECURE -->