Mycelium threat-model

Use this skill to conduct STRIDE threat modeling for a system or feature design.

Install

Source · clone the upstream repo:

git clone https://github.com/haabe/mycelium

Claude Code · install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/haabe/mycelium "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/threat-model" ~/.claude/skills/haabe-mycelium-threat-model && rm -rf "$T"

Manifest: .claude/skills/threat-model/SKILL.md

Source content

Threat Model Skill

STRIDE threat modeling for secure design.

Workflow

  1. Define scope: What system/feature/component is being modeled?

  2. Draw a data flow diagram (textual):

    • Identify actors (users, external systems)
    • Identify processes (services, functions)
    • Identify data stores (databases, caches, files)
    • Identify data flows (what moves between components)
    • Identify trust boundaries (where trust level changes)
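
    An illustrative sketch for a hypothetical login feature, using [actors], (processes), and {data stores}:

    [User] --credentials--> (Auth Service) --query/results--> {User DB}
    (Auth Service) --session cookie--> [User]
    Trust boundaries: public internet -> Auth Service; Auth Service -> User DB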
  3. For each component and data flow, assess STRIDE threats:

    | Threat | Description | Question to Ask |
    |--------|-------------|-----------------|
    | Spoofing | Impersonating something or someone | Can an attacker pretend to be this user/system? |
    | Tampering | Modifying data or code | Can data be changed in transit or at rest? |
    | Repudiation | Claiming to not have done something | Can a user deny an action without accountability? |
    | Info Disclosure | Exposing data to unauthorized parties | Can sensitive data leak? |
    | Denial of Service | Making the system unavailable | Can this component be overwhelmed? |
    | Elevation of Privilege | Gaining unauthorized access | Can a user escalate their permissions? |
  4. For each identified threat:

    • Severity: Critical / High / Medium / Low
    • Likelihood: High / Medium / Low
    • Existing mitigations (if any)
    • Recommended mitigations
    • Residual risk after mitigation
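
    For example, a hypothetical entry for the login flow sketched above:

    • Threat (T1, Tampering/Info Disclosure): session cookie sent over plain HTTP
    • Severity: High; Likelihood: Medium
    • Existing mitigations: none
    • Recommended mitigations: enforce TLS; set Secure and HttpOnly on the cookie
    • Residual risk: Low (token theft remains possible on a compromised client)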

    For AI-powered systems, extend STRIDE with AI-specific threat dimensions:

    • Autonomy risk: Can the AI take actions beyond its intended scope?
    • Oversight gap: Is human-in-the-loop oversight meaningful? (Test Authority/Time/Understanding per Bannerman's triad -- see security-trust.md)
    • Feedback poisoning: Can adversarial inputs degrade the system over time?
    • Opacity risk: Can decisions be explained to affected parties?
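
    For example, for a hypothetical AI support agent: autonomy risk if it can issue refunds directly; oversight gap if human approvers lack the authority, time, or understanding to veto it meaningfully; feedback poisoning if user ratings feed back into fine-tuning; opacity risk if a refund denial cannot be explained to the customer.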
  5. Output:

    ## Threat Model: [System/Feature]
    
    ### Data Flow
    [textual diagram]
    
    ### Trust Boundaries
    - [boundary 1]: [what changes]
    - [boundary 2]: [what changes]
    
    ### Threats
    | ID | Component | STRIDE | Threat | Severity | Likelihood | Mitigation |
    |----|-----------|--------|--------|----------|-----------|------------|
    | T1 | ... | S | ... | ... | ... | ... |
    
    ### Priority Actions
    1. [highest priority mitigation]
    2. [next priority]
    3. [next priority]
    

OWASP Top 10 for LLM Applications (2025)

For AI-powered products (`product_type: ai_tool`, or any product using LLM components), extend the STRIDE analysis with LLM-specific threats:

| # | Threat | Description |
|---|--------|-------------|
| LLM01 | Prompt Injection | Manipulating the model via crafted inputs (direct or indirect) |
| LLM02 | Sensitive Information Disclosure | Model leaking training data, PII, or system prompts |
| LLM03 | Supply Chain Vulnerabilities | Compromised model weights, training data, or plugins |
| LLM04 | Data and Model Poisoning | Corrupting training/fine-tuning data to alter behavior |
| LLM05 | Improper Output Handling | Trusting LLM output without validation (enables injection downstream) |
| LLM06 | Excessive Agency | Granting the LLM too many permissions, functions, or too much autonomy |
| LLM07 | System Prompt Leakage | Extraction of system-level instructions via adversarial prompts |
| LLM08 | Vector and Embedding Weaknesses | Manipulating RAG pipelines via poisoned embeddings |
| LLM09 | Misinformation | Model generating false but plausible content (hallucination in high-stakes contexts) |
| LLM10 | Unbounded Consumption | Resource exhaustion via expensive queries, denial-of-wallet attacks |

Source: OWASP Top 10 for LLM Applications v2025.1 (genai.owasp.org). Updated from v1.1 (2023) — new entries: System Prompt Leakage (LLM07), Vector and Embedding Weaknesses (LLM08), Misinformation (LLM09), Unbounded Consumption (LLM10).

For each LLM component in the threat model, assess all 10 threats. Use alongside STRIDE — STRIDE covers system-level threats, OWASP LLM covers model-level threats.
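
To make LLM05 and LLM06 concrete, here is a minimal Python sketch (illustrative only; the action names and handlers are hypothetical): model output is treated as untrusted data and escaped before rendering, and any model-proposed action is checked against an allowlist before dispatch.

    import html

    # Hypothetical handlers; a real system would route to its own functions.
    HANDLERS = {
        "search": lambda query: f"search results for {query!r}",
        "summarize": lambda text: f"summary of {len(text)} characters",
    }

    def render_reply(llm_text: str) -> str:
        """LLM05: treat model output as untrusted; escape before embedding in HTML."""
        return f"<p>{html.escape(llm_text)}</p>"

    def dispatch_action(action: str, payload: str) -> str:
        """LLM06: run only allowlisted actions the model proposes; refuse the rest."""
        if action not in HANDLERS:
            raise PermissionError(f"model proposed disallowed action: {action!r}")
        return HANDLERS[action](payload)

The same pattern generalizes: any sink that consumes model output (shell, SQL, browser) should apply the same validation it would to raw user input.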

Theory Citations

  • STRIDE: Microsoft threat modeling methodology (Shostack)
  • OWASP Top 10:2025: Web application security risks
  • OWASP Top 10 for LLM Applications v2025: AI/LLM-specific security risks