Awesome-offsec-claude threat-model-generator
Generate feature-grounded threat scenarios and executable security test cases with prioritized risk rationale.
Install
Source · Clone the upstream repo:

```shell
git clone https://github.com/1ikeadragon/awesome-offsec-claude
```

Claude Code · Install into ~/.claude/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/1ikeadragon/awesome-offsec-claude "$T" && mkdir -p ~/.claude/skills && cp -r "$T/threat-model-generator" ~/.claude/skills/1ikeadragon-awesome-offsec-claude-threat-model-generator && rm -rf "$T"
```
manifest:
threat-model-generator/SKILL.md
Threat Model Generator
Purpose
Translate architecture and feature behavior into an actionable security test backlog.
Inputs
- system_description
- feature_inventory
- data_flows
- roles_permissions
- deployment_context (optional)
Modeling Workflow
Phase 1: Asset and Boundary Mapping
- Identify sensitive assets and trust boundaries.
- Map data ingress, processing, and egress points.
- Identify privileged operations and administrative paths.
Phase 2: Threat Enumeration
- Enumerate attacker objectives per feature.
- Enumerate abuse primitives per parameter and state transition.
- Enumerate systemic risks from shared components.
Phase 3: Scenario Construction
- Build concrete scenarios with attacker preconditions.
- Define target operation and exploit mechanism.
- Define success signal and defensive expectation.
Phase 4: Prioritization
- Score by likelihood, impact, and detectability.
- Tag fast-win vs deep-investigation cases.
- Highlight assumptions and missing architecture details.
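The scenario elements required by Phase 3 can be sketched as a simple record. This is an illustrative Python shape, not part of the skill's contract; the field names are assumptions chosen to mirror the phase's bullets.

```python
from dataclasses import dataclass

# Hypothetical record for one Phase 3 scenario; field names mirror the
# workflow bullets (preconditions, target, mechanism, signal, expectation).
@dataclass
class ThreatScenario:
    attacker_preconditions: list   # e.g. ["authenticated as basic user"]
    target_operation: str          # the operation under attack
    exploit_mechanism: str         # how the abuse works
    success_signal: str            # observable proof of exploitation
    defensive_expectation: str     # control that should have stopped it

    def is_complete(self) -> bool:
        # A scenario is actionable only when every element is filled in.
        return bool(self.attacker_preconditions and self.target_operation
                    and self.exploit_mechanism and self.success_signal
                    and self.defensive_expectation)

scenario = ThreatScenario(
    attacker_preconditions=["authenticated as basic user"],
    target_operation="GET /api/invoices/{id}",
    exploit_mechanism="increment id to read another tenant's invoice",
    success_signal="HTTP 200 containing foreign tenant data",
    defensive_expectation="object-level authorization rejects the request",
)
print(scenario.is_complete())
```

A scenario failing `is_complete()` belongs in `unknowns`, not in the test backlog.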
Mandatory Coverage Areas
- authentication and session handling
- authorization and object access
- injection and parser abuse
- workflow/state manipulation
- file and data handling
- configuration and deployment weaknesses
Output Contract
```json
{
  "threat_scenarios": [],
  "test_cases": [],
  "risk_priorities": [],
  "assumptions": [],
  "unknowns": []
}
```
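A consumer of the skill's output can check the contract shape before processing it. A minimal sketch, assuming only the five keys shown above; the helper names are illustrative.

```python
import json

# The five top-level keys from the skill's output contract.
REQUIRED_KEYS = ["threat_scenarios", "test_cases", "risk_priorities",
                 "assumptions", "unknowns"]

def empty_contract() -> dict:
    return {key: [] for key in REQUIRED_KEYS}

def validate_contract(payload: dict) -> bool:
    # Every required key must be present and hold a list.
    return all(isinstance(payload.get(key), list) for key in REQUIRED_KEYS)

contract = empty_contract()
contract["assumptions"].append("single-tenant deployment assumed")
print(json.dumps(contract))
print(validate_contract(contract))
```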
Constraints
- Ground scenarios in provided architecture.
- Flag unsupported assumptions explicitly.
Quality Checklist
- Each scenario maps to a real asset.
- Test cases are executable.
- Prioritization rationale is clear.
Detailed Operator Notes
Consistency Rules
- Normalize terminology before scoring or chaining.
- Separate prerequisite uncertainty from exploit uncertainty.
- Treat environmental blockers independently from mitigation strength.
Risk Scoring Inputs
- attacker starting privilege
- required chain length
- probability of reliable execution
- blast radius if successful
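One way to combine the four inputs into a comparable number is sketched below. The weighting is an assumption for illustration, not a formula prescribed by the skill; calibrate it to your own environment.

```python
def risk_score(start_privilege: int, chain_length: int,
               reliability: float, blast_radius: int) -> float:
    """Illustrative combination of the four scoring inputs.

    start_privilege: 0 (unauthenticated) .. 3 (admin) -- lower means a
        broader attacker population, so lower scores worse.
    chain_length: number of links the attacker must complete.
    reliability: 0.0-1.0 probability of reliable execution.
    blast_radius: 1 (single record) .. 5 (whole platform).
    """
    # Fewer prerequisites and shorter chains raise likelihood.
    likelihood = reliability * (1.0 / (1 + start_privilege)) * (1.0 / chain_length)
    return round(likelihood * blast_radius, 3)

# An unauthenticated single-step reliable bug with broad impact outranks
# a privileged, fragile three-link chain with narrow impact.
print(risk_score(0, 1, 0.9, 5))
print(risk_score(2, 3, 0.4, 2))
```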
Prioritization Output
- immediate: low-effort, high-impact chains/findings.
- next: moderate effort with clear payoff.
- watch: plausible but currently low confidence.
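Bucketing findings into the three tiers can be sketched as a small function. Thresholds and the effort labels are assumptions; tune them to whatever scoring scale you use.

```python
def prioritize(score: float, effort: str) -> str:
    """Map a risk score plus an effort estimate to the three tiers.

    Thresholds are illustrative, not prescribed by the skill.
    """
    if effort == "low" and score >= 3.0:
        return "immediate"   # low-effort, high-impact
    if score >= 1.0:
        return "next"        # moderate effort with clear payoff
    return "watch"           # plausible but currently low confidence

print(prioritize(4.5, "low"))
print(prioritize(2.0, "high"))
print(prioritize(0.3, "low"))
```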
Reporting Rules
- Include one-line executive summary per chain/finding.
- Include exact blocker needed to move an inconclusive item forward.
- Include confidence rationale in plain technical language.
Quick Scenarios
Scenario A: Access Check Placement
- Trace data fetch point.
- Trace policy check point.
- Determine whether check occurs before use.
- Identify alternate path without check.
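The check-placement bug Scenario A hunts for looks like this in miniature. The handlers, data, and ownership model are hypothetical, purely to show the fetch-before-check ordering.

```python
# Hypothetical invoice store; ownership data is illustrative.
INVOICES = {1: {"owner": "alice", "total": 120}, 2: {"owner": "bob", "total": 75}}

def get_invoice_vulnerable(user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]     # fetch point
    return invoice                     # BUG: object used with no policy check

def get_invoice_fixed(user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]     # fetch point
    if invoice["owner"] != user:       # policy check occurs before use
        raise PermissionError("not the owner")
    return invoice

# alice reads bob's invoice through the unchecked path.
print(get_invoice_vulnerable("alice", 2))
```

Scenario A's last step (alternate path without the check) is exactly the case where `get_invoice_vulnerable` ships alongside `get_invoice_fixed`.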
Scenario B: Sanitization Mismatch
- Map sink execution context.
- Map sanitizer type and location.
- Validate context compatibility.
- Find branch that bypasses sanitizer.
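A context mismatch of the kind Scenario B targets can be shown with a sanitizer built for one sink applied to another. The query builder is a deliberately unsafe illustration, not a pattern to copy.

```python
import html

def sanitize_for_html(value: str) -> str:
    # Escapes &, <, > only -- adequate for HTML text nodes, useless for SQL.
    return html.escape(value, quote=False)

def build_query(name: str) -> str:
    # BUG: context mismatch -- HTML escaping leaves SQL quote characters
    # intact, so string-literal breakout still works.
    return "SELECT * FROM users WHERE name = '" + sanitize_for_html(name) + "'"

payload = "x' OR '1'='1"
print(build_query(payload))   # quotes survive; the injection is unaffected
```

The sanitizer "works" for its original context (angle brackets are neutralized), which is what makes the mismatch easy to miss in review.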
Scenario C: Adjacent Pattern Sweep
- Identify sibling handlers/sinks.
- Compare guard and validation parity.
- Flag inconsistent control patterns.
- Prioritize high-impact siblings.
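The parity comparison in Scenario C can be sketched as a sweep over handler metadata. The routes and control flags are hypothetical; in practice this data comes from code review or route inspection.

```python
# Hypothetical control inventory per sibling handler.
handlers = {
    "/admin/users":   {"auth_check": True,  "input_validated": True},
    "/admin/exports": {"auth_check": True,  "input_validated": False},
    "/admin/backups": {"auth_check": False, "input_validated": True},
}

def sweep(handlers: dict) -> list:
    # Flag any sibling missing a control that at least one peer enforces.
    expected = {ctrl for meta in handlers.values() for ctrl in meta}
    return sorted(path for path, meta in handlers.items()
                  if not all(meta.get(ctrl, False) for ctrl in expected))

print(sweep(handlers))   # siblings with inconsistent control patterns
```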
Conditional Decision Matrix
| Condition | Action | Evidence Requirement |
|---|---|---|
| Finding signal unstable | downgrade confidence and add retest plan | repeated run variance log |
| Chain link missing prerequisite | split chain and mark dependency blocker | prerequisite graph |
| Impact appears low in isolation | evaluate chain amplification paths | chain-level impact narrative |
| Mitigation claim is partial | verify alternate path and state variants | mitigation bypass check |
| Environment blocker dominates | classify inconclusive with unblock requests | blocker evidence |
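The matrix above is mechanical enough to encode as a lookup. The condition keys are invented shorthand; the action and evidence strings mirror the table.

```python
# Decision matrix as a lookup table; keys are illustrative shorthand.
DECISIONS = {
    "signal_unstable":      ("downgrade confidence and add retest plan",
                             "repeated run variance log"),
    "missing_prerequisite": ("split chain and mark dependency blocker",
                             "prerequisite graph"),
    "low_isolated_impact":  ("evaluate chain amplification paths",
                             "chain-level impact narrative"),
    "partial_mitigation":   ("verify alternate path and state variants",
                             "mitigation bypass check"),
    "environment_blocker":  ("classify inconclusive with unblock requests",
                             "blocker evidence"),
}

def decide(condition: str) -> dict:
    action, evidence = DECISIONS[condition]
    return {"action": action, "evidence_required": evidence}

print(decide("partial_mitigation"))
```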
Advanced Coverage Extensions
- Add attack-path branching for multiple privilege starting points.
- Add defender-detection assumptions and likely monitoring signals.
- Add rollback/cleanup verification after proof steps.
- Add business-impact mapping to concrete assets and workflows.
- Add reproducibility score based on run-to-run consistency.
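One simple reproducibility score, sketched under the assumption that each proof attempt yields a discrete outcome, is the fraction of runs agreeing with the majority result. The method and scale are illustrative, not prescribed by the skill.

```python
from collections import Counter

def reproducibility_score(run_outcomes: list) -> float:
    """Fraction of runs matching the majority outcome.

    run_outcomes: per-run results, e.g. ["exploit", "exploit", "timeout"].
    Returns 0.0 for no runs, 1.0 for perfectly consistent runs.
    """
    if not run_outcomes:
        return 0.0
    _, majority_count = Counter(run_outcomes).most_common(1)[0]
    return round(majority_count / len(run_outcomes), 2)

print(reproducibility_score(["exploit"] * 4 + ["timeout"]))
```

A low score feeds the "finding signal unstable" row of the decision matrix above: downgrade confidence and plan a retest.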