TMDD - Threat Modeling Driven Development

Create and manage TMDD threat models grounded in actual codebase architecture. Use when the user wants to threat-model a system, add a feature, create security threat mappings, run tmdd commands, or work with .tmdd/ YAML files.

Install from awesome-omni-skill:

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill

# or install just this skill into ~/.claude/skills:
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/tmdd-threat-modeling" ~/.claude/skills/diegosouzapw-awesome-omni-skill-tmdd-threat-modeling && rm -rf "$T"
```
When to Use
Activate when the user asks to:
- Create, initialize, or scaffold a threat model
- Add a feature to an existing threat model
- Threat-model a codebase, service, or feature
- Run or fix `tmdd lint` errors
- Generate implementation prompts from a threat model

Also auto-activates when editing `.tmdd/**/*.yaml` files.
Core Principle: Architecture-First Threat Modeling
Every threat model MUST be grounded in the actual codebase. Never produce generic/textbook threats. Before writing any YAML, analyze the code to discover real components, data flows, technologies, and attack surface.
Phase 1 — Codebase Architecture Analysis
Before touching any `.tmdd/` file, perform these steps:
1.1 Discover Project Structure
Scan the repository to identify:
- Language & framework (package.json, requirements.txt, go.mod, Cargo.toml, pom.xml, etc.)
- Entry points (main files, route definitions, CLI commands)
- Directory layout (src/, api/, lib/, services/, models/, controllers/, etc.)
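As a concrete illustration, the manifest scan can be sketched in Python (the `MANIFESTS` mapping and `detect_stacks` helper below are illustrative, not part of tmdd):

```python
from pathlib import Path

# Manifest file -> language/framework hint (illustrative, extend as needed)
MANIFESTS = {
    "package.json": "Node.js",
    "requirements.txt": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "pom.xml": "Java (Maven)",
}

def detect_stacks(repo_root: str) -> list[str]:
    """Return a language hint for every known manifest found in the repo."""
    root = Path(repo_root)
    found = []
    for manifest, language in MANIFESTS.items():
        if any(root.rglob(manifest)):
            found.append(language)
    return found
```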
1.2 Identify Architectural Components
Search the codebase for real building blocks. Map each to a TMDD component:
| Look for | Maps to TMDD component type |
|---|---|
| Route handlers, controllers, API endpoints | `api` |
| Frontend pages, React/Vue/Angular components | `frontend` |
| Database models, ORM schemas, migrations | `database` |
| Background workers, cron jobs, queue consumers | `service` |
| Redis/Memcached usage | `cache` |
| Message broker publishers/consumers (Kafka, RabbitMQ, SQS) | `queue` |
| Third-party API calls, SDK integrations | `external` |
| Auth middleware, session management, token handling | `service` (auth) |
For each component, record:
- The actual technology (e.g., "Express.js", "PostgreSQL via Prisma", "Redis")
- The trust boundary (`public` if internet-facing, `internal` if behind auth/VPN, `external` if third-party)
- Key source files/directories it lives in
1.3 Trace Real Data Flows
Follow how data actually moves through the code:
- HTTP requests from clients to API handlers
- Database reads/writes from handlers to ORM/query layer
- Inter-service calls (REST, gRPC, message queues)
- External API calls (payment providers, email services, OAuth providers)
- File uploads, websocket connections, SSE streams
For each flow, note:
- What data is transmitted (credentials, PII, tokens, user content)
- Authentication mechanism (JWT, session cookie, API key, mTLS, none)
- Protocol (HTTPS, gRPC, WebSocket, AMQP)
1.4 Identify Security-Relevant Code Patterns
Scan for patterns that inform threats directly:
- Authentication: How are users authenticated? (JWT, sessions, OAuth, API keys)
- Authorization: Is there RBAC/ABAC? Where are permission checks?
- Input validation: Is there schema validation (Zod, Joi, Pydantic)? Where?
- SQL/ORM usage: Raw queries vs parameterized? Which ORM?
- File handling: Uploads, path traversal risks, temp files?
- Secrets management: Env vars, vault, hardcoded?
- Serialization: JSON parsing, XML, YAML (deserialization attacks)?
- Cryptography: Hashing algorithms, encryption at rest/in transit?
- Error handling: Do errors leak stack traces or internal details?
- Logging: What is logged? Are secrets filtered?
- Rate limiting: Is there any? Where?
- CORS/CSP: What's the policy?
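A rough first pass over these patterns can be automated. The sketch below uses a few illustrative regexes; `PATTERNS` and `scan_file` are hypothetical helpers that would need tuning per language and framework:

```python
import re
from pathlib import Path

# Illustrative regexes for a few of the patterns above (not exhaustive)
PATTERNS = {
    "raw-sql-concat": re.compile(r"SELECT .*['\"] *\+"),
    "hardcoded-secret": re.compile(r"(api_key|password|secret)\s*=\s*['\"]\w+['\"]", re.I),
    "broad-cors": re.compile(r"Access-Control-Allow-Origin.*\*"),
}

def scan_file(path: Path) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) hits for one source file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, rx in PATTERNS.items():
            if rx.search(line):
                hits.append((name, lineno))
    return hits
```

Hits from a scan like this are leads for Phase 2 threats, not threats in themselves; each one still needs manual confirmation against the code.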
Phase 2 — Threat Model Creation
IMPORTANT — Before editing any YAML file:
- Check if `.tmdd/` already exists and contains populated YAML files
- If YES: you are in incremental mode — read each file first, then append new entries or edit existing ones. NEVER rewrite a file from scratch. Skip `tmdd init`. Go directly to Phase 3 if adding a feature, or follow Phase 2.2 in append mode.
- If NO: you are in creation mode — run `tmdd init`, then populate files per Phase 2.2
2.1 New Project (no `.tmdd/` directory)

```shell
tmdd init .tmdd --template <template> -n "System Name" -d "Description"
```
Templates: `minimal` (blank), `web-app` (7 web threats), `api` (OWASP API Top 10).
After init, replace the template content with architecture-specific data from Phase 1.
2.2 Populate YAML Files (in order)
YOU MUST EDIT THE FILES DIRECTLY. DO NOT JUST OUTPUT YAML.
NEVER overwrite existing content. Before editing any YAML file:
- Read the file first to see what entries already exist
- Append new entries — do not remove or rewrite existing ones unless the user explicitly asks for changes to specific entries
- When adding threats/mitigations, continue the existing ID sequence (e.g., if T005 exists, start new threats at T006)
Edit these files using the analysis from Phase 1:
1. `components.yaml` — Map real code to components

```yaml
# components.yaml
components:
  - id: api_backend              # REQUIRED, ^[a-z][a-z0-9_]*$
    description: "Express.js REST API handling user and order endpoints"
    type: api                    # frontend|api|service|database|queue|external|cache|other
    technology: "Node.js / Express"
    trust_boundary: public       # public|internal|external
    source_paths:                # OPTIONAL - glob patterns mapping to source files
      - "src/routes/**"
      - "src/middleware/**"
      - "src/server.ts"
```
Rules:
- One component per distinct architectural unit discovered in Phase 1
- `description` must mention the actual technology and what it does in this project
- `trust_boundary` must reflect the real deployment (not assumed)
- `source_paths` (optional) should list glob patterns for source files that belong to this component. This enables deterministic PR-to-component mapping for threat review workflows. Prefer specific globs over overly broad ones (e.g., `src/routes/**` over `src/**`).
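The deterministic PR-to-component mapping that source_paths enables can be sketched with glob matching. `components_touched` is a hypothetical helper, not part of tmdd; note that `fnmatch`'s `*` also crosses `/`, which is what makes the `**`-style patterns behave recursively in this sketch:

```python
from fnmatch import fnmatch

def components_touched(changed_files: list[str], components: list[dict]) -> set[str]:
    """Map a PR's changed files to component ids via their source_paths globs."""
    touched = set()
    for comp in components:
        for pattern in comp.get("source_paths", []):
            # fnmatch's '*' matches '/' too, so 'src/routes/**' acts recursive here
            if any(fnmatch(f, pattern) for f in changed_files):
                touched.add(comp["id"])
    return touched
```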
2. `actors.yaml` — Real users and external systems

```yaml
# actors.yaml
actors:
  - id: end_user                 # REQUIRED, ^[a-z][a-z0-9_]*$
    description: "Authenticated user accessing the web dashboard"
```
3. `data_flows.yaml` — Traced from actual code paths

```yaml
# data_flows.yaml
data_flows:
  - id: df_user_to_api           # REQUIRED
    source: end_user             # must exist in actors or components
    destination: api_backend     # must exist in actors or components
    data_description: "Login credentials (email + password) and session tokens"
    protocol: HTTPS
    authentication: JWT
```
Rules:
- Every flow must correspond to a real code path you found in Phase 1.3
- `data_description` must name the actual data types (not just "API calls")
- Include protocol and auth method from the code
4. `threats/catalog.yaml` — Threats specific to THIS codebase

```yaml
# threats/catalog.yaml
threats:
  T001:                          # ^T\d+$
    name: "SQL Injection via raw query in search endpoint"
    description: "The /api/search endpoint in src/routes/search.ts uses string concatenation for the WHERE clause instead of parameterized queries"
    severity: high               # low|medium|high|critical
    stride: T                    # S|T|R|I|D|E
    cwe: CWE-89
    suggested_mitigations: [M001]  # each must exist in mitigations.yaml
```
CRITICAL — Threat Quality Rules:
- `name` must reference the specific component, endpoint, or module affected
- `description` must describe the concrete vulnerability in this codebase, not a textbook definition. Reference file paths when possible.
- `severity` must be based on actual exploitability and impact in this system
- Every threat must be traceable to a component or data flow from Phase 1
STRIDE analysis — apply to each component and data flow:
- Spoofing: Can identities be faked? (check auth implementation)
- Tampering: Can data be modified? (check input validation, CSRF protection)
- Repudiation: Can actions be denied? (check audit logging)
- Information Disclosure: Can data leak? (check error handling, logging, CORS)
- Denial of Service: Can availability be impacted? (check rate limiting, resource limits)
- Elevation of Privilege: Can permissions be bypassed? (check authorization checks)
5. `threats/mitigations.yaml` — Actionable controls with code references

```yaml
# threats/mitigations.yaml
mitigations:
  # Simple format
  M001: "Use parameterized queries via Prisma ORM for all database access"
  # Rich format with code references (preferred — ties mitigation to implementation)
  M002:
    description: "Zod schema validation on all API request bodies"
    references:
      - file: "src/middleware/validate.ts"
        lines: "12-35"
      - file: "src/schemas/user.ts"
```
Rules:
- Reference actual files/lines where the mitigation is (or should be) implemented
- If the mitigation doesn't exist yet, describe it concretely enough to implement
- Use the rich format with `references` whenever a file location is known
6. `threats/threat_actors.yaml`

```yaml
# threats/threat_actors.yaml
threat_actors:
  TA001: "External attacker"     # ^TA\d+$
```
7. `features.yaml` — Features with threat-to-mitigation mapping

```yaml
# features.yaml
features:
  - name: "User Login"           # REQUIRED
    goal: "Authenticate users"   # REQUIRED
    data_flows: [df_user_to_api] # must exist in data_flows.yaml
    threat_actors: [TA001]       # must exist in threat_actors.yaml
    threats:                     # MUST be a dict, NOT a list
      T001: default              # inherit suggested_mitigations from catalog
      T002: [M003, M005]         # explicit mitigation override
      T003: accepted             # risk deliberately accepted
    last_updated: "2026-02-22"   # set by agent to today's date
    reviewed_at: "2000-01-01"    # SENTINEL — forces stale-review lint warning
    # reviewed_by: — DO NOT SET. Only a human adds this after manual review.
```
Threat mapping values:
- `default` — inherit `suggested_mitigations` from `threats/catalog.yaml` (preferred when suggestions fit)
- `[M001, M002]` — explicit mitigation list (override when you need different controls)
- `accepted` — risk deliberately accepted without mitigation
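Resolving these three mapping values to an effective mitigation list can be sketched as follows (assuming the catalog has already been parsed into a dict; `resolve_mitigations` is an illustrative helper, not tmdd's implementation):

```python
def resolve_mitigations(threat_id: str, mapping, catalog: dict) -> list[str]:
    """Resolve a features.yaml threat mapping to the effective mitigation IDs.

    mapping is one of: "default", "accepted", or an explicit list like ["M001"].
    """
    if mapping == "default":
        # Inherit suggested_mitigations from threats/catalog.yaml
        return catalog[threat_id].get("suggested_mitigations", [])
    if mapping == "accepted":
        # Risk deliberately accepted: no mitigations apply
        return []
    return list(mapping)  # explicit override
```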
Review fields (human-only attestation):
- `reviewed_by` — name/username of the human analyst who verified the threat mappings. AI agents MUST NOT set this field. Only a human adds it after manual review.
- `reviewed_at` — date of last review (YYYY-MM-DD). AI agents MUST set this to `"2000-01-01"` as a sentinel so that `last_updated > reviewed_at` always triggers a stale-review lint warning. The human updates this to the real date when they review.
- `last_updated` — date the feature was last created or modified (YYYY-MM-DD). AI agents SHOULD set this to today's date.
- Features with `accepted` threats and no `reviewed_by` trigger a lint warning
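The review-field rules above can be sketched as a check (an illustrative approximation; the real `tmdd lint` rules may differ in wording and detail):

```python
from datetime import date

def review_warnings(feature: dict) -> list[str]:
    """Approximate the stale-review and accepted-risk lint warnings."""
    warnings = []
    last_updated = date.fromisoformat(feature["last_updated"])
    reviewed_at = date.fromisoformat(feature.get("reviewed_at", "2000-01-01"))
    if last_updated > reviewed_at:
        warnings.append("stale review: last_updated is newer than reviewed_at")
    accepted = [t for t, m in feature.get("threats", {}).items() if m == "accepted"]
    if accepted and not feature.get("reviewed_by"):
        warnings.append(f"accepted threats {accepted} without reviewed_by")
    return warnings
```

With the sentinel `reviewed_at` of `"2000-01-01"`, any agent-set `last_updated` keeps the first warning firing until a human reviews and updates the date.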
```yaml
# CORRECT
threats:
  T001: default
  T002: [M001, M002]
  T003: accepted

# WRONG - will fail lint
threats: [T001, T002, T003]
```
Phase 3 — Adding a Feature (existing `.tmdd/` project)

When adding a feature to an existing threat model:
3.1 Analyze the feature's code impact
Before editing YAML, answer:
- What new code paths does this feature introduce? (new endpoints, new DB tables, new external calls)
- What existing components does it touch?
- What sensitive data does it handle? (PII, credentials, financial data, tokens)
- What new attack surface does it create?
3.2 Use the `tmdd feature` workflow

```shell
# Step 1: Generate threat modeling prompt (new feature)
tmdd feature "Feature Name" -d "What it does"

# Step 2: Read the generated prompt
#   .tmdd/out/<feature_name>.threatmodel.txt

# Step 3: Edit YAML files using findings from 3.1 (follow order in 3.3 below)

# Step 4: Validate
tmdd lint .tmdd

# Step 5: Generate implementation prompt (feature now exists)
tmdd feature "Feature Name"
```
3.3 Edit files in order
1. `components.yaml` — Add new components if the feature introduces new architectural units
2. `actors.yaml` — Add new actors if the feature serves new user types
3. `data_flows.yaml` — Add flows for new data paths the feature creates
4. `threats/catalog.yaml` — Add threats specific to the feature's code (not generic threats)
5. `threats/mitigations.yaml` — Add mitigations referencing actual or planned implementation files
6. `features.yaml` — Add the feature with full threat->mitigation mapping
Phase 4 — Validation & Compilation
```shell
# Validate all cross-references
tmdd lint .tmdd

# Generate consolidated output
tmdd compile .tmdd                      # Full system
tmdd compile .tmdd --feature "Login"    # Single feature
```
ID Conventions
| Type | Pattern | Example |
|---|---|---|
| Entity | `^[a-z][a-z0-9_]*$` | `api_backend` |
| Threat | `^T\d+$` | `T001` |
| Mitigation | `^M\d+$` | `M001` |
| Threat Actor | `^TA\d+$` | `TA001` |
| Data Flow | `^[a-z][a-z0-9_]*$` (`df_` prefix by convention) | `df_user_to_api` |
File Structure
```
.tmdd/
  system.yaml              # System metadata
  actors.yaml              # Who interacts with the system
  components.yaml          # Architecture building blocks
  data_flows.yaml          # Data movement between actors/components
  features.yaml            # Features with threat->mitigation mappings
  threats/
    catalog.yaml           # Threat definitions (T001, T002...)
    mitigations.yaml       # Security controls (M001, M002...)
    threat_actors.yaml     # Adversary profiles (TA001, TA002...)
```
Cross-Reference Rules (enforced by lint)
- `data_flows[].source/destination` must exist in actors or components
- `features[].data_flows[]` must exist in data_flows.yaml
- `features[].threat_actors[]` must exist in threat_actors.yaml
- `features[].threats` keys must exist in threats/catalog.yaml
- `features[].threats` mitigation values must exist in threats/mitigations.yaml
- `catalog[].suggested_mitigations[]` must exist in mitigations.yaml
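A minimal sketch of a few of these cross-reference checks, assuming the YAML files have already been parsed into Python dicts (`check_cross_refs` is illustrative, not the actual lint implementation):

```python
def check_cross_refs(model: dict) -> list[str]:
    """Check a subset of the cross-reference rules above.

    `model` holds the parsed YAML contents keyed by file: actors,
    components, data_flows, threats (catalog), features.
    """
    errors = []
    known_nodes = {a["id"] for a in model["actors"]} | {c["id"] for c in model["components"]}
    # data_flows source/destination must exist in actors or components
    for flow in model["data_flows"]:
        for end in ("source", "destination"):
            if flow[end] not in known_nodes:
                errors.append(f"{flow['id']}: unknown {end} '{flow[end]}'")
    # features threats keys must exist in the catalog
    threat_ids = set(model["threats"])
    for feature in model["features"]:
        for t_id in feature.get("threats", {}):
            if t_id not in threat_ids:
                errors.append(f"{feature['name']}: unknown threat '{t_id}'")
    return errors
```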
Self-Validation Checklist
Before finishing edits, verify:
- Phase 1 analysis was performed — components and data flows reflect actual code
- All IDs follow naming conventions
- Every ID referenced in features.yaml exists in its source file
- features.yaml threats is a dict (not a list)
- Threat names/descriptions reference specific components, endpoints, or files
- Mitigations reference actual or planned implementation files where possible
- data_flows source/destination exist in actors or components
- Existing entries in all YAML files were preserved (no accidental overwrites)
- Run `tmdd lint .tmdd` and fix all errors