Claude-skill-registry bug-bounty-methodology
Target-agnostic bug bounty hunting methodology with parallel recon, systematic testing workflows, and vulnerability-specific exploitation guidance
git clone https://github.com/majiayu000/claude-skill-registry
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/bug-bounty-methodology" ~/.claude/skills/majiayu000-claude-skill-registry-bug-bounty-methodology && rm -rf "$T"
skills/data/bug-bounty-methodology/SKILL.md
Bug Bounty Methodology Skill
Overview
This skill provides a complete, target-agnostic bug bounty hunting methodology inspired by industry experts like Jason Haddix and Daniel Miessler. It emphasizes systematic reconnaissance, parallel execution for efficiency, and vulnerability-specific testing workflows that apply to any web application target.
Core Philosophy
Three-Phase Approach:
- Intelligence Gathering (Passive Recon) - No direct target interaction
- Active Enumeration - Verification and endpoint discovery
- Targeted Exploitation - Vulnerability-specific testing based on findings
Key Principles:
- Target-agnostic workflows that apply universally
- Parallel agent execution, so 5-7 hours of sequential recon completes in ~1 hour of wall time
- Systematic documentation of all findings
- Prioritization based on bug bounty program criteria
- Reproducible steps with exact commands
Skill Invocation Patterns
When the user requests bug bounty work, route to appropriate workflow based on intent:
Starting New Target Reconnaissance
User says: "Start bug bounty recon on [target]" or "Begin reconnaissance for [target]"
Action:
- Read targets/{target}.md for scope and context
- Identify required recon phases (passive, active, js-analysis, mobile)
- Propose spawning parallel agents for each phase
- Upon user approval, spawn agents via Task() tool
- Each agent executes appropriate workflow and outputs to target directory
- Synthesize findings into prioritized attack roadmap
Continuing Existing Work
User says: "Continue testing [target]" or "Resume [target] work"
Action:
- Read latest findings from LEARNING/targets/{target}/
- Review attack roadmap or previous session notes
- Identify next priority testing phase
- Route to appropriate testing workflow
Specific Vulnerability Testing
User says: "Test GraphQL on [target]" or "Check for IDOR in [target] API"
Action:
- Load the appropriate testing workflow (test-graphql.md, test-rest-api.md, etc.)
- Read target context and discovered endpoints
- Guide through systematic testing checklist
- Document findings in target-specific location
New Target Setup
User says: "Add new target [name]" or "Set up [target] for bug bounty"
Action:
- Create new target file from targets/template.md
- Guide user through scope definition
- Help capture authentication details
- Document program-specific criteria (payout ranges, exclusions)
Workflow Routing Logic
Reconnaissance Phase
- Passive Recon → workflows/recon-passive.md
  - Subdomain enumeration (crt.sh, subfinder, amass)
  - Technology fingerprinting (Wappalyzer, whatweb)
  - Source code intelligence (GitHub, JavaScript analysis)
  - Mobile app static analysis
- Active Recon → workflows/recon-active.md
  - Subdomain verification (httpx, dnsx)
  - Web application mapping (gospider, Burp Suite)
  - API endpoint discovery (fuzzing, JavaScript extraction)
  - GraphQL detection and introspection
- JavaScript Analysis → workflows/analyze-javascript.md
  - Bundle extraction and beautification
  - Endpoint discovery from JS
  - Secret scanning (API keys, tokens)
  - Client-side logic analysis
- Mobile Analysis → workflows/analyze-mobile.md
  - APK/IPA decompilation
  - String extraction and analysis
  - Hardcoded endpoint discovery
  - Mobile-specific API differences
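A small sketch of one concrete step from the passive recon phase: merging subdomain lists from multiple sources into a single deduplicated file. The sample data below stands in for real subfinder/amass output; in practice you would feed the tools' actual output files through the same pipeline.

```shell
# Merge subdomain lists from multiple passive sources (subfinder, amass,
# crt.sh exports). Sample data stands in for real tool output.
printf 'api.example.com\nWWW.example.com\n' > subfinder.txt
printf 'www.example.com\n*.staging.example.com\n' > amass.txt

# Normalize case, strip wildcard prefixes, and deduplicate
cat subfinder.txt amass.txt \
  | tr 'A-Z' 'a-z' \
  | sed 's/^\*\.//' \
  | sort -u > subdomains.txt
cat subdomains.txt
```

The resulting subdomains.txt is what the active phase then feeds into httpx/dnsx for verification.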
Testing Phase
Route based on discovered technology stack and vulnerability category:
- GraphQL endpoints → workflows/test-graphql.md
- XSS (Cross-Site Scripting) → workflows/test-xss.md
- REST APIs → workflows/test-api.md
- Authentication systems → workflows/test-authentication.md
- Payment/business logic → workflows/test-business-logic.md
- File uploads → workflows/test-file-upload.md
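As an example of the first check in the GraphQL route, this builds the standard introspection probe body. Only the request body is constructed and shown here; the endpoint URL in the comment is a placeholder, and the actual request is left to the tester.

```shell
# Standard GraphQL introspection probe: if the server answers with its
# schema, every type and field becomes enumerable.
cat > introspection.json <<'EOF'
{"query":"{ __schema { types { name } } }"}
EOF

# Real usage against a discovered endpoint (placeholder URL, not executed here):
# curl -s -X POST https://target.example/graphql \
#   -H 'Content-Type: application/json' -d @introspection.json
cat introspection.json
```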
Reporting Phase
- Bug report creation → workflows/report-findings.md
Parallel Execution Model
For reconnaissance phases (2-3 hours passive + 3-4 hours active = 5-7 hours linear):
Spawn 4 Parallel Agents:
```javascript
// Agent 1: Passive Reconnaissance
Task({
  subagent_type: "general-purpose",
  description: "Passive recon for target",
  prompt: `Execute passive reconnaissance workflow for ${target}.
Read and follow: .claude/skills/bug-bounty-methodology/workflows/recon-passive.md
Target context: LEARNING/targets/${target}/${target}.md
Output findings to: LEARNING/targets/${target}/recon/passive-results.md
Document all discovered subdomains, technologies, and intelligence.`
})

// Agent 2: Active Enumeration (starts after passive completes or runs in parallel)
Task({
  subagent_type: "general-purpose",
  description: "Active recon for target",
  prompt: `Execute active reconnaissance workflow for ${target}.
Read and follow: .claude/skills/bug-bounty-methodology/workflows/recon-active.md
Target context: LEARNING/targets/${target}/${target}.md
Output findings to: LEARNING/targets/${target}/recon/active-results.md
Verify subdomains, map application, discover API endpoints.`
})

// Agent 3: JavaScript Analysis
Task({
  subagent_type: "general-purpose",
  description: "JavaScript analysis for target",
  prompt: `Execute JavaScript analysis workflow for ${target}.
Read and follow: .claude/skills/bug-bounty-methodology/workflows/analyze-javascript.md
Target context: LEARNING/targets/${target}/${target}.md
Output findings to: LEARNING/targets/${target}/analysis/js-findings.md
Extract and analyze all JavaScript bundles for endpoints and secrets.`
})

// Agent 4: Mobile App Analysis (if applicable)
Task({
  subagent_type: "general-purpose",
  description: "Mobile analysis for target",
  prompt: `Execute mobile app analysis workflow for ${target}.
Read and follow: .claude/skills/bug-bounty-methodology/workflows/analyze-mobile.md
Target context: LEARNING/targets/${target}/${target}.md
Output findings to: LEARNING/targets/${target}/analysis/mobile-findings.md
Analyze iOS and Android apps for endpoints and vulnerabilities.`
})
```
Wall-clock time: ~45-75 minutes (vs. 5-7 hours sequential)
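The same fan-out idea can be sketched at the shell level: each phase runs as a background job writing to its own log, and wait blocks until all jobs finish. Here sleep stands in for a real tool run (subfinder, httpx, apktool, etc.).

```shell
# Run independent recon phases concurrently; each writes its own log.
run_phase() { sleep 1; echo "$1 complete" > "$1.log"; }
run_phase passive &
run_phase active &
run_phase js &
wait   # block until all background jobs finish
cat passive.log active.log js.log
```

Because the phases touch disjoint output files, no coordination beyond wait is needed, which is the same property that makes the agent fan-out safe.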
Synthesis and Roadmap Generation
After parallel agents complete:
- Read all output files from target directory
- Cross-reference findings between passive, active, JS, and mobile analysis
- Identify technology stack (GraphQL vs REST, auth mechanisms, frameworks)
- Map discovered endpoints to vulnerability categories
- Prioritize testing targets based on:
- Program payout ranges (Critical > High > Medium)
- Likelihood of finding (IDOR in APIs = high likelihood)
- Complexity vs. ROI (quick wins first)
- Generate attack roadmap with specific testing checklists
- Output to: LEARNING/targets/{target}/ATTACK-ROADMAP.md
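The prioritization step above can be sketched as a simple sort: each candidate gets a severity tier (1 = Critical) and a likelihood score, and the list is ordered so high-severity, high-likelihood items come first. The endpoints and scores below are illustrative only.

```shell
# Build a tab-separated candidate list: endpoint, severity tier, likelihood.
printf '%s\t%d\t%d\n' \
  '/api/v1/users/{id}' 1 9 \
  '/graphql' 1 7 \
  '/upload' 2 6 \
  '/search' 3 4 > candidates.tsv

# Sort by tier ascending, then likelihood descending: quick wins first.
sort -t "$(printf '\t')" -k2,2n -k3,3nr candidates.tsv > roadmap.tsv
head -n 1 roadmap.tsv
```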
Target-Specific Intelligence
Each target has a profile in targets/{target}.md containing:
- Program Details: URL, platform (HackerOne, Bugcrowd), response times
- Scope: In-scope assets, out-of-scope exclusions
- Testing Requirements: Required headers (X-Bug-Bounty), account setup
- Known Technology Stack: From previous recon or public knowledge
- Priority Attack Surfaces: Based on program payouts and policy
- Testing Accounts: Credentials for authenticated testing
- Previous Findings: What's already been reported (avoid duplicates)
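A hypothetical scaffold for such a profile, with the field list mirroring the items above; every value here is a placeholder, and the real template lives in targets/template.md.

```shell
# Scaffold a placeholder target profile (all values are examples).
mkdir -p targets
cat > targets/example-target.md <<'EOF'
# example-target
- Program Details: https://hackerone.com/example (placeholder)
- Scope: *.example.com in scope; corp.example.com out of scope
- Testing Requirements: send header X-Bug-Bounty: researcher@example.com
- Known Technology Stack: TBD (fill in after recon)
- Priority Attack Surfaces: TBD
- Testing Accounts: TBD (store credentials securely)
- Previous Findings: none yet
EOF
wc -l < targets/example-target.md
```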
Tools Integration
Reference tools/reference.md for quick command syntax:
- Subdomain enumeration: subfinder, amass, assetfinder
- Verification: httpx, dnsx, subjack
- Web fuzzing: ffuf (see ffuf skill for advanced usage)
- Crawling: gospider, Burp Suite
- Mobile analysis: apktool, class-dump, Frida
- Secret scanning: truffleHog, gitleaks
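As a quick stand-in for the secret-scanning step (truffleHog/gitleaks do this far more thoroughly), grep a downloaded JS bundle for common secret shapes, here AWS access key IDs against sample data:

```shell
# Sample bundle standing in for a real downloaded JS file.
cat > bundle.js <<'EOF'
var cfg = { apiKey: "sk_test_abc123", region: "us-east-1" };
var AWS_ID = "AKIAIOSFODNN7EXAMPLE";
EOF

# AWS access key IDs are 'AKIA' followed by 16 uppercase alphanumerics.
grep -oE 'AKIA[0-9A-Z]{16}' bundle.js
```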
Output Structure
All findings are documented in the vault under LEARNING/targets/{target}/:
```
LEARNING/targets/{target}/
├── {target}.md               (target intelligence profile)
├── recon/
│   ├── passive-results.md
│   ├── active-results.md
│   └── subdomains.txt
├── analysis/
│   ├── js-findings.md
│   ├── mobile-findings.md
│   └── tech-stack.md
├── testing/
│   ├── graphql-tests.md
│   ├── idor-tests.md
│   └── auth-tests.md
├── findings/
│   └── [vulnerability-reports]/
└── ATTACK-ROADMAP.md         (prioritized testing plan)
```
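The directory skeleton for a fresh target can be created in one pass (the target name below is a placeholder; the markdown files are created by the workflows as they run):

```shell
# Create the per-target directory structure for a new engagement.
target=example-target
for d in recon analysis testing findings; do
  mkdir -p "LEARNING/targets/$target/$d"
done
ls "LEARNING/targets/$target"
```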
Context Management
For long reconnaissance sessions:
- Spawn parallel agents (as described above)
- Agents work independently and document findings
- Main agent synthesizes results
- If synthesis requires heavy context, use the /clear + read-output-files approach
- Never try to hold 5-7 hours of recon in a single context window
Success Metrics
Reconnaissance complete when:
- ✅ 50+ subdomains enumerated (if applicable)
- ✅ Complete technology stack documented
- ✅ 100+ API endpoints discovered
- ✅ Attack surface mapped by vulnerability type
- ✅ Top 10 high-value targets identified
- ✅ Prioritized testing roadmap created
Testing complete when:
- ✅ All Critical/High priority targets tested
- ✅ At least one valid finding OR comprehensive testing documented
- ✅ Findings properly reported to bug bounty program
Workflow Files Reference
- workflows/recon-passive.md: Passive reconnaissance methodology
- workflows/recon-active.md: Active enumeration methodology
- workflows/analyze-javascript.md: JavaScript analysis workflow
- workflows/analyze-mobile.md: Mobile app analysis workflow
- workflows/test-graphql.md: GraphQL vulnerability testing
- workflows/test-xss.md: Cross-site scripting testing
- workflows/test-api.md: REST API security testing (IDOR, authorization)
- workflows/test-authentication.md: Auth/authz bypass testing
- workflows/test-business-logic.md: Payment and logic flaw testing
- workflows/test-file-upload.md: Upload vulnerability testing
- workflows/report-findings.md: Bug bounty report writing
Usage Examples
Example 1: Start new target
User: "Start bug bounty recon on DoorDash"
→ Chavvo loads bug-bounty-methodology skill
→ Reads targets/doordash.md
→ Proposes spawning 4 parallel recon agents
→ User approves
→ Agents execute workflows simultaneously
→ Synthesis generates ATTACK-ROADMAP.md
→ User reviews and chooses next phase
Example 2: Specific testing
User: "Test HubSpot GraphQL endpoint for IDOR"
→ Chavvo loads bug-bounty-methodology skill
→ Reads targets/hubspot.md
→ Loads workflows/test-graphql.md
→ Reads discovered GraphQL endpoints from recon
→ Guides through IDOR testing checklist
→ Documents findings in testing/graphql-tests.md
Example 3: Resume work
User: "Continue DoorDash testing"
→ Chavvo loads bug-bounty-methodology skill
→ Reads LEARNING/targets/doordash/ATTACK-ROADMAP.md
→ Reviews latest testing/ files
→ Identifies next priority item
→ Proposes specific testing workflow
When invoked, always:
- Understand user's intent (new recon, continue testing, specific vuln test)
- Read appropriate target context
- Route to correct workflow(s)
- Propose parallel execution for heavy recon tasks
- Document everything in target-specific structure
- Maintain target-agnostic methodology across all targets