BMAD-METHOD bmad-qa-generate-e2e-tests
Generate end-to-end automated tests for existing features. Use when the user says "create qa automated tests for [feature]".
```shell
# Clone the full repository
git clone https://github.com/bmad-code-org/BMAD-METHOD

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/bmad-code-org/BMAD-METHOD "$T" && mkdir -p ~/.claude/skills && cp -r "$T/src/bmm-skills/4-implementation/bmad-qa-generate-e2e-tests" ~/.claude/skills/bmad-code-org-bmad-method-bmad-qa-generate-e2e-tests && rm -rf "$T"
```

Source: `src/bmm-skills/4-implementation/bmad-qa-generate-e2e-tests/SKILL.md`

QA Generate E2E Tests Workflow
Goal: Generate automated API and E2E tests for implemented code.
Your Role: You are a QA automation engineer. You generate tests ONLY — no code review or story validation (use the
bmad-code-review skill for that).
Conventions
- Bare paths (e.g. `checklist.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
On Activation
Step 1: Resolve the Workflow Block
Run:

```shell
python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow
```
If the script fails, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:

- `{skill-root}/customize.toml` — defaults
- `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
- `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides

Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new ones, and all other arrays append.
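The merge rules above can be sketched as follows. This is a minimal illustration that assumes the TOML files are already parsed into plain dicts; the actual `resolve_customization.py` implementation may differ in details.

```python
def merge(base, override):
    """Structurally merge `override` into `base` per the stated rules."""
    if isinstance(base, dict) and isinstance(override, dict):
        # Tables deep-merge key by key.
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        items = base + override
        if items and all(isinstance(t, dict) and ("code" in t or "id" in t) for t in items):
            # Arrays of tables keyed by "code"/"id": replace matches, append new.
            keyed = {t.get("code") or t.get("id"): t for t in base}
            for t in override:
                keyed[t.get("code") or t.get("id")] = t
            return list(keyed.values())
        return base + override  # all other arrays append
    return override  # scalars override
```

Applying the three files then reduces to `merge(merge(defaults, team), user)`.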
Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
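One way to interpret those entries is the sketch below. The `file:` handling via glob patterns is an assumption about the intended semantics, not a confirmed implementation.

```python
import glob
import os

def load_persistent_facts(entries, project_root):
    """Expand workflow.persistent_facts into a flat list of fact strings."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Path or glob under the project root: load file contents as facts.
            pattern = os.path.join(project_root, entry[len("file:"):].strip())
            for path in sorted(glob.glob(pattern, recursive=True)):
                with open(path, encoding="utf-8") as f:
                    facts.append(f.read())
        else:
            facts.append(entry)  # any other entry is a fact verbatim
    return facts
```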
Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `project_name`, `user_name`
- `communication_language`, `document_output_language`, `implementation_artifacts`
- `date` as the system-generated current datetime

YOU MUST ALWAYS speak and output in your agent communication style, using the configured `{communication_language}`.
Step 5: Greet the User
Greet `{user_name}`, speaking in `{communication_language}`.
Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
Paths
- `test_dir` = `{project-root}/tests`
- `source_dir` = `{project-root}`
- `default_output_file` = `{implementation_artifacts}/tests/test-summary.md`
Execution
Step 0: Detect Test Framework
Check project for existing test framework:
- Look for `package.json` dependencies (playwright, jest, vitest, cypress, etc.)
- Check existing test files to understand patterns
- Use whatever test framework the project already has
- If no framework exists:
- Analyze source code to determine project type (React, Vue, Node API, etc.)
- Search online for current recommended test framework for that stack
- Suggest the recommended framework and use it (or ask the user to confirm)
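The detection heuristic above can be sketched like this. The framework list and its priority order are illustrative assumptions, not a fixed specification.

```python
import json

# Checked in rough priority order; substring match catches scoped
# packages such as "@playwright/test".
KNOWN_FRAMEWORKS = ["playwright", "cypress", "vitest", "jest", "mocha"]

def detect_test_framework(package_json_text):
    """Return the first known test framework found in package.json, else None."""
    pkg = json.loads(package_json_text)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    for fw in KNOWN_FRAMEWORKS:
        if any(fw in name for name in deps):
            return fw
    return None
```

If this returns `None`, fall back to analyzing the source to infer the stack and researching a current recommendation, as the steps above describe.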
Step 1: Identify Features
Ask user what to test:
- Specific feature/component name
- Directory to scan (e.g., `src/components/`)
- Or auto-discover features in the codebase
Step 2: Generate API Tests (if applicable)
For API endpoints/services, generate tests that:
- Test status codes (200, 400, 404, 500)
- Validate response structure
- Cover happy path + 1-2 error cases
- Use project's existing test framework patterns
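In spirit, an API test covers the checks listed above. The framework-agnostic sketch below uses only the Python standard library and a stub endpoint to show the shape of the assertions; real generated tests should use the project's own framework and file conventions (e.g. a `*.spec.ts` file for a Playwright/Jest project).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import urlopen

class StubAPI(BaseHTTPRequestHandler):
    """Hypothetical endpoint standing in for the feature under test."""
    def do_GET(self):
        if self.path == "/users/1":  # happy path
            body = json.dumps({"id": 1, "name": "Ada"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:                        # error case
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):    # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Happy path: assert status code and validate response structure.
with urlopen(f"{base}/users/1") as resp:
    happy_status = resp.status
    payload = json.loads(resp.read())

# Error case: a missing resource should return 404.
try:
    urlopen(f"{base}/users/999")
    error_status = None
except HTTPError as err:
    error_status = err.code

server.shutdown()
```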
Step 3: Generate E2E Tests (if UI exists)
For UI features, generate tests that:
- Test user workflows end-to-end
- Use semantic locators (roles, labels, text)
- Focus on user interactions (clicks, form fills, navigation)
- Assert visible outcomes
- Keep tests linear and simple
- Follow project's existing test patterns
Step 4: Run Tests
Execute tests to verify they pass (use project's test command).
If failures occur, fix them immediately.
Step 5: Create Summary
Output markdown summary:
```markdown
# Test Automation Summary

## Generated Tests

### API Tests
- [x] tests/api/endpoint.spec.ts - Endpoint validation

### E2E Tests
- [x] tests/e2e/feature.spec.ts - User workflow

## Coverage
- API endpoints: 5/10 covered
- UI features: 3/8 covered

## Next Steps
- Run tests in CI
- Add more edge cases as needed
```
Keep It Simple
Do:
- Use standard test framework APIs
- Focus on happy path + critical errors
- Write readable, maintainable tests
- Run tests to verify they pass
Avoid:
- Complex fixture composition
- Over-engineering
- Unnecessary abstractions
For Advanced Features:
If the project needs:
- Risk-based test strategy
- Test design planning
- Quality gates and NFR assessment
- Comprehensive coverage analysis
- Advanced testing patterns and utilities
Install Test Architect (TEA) module: https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/
Output
Save summary to `{default_output_file}`.

Done! Tests generated and verified. Validate against `./checklist.md`.
On Complete
Run:

```shell
python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete
```
If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.