# EasyPlatform scan-e2e-tests
**Documentation:** Scan the project and populate/sync `docs/project-reference/e2e-test-reference.md` with E2E test architecture, page objects, step definitions, configuration, and framework patterns.
**Installation:**

```bash
# Clone the full repository
git clone https://github.com/duc01226/EasyPlatform

# Or install just this skill into your user-level skills directory
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/.claude/skills/scan-e2e-tests" ~/.claude/skills/duc01226-easyplatform-scan-e2e-tests \
  && rm -rf "$T"
```
Skill file: `.claude/skills/scan-e2e-tests/SKILL.md`

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask user whether to skip.

<!-- SYNC:critical-thinking-mindset -->
**Critical Thinking Mindset** — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->

<!-- SYNC:ai-mistake-prevention -->
**AI Mistake Prevention** — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->
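As a concrete instance of the grep-before-documenting rule above (the class name is hypothetical):

```bash
# Hypothetical: confirm a class exists before documenting or referencing it.
grep -rn "class CheckoutPage" --include="*.cs" . || echo "CheckoutPage not found - do not document it"
```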
Prerequisites: MUST ATTENTION READ before executing:
<!-- SYNC:scan-and-update-reference-doc -->
**Scan & Update Reference Doc** — When updating reference docs: (1) Read the existing doc first. (2) Scan the codebase for current state (grep/glob). (3) Diff findings vs doc content. (4) Update ONLY sections where code diverged from the doc. (5) Preserve manual annotations. (6) Update metadata (date, counts). NEVER rewrite the entire doc — surgical updates only.
<!-- /SYNC:scan-and-update-reference-doc -->

<!-- SYNC:output-quality-principles -->
**Output Quality** — Reference docs are injected into AI context. Apply 10 rules: (1) No inventories/counts — AI can grep. (2) No directory trees — AI can glob. (3) No TOCs. (4) Rules > descriptions — "MUST ATTENTION use X", not "X allows you to...". (5) 1 example per pattern. (6) Tables > prose. (7) BAD/GOOD pairs: 2-3 lines each. (8) Primacy-recency anchoring — critical rules in the first AND last 5 lines. (9) No checkbox checklists — bullets force reading. (10) Density target: >=8 MUST ATTENTION/NEVER/ALWAYS per 100 lines.
<!-- /SYNC:output-quality-principles -->
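A quick spot-check for rule (10), assuming GNU grep and the target doc path used by this skill:

```bash
# Count lines containing imperative keywords vs total lines (target: >=8 per 100).
DOC=docs/project-reference/e2e-test-reference.md
echo "$(grep -cE 'MUST ATTENTION|NEVER|ALWAYS' "$DOC") imperative lines / $(wc -l < "$DOC") total"
```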
## Quick Summary
**Goal:** Scan the E2E test codebase and populate `docs/project-reference/e2e-test-reference.md` with architecture, base classes, page objects, step definitions, configuration, and best practices. (Content is auto-injected by a hook — check for an `[Injected: ...]` header before reading.)
**Workflow:**
- **Read** — Load current target doc, detect init vs sync mode
- **Detect** — Identify E2E framework(s) and tech stack
- **Scan** — Discover E2E patterns via parallel sub-agents
- **Report** — Write findings to an external report file
- **Generate** — Build/update reference doc from the report
- **Verify** — Validate that code examples reference real files
**Key Rules:**
- Generic — works with any E2E framework (Selenium, Playwright, Cypress, WebdriverIO, Puppeteer, etc.)
- BDD frameworks (SpecFlow, Cucumber, Behave) are E2E — scan feature files, step definitions, contexts
- Detect framework first, then scan for framework-specific patterns
- Every code example must come from actual project files with file:line references
- Use the `e2eTesting` section of `docs/project-config.json` if available for project-specific paths
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a stated confidence percentage (>80% to act).
## Scan E2E Tests
### Phase 0: Read & Assess
- Read `docs/project-reference/e2e-test-reference.md`
- Detect mode: init (placeholder) or sync (populated)
- If sync: extract existing sections and note what's already well-documented
- Read the `e2eTesting` section of `docs/project-config.json` if it exists — use as hints for paths and framework (see the sketch after this list)
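A minimal sketch of this phase in shell; the placeholder marker text and the use of `jq` are assumptions for illustration, not verified project conventions:

```bash
DOC=docs/project-reference/e2e-test-reference.md
CONFIG=docs/project-config.json

# Mode detection: the placeholder marker is an assumed convention.
if grep -q "PLACEHOLDER" "$DOC" 2>/dev/null; then MODE=init; else MODE=sync; fi
echo "Mode: $MODE"

# Pull e2eTesting hints when the config file and jq are available.
[ -f "$CONFIG" ] && command -v jq >/dev/null && jq '.e2eTesting // empty' "$CONFIG"
```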
### Phase 1: Detect E2E Framework
Detect E2E framework and tech stack from project files:
**.NET / C#**

```bash
# Selenium + SpecFlow (BDD)
grep -r "Selenium.WebDriver\|SpecFlow" --include="*.csproj" -l
find . -name "*.feature" -type f | head -10
grep -r "\[Binding\]\|\[Given\|\[When\|\[Then" --include="*.cs" -l | head -10

# Playwright .NET
grep -r "Microsoft.Playwright" --include="*.csproj" -l
```
**TypeScript / JavaScript**

```bash
# Playwright
ls playwright.config.* 2>/dev/null
grep -l "playwright" package.json */package.json 2>/dev/null

# Cypress
ls cypress.config.* 2>/dev/null
grep -l "cypress" package.json */package.json 2>/dev/null

# WebdriverIO
ls wdio.conf.* 2>/dev/null

# Puppeteer
grep -l "puppeteer" package.json */package.json 2>/dev/null
```
**Python**

```bash
# Selenium + Behave (BDD)
grep -r "selenium\|behave" requirements*.txt setup.py pyproject.toml 2>/dev/null
find . -name "*.feature" -type f | head -10

# Playwright Python
grep -r "playwright" requirements*.txt pyproject.toml 2>/dev/null
```
**Java**

```bash
# Selenium + Cucumber (BDD)
grep -r "selenium\|cucumber" --include="pom.xml" --include="build.gradle" -l 2>/dev/null
find . -name "*.feature" -type f | head -10
```
**Output:** Detected framework(s), language, BDD framework (if any), test runner.
### Phase 2: Execute Scan (Parallel Sub-Agents)
Launch 3 Explore agents in parallel:
**Agent 1: E2E Framework & Architecture**
- Find E2E project structure (test directories, page object directories)
- Find base classes for tests and page objects
- Find DI/startup configuration for test projects
- Find WebDriver/browser management (driver creation, lifecycle, options)
- Find settings/configuration classes (URLs, credentials, timeouts)
- Count test files, feature files, page objects
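Example commands Agent 1 might run for the inventory step (globs and naming patterns are assumptions to adapt to the detected stack):

```bash
# Rough inventory of the E2E surface.
find . -name "*.feature" -type f | wc -l
find . -iname "*page*" \( -name "*.cs" -o -name "*.ts" \) | wc -l   # page objects; naming is an assumption

# Driver/browser management entry points.
grep -rl "WebDriver\|IPage\|chromium.launch" --include="*.cs" --include="*.ts" . | head -10
```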
**Agent 2: Page Object Model & Components**
- Find page object classes and their hierarchy
- Find UI component wrappers (reusable element abstractions)
- Find selector patterns (CSS, data-testid, XPath, BEM)
- Find navigation helpers (page transitions, URL routing)
- Find wait/retry patterns (explicit waits, polling, retry logic)
- Find assertion helpers and validation patterns
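Illustrative greps for the selector and wait scans; the patterns assume common Selenium and Playwright conventions rather than this project's confirmed ones:

```bash
# Selector styles in use.
grep -rn "data-testid" --include="*.ts" --include="*.html" --include="*.cs" . | head -5
grep -rn "By.CssSelector\|By.XPath" --include="*.cs" . | head -5

# Explicit wait / retry helpers.
grep -rn "WebDriverWait\|waitForSelector" --include="*.cs" --include="*.ts" . | head -5
```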
**Agent 3: BDD & Test Patterns (if BDD detected)**
- Find feature files (.feature) — count, categorize by area
- Find step definition classes — count, list patterns
- Find context/state sharing between steps (ScenarioContext, World, IBddStepsContext)
- Find hooks (Before/After scenario, BeforeAll/AfterAll)
- Find test data patterns (fixtures, factories, unique generators)
- Find test account/credential management patterns
- Find environment configuration (per-env settings, CI headless mode)
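Illustrative greps for the BDD scan, covering SpecFlow, Cucumber, and Behave conventions:

```bash
# Step definition files across common BDD frameworks.
grep -rl "\[Given(\|\[When(\|\[Then(" --include="*.cs" . | head -10    # SpecFlow
grep -rl "@Given\|@When\|@Then" --include="*.java" . | head -10        # Cucumber
grep -rl "@given\|@when\|@then" --include="*.py" . | head -10          # Behave

# Hooks
grep -rn "\[BeforeScenario\]\|\[AfterScenario\]\|@Before\|@After" --include="*.cs" --include="*.java" . | head -5
```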
Write all findings to `plans/reports/scan-e2e-tests-{YYMMDD}-{HHMM}-report.md`.
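The timestamp placeholders map directly to `date` format codes, e.g.:

```bash
REPORT="plans/reports/scan-e2e-tests-$(date +%y%m%d-%H%M)-report.md"
mkdir -p plans/reports
```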
### Phase 3: Generate Reference Doc
Build `docs/project-reference/e2e-test-reference.md` with these sections:
**Required Sections (all frameworks)**
- Architecture Overview — Layer diagram, project dependencies
- Project Structure — Directory tree with annotations
- Key Dependencies — Package versions table
- Base Classes — Test/page object hierarchies with code examples
- Page Object Pattern — How to create page objects, component wrappers
- Wait & Assertion Patterns — Resilient waits, retry, assertion helpers
- Navigation & Page Discovery — URL routing, page transitions
- Configuration — Settings files, environment variants, CI setup
- Running Tests — Commands for all, filtered, headed, CI modes
- Best Practices — Project-specific conventions
**Conditional Sections (framework-specific)**
- BDD Pattern (SpecFlow/Cucumber/Behave) — Feature file conventions, step definitions, context sharing, tags
- Test Account System (if credential management found) — Account types, numbered variants
- Common Patterns (if shared steps/helpers found) — Login flows, error assertions, reusable steps
- Environment Variants (if multi-env found) — Abstract/concrete page pattern, env-specific configs
**Section Template**
Each section should include:
- Brief description of the pattern
- Code example from actual project files (with file:line reference)
- Key class/method names for searchability
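For illustration, a hypothetical section rendered from this template (the path, class, and line number are invented):

````markdown
## Wait & Assertion Patterns

Resilient waits poll for a condition instead of sleeping for a fixed time.

```csharp
// src/E2E.Core/WaitHelper.cs:42 (hypothetical reference)
public static void WaitUntilVisible(IWebElement element, TimeSpan timeout) { ... }
```

Key names: `WaitHelper`, `WaitUntilVisible`
````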
### Phase 4: Update `project-config.json`
If `docs/project-config.json` exists, update/create the `e2eTesting` section:

```json
{
  "e2eTesting": {
    "framework": "<detected>",
    "language": "<detected>",
    "guideDoc": "docs/project-reference/e2e-test-reference.md",
    "runCommands": { ... },
    "bestPractices": [ ... ],
    "entryPoints": [ ... ],
    "stats": { "featureFiles": N, "stepDefinitionFiles": N, "featureAreas": N },
    "dependencies": { ... },
    "architecture": { ... }
  }
}
```
### Phase 5: Verify
- Spot-check 3-5 code examples — do file:line references exist?
- Verify class names match actual code (grep for each)
- Verify dependency versions against .csproj / package.json / requirements.txt
- Verify file counts (feature files, step defs, page objects) are accurate
- Run schema validation if project-config.json was updated
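Illustrative spot-check commands; class names and paths here are hypothetical stand-ins for the doc's actual references:

```bash
# Does a documented class actually exist? (class name is hypothetical)
grep -rn "class LoginPage" --include="*.cs" --include="*.ts" . || echo "stale class reference in doc"

# Does a documented file:line reference resolve? (path is hypothetical)
sed -n '42p' src/E2E.Core/WaitHelper.cs

# Cross-check a documented dependency version.
grep -rn "Microsoft.Playwright" --include="*.csproj" .
```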
## Output
Report what changed:
- Sections created vs updated
- Framework detected and version
- File counts discovered
- Any patterns not documented (gaps)
## Closing Reminders
- IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting
- IMPORTANT MUST ATTENTION search codebase for 3+ similar patterns before creating new code
- IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim (confidence >80% to act)
- IMPORTANT MUST ATTENTION add a final review todo task to verify work quality <!-- SYNC:scan-and-update-reference-doc:reminder -->
- IMPORTANT MUST ATTENTION read existing doc first, scan codebase, diff, surgical update only. Never rewrite entire doc. <!-- /SYNC:scan-and-update-reference-doc:reminder --> <!-- SYNC:output-quality-principles:reminder -->
- IMPORTANT MUST ATTENTION follow output quality rules: no counts/trees/TOCs, rules > descriptions, 1 example per pattern, primacy-recency anchoring. <!-- /SYNC:output-quality-principles:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->