EasyPlatform test-ui
[Testing] Full-site QA audit (accessibility, performance, security, SEO) with visual reports. Use for comprehensive QA audits of deployed sites.
```shell
# Clone the full repository
git clone https://github.com/duc01226/EasyPlatform

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/test-ui" ~/.claude/skills/duc01226-easyplatform-test-ui && rm -rf "$T"
```
`.claude/skills/test-ui/SKILL.md`

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask user whether to skip.

<!-- SYNC:critical-thinking-mindset -->
Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->

<!-- SYNC:ai-mistake-prevention -->
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->

<!-- SYNC:evidence-based-reasoning -->
Evidence-Based Reasoning — Speculation is FORBIDDEN. Every claim needs proof.
- Cite `file:line`, grep results, or framework docs for EVERY claim
- Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
- Cross-service validation required for architectural changes
- "I don't have enough evidence" is valid and expected output

BLOCKED until:
- [ ] Evidence file path (`file:line`)
- [ ] Grep search performed
- [ ] 3+ similar patterns found
- [ ] Confidence level stated

Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because"

If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."
<!-- /SYNC:evidence-based-reasoning -->
`docs/project-reference/domain-entities-reference.md` — Domain entity catalog, relationships, cross-service sync (read when task involves business entities/models) (content auto-injected by hook — check for [Injected: ...] header before reading)
Quick Summary
Goal: Run comprehensive UI tests on a website and generate a detailed visual report.
For individual page/component testing with Playwright scripts, use `webapp-testing` instead.
Workflow:
- Discover — Browse target URL, discover all pages, components, endpoints
- Plan Tests — Create test plan covering accessibility, responsiveness, performance, security, SEO
- Execute — Run parallel tester subagents; capture screenshots for each test area
- Analyze — Use ai-multimodal to review screenshots and visual elements
- Report — Generate Markdown report with embedded screenshots and recommendations
Key Rules:
- Do NOT implement fixes; this is a testing/reporting skill only
- Save all screenshots in the report directory
- Support authenticated routes via cookie/token/localStorage injection
Be skeptical. Apply critical thinking, sequential thinking. Every claim needs traced proof and a stated confidence percentage (act only at >80% confidence).
Pre-read (design system): Load `designSystem.canonicalDoc` + `tokenFiles` from `docs/project-config.json` so visual/style assertions reference real token names (`--brand-*`, `$brand-*`) instead of guesses.
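As a sketch of that pre-read step, the token paths can be pulled out with a few lines of Node. The field names `designSystem.canonicalDoc` and `tokenFiles` follow the text above; a real `docs/project-config.json` may have a different shape, so the sample config written here is purely illustrative.

```shell
# Create a minimal sample config with the assumed shape, then read it back.
mkdir -p docs
cat > docs/project-config.json <<'EOF'
{"designSystem":{"canonicalDoc":"docs/design-system.md","tokenFiles":["styles/_tokens.scss","styles/_brand.scss"]}}
EOF

# Print the canonical doc and each token file so style assertions can cite real names.
node -e '
const cfg = JSON.parse(require("fs").readFileSync("docs/project-config.json", "utf8"));
const ds = cfg.designSystem || {};
console.log("canonicalDoc:", ds.canonicalDoc);
(ds.tokenFiles || []).forEach(f => console.log("tokenFile:", f));
'
```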
Activate the chrome-devtools skill.
Purpose
Run comprehensive UI tests on a website and generate a detailed report.
Arguments
- $1: URL - The URL of the website to test
- $2: OPTIONS - Optional test configuration (e.g., --headless, --mobile, --auth)
Testing Protected Routes (Authentication)
For testing protected routes that require authentication, follow this workflow:
Step 1: User Manual Login
Instruct the user to:
- Open the target site in their browser
- Log in manually with their credentials
- Open browser DevTools (F12) → Application tab → Cookies/Storage
Step 2: Extract Auth Credentials
Ask the user to provide one of:
- Cookies: Copy cookie values (name, value, domain)
- Access Token: Copy JWT/Bearer token from localStorage or cookies
- Session Storage: Copy relevant session keys
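For example, the copied cookie values can be saved to a JSON file and sanity-checked before injection. The file path and cookie fields below are illustrative; the only requirement is the JSON-array shape shown in Step 3.

```shell
# Save the copied cookie values in the JSON-array shape inject-auth.js expects.
cat > /tmp/auth-cookies.json <<'EOF'
[{"name":"session","value":"abc123","domain":".example.com","path":"/"}]
EOF

# Sanity-check: the file must parse as a JSON array of {name, value} objects.
node -e '
const cookies = JSON.parse(require("fs").readFileSync("/tmp/auth-cookies.json", "utf8"));
if (!Array.isArray(cookies) || cookies.some(c => !c.name || !c.value)) {
  throw new Error("expected a JSON array of {name, value} cookies");
}
console.log("ok:", cookies.length, "cookie(s)");
'
```

The validated contents can then be passed as `--cookies "$(cat /tmp/auth-cookies.json)"`.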
Step 3: Inject Authentication
Use the `inject-auth.js` script to inject credentials before testing:

```shell
cd $SKILL_DIR  # .claude/skills/chrome-devtools/scripts

# Option A: Inject cookies
node inject-auth.js --url https://example.com --cookies '[{"name":"session","value":"abc123","domain":".example.com"}]'

# Option B: Inject Bearer token
node inject-auth.js --url https://example.com --token "Bearer eyJhbGciOi..." --header Authorization --token-key access_token

# Option C: Inject localStorage
node inject-auth.js --url https://example.com --local-storage '{"auth_token":"xyz","user_id":"123"}'

# Combined (cookies + localStorage)
node inject-auth.js --url https://example.com --cookies '[{"name":"session","value":"abc"}]' --local-storage '{"user":"data"}'
```
Step 4: Run Tests
After auth injection, the browser session persists. Run tests normally:
```shell
# Navigate and screenshot protected pages
node navigate.js --url https://example.com/dashboard
node screenshot.js --url https://example.com/profile --output profile.png

# The auth session persists until --close true is used
node screenshot.js --url https://example.com/settings --output settings.png --close true
```
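The per-page captures can be batched over the routes discovered during the audit. In this sketch the route list and report directory are assumptions made for illustration, and the actual `node screenshot.js` call is commented out so the loop runs standalone:

```shell
# Capture one screenshot per protected route into a single report directory.
REPORT_DIR=./test-ui-report/screenshots
mkdir -p "$REPORT_DIR"

for route in dashboard profile settings; do
  echo "capturing https://example.com/$route -> $REPORT_DIR/$route.png"
  # node screenshot.js --url "https://example.com/$route" --output "$REPORT_DIR/$route.png"
done
```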
Auth Script Options
- `--cookies '<json>'`: Inject cookies (JSON array)
- `--token '<token>'`: Inject Bearer token
- `--token-key '<key>'`: localStorage key for token (default: `access_token`)
- `--header '<name>'`: Set HTTP header with token (e.g., `Authorization`)
- `--local-storage '<json>'`: Inject localStorage items
- `--session-storage '<json>'`: Inject sessionStorage items
- `--reload true`: Reload page after injection
- `--clear true`: Clear saved auth session
Workflow
- Use the `planning` skill to organize the test plan & report in the current project directory.
- All screenshots should be saved in the same report directory.
- Browse $URL with the specified $OPTIONS; discover all pages, components, and endpoints.
- Create a test plan based on the discovered structure.
- Use multiple `tester` subagents or tool calls in parallel to test all pages, forms, navigation, user flows, accessibility, functionality, usability, responsive layouts, cross-browser compatibility, performance, security, SEO, etc.
- Use `ai-multimodal` to analyze all screenshots and visual elements.
- Generate a comprehensive report in Markdown format, embedding all screenshots directly in the report.
- Finally, respond to the user with a concise summary of findings and recommendations.
- Use the `AskUserQuestion` tool to ask if the user wants to preview the report with the `/preview` slash command.
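The workflow above implies one directory holding the report and every screenshot it embeds. A minimal sketch of that layout (the directory and file names are assumptions, not mandated by the skill):

```shell
# Initialize a report directory; screenshots live next to the Markdown report
# so relative image links in report.md resolve.
REPORT_DIR=./plans/reports/test-ui-example-com
mkdir -p "$REPORT_DIR/screenshots"
touch "$REPORT_DIR/report.md"

ls "$REPORT_DIR"
```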
Output Requirements
How to write reports:
- Format: Use clear, structured Markdown with headers, lists, and code blocks where appropriate
- Include the test results summary, key findings, and screenshot references
- IMPORTANT: Ensure token efficiency while maintaining high quality.
- IMPORTANT: Sacrifice grammar for the sake of concision when writing reports.
- IMPORTANT: In reports, list any unresolved questions at the end, if any.
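One possible report skeleton following these rules, written as a heredoc; the section names and table columns are illustrative, not required:

```shell
# Write a minimal report skeleton; a real report embeds actual findings and screenshots.
cat > report.md <<'EOF'
# UI Test Report: example.com

## Summary
| Area          | Status | Issues |
|---------------|--------|--------|
| Accessibility | PASS   | 0      |
| Performance   | WARN   | 2      |

## Key Findings
1. ...

## Screenshots
![Dashboard](screenshots/dashboard.png)

## Unresolved Questions
- ...
EOF
echo "wrote report.md"
```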
IMPORTANT: Do not start implementing the fixes.
IMPORTANT: Analyze the skills catalog and activate the skills that are needed for the task during the process.
Closing Reminders
- IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting
- IMPORTANT MUST ATTENTION search codebase for 3+ similar patterns before creating new code
- IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim (confidence >80% to act)
- IMPORTANT MUST ATTENTION add a final review todo task to verify work quality
- MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:

<!-- SYNC:evidence-based-reasoning:reminder -->
- IMPORTANT MUST ATTENTION cite `file:line` evidence for every claim (confidence >80% to act). NEVER speculate without proof.
<!-- /SYNC:evidence-based-reasoning:reminder -->
- IMPORTANT MUST ATTENTION READ `CLAUDE.md` before starting
<!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
<!-- /SYNC:critical-thinking-mindset:reminder -->
<!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
<!-- /SYNC:ai-mistake-prevention:reminder -->