production-readiness

Run a comprehensive production readiness audit. Use when a user wants to check if their project is ready for deployment. Covers security, visual QA, code quality, testing, error handling, configuration/build, performance, and accessibility.

install
source · Clone the upstream repo
git clone https://github.com/Meghshyams/Production-Readiness
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/Meghshyams/Production-Readiness "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/production-readiness" ~/.claude/skills/meghshyams-production-readiness-production-readiness && rm -rf "$T"
manifest: skills/production-readiness/SKILL.md
source content

Production Readiness Audit

You are a senior engineer and QA tester performing a final production readiness review. Your job is to systematically evaluate the project across 8 pillars and produce an actionable report.

Arguments

  • $ARGUMENTS can include:
    • --skip=phase1,phase2 — skip specific phases (e.g., --skip=visual,performance)
    • --only=phase1,phase2 — run only specific phases (e.g., --only=security,testing)
    • --port=NNNN — override the dev server port (default: auto-detect)
    • --fresh — ignore any cached results and run all phases from scratch
    • --cached — display the last cached report without running anything (quick review)
    • No arguments = run all 8 phases (with smart caching if available)
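
A minimal sketch of how these flags could be parsed, assuming simple whitespace-separated arguments (the variable names and the sample $ARGUMENTS value are illustrative, not part of the skill):

```shell
# Illustrative parsing of the audit flags; the sample value stands in for
# whatever the user actually passed.
ARGUMENTS="--skip=visual,performance --port=4000"

SKIP="" ONLY="" PORT="" FRESH=0 CACHED=0
for arg in $ARGUMENTS; do
  case "$arg" in
    --skip=*) SKIP="${arg#--skip=}" ;;
    --only=*) ONLY="${arg#--only=}" ;;
    --port=*) PORT="${arg#--port=}" ;;
    --fresh)  FRESH=1 ;;
    --cached) CACHED=1 ;;
  esac
done

echo "skip=$SKIP only=$ONLY port=$PORT fresh=$FRESH cached=$CACHED"
```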

Phase names: security, visual, quality, testing, build, errors, performance, accessibility

Execution Flow

Progress Tracking

Before starting, create a task for each phase that will run using TaskCreate. Mark each task in_progress when starting and completed when done. This gives the user real-time visibility into audit progress.

Parallel Execution Strategy

After Phase 1 (Detection) completes, the following phases are independent and can run concurrently:

  • Group A: Security (Phase 2) + Code Quality (Phase 3) + Error Handling (Phase 5)
  • Group B: Testing (Phase 4) — may need dev server running
  • Group C: Configuration & Build (Phase 6)
  • Group D: Performance (Phase 8) + Accessibility (Phase 10)
  • Group E: Visual QA (Phase 7) — requires build to pass and dev server running

Run Groups A, B, C, and D concurrently where possible. Group E depends on a successful build (Phase 6). Use the Agent tool to dispatch independent phase groups as subagents for faster execution.

Phase 9 (Save) always runs last after all other phases complete.
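
The grouping above can be sketched with shell background jobs; the run_group function and its echoes are stand-ins for real phase work dispatched to subagents:

```shell
# Stand-in for dispatching phase groups: each background job represents one
# independent group; `wait` blocks until all of them finish.
run_group() { echo "start: $1"; echo "done: $1"; }

OUT=$(
  run_group "A security/quality/errors" &
  run_group "B testing" &
  run_group "C build" &
  run_group "D performance/accessibility" &
  wait
)
DONE=$(echo "$OUT" | grep -c '^done:')
echo "independent groups finished: $DONE"

# Group E only starts once the build from Phase 6 has succeeded.
run_group "E visual QA"
```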

Phase 1: Detection & Cache Status

Detect the project stack (framework, package manager, test runner, lint tool, ORM, routes, screenshot capability, dev server, build command, CI/CD). Present findings, check cache status, and confirm with the user before proceeding.

→ See phases/01-detect.md

Phase 2: Security Audit

12 checks covering hardcoded secrets, environment safety, dependency vulnerabilities, input validation, authentication, rate limiting, security headers, error exposure, SQL injection, XSS, CORS configuration, and dependency licenses.

→ See phases/02-security.md
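
One of the twelve checks, hardcoded secrets, might reduce to a pattern scan like this. The fixture file, keyword list, and regex are illustrative; a real audit would use a proper secret-scanning rule set and skip .gitignore'd paths:

```shell
# Create a throwaway fixture so the sketch is self-contained.
PROJ=$(mktemp -d)
cat > "$PROJ/config.js" <<'EOF'
const apiKey = "sk_live_51Habc123";    // hardcoded string literal: flag it
const region = process.env.AWS_REGION; // read from the environment: fine
EOF

# Flag assignments of a string literal to a secret-looking identifier.
HITS=$(grep -rniE "(api[_-]?key|secret|password|token)[[:space:]]*(=|:)[[:space:]]*['\"]" \
  "$PROJ" --include='*.js' | wc -l | tr -d ' ')
echo "potential hardcoded secrets: $HITS"
rm -rf "$PROJ"
```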

Phase 3: Code Quality

5 checks covering debug statements, unresolved tech debt (TODO/FIXME), lint errors, type checking, and unused dependencies.

→ See phases/03-quality.md
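
The debug-statement and tech-debt checks could look like this in shell (the fixture file and patterns are illustrative):

```shell
# Self-contained fixture: one leftover debug call, one unresolved TODO.
PROJ=$(mktemp -d)
cat > "$PROJ/app.js" <<'EOF'
console.log("user loaded"); // leftover debug output
// TODO: remove this fallback before launch
EOF

DEBUGS=$(grep -rnE 'console\.(log|debug)' "$PROJ" --include='*.js' | wc -l | tr -d ' ')
TODOS=$(grep -rnE 'TODO|FIXME' "$PROJ" --include='*.js' | wc -l | tr -d ' ')
echo "debug statements: $DEBUGS, TODO/FIXME markers: $TODOS"
rm -rf "$PROJ"
```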

Phase 4: Testing

3 checks covering test suite execution, coverage metrics, and critical path coverage (auth, payments, mutations).

→ See phases/04-testing.md
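
The critical-path check might boil down to asking whether each sensitive area has any test files at all. The tests/ layout and *.test.* naming convention here are assumptions, not the skill's actual rules:

```shell
# Fixture project with a test for auth but nothing for payments.
PROJ=$(mktemp -d)
mkdir -p "$PROJ/tests"
touch "$PROJ/tests/auth.test.ts" "$PROJ/tests/profile.test.ts"

MISSING=""
for area in auth payment; do
  if find "$PROJ" -name "*${area}*.test.*" | grep -q .; then
    echo "$area: has tests"
  else
    echo "$area: no tests found"
    MISSING="$MISSING $area"
  fi
done
rm -rf "$PROJ"
```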

Phase 5: Error Handling & Observability

5 checks covering global error boundaries, error tracking integration, health check endpoints, structured logging, and sensitive data in logs.

→ See phases/05-errors.md
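
The sensitive-data-in-logs check could be approximated with a scan like this (logger call shape and field names are illustrative):

```shell
# Fixture: one safe log line, one that prints a credential.
PROJ=$(mktemp -d)
cat > "$PROJ/auth.js" <<'EOF'
logger.info("login ok", { email: user.email });
logger.debug("session", { token: session.token }); // leaks a credential
EOF

# Flag logger calls whose arguments mention a sensitive field.
LEAKS=$(grep -rniE 'log(ger)?\.[a-z]+\(.*(password|token|secret|ssn)' "$PROJ" \
  | wc -l | tr -d ' ')
echo "possible sensitive log lines: $LEAKS"
rm -rf "$PROJ"
```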

Phase 6: Configuration & Build

9 checks covering build verification, environment documentation, source maps, development leaks, HTTPS redirects, Docker configuration, Docker Compose security, container orchestration, and platform deployment configs.

→ See phases/06-build.md
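
The source-map check from this phase could be sketched as follows; the output directory and file names are simulated, where a real run would inspect the actual build output:

```shell
# Simulated build output containing a leaked source map.
DIST=$(mktemp -d)
printf 'console.log(1);\n' > "$DIST/main.js"
printf '{"version":3}\n'   > "$DIST/main.js.map"

MAPS=$(find "$DIST" -name '*.map' | wc -l | tr -d ' ')
if [ "$MAPS" -gt 0 ]; then
  echo "WARNING: $MAPS source map(s) would ship to production"
fi
rm -rf "$DIST"
```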

Phase 7: Visual QA

Screenshot collection and visual inspection at desktop (1440x900) and mobile (375x812) viewports. Evaluates layout, responsiveness, content, visual consistency, and broken UI. Requires Playwright.

→ See phases/07-visual.md

Phase 8: Performance (Static Analysis)

9 checks covering image optimization, bundle size, caching headers, database query patterns, lazy loading, Core Web Vitals, font optimization, third-party scripts, and API response size.

→ See phases/08-performance.md
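
The image-optimization check might start with a size threshold scan (the 500 KB cutoff and the fixture files are illustrative):

```shell
# Fixture assets: one oversized image, one small one.
ASSETS=$(mktemp -d)
dd if=/dev/zero of="$ASSETS/hero.png" bs=1024 count=600 2>/dev/null  # 600 KB
dd if=/dev/zero of="$ASSETS/icon.png" bs=1024 count=4   2>/dev/null  # 4 KB

LARGE=$(find "$ASSETS" \( -name '*.png' -o -name '*.jpg' \) -size +500k \
  | wc -l | tr -d ' ')
echo "images over 500 KB: $LARGE"
rm -rf "$ASSETS"
```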

Phase 9: Save Results

Cache all results for future incremental reruns and write the report file. This phase is silent — not included in the report.

→ See phases/09-save.md

Phase 10: Accessibility

6 checks covering semantic HTML, ARIA labels, keyboard navigation, color contrast, screen reader support, and automated accessibility testing. Applies to frontend projects only.

→ See phases/10-accessibility.md
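
The alt-text portion of these checks might start with a crude scan like this; real audits use axe-core or similar, and this grep only catches single-line img tags:

```shell
# Fixture page: one img with alt text, one without.
PROJ=$(mktemp -d)
cat > "$PROJ/page.html" <<'EOF'
<img src="/logo.png" alt="Company logo">
<img src="/banner.png">
EOF

# Count <img> lines that never mention an alt attribute.
MISSING=$(grep '<img' "$PROJ/page.html" | grep -cv 'alt=')
echo "img tags missing alt: $MISSING"
rm -rf "$PROJ"
```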


Supporting References

  • Cache Management — cache file structure, on-run behavior, phase-to-file-pattern mapping: cache-management.md
  • Report Format — report template, verdict logic, cached labels, issue templates: report-format.md

Important Guidelines

  1. Be specific: Always include file paths and line numbers for issues.
  2. Be actionable: Every issue must have a concrete fix suggestion.
  3. Don't cry wolf: Only flag real issues. If something looks intentional (like console.log in a logger utility), note it as INFO, not WARNING.
  4. Acknowledge good practices: The "What's Good" section is required. Engineers need to know what they're doing right.
  5. Adapt to the stack: If a check doesn't apply to the detected stack, skip it and note why.
  6. Respect .gitignore: Never scan node_modules, build outputs, or other ignored directories.
  7. Time-box visual QA: If there are more than 30 pages, prioritize landing pages, auth flows, and main user journeys. Note which pages were skipped.
  8. Parallelize after Detection: Detection (Phase 1) must complete first. Then dispatch independent phase groups concurrently using the Agent tool as subagents. Build must succeed before Visual QA. Phase 9 (Save) always runs last.
  9. Handle failures gracefully: If a tool or command fails, note it in the report and continue with other phases. Don't let one failure block the entire audit.
  10. Use parallel tool calls: When checking multiple independent things (e.g., different security patterns), use parallel grep/glob calls to speed up the audit.
  11. Cache conservatively: Only use cached results when confident nothing changed. When in doubt, rerun the phase. Production readiness must not be compromised for speed.
  12. Suggest gitignoring cache: If .production-readiness/ is not in .gitignore, suggest adding it — these are local audit artifacts, not meant to be committed.
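
Guideline 12 can be applied idempotently, as in this sketch (the fixture .gitignore is illustrative):

```shell
# Fixture .gitignore that does not yet list the cache dir.
PROJ=$(mktemp -d)
printf 'node_modules/\n' > "$PROJ/.gitignore"

# Append the cache dir only if it is not already listed as a whole line.
if ! grep -qx '\.production-readiness/' "$PROJ/.gitignore"; then
  echo '.production-readiness/' >> "$PROJ/.gitignore"
fi
COUNT=$(grep -cx '\.production-readiness/' "$PROJ/.gitignore")
echo "entries for cache dir: $COUNT"
rm -rf "$PROJ"
```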