EasyPlatform architecture-design

[Architecture] Full solution architecture: backend + frontend patterns, design patterns, library ecosystem, CI/CD, deployment, monitoring, testing, code quality, dependency risk. Compare top 3 approaches per concern with recommendation.

Install

Clone the upstream repo:

```shell
git clone https://github.com/duc01226/EasyPlatform
```

Claude Code: install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/architecture-design" ~/.claude/skills/duc01226-easyplatform-architecture-design && rm -rf "$T"
```

Manifest: `.claude/skills/architecture-design/SKILL.md`

Source content
<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->

MANDATORY IMPORTANT MUST ATTENTION use `TaskCreate` to break ALL work into small tasks BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION use `AskUserQuestion` at EVERY decision point — never assume user preferences. MANDATORY IMPORTANT MUST ATTENTION research top 3 options per architecture concern, compare with evidence, present report with recommendation + confidence %.

External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — this prevents context loss and serves as a deliverable.

Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act; <80% must verify first).

Quick Summary

Goal: Act as a solution architect — research, critically analyze, and recommend the complete technical architecture for a project or feature. Cover ALL architecture concerns: backend, frontend, design patterns, library ecosystem, testing strategy, CI/CD, deployment, monitoring, code quality, and dependency management. Produce a comprehensive comparison report with actionable recommendations.

Workflow (12 steps):

  1. Load Context — Read domain model, tech stack, business evaluation, refined PBI
  2. Derive Architecture Requirements — Map business/domain complexity to architecture constraints
  3. Backend Architecture — Research top 3 backend architecture styles + design patterns
  4. Frontend Architecture — Research top 3 frontend architecture styles + design patterns
  5. Library Ecosystem Research — Best-practice libraries per concern (validation, caching, logging, utils, etc.)
  6. Testing Architecture — Unit, integration, E2E, performance testing frameworks + strategy
  7. CI/CD & Deployment — Pipeline design, containerization, orchestration, IaC
  8. Observability & Monitoring — Logging, metrics, tracing, alerting stack
  9. Code Quality & Clean Code — Linters, analyzers, formatters, enforcement tooling
  10. Dependency Risk Assessment — Package health, obsolescence risk, maintenance cost
  11. Generate Report — Full architecture decision report with all recommendations
  12. User Validation — Present findings, ask 8-12 questions, confirm all decisions

Key Rules:

  • MANDATORY IMPORTANT MUST ATTENTION research minimum 3 options per architecture concern with web evidence
  • MANDATORY IMPORTANT MUST ATTENTION include confidence % with evidence for every recommendation
  • MANDATORY IMPORTANT MUST ATTENTION run user validation interview at end (never skip)
  • Delegate to the `solution-architect` agent for complex architecture decisions
  • All claims must cite sources (URL, benchmark, case study, or codebase evidence)
  • Never recommend based on familiarity alone — evidence required

Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (above 80% to act).


Step 1: Load Context

Read artifacts from prior workflow steps (search in `plans/` and `team-artifacts/`):

  • Domain model / ERD (complexity, bounded contexts, aggregate count)
  • Tech stack decisions (confirmed languages, frameworks, databases)
  • Business evaluation (scale, constraints, compliance)
  • Refined PBI (scope, acceptance criteria)
  • Discovery interview (team skills, experience level)

Extract and summarize:

| Signal | Value | Source |
| --- | --- | --- |
| Bounded contexts | ... | domain model |
| Aggregate count | ... | domain model |
| Cross-context events | ... | domain model |
| Confirmed tech stack | ... | tech stack phase |
| Expected scale | ... | business eval |
| Team architecture exp. | ... | discovery |
| Compliance requirements | ... | business eval |
| Real-time needs | Yes/No | refined PBI |
| Integration complexity | Low/Med/High | domain model |
| Deployment target | ... | business eval |

Step 2: Derive Architecture Requirements

Map signals to architecture constraints:

| Signal | Architecture Requirement | Priority |
| --- | --- | --- |
| Many bounded contexts | Clear module boundaries, context isolation | Must |
| High scale | Horizontal scaling, stateless services, caching strategy | Must |
| Complex domain | Rich domain model, separation of domain from infra | Must |
| Cross-context events | Event-driven communication, eventual consistency | Must |
| Small team | Low ceremony, fewer layers, convention over configuration | Should |
| Compliance | Audit trail, immutable events, access control layers | Must |
| Real-time | Event sourcing or pub/sub, WebSocket/SSE support | Should |
| High integration complexity | Anti-corruption layers, adapter pattern, API gateway | Should |
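The signal-to-requirement mapping above can be sketched as a small pure lookup; a minimal illustration, assuming invented signal keys and a partial map (extend per project):

```typescript
// Hypothetical sketch of Step 2: derive architecture requirements from
// project signals. Names (signalMap, Requirement) are illustrative only.
type Priority = "Must" | "Should";

interface Requirement {
  need: string;
  priority: Priority;
}

// Partial mapping table mirroring the rows above.
const signalMap: Record<string, Requirement> = {
  manyBoundedContexts: { need: "Clear module boundaries, context isolation", priority: "Must" },
  highScale: { need: "Horizontal scaling, stateless services, caching", priority: "Must" },
  smallTeam: { need: "Low ceremony, fewer layers, convention over configuration", priority: "Should" },
};

// Unknown signals are ignored rather than guessed at.
function deriveRequirements(signals: string[]): Requirement[] {
  return signals.flatMap((s) => (signalMap[s] ? [signalMap[s]] : []));
}
```

In practice this derivation is judgment, not code; the sketch only shows that the mapping is a deterministic table worth writing down.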

MANDATORY IMPORTANT MUST ATTENTION validate derived requirements with the user via `AskUserQuestion` before proceeding.


Step 3: Backend Architecture

3A: Architecture Styles

WebSearch top 3 backend architecture styles. Candidates:

| Style | Best For | Research Focus |
| --- | --- | --- |
| Clean Architecture | Complex domains, long-lived projects | Dependency rule, testability, flexibility |
| Hexagonal (Ports & Adapters) | Integration-heavy, multiple I/O adapters | Port contracts, adapter isolation |
| Vertical Slice | Feature-focused teams, rapid delivery | Slice isolation, code locality |
| Modular Monolith | Starting simple, eventual decomposition | Module boundaries, migration path |
| Microservices | Large teams, independent deployment | Service boundaries, operational overhead |
| CQRS + Event Sourcing | Audit-heavy, complex queries | Read/write separation, event store |
| Layered (N-Tier) | Simple CRUD, small teams | Layer responsibilities, coupling risk |

3B: Backend Design Patterns

Evaluate applicability per layer:

| Pattern | Layer | When to Apply |
| --- | --- | --- |
| Repository | Data Access | Abstract data store, enable testing |
| CQRS | Application | Separate read/write models, complex queries |
| Mediator | Application | Decouple handlers from controllers |
| Strategy | Domain/App | Multiple interchangeable algorithms |
| Observer/Events | Domain | Cross-aggregate side effects |
| Factory | Domain | Complex object creation with invariants |
| Decorator | Cross-cutting | Add behavior without modifying (logging, caching) |
| Adapter | Infrastructure | Isolate external dependencies |
| Specification | Domain | Composable business rules, complex filtering |
| Unit of Work | Data Access | Transaction management across repositories |
| Saga/Orchestration | Cross-service | Distributed transactions, compensating actions |
| Outbox | Messaging | Reliable event publishing with DB transactions |
| Circuit Breaker | Infrastructure | External service resilience |

For each recommended pattern, document: Apply to, Why, Example, Risk if skipped.
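As one worked illustration, the Specification row (composable business rules) can be sketched in a few lines; the `Order` shape, `minTotal`, and `inCountry` rules are invented for the example, not part of any real codebase:

```typescript
// Hedged sketch of the Specification pattern: business rules as small
// predicates that compose with boolean combinators.
type Spec<T> = (candidate: T) => boolean;

const and = <T>(a: Spec<T>, b: Spec<T>): Spec<T> => (c) => a(c) && b(c);
const or = <T>(a: Spec<T>, b: Spec<T>): Spec<T> => (c) => a(c) || b(c);

interface Order { total: number; country: string; }

const minTotal = (min: number): Spec<Order> => (o) => o.total >= min;
const inCountry = (cc: string): Spec<Order> => (o) => o.country === cc;

// Composed rules read like the business statement they encode.
const freeShipping = and(minTotal(100), inCountry("US"));
const expedited = or(inCountry("US"), minTotal(500));
```

The risk-if-skipped column for this row would note that without composition, the same filtering logic tends to be duplicated across queries and validators.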


Step 4: Frontend Architecture

4A: Architecture Styles

WebSearch top 3 frontend architecture styles. Candidates:

| Style | Best For | Research Focus |
| --- | --- | --- |
| MVVM | Data-binding heavy, forms-over-data apps | ViewModel responsibility, two-way binding |
| MVC | Server-rendered, traditional web apps | Controller routing, view separation |
| Component Architecture | Modern SPA (React, Angular, Vue) | Component isolation, props/events, reuse |
| Reactive Store (Redux) | Complex state, multi-component sync | Single source of truth, immutable state |
| Signal-based Reactivity | Fine-grained reactivity (Angular 19, Solid) | Granular updates, no zone.js overhead |
| Micro Frontends | Multiple teams, independent deployment | Module federation, routing, shared state |
| Feature-based Modules | Large monolith SPA, lazy loading | Feature boundaries, route-level splitting |
| Server Components (RSC) | SEO, initial load performance | Server/client boundary, streaming |

4B: Frontend Design Patterns

| Pattern | Layer | When to Apply |
| --- | --- | --- |
| Container/Presentational | Component | Separate logic from UI rendering |
| Reactive Store | State | Centralized state, cross-component communication |
| Facade Service | Service | Simplify complex API interactions |
| Adapter/Mapper | Data | Transform API response to view model |
| Observer (RxJS) | Async | Event streams, real-time data, debounce/throttle |
| Strategy (renderers) | UI | Conditional rendering strategies per entity type |
| Composite (components) | UI | Tree structures, recursive components |
| Command (undo/redo) | UX | Form wizards, canvas editors, undoable actions |
| Lazy Loading | Performance | Route/module-level code splitting |
| Virtual Scrolling | Performance | Large lists, infinite scroll |
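The Adapter/Mapper row is the simplest to illustrate: keep raw API shapes out of components by mapping them to a view model at the boundary. A minimal sketch, where `ApiUser` and `UserVm` are assumed shapes invented for the example:

```typescript
// Illustrative Adapter/Mapper: translate a snake_case API payload into the
// shape the UI actually renders.
interface ApiUser { first_name: string; last_name: string; created_at: string; }
interface UserVm { fullName: string; memberSince: number; }

function toUserVm(api: ApiUser): UserVm {
  return {
    fullName: `${api.first_name} ${api.last_name}`,
    memberSince: new Date(api.created_at).getFullYear(),
  };
}
```

Because only the mapper knows the API shape, a backend field rename touches one function instead of every component.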

Step 4C: UI System Architecture

Skip if: Backend-only project, no frontend component.

Research and recommend the project's design system architecture. Use `AskUserQuestion` for each decision.

4C-1: Styling Approach

WebSearch top 3 styling approaches for the confirmed frontend framework:

| Approach | Best For | Research Focus |
| --- | --- | --- |
| Utility-first (Tailwind CSS) | Rapid prototyping, design enforcement | JIT, custom config, design tokens |
| CSS Modules / Scoped CSS | Component isolation, no global conflicts | Naming, composition patterns |
| SCSS/SASS with BEM | Complex theming, token variables | BEM methodology, mixin libraries |
| CSS-in-JS | Dynamic styling, theme providers | Runtime perf, SSR support |
| CSS Custom Properties | Native theming, framework-agnostic | Browser support, fallback strategy |

4B-2: Design Token Strategy

DecisionOptionsDefault
Token formatCSS custom properties / JSON / SCSS variablesCSS custom properties
Token categoriesColor, spacing, typography, breakpoints, shadows, z-indexAll
Token namingSemantic (
--color-primary
) vs Functional (
--btn-bg
)
Semantic first
ThemingLight/dark toggle / Multi-brand / Single themeSingle + dark mode
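The "semantic first" default above can be made concrete with a thin sketch: raw palette values sit behind semantic names, and CSS custom properties are generated from the semantic layer. The `palette` and `tokens` objects are assumptions for illustration:

```typescript
// Sketch of a semantic token layer. A rebrand edits the palette or the
// semantic map, never individual components.
const palette = { blue600: "#2563eb", gray900: "#111827" } as const;

const tokens = {
  colorPrimary: palette.blue600,
  colorText: palette.gray900,
  spacingMd: "16px",
} as const;

// Emit CSS custom properties (the table's default token format) from the map:
// camelCase keys become kebab-case variable names.
function toCssVars(t: Record<string, string>): string {
  return Object.entries(t)
    .map(([k, v]) => `--${k.replace(/[A-Z]/g, (m) => "-" + m.toLowerCase())}: ${v};`)
    .join("\n");
}
```

Components then reference `var(--color-primary)` rather than a hex value, which is what makes the light/dark theming row cheap to add later.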

4C-3: Component Library Strategy

| Decision | Options | Default |
| --- | --- | --- |
| Library | Build custom / Headless (Radix, Headless UI) / Full kit (MUI, Ant, PrimeNG) | Based on team and timeline |
| Component tiers | Common → Domain-Shared → Page (per ui-wireframe-protocol) | Standard 3-tier |
| Documentation | Storybook / Docusaurus / In-code only | Based on team size |

4C-4: Responsive Strategy

| Decision | Options | Default |
| --- | --- | --- |
| Approach | Mobile-first / Desktop-first / Adaptive | Mobile-first |
| Breakpoints | 320/768/1024/1280 / Custom | Standard |
| Grid system | CSS Grid / Flexbox / Framework grid | CSS Grid + Flexbox |

MANDATORY IMPORTANT MUST ATTENTION validate all UI system decisions with the user via `AskUserQuestion` before proceeding to Step 5.


Step 5: Library Ecosystem Research

For EACH concern below, WebSearch top 3 library options for the confirmed tech stack. Evaluate: maturity, community, bundle size, maintenance activity, license, learning curve.

Library Concerns Checklist

| Concern | What to Research | Evaluation Criteria |
| --- | --- | --- |
| Validation | Input validation, schema validation, form validation | Type safety, composability, error messages |
| HTTP Client / API Layer | REST client, GraphQL client, API code generation | Interceptors, retry, caching, type generation |
| State Management | Global store, local state, server state caching | DevTools, SSR support, bundle size |
| Utilities / Helpers | Date/time, collections, deep clone, string manipulation | Tree-shakability, size, native alternatives |
| Caching | In-memory cache, distributed cache, HTTP cache, query cache | TTL, invalidation, persistence |
| Logging | Structured logging, log levels, log aggregation | Structured output, transports, performance |
| Error Handling | Global error boundary, error tracking, crash reporting | Source maps, breadcrumbs, alerting integration |
| Authentication / AuthZ | JWT, OAuth, RBAC/ABAC, session management | Standards compliance, SSO, token refresh |
| File Upload / Storage | Multipart upload, cloud storage SDK, image processing | Streaming, resumable, size limits |
| Real-time | WebSocket, SSE, SignalR, Socket.io | Reconnection, scaling, protocol support |
| Internationalization | i18n, l10n, pluralization, date/number formatting | ICU support, lazy loading, extraction tools |
| PDF / Export | PDF generation, Excel export, CSV | Server-side vs client-side, template support |

Per-Library Evaluation Template

### {Concern}: Top 3 Options

| Criteria         | Option A          | Option B | Option C |
| ---------------- | ----------------- | -------- | -------- |
| GitHub Stars     | ...               | ...      | ...      |
| Last Release     | ...               | ...      | ...      |
| Bundle Size      | ...               | ...      | ...      |
| Weekly Downloads | ...               | ...      | ...      |
| License          | ...               | ...      | ...      |
| Maintenance      | Active/Slow/Stale | ...      | ...      |
| Learning Curve   | Low/Med/High      | ...      | ...      |

**Recommendation:** {Option} — Confidence: {X}%
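The recommendation row in the template is easier to defend when the comparison is an explicit weighted score rather than a gut call. A toy sketch, where the criteria names and weights are arbitrary assumptions to be tuned per project:

```typescript
// Hypothetical weighted scoring over the evaluation template's criteria.
// Each candidate is scored 0..5 per criterion; weights encode what the
// project actually cares about.
type Scores = Record<string, number>;

const weights: Scores = { maturity: 2, maintenance: 3, bundleSize: 1 };

function weightedScore(candidate: Scores): number {
  return Object.entries(weights).reduce(
    (sum, [criterion, w]) => sum + w * (candidate[criterion] ?? 0),
    0
  );
}
```

Publishing the weights alongside the scores in the report makes the confidence percentage auditable.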

Step 6: Testing Architecture

Research best testing tools and strategy for the confirmed tech stack:

| Testing Layer | What to Research | Top Candidates to Compare |
| --- | --- | --- |
| Unit Testing | Test runner, assertion library, mocking framework | Jest/Vitest/xUnit/NUnit, mocking |
| Integration Testing | API testing, DB testing, service testing | Supertest, TestContainers, WebAppFactory |
| E2E Testing | Browser automation, BDD, visual regression | Playwright/Cypress/Selenium, SpecFlow |
| Performance Testing | Load testing, stress testing, benchmarking | k6/Artillery/JMeter/NBomber, BenchmarkDotNet |
| Contract Testing | API contract validation between services | Pact, Dredd, Spectral |
| Mutation Testing | Test quality validation | Stryker, PITest |
| Coverage | Code coverage collection, reporting, enforcement | Istanbul/Coverlet, SonarQube |
| Test Data | Factories, fixtures, seeders, fakers | Bogus/AutoFixture/Faker.js |

Test Strategy Template

### Test Pyramid

- **Unit (70%):** {framework} — {what to test}
- **Integration (20%):** {framework} — {what to test}
- **E2E (10%):** {framework} — {what to test}

### Coverage Targets

- Unit: {X}% | Integration: {X}% | E2E: critical paths only
- Enforcement: {tool} in CI pipeline, fail build below threshold
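The Test Data row in the table above (factories, fixtures) can be sketched framework-free: a factory builds a valid default object and lets each test override only the field it cares about. The `Order` shape and defaults here are invented for illustration:

```typescript
// Hypothetical test-data factory: valid defaults plus per-test overrides,
// so tests state only what matters to them.
interface Order { id: string; total: number; status: "new" | "paid"; }

let seq = 0;
function orderFactory(overrides: Partial<Order> = {}): Order {
  seq += 1; // unique ids avoid accidental coupling between tests
  return { id: `order-${seq}`, total: 100, status: "new", ...overrides };
}
```

Libraries like Bogus or Faker.js layer realistic random values on top of this same pattern.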

Step 7: CI/CD & Deployment

Research deployment architecture and CI/CD tooling:

| Concern | What to Research | Top Candidates to Compare |
| --- | --- | --- |
| CI/CD Platform | Pipeline orchestration, parallelism, caching | GitHub Actions/Azure DevOps/GitLab CI/Jenkins |
| Containerization | Container runtime, image building, registry | Docker/Podman, BuildKit, ACR/ECR/GHCR |
| Orchestration | Container orchestration, service mesh, scaling | Kubernetes/Docker Compose/ECS/Nomad |
| IaC (Infra as Code) | Infrastructure provisioning, drift detection | Terraform/Pulumi/Bicep/CDK |
| Artifact Management | Package registry, versioning, vulnerability scanning | NuGet/npm/Artifactory/GitHub Packages |
| Feature Flags | Progressive rollout, A/B testing, kill switches | LaunchDarkly/Unleash/Flagsmith |
| Secret Management | Vault, key rotation, environment variables | Azure KeyVault/HashiCorp Vault/SOPS |
| Database Migration | Schema versioning, rollback, seed data | EF Migrations/Flyway/Liquibase/dbmate |

Deployment Strategy Comparison

| Strategy | Risk | Downtime | Complexity | Best For |
| --- | --- | --- | --- | --- |
| Blue-Green | Low | Zero | Medium | Critical services |
| Canary | Low | Zero | High | Gradual rollout |
| Rolling | Med | Zero | Low | Stateless services |
| Recreate | High | Yes | Low | Dev/staging environments |
| Feature Flags | Low | Zero | Medium | Feature-level control |
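The Canary row's "High complexity" rating comes largely from managing the traffic schedule. A toy sketch of the idea, with stage percentages that are placeholders rather than any platform's defaults:

```typescript
// Toy canary rollout schedule: route a growing share of traffic to the new
// version in fixed stages, advancing only when health checks pass.
const stages = [5, 25, 50, 100]; // percent of traffic sent to the canary

function canaryShare(stage: number): number {
  if (stage < 0) return 0; // rollout not started
  return stages[Math.min(stage, stages.length - 1)];
}
```

Real platforms (service meshes, LaunchDarkly-style flag systems) express the same schedule declaratively, plus automated rollback on error-rate regression.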

Step 8: Observability & Monitoring

| Concern | What to Research | Top Candidates to Compare |
| --- | --- | --- |
| Structured Logging | Log format, correlation IDs, log levels, aggregation | Serilog/NLog/Winston/Pino |
| Log Aggregation | Centralized log search, dashboards, alerts | ELK/Loki+Grafana/Datadog/Seq |
| Metrics | Application metrics, custom counters, histograms | Prometheus/OpenTelemetry/App Insights |
| Distributed Tracing | Request tracing across services, span visualization | Jaeger/Zipkin/OpenTelemetry/Tempo |
| APM | Application performance monitoring, auto-instrumentation | Datadog/New Relic/App Insights/Elastic |
| Alerting | Threshold alerts, anomaly detection, on-call routing | PagerDuty/OpsGenie/Grafana Alerting |
| Health Checks | Liveness, readiness, startup probes | AspNetCore.Diagnostics/Terminus |
| Uptime Monitoring | External availability monitoring, SLA tracking | UptimeRobot/Pingdom/Checkly |
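The correlation-ID item in the Structured Logging row is the piece that ties logs to traces. A minimal sketch of the idea; real stacks (Serilog enrichers, Pino child loggers) provide this, and the names here are illustrative only:

```typescript
// Sketch: every log entry emitted for one request carries the same
// correlation ID, so aggregated logs can be filtered per request.
interface LogEntry {
  level: "info" | "error";
  msg: string;
  correlationId: string;
  ts: string;
}

function makeLogger(correlationId: string) {
  const entries: LogEntry[] = []; // stand-in for a real transport
  return {
    entries,
    info(msg: string) {
      entries.push({ level: "info", msg, correlationId, ts: new Date().toISOString() });
    },
  };
}
```

In a distributed setup the same ID is propagated via a header (W3C `traceparent` in OpenTelemetry), which is what links the Logs and Traces pillars below.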

Observability Decision: 3 Pillars

### Recommended Observability Stack

| Pillar   | Tool   | Why         |
| -------- | ------ | ----------- |
| Logs     | {tool} | {rationale} |
| Metrics  | {tool} | {rationale} |
| Traces   | {tool} | {rationale} |
| Alerting | {tool} | {rationale} |

Step 9: Code Quality & Clean Code Enforcement

Research and recommend tooling for automated code quality:

| Concern | What to Research | Top Candidates to Compare |
| --- | --- | --- |
| Linter (Backend) | Static analysis, code style, bug detection | Roslyn Analyzers/SonarQube/StyleCop/ReSharper |
| Linter (Frontend) | JS/TS linting, accessibility, complexity | ESLint/Biome/oxlint |
| Formatter | Auto-formatting, consistent style | Prettier/dotnet-format/EditorConfig |
| Code Analyzer | Security scanning, complexity metrics, duplication | SonarQube/CodeClimate/Codacy |
| Pre-commit Hooks | Git hooks, staged file validation | Husky+lint-staged/pre-commit/Lefthook |
| Editor Config | Cross-IDE consistency | .editorconfig/IDE-specific configs |
| Architecture Rules | Layer dependency enforcement, naming conventions | ArchUnit/NetArchTest/Dependency-Cruiser |
| API Design Standards | OpenAPI validation, naming, versioning | Spectral/Redocly/swagger-lint |
| Commit Conventions | Commit message format, changelog generation | Commitlint/Conventional Commits |
| Code Review Automation | Automated PR review, suggestion bots | Danger.js/Reviewdog/CodeRabbit |

Enforcement Strategy

### Code Quality Gates

| Gate        | Tool   | Trigger        | Fail Criteria         |
| ----------- | ------ | -------------- | --------------------- |
| Pre-commit  | {tool} | git commit     | Lint errors, format   |
| PR Check    | {tool} | Pull request   | Coverage < X%, issues |
| CI Pipeline | {tool} | Push to branch | Build fail, test fail |
| Scheduled   | {tool} | Weekly/nightly | Security vulns, debt  |
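The "Coverage < X%" fail criterion in the PR gate is, stripped of tooling, a plain predicate over the coverage report. A sketch with placeholder thresholds (real pipelines express the same rule in the coverage tool's config):

```typescript
// Sketch of a coverage quality gate: the PR check fails when either metric
// drops below its threshold. 80/70 are illustrative defaults, not standards.
interface CoverageReport { lines: number; branches: number; }

function passesGate(r: CoverageReport, minLines = 80, minBranches = 70): boolean {
  return r.lines >= minLines && r.branches >= minBranches;
}
```

Keeping the thresholds in one versioned config (rather than scattered CI flags) makes a later "raise coverage to X%" decision a one-line change.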

Scaffold Handoff (MANDATORY — consumed by `/scaffold`)

After completing code quality research, produce this handoff table in the architecture report. The `/scaffold` skill reads this table to generate actual config files — without it, scaffold cannot auto-configure quality tooling.

### Scaffold Handoff — Tool Choices

| Concern        | Chosen Tool       | Config File | Rationale |
| -------------- | ----------------- | ----------- | --------- |
| Linter (FE)    | {tool}            | {file}      | {why}     |
| Linter (BE)    | {tool}            | {file}      | {why}     |
| Formatter      | {tool}            | {file}      | {why}     |
| Pre-commit     | {tool}            | {file}      | {why}     |
| Error handling | {pattern}         | {files}     | {why}     |
| Loading state  | {pattern}         | {files}     | {why}     |
| Docker         | {compose pattern} | {files}     | {why}     |

Also include: error handling strategy (4-layer pattern), loading state approach (global vs per-component), and Docker profile structure. Record specific tool choices in `docs/project-reference/` or `project-config.json`.


Step 10: Dependency Risk Assessment

For EVERY recommended library/package, evaluate maintenance and obsolescence risk:

Package Health Scorecard

| Criteria | Score (1-5) | How to Verify |
| --- | --- | --- |
| Last Release Date | ... | npm/NuGet page — stale if >12 months |
| Open Issues Ratio | ... | GitHub issues open vs closed |
| Maintainer Count | ... | Bus factor — single maintainer = high risk |
| Breaking Change Freq. | ... | Changelog — frequent major versions = churn cost |
| Dependency Depth | ... | `npm ls --depth` / dependency graph depth |
| Known Vulnerabilities | ... | Snyk/npm audit/GitHub Dependabot |
| License Compatibility | ... | SPDX identifier — check viral licenses (GPL) |
| Community Activity | ... | Monthly commits, PR merge rate, Discord/forums |
| Migration Path | ... | Can swap to alternative if abandoned? |
| Framework Alignment | ... | Official recommendation by framework team? |

Risk Categories

| Risk Level | Criteria | Action |
| --- | --- | --- |
| Low | Active, >3 maintainers, recent release, no CVEs | Use freely |
| Medium | 1-2 maintainers, release <6mo, minor CVEs patched | Use with monitoring plan |
| High | Single maintainer, >12mo stale, open CVEs | Find alternative or plan exit strategy |
| Critical | Abandoned, unpatched CVEs, deprecated | DO NOT USE — find replacement |
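The categories above can be approximated by a small classifier over the scorecard facts; a sketch, where the `PackageHealth` shape and exact thresholds are assumptions that loosely mirror the table, not a published standard:

```typescript
// Hedged sketch: map scorecard facts to a risk category. Real assessment
// weighs more signals (license, migration path) and human judgment.
interface PackageHealth {
  maintainers: number;
  monthsSinceRelease: number;
  openCves: number;
  deprecated: boolean;
}

type Risk = "Low" | "Medium" | "High" | "Critical";

function classify(p: PackageHealth): Risk {
  if (p.deprecated || (p.openCves > 0 && p.monthsSinceRelease > 12)) return "Critical";
  if (p.maintainers <= 1 || p.monthsSinceRelease > 12 || p.openCves > 0) return "High";
  if (p.maintainers <= 3 || p.monthsSinceRelease >= 6) return "Medium";
  return "Low";
}
```

Running such a classifier over every recommended package turns Step 10 into a repeatable checklist rather than a one-off review.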

Dependency Maintenance Strategy

### Recommended Practices

1. **Automated scanning:** {tool} (Dependabot/Renovate/Snyk) — weekly PR for updates
2. **Lock file strategy:** Commit lock files, pin major versions, allow patch auto-update
3. **Audit schedule:** Monthly `npm audit` / `dotnet list package --vulnerable`
4. **Vendor policy:** Max {N} dependencies per concern, prefer well-maintained alternatives
5. **Exit strategy:** For each High-risk dependency, document migration path to alternative

Step 11: Generate Report

Write the report to `{plan-dir}/research/architecture-design.md` with sections:

  1. Executive summary (recommended architecture in 8-10 lines)
  2. Architecture requirements table (from Step 2)
  3. Backend architecture — style comparison + recommended patterns (Step 3)
  4. Frontend architecture — style comparison + recommended patterns (Step 4)
  5. Library ecosystem — per-concern recommendations with alternatives (Step 5)
  6. Testing architecture — pyramid, tools, coverage targets (Step 6)
  7. CI/CD & deployment — pipeline design, deployment strategy (Step 7)
  8. Observability stack — 3 pillars + alerting (Step 8)
  9. Code quality — enforcement gates, tooling (Step 9)
  10. Dependency risk matrix — high-risk packages, mitigation (Step 10)
  11. Architecture diagram (Mermaid — showing all layers and data flow)
  12. Risk assessment for overall architecture
  13. Unresolved questions

Architecture Diagram Template

```mermaid
graph TB
    subgraph "Frontend"
        UI[SPA / Micro Frontend]
        STORE[State Management]
    end
    subgraph "API Gateway"
        GW[Gateway / BFF]
    end
    subgraph "Backend Services"
        CMD[Commands / Handlers]
        QRY[Queries / Read Models]
        SVC[Domain Services]
        ENT[Entities / Aggregates]
    end
    subgraph "Infrastructure"
        DB[(Database)]
        CACHE[(Cache)]
        MSG[Message Bus]
        SEARCH[(Search Index)]
    end
    subgraph "Observability"
        LOG[Logging]
        METRIC[Metrics]
        TRACE[Tracing]
    end
    subgraph "CI/CD"
        PIPE[Pipeline]
        REG[Container Registry]
        K8S[Orchestration]
    end
    UI --> GW --> CMD & QRY
    CMD --> SVC --> ENT --> DB
    QRY --> CACHE & SEARCH
    ENT -.-> MSG
    CMD & QRY -.-> LOG & METRIC & TRACE
    PIPE --> REG --> K8S
```

Step 12: User Validation Interview

MANDATORY IMPORTANT MUST ATTENTION present findings and ask 8-12 questions via `AskUserQuestion`:

Required Questions

  1. Backend architecture — "I recommend {style}. Agree?"
  2. Frontend architecture — "I recommend {style} with {state management}. Agree?"
  3. Design patterns — "Recommended backend patterns: {list}. Frontend patterns: {list}. Any to add/remove?"
  4. Key libraries — "For {concern}, I recommend {lib} over {alternatives}. Agree?"
  5. Testing strategy — "Test pyramid: {unit}%/{integration}%/{E2E}% using {frameworks}. Appropriate?"
  6. CI/CD — "Pipeline: {tool} with {deployment strategy}. Fits your infra?"
  7. Observability — "Monitoring stack: {logs}/{metrics}/{traces}. Sufficient?"
  8. Code quality — "Enforcement: {linter + formatter + pre-commit hooks}. Team ready?"
  9. Dependency risk — "Found {N} high-risk dependencies. Accept or find alternatives?"
  10. Complexity check — "This architecture has {N} concerns addressed. Appropriate for team size?"

Optional Deep-Dive Questions (pick 2-3)

  • "Should we use event sourcing or traditional state-based persistence?"
  • "Monolith-first or start with service boundaries?"
  • "Micro frontends or monolith SPA?"
  • "How important is framework independence for this project?"
  • "Self-hosted observability or managed SaaS?"
  • "Strict lint rules from day 1 or gradual adoption?"

After the user confirms, update the report with final decisions and mark it as `status: confirmed`.


Best Practices Audit (applied across all steps)

Validate architecture against these principles — flag violations in report:

| Principle | Check | Status |
| --- | --- | --- |
| Single Responsibility (S) | Each class/module has one reason to change | ✅/⚠️ |
| Open/Closed (O) | Extensible without modifying existing code | ✅/⚠️ |
| Liskov Substitution (L) | Subtypes substitutable for base types | ✅/⚠️ |
| Interface Segregation (I) | No forced dependency on unused interfaces | ✅/⚠️ |
| Dependency Inversion (D) | High-level modules depend on abstractions, not concretions | ✅/⚠️ |
| DRY | No duplicated business logic across layers | ✅/⚠️ |
| KISS | Simplest architecture that meets requirements | ✅/⚠️ |
| YAGNI | No speculative layers or patterns for future needs | ✅/⚠️ |
| Separation of Concerns | Clear boundaries between domain, application, infra | ✅/⚠️ |
| IoC / Dependency Injection | All dependencies injected, no `new` in business logic | ✅/⚠️ |
| Technical Agnosticism | Domain layer has zero framework/infra dependencies | ✅/⚠️ |
| Testability | Architecture supports unit + integration testing | ✅/⚠️ |
| 12-Factor App | Config in env, stateless processes, port binding | ✅/⚠️ |
| Fail-Fast | Validate early, fail with clear errors | ✅/⚠️ |
Output

```
{plan-dir}/research/architecture-design.md     # Full architecture analysis report
{plan-dir}/phase-02b-architecture.md           # Confirmed architecture decisions
```

MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION validate EVERY architecture recommendation with user via `AskUserQuestion` — never auto-decide. MANDATORY IMPORTANT MUST ATTENTION include confidence % and evidence citations for all claims. MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.


Next Steps

MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST use `AskUserQuestion` to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:

  • "/plan (Recommended)" — Create implementation plan from architecture design
  • "/refine" — If need to create PBIs first
  • "Skip, continue manually" — user decides

Closing Reminders

MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION validate decisions with user via `AskUserQuestion` — never auto-decide. MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.

<!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->