EasyPlatform tech-stack-research

[Architecture] Research, analyze, and compare tech stack options as a solution architect. Evaluate top 3 choices per stack layer with detailed pros/cons, team-fit scoring, and market analysis. Generate comparison report.

Install

Source · clone the upstream repo:

git clone https://github.com/duc01226/EasyPlatform

Claude Code · install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/tech-stack-research" ~/.claude/skills/duc01226-easyplatform-tech-stack-research && rm -rf "$T"

Manifest: .claude/skills/tech-stack-research/SKILL.md

Source content
<!-- SYNC:critical-thinking-mindset -->

Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.

<!-- /SYNC:critical-thinking-mindset --> <!-- SYNC:ai-mistake-prevention -->

AI Mistake Prevention — Failure modes to avoid on every task:

  • Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
  • Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
  • Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
  • Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
  • When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
  • Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
  • Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
  • Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
  • Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
  • Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->

MANDATORY IMPORTANT MUST ATTENTION use `TaskCreate` to break ALL work into small tasks BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION use `AskUserQuestion` at EVERY decision point — never assume user preferences. MANDATORY IMPORTANT MUST ATTENTION research top 3 options per stack layer, compare with evidence, present report with recommendation + confidence %.

External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — prevents context loss and serves as deliverable.

Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).

Quick Summary

Goal: Research, analyze, and compare tech stack options for each layer of the system. Act as a solution architect — derive technical requirements from business analysis, research current market, produce detailed comparison report, and present options to user for decision.

Workflow:

  1. Load Business Context — Read business evaluation, domain model, refined PBI artifacts
  2. Derive Technical Requirements — Map business needs to technical constraints
  3. Research Per Layer — WebSearch top 3 options for each stack responsibility
  4. Deep Compare — Pros/cons matrix, benchmarks, community health, team fit
  5. Score & Rank — Weighted scoring across 8 criteria
  6. Generate Report — Structured comparison report with recommendation
  7. User Validation — Present findings, ask 5-8 questions, confirm choices

Key Rules:

  • MANDATORY IMPORTANT MUST ATTENTION research minimum 3 options per stack layer
  • MANDATORY IMPORTANT MUST ATTENTION include confidence % with evidence for every recommendation
  • MANDATORY IMPORTANT MUST ATTENTION run user validation interview at end (never skip)
  • All claims must cite sources (URL, benchmark, case study)
  • Never recommend based on familiarity alone — evidence required

Be skeptical. Apply critical thinking, sequential thinking. Every claim needs traced proof and a confidence percentage (act only above 80%).

Step 1: Load Business Context

Read artifacts from prior workflow steps (search in `plans/` and `team-artifacts/`):

  • Business evaluation report (viability, scale, constraints)
  • Domain model / ERD (complexity, entity count, relationships)
  • Refined PBI (acceptance criteria, scope)
  • Discovery interview notes (team skills, budget, timeline)

Extract and summarize:

| Signal | Value | Source |
| --- | --- | --- |
| Expected users | ... | discovery interview |
| Domain complexity | Low/Med/High | domain model |
| Team skills | ... | discovery interview |
| Budget constraint | ... | business evaluation |
| Timeline | ... | business evaluation |
| Compliance needs | ... | business evaluation |
| Real-time needs | Yes/No | refined PBI |
| Integration complexity | Low/Med/High | domain model |

Step 2: Derive Technical Requirements

Map business signals to technical requirements:

| Business Signal | Technical Requirement | Priority |
| --- | --- | --- |
| High user scale | Horizontal scaling, connection pooling | Must |
| Complex domain | Strong type system, ORM with migrations | Must |
| Real-time features | WebSocket/SSE support, event-driven arch | Must |
| Small team | Low learning curve, good DX, batteries-included | Should |
| Tight budget | Open-source, low hosting cost | Should |
| Compliance | Audit trail, encryption, auth framework | Must |
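The signal-to-requirement mapping above can be represented as a simple lookup. A minimal sketch — the snake_case signal keys and the function name are illustrative assumptions, not defined by this skill:

```python
# Sketch: business signal -> (technical requirement, MoSCoW priority).
# Entries mirror the table above; the key names are hypothetical.
SIGNAL_TO_REQUIREMENT = {
    "high_user_scale": ("Horizontal scaling, connection pooling", "Must"),
    "complex_domain": ("Strong type system, ORM with migrations", "Must"),
    "real_time_features": ("WebSocket/SSE support, event-driven arch", "Must"),
    "small_team": ("Low learning curve, good DX, batteries-included", "Should"),
    "tight_budget": ("Open-source, low hosting cost", "Should"),
    "compliance": ("Audit trail, encryption, auth framework", "Must"),
}

def derive_requirements(signals: list[str]) -> list[tuple[str, str]]:
    """Return (requirement, priority) pairs for the detected signals."""
    return [SIGNAL_TO_REQUIREMENT[s] for s in signals if s in SIGNAL_TO_REQUIREMENT]
```

Feeding the signals extracted in Step 1 through this table yields the requirement rows to validate with the user.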

MANDATORY IMPORTANT MUST ATTENTION validate derived requirements with user via `AskUserQuestion` before proceeding to research.

Step 3: Research Per Stack Layer

For EACH layer, research the top 3 options using WebSearch (minimum 5 queries per layer):

Stack Layers to Evaluate

| Layer | Example Options | Research Focus |
| --- | --- | --- |
| Backend Framework | .NET, Node.js, Python, Go, Java | Performance, type safety, ecosystem |
| Frontend Framework | Angular, React, Vue, Svelte | DX, ecosystem, hiring, enterprise fit |
| Database | PostgreSQL, MongoDB, SQL Server | Scale, query complexity, cost |
| Messaging/Events | RabbitMQ, Kafka, Redis Streams | Throughput, reliability, complexity |
| Infrastructure | Docker+K8s, Serverless, PaaS | Cost, ops overhead, scaling |
| Auth | Keycloak, Auth0, custom | Cost, compliance, flexibility |

WebSearch Queries (minimum 5 per layer)

"{option_A} vs {option_B} {current_year} comparison"
"{option} enterprise production case studies"
"{option} community size github stars"
"{option} performance benchmarks {use_case}"
"{option} security track record vulnerabilities"
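The query templates above expand mechanically per layer. A sketch of that expansion — the function name, the pairwise handling of the "vs" template, and deriving the year from the clock are all assumptions:

```python
from datetime import date
from itertools import combinations

# The four single-option template families from the list above.
QUERY_TEMPLATES = [
    "{option} enterprise production case studies",
    "{option} community size github stars",
    "{option} performance benchmarks {use_case}",
    "{option} security track record vulnerabilities",
]

def build_queries(options: list[str], use_case: str) -> list[str]:
    """Expand the five template families into concrete WebSearch queries."""
    year = date.today().year
    # "A vs B" comparisons for every pair of candidate options.
    queries = [f"{a} vs {b} {year} comparison" for a, b in combinations(options, 2)]
    for opt in options:
        queries += [t.format(option=opt, use_case=use_case) for t in QUERY_TEMPLATES]
    return queries
```

For 3 options this yields 15 queries (3 pairwise comparisons plus 4 per option), comfortably above the 5-per-layer minimum.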

Step 4: Deep Comparison Matrix

For EACH stack layer, produce a comparison table:

| Criteria | Option A | Option B | Option C | Weight |
| --- | --- | --- | --- | --- |
| Team Fit | score + rationale | ... | ... | High |
| Scalability | score + rationale | ... | ... | High |
| Time-to-Market | score + rationale | ... | ... | High |
| Ecosystem/Libs | score + rationale | ... | ... | Medium |
| Hiring Market | score + rationale | ... | ... | Medium |
| Cost (hosting) | score + rationale | ... | ... | Medium |
| Learning Curve | score + rationale | ... | ... | Medium |
| Community Health | score + rationale | ... | ... | Low |

Scoring: 1-5 scale. Weight: High=3x, Medium=2x, Low=1x.
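A minimal sketch of the weighted total, using the 8 criteria and weights from the matrix above. Normalizing to a /100 scale (as the Step 5 ranking template implies) is an assumption; the skill itself only specifies the raw weights:

```python
# 1-5 raw scores per criterion; High=3x, Medium=2x, Low=1x weights.
WEIGHTS = {
    "Team Fit": 3, "Scalability": 3, "Time-to-Market": 3,
    "Ecosystem/Libs": 2, "Hiring Market": 2, "Cost (hosting)": 2,
    "Learning Curve": 2, "Community Health": 1,
}

def weighted_score(raw: dict[str, int]) -> float:
    """Map raw 1-5 scores per criterion to a weighted total on a 0-100 scale."""
    total = sum(WEIGHTS[c] * raw[c] for c in WEIGHTS)
    max_total = sum(w * 5 for w in WEIGHTS.values())  # 18 * 5 = 90
    return round(100 * total / max_total, 1)

print(weighted_score({c: 4 for c in WEIGHTS}))  # straight 4s -> 80.0
```

An option scoring 4 on every criterion lands at 80/100; differences in the High-weight criteria move the total three times as far as the Low-weight one.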

Per-Option Detail Block

For each option, document:

### {Layer}: {Option Name}

**Pros:**

- {Pro 1} — {evidence/source}
- {Pro 2} — {evidence/source}
- {Pro 3} — {evidence/source}

**Cons:**

- {Con 1} — {evidence/source}
- {Con 2} — {evidence/source}

**Best suited when:** {conditions}
**Not suitable when:** {conditions}
**Production examples:** {2-3 real companies using this}

Step 5: Weighted Score & Ranking

Calculate weighted total per option per layer. Present ranking:

### {Layer} Ranking

1. **{Option A}** — Score: {X}/100 — Confidence: {Y}%
2. **{Option B}** — Score: {X}/100 — Confidence: {Y}%
3. **{Option C}** — Score: {X}/100 — Confidence: {Y}%

**Recommendation:** {Option A}
**Why:** {2-3 sentence rationale linking to team skills, scale, and constraints}

Step 6: Generate Report

Write report to `{plan-dir}/research/tech-stack-comparison.md` with:

  1. Executive summary (recommended full stack in 5 lines)
  2. Technical requirements table (from Step 2)
  3. Per-layer comparison matrices (from Step 4)
  4. Per-layer rankings with recommendations (from Step 5)
  5. Combined recommended stack diagram
  6. Risk assessment for recommended stack
  7. Alternative stack (second-best combo) for comparison
  8. Unresolved questions

Report must be <=200 lines. Use tables over prose.
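The eight sections above can be scaffolded before research begins, so findings land in a stable structure. A sketch under stated assumptions — the function name and the exact section titles (condensed from the numbered list) are illustrative:

```python
from pathlib import Path

# Section titles condensed from the 8-item report outline above.
SECTIONS = [
    "Executive Summary", "Technical Requirements", "Per-Layer Comparison",
    "Per-Layer Rankings", "Recommended Stack", "Risk Assessment",
    "Alternative Stack", "Unresolved Questions",
]

def scaffold_report(plan_dir: str) -> Path:
    """Write an empty tech-stack-comparison.md skeleton with the 8 sections."""
    path = Path(plan_dir) / "research" / "tech-stack-comparison.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    body = "# Tech Stack Comparison\n\n" + "".join(f"## {s}\n\n" for s in SECTIONS)
    path.write_text(body)
    return path
```

Filling sections as each step completes also serves the External Memory rule: partial findings survive context loss.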

Step 7: User Validation Interview

MANDATORY IMPORTANT MUST ATTENTION present findings and ask 5-8 questions via `AskUserQuestion`:

Required Questions

  1. Per-layer recommendation confirmation — "For {layer}, I recommend {option}. Agree?"
    • Options: Agree (Recommended) | Prefer {option B} | Need more research
  2. Risk tolerance — "The recommended stack has {risk}. Acceptable?"
  3. Team readiness — "Team needs to learn {X}. Training plan needed?"
  4. Budget alignment — "Estimated infra cost: ${X}/month. Within budget?"
  5. Timeline fit — "This stack enables MVP in {X} months. Acceptable?"

Optional Deep-Dive Questions (pick 2-3 based on context)

  • "Should we consider {emerging tech} for {layer}?"
  • "Any compliance requirements I haven't captured?"
  • "Preference for managed services vs self-hosted?"
  • "Monorepo or polyrepo for this team size?"

After user confirms, update report with final decisions and mark as `status: confirmed`.

Output

{plan-dir}/research/tech-stack-comparison.md    # Full comparison report
{plan-dir}/phase-02-tech-stack.md               # Final confirmed tech stack decisions

MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION validate EVERY recommendation with user via `AskUserQuestion` — never auto-decide. MANDATORY IMPORTANT MUST ATTENTION include confidence % and evidence citations for all claims. MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.


Next Steps

MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST use `AskUserQuestion` to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:

  • "/architecture-design (Recommended)" — Design solution architecture with chosen tech stack
  • "/plan" — If architecture already decided
  • "Skip, continue manually" — user decides

Closing Reminders

MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION validate decisions with user via `AskUserQuestion` — never auto-decide. MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.

<!-- SYNC:critical-thinking-mindset:reminder -->
  • MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
  • MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->