EasyPlatform tech-stack-research
[Architecture] Research, analyze, and compare tech stack options as a solution architect. Evaluate top 3 choices per stack layer with detailed pros/cons, team-fit scoring, and market analysis. Generate comparison report.
git clone https://github.com/duc01226/EasyPlatform
T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/tech-stack-research" ~/.claude/skills/duc01226-easyplatform-tech-stack-research && rm -rf "$T"
.claude/skills/tech-stack-research/SKILL.md

<!-- SYNC:critical-thinking-mindset -->
Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->

<!-- SYNC:ai-mistake-prevention -->
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->
MANDATORY IMPORTANT MUST ATTENTION use TaskCreate to break ALL work into small tasks BEFORE starting.
MANDATORY IMPORTANT MUST ATTENTION use AskUserQuestion at EVERY decision point — never assume user preferences.
MANDATORY IMPORTANT MUST ATTENTION research top 3 options per stack layer, compare with evidence, present report with recommendation + confidence %.
External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in plans/reports/ — prevents context loss and serves as the deliverable.
Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires file:line proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).
Quick Summary
Goal: Research, analyze, and compare tech stack options for each layer of the system. Act as a solution architect — derive technical requirements from business analysis, research current market, produce detailed comparison report, and present options to user for decision.
Workflow:
- Load Business Context — Read business evaluation, domain model, refined PBI artifacts
- Derive Technical Requirements — Map business needs to technical constraints
- Research Per Layer — WebSearch top 3 options for each stack responsibility
- Deep Compare — Pros/cons matrix, benchmarks, community health, team fit
- Score & Rank — Weighted scoring across 8 criteria
- Generate Report — Structured comparison report with recommendation
- User Validation — Present findings, ask 5-8 questions, confirm choices
Key Rules:
- MANDATORY IMPORTANT MUST ATTENTION research minimum 3 options per stack layer
- MANDATORY IMPORTANT MUST ATTENTION include confidence % with evidence for every recommendation
- MANDATORY IMPORTANT MUST ATTENTION run user validation interview at end (never skip)
- All claims must cite sources (URL, benchmark, case study)
- Never recommend based on familiarity alone — evidence required
Be skeptical. Apply critical thinking, sequential thinking. Every claim needs traced proof and a confidence percentage (should be more than 80% to act).
Step 1: Load Business Context
Read artifacts from prior workflow steps (search in plans/ and team-artifacts/):
- Business evaluation report (viability, scale, constraints)
- Domain model / ERD (complexity, entity count, relationships)
- Refined PBI (acceptance criteria, scope)
- Discovery interview notes (team skills, budget, timeline)
Extract and summarize:
| Signal | Value | Source |
|---|---|---|
| Expected users | ... | discovery interview |
| Domain complexity | Low/Med/High | domain model |
| Team skills | ... | discovery interview |
| Budget constraint | ... | business evaluation |
| Timeline | ... | business evaluation |
| Compliance needs | ... | business evaluation |
| Real-time needs | Yes/No | refined PBI |
| Integration complexity | Low/Med/High | domain model |
Step 2: Derive Technical Requirements
Map business signals to technical requirements:
| Business Signal | Technical Requirement | Priority |
|---|---|---|
| High user scale | Horizontal scaling, connection pooling | Must |
| Complex domain | Strong type system, ORM with migrations | Must |
| Real-time features | WebSocket/SSE support, event-driven arch | Must |
| Small team | Low learning curve, good DX, batteries-included | Should |
| Tight budget | Open-source, low hosting cost | Should |
| Compliance | Audit trail, encryption, auth framework | Must |
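The mapping above is a lookup, so it can be sketched as data plus a small helper. This is an illustrative sketch only — the signal keys and requirement strings below are placeholders paraphrasing the table, not a prescribed schema:

```python
# Hypothetical lookup sketch of the business-signal -> technical-requirement mapping.
# Keys and requirement strings are illustrative, taken loosely from the table above.
SIGNAL_REQUIREMENTS = {
    "high_user_scale": [("Horizontal scaling, connection pooling", "Must")],
    "complex_domain":  [("Strong type system, ORM with migrations", "Must")],
    "real_time":       [("WebSocket/SSE support, event-driven arch", "Must")],
    "small_team":      [("Low learning curve, good DX, batteries-included", "Should")],
    "tight_budget":    [("Open-source, low hosting cost", "Should")],
    "compliance":      [("Audit trail, encryption, auth framework", "Must")],
}

def derive_requirements(signals):
    """Return (requirement, priority) pairs for the observed business signals."""
    derived = []
    for signal in signals:
        derived.extend(SIGNAL_REQUIREMENTS.get(signal, []))
    return derived
```

Unknown signals simply map to nothing, which keeps the derivation explicit: every requirement in the output traces back to a named business signal.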
MANDATORY IMPORTANT MUST ATTENTION validate derived requirements with user via AskUserQuestion before proceeding to research.
Step 3: Research Per Stack Layer
For EACH layer, research the top 3 options using WebSearch (minimum 5 queries per layer):
Stack Layers to Evaluate
| Layer | Example Options | Research Focus |
|---|---|---|
| Backend Framework | .NET, Node.js, Python, Go, Java | Performance, type safety, ecosystem |
| Frontend Framework | Angular, React, Vue, Svelte | DX, ecosystem, hiring, enterprise fit |
| Database | PostgreSQL, MongoDB, SQL Server | Scale, query complexity, cost |
| Messaging/Events | RabbitMQ, Kafka, Redis Streams | Throughput, reliability, complexity |
| Infrastructure | Docker+K8s, Serverless, PaaS | Cost, ops overhead, scaling |
| Auth | Keycloak, Auth0, custom | Cost, compliance, flexibility |
WebSearch Queries (minimum 5 per layer)
"{option_A} vs {option_B} {current_year} comparison" "{option} enterprise production case studies" "{option} community size github stars" "{option} performance benchmarks {use_case}" "{option} security track record vulnerabilities"
Step 4: Deep Comparison Matrix
For EACH stack layer, produce a comparison table:
| Criteria | Option A | Option B | Option C | Weight |
|---|---|---|---|---|
| Team Fit | score + rationale | ... | ... | High |
| Scalability | score + rationale | ... | ... | High |
| Time-to-Market | score + rationale | ... | ... | High |
| Ecosystem/Libs | score + rationale | ... | ... | Medium |
| Hiring Market | score + rationale | ... | ... | Medium |
| Cost (hosting) | score + rationale | ... | ... | Medium |
| Learning Curve | score + rationale | ... | ... | Medium |
| Community Health | score + rationale | ... | ... | Low |
Scoring: 1-5 scale. Weight: High=3x, Medium=2x, Low=1x.
Per-Option Detail Block
For each option, document:
### {Layer}: {Option Name}

**Pros:**
- {Pro 1} — {evidence/source}
- {Pro 2} — {evidence/source}
- {Pro 3} — {evidence/source}

**Cons:**
- {Con 1} — {evidence/source}
- {Con 2} — {evidence/source}

**Best suited when:** {conditions}
**Not suitable when:** {conditions}
**Production examples:** {2-3 real companies using this}
Step 5: Weighted Score & Ranking
Calculate weighted total per option per layer. Present ranking:
### {Layer} Ranking

1. **{Option A}** — Score: {X}/100 — Confidence: {Y}%
2. **{Option B}** — Score: {X}/100 — Confidence: {Y}%
3. **{Option C}** — Score: {X}/100 — Confidence: {Y}%

**Recommendation:** {Option A}
**Why:** {2-3 sentence rationale linking to team skills, scale, and constraints}
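The weighted totals behind a per-layer ranking can be computed mechanically from the Step 4 matrix. A minimal sketch, assuming the 1-5 scores and High=3x/Medium=2x/Low=1x weights defined there; the criterion names and scores in the usage are hypothetical, and normalizing the weighted sum to a 0-100 scale is one reasonable way to produce the "/100" figures shown above:

```python
WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}

def weighted_score(criterion_scores):
    """Normalize weighted 1-5 scores to a 0-100 scale.

    criterion_scores: {criterion: (score_1_to_5, weight_label)}
    """
    total = sum(score * WEIGHTS[weight] for score, weight in criterion_scores.values())
    # Best possible total: every criterion scored 5 at its weight.
    best = sum(5 * WEIGHTS[weight] for _, weight in criterion_scores.values())
    return round(100 * total / best)
```

Ranking a layer is then just sorting its options by this score, descending.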
Step 6: Generate Report
Write report to {plan-dir}/research/tech-stack-comparison.md with:
- Executive summary (recommended full stack in 5 lines)
- Technical requirements table (from Step 2)
- Per-layer comparison matrices (from Step 4)
- Per-layer rankings with recommendations (from Step 5)
- Combined recommended stack diagram
- Risk assessment for recommended stack
- Alternative stack (second-best combo) for comparison
- Unresolved questions
Report must be <=200 lines. Use tables over prose.
Step 7: User Validation Interview
MANDATORY IMPORTANT MUST ATTENTION present findings and ask 5-8 questions via AskUserQuestion:
Required Questions
- Per-layer recommendation confirmation — "For {layer}, I recommend {option}. Agree?"
- Options: Agree (Recommended) | Prefer {option B} | Need more research
- Risk tolerance — "The recommended stack has {risk}. Acceptable?"
- Team readiness — "Team needs to learn {X}. Training plan needed?"
- Budget alignment — "Estimated infra cost: ${X}/month. Within budget?"
- Timeline fit — "This stack enables MVP in {X} months. Acceptable?"
Optional Deep-Dive Questions (pick 2-3 based on context)
- "Should we consider {emerging tech} for {layer}?"
- "Any compliance requirements I haven't captured?"
- "Preference for managed services vs self-hosted?"
- "Monorepo or polyrepo for this team size?"
After user confirms, update report with final decisions and mark as status: confirmed.
Output
{plan-dir}/research/tech-stack-comparison.md   # Full comparison report
{plan-dir}/phase-02-tech-stack.md              # Final confirmed tech stack decisions
MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using TaskCreate BEFORE starting.
MANDATORY IMPORTANT MUST ATTENTION validate EVERY recommendation with user via AskUserQuestion — never auto-decide.
MANDATORY IMPORTANT MUST ATTENTION include confidence % and evidence citations for all claims.
MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
Next Steps
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST use AskUserQuestion to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
- "/architecture-design (Recommended)" — Design solution architecture with chosen tech stack
- "/plan" — If architecture already decided
- "Skip, continue manually" — user decides
Closing Reminders
MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using TaskCreate BEFORE starting.
MANDATORY IMPORTANT MUST ATTENTION validate decisions with user via AskUserQuestion — never auto-decide.
MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
<!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
<!-- /SYNC:critical-thinking-mindset:reminder -->

<!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
<!-- /SYNC:ai-mistake-prevention:reminder -->