# EasyPlatform pbi-challenge
[Code Quality] AI-assisted review of PBI drafts by the Dev BA PIC. Generates challenge prompts, flags gaps, and provides actionable feedback for BA drafter revision.
Install:

    git clone https://github.com/duc01226/EasyPlatform

Or copy just this skill into your local skills directory:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/duc01226/EasyPlatform "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/pbi-challenge" ~/.claude/skills/duc01226-easyplatform-pbi-challenge && rm -rf "$T"
`.claude/skills/pbi-challenge/SKILL.md`

[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting.
<!-- SYNC:critical-thinking-mindset -->Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim requires proof or traced `file:line` evidence with a confidence percentage (>80% to act).
Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof and >80% confidence to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->

<!-- SYNC:ai-mistake-prevention -->AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->
## Quick Summary
Goal: Help Dev BA PIC (Person In Charge — the development Business Analyst responsible for technical review sign-off per squad) review BA drafters' PBI drafts by generating specific, actionable challenge prompts. AI provides analysis; human makes the decision.
Key distinction: Collaborative review tool (drafter → reviewer flow), NOT self-review (use `/refine-review` for AI self-review).
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (act only above 80%).
## Why This Skill Exists
PBI drafts routinely pass informal review without being challenged on architectural feasibility, vague AC, missing auth scenarios, or cross-service impact. The `/refine` skill generates PBIs but does not adversarially challenge them — it is a creation tool, not a review tool. The `/refine-review` skill provides AI self-review for the drafter, but the drafter has inherent blind spots about their own assumptions. A separate reviewer (Dev BA PIC) applying AI-assisted challenge prompts breaks the drafter's confirmation bias before grooming. This skill exists to catch gaps the drafter cannot catch themselves.
### Why not just use `/refine-review`?

`/refine-review` is run by the drafter on their own work. Even with adversarial prompts, the drafter rationalizes their own choices. pbi-challenge is invoked by a different person with a different mandate — external skepticism requires a different author, not a different tool on the same author.
## Alternatives Considered
| Approach | Pros | Cons | Decision |
|---|---|---|---|
| Extend `/refine-review` with a reviewer-role flag | No new skill, single codebase | Drafter runs it themselves in practice; role separation breaks down without enforcement | Rejected — role separation requires a distinct invocation point owned by a different person |
| Fully autonomous AI verdict (no human decision) | Faster, no Dev BA PIC scheduling needed | Automation bias: AI wrong on domain specifics propagates unchecked; no human accountability for false APPROVE | Rejected — cost of false APPROVE on infeasible PBIs exceeds review time saved |
| Static DoR checklist given to Dev BA PIC (no AI) | Simple, no AI dependency | No domain entity context loading, no AC vagueness flagging; manual effort is high and inconsistent across reviewers | Rejected — AI domain lookup provides non-trivial value for cross-service entity detection |
| Async comment-thread model (AI generates questions posted as ticket comments) | Eliminates scheduling bottleneck; drafter can research before responding | Slower feedback loop; requires external ticket integration | Valid alternative for async teams; prefer if Dev BA PIC availability is chronically a bottleneck |
## Risk Assessment
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Automation bias — Dev BA PIC rubber-stamps AI verdict without independent assessment | High | High | Workflow Step 7 shows challenge prompts BEFORE the verdict — Dev BA PIC forms their own view first |
| Module misdetection — AI loads wrong domain context, produces entity conflict analysis for wrong service | Medium | High | Workflow Step 2 confirms detected module with Dev BA PIC via AskUserQuestion before proceeding |
| Challenge prompts ignored — Drafter revises PBI superficially to satisfy reviewer without resolving root gaps | Medium | Medium | Decision Record includes drafter-response field; Dev BA PIC re-runs skill on revision, not just reads revised PBI |
| Suggested answers create adoption pressure — Drafter adopts suggested answer rather than reasoning independently | Medium | Medium | Suggested answers framed as "consider whether X" options, not corrections; language review in challenge prompt templates |
| 3-way BA vote deadlock — UX BA, Designer BA, and Dev BA PIC all disagree | Low | Medium | Escalation path per the BA Team Decision Model: Engineering Manager for tech uncertainty, PO for business value |
## Frontend/UI Context (if applicable)
<!-- SYNC:ui-system-context -->When this task involves frontend or UI changes, read the UI System Context references below before implementing.
<!-- /SYNC:ui-system-context -->

UI System Context — For ANY task touching `.ts`, `.html`, `.scss`, or `.css` files, MUST ATTENTION READ before implementing:
- Component patterns: `docs/project-reference/frontend-patterns-reference.md` — component base classes, stores, forms (content auto-injected by hook — check for the [Injected: ...] header before reading)
- Styling/BEM guide: `docs/project-reference/scss-styling-guide.md` — BEM methodology, SCSS variables, mixins, responsive
- Design system tokens: `docs/project-reference/design-system/README.md` — design tokens, component inventory, icons

Reference `docs/project-config.json` for project-specific paths.
<!-- SYNC:ba-team-decision-model -->BA Team Decision Model — 2/3 majority vote: Dev BA PIC + UX BA + Designer BA per squad. 2 of 3 agree = decision final. 3-way split = escalate to full squad + Tech Leads + Engineering Manager.
Technical Veto: Dev BA PIC can unilaterally veto on: architecture feasibility, dependency correctness, cross-service impact, performance, security. CANNOT veto: UI/UX design, visual design, business value, user research.
Rules: Disagree-and-commit after the vote. A grooming override requires a >75% non-BA squad vote. Record decisions in the PBI Validation Summary (member, role, vote, notes).
Escalation: Tech uncertainty → Engineering Manager. Business value → PO. Design feasibility → UX BA + Designer BA consensus.
<!-- /SYNC:ba-team-decision-model -->

<!-- SYNC:refinement-dor-checklist -->Refinement DoR Checklist — ALL 7 criteria MUST ATTENTION pass before grooming:
- User story template — "As a {role}, I want {goal}, so that {benefit}" format
- AC testable & unambiguous — GIVEN/WHEN/THEN. No "should/might/TBD/various/appropriate". Min 3 scenarios (happy, edge, error) + 1 auth scenario
- Wireframes attached — UI features: `## UI Layout` section with wireframe + components + states + tokens. Backend-only: explicit "N/A"
- UI design ready — Visual design + component decomposition tree. Backend-only: "N/A"
- AI pre-review passed — `/refine-review` or `/pbi-challenge` returned PASS or WARN (not FAIL)
- Story points estimated — Fibonacci 1-21 + complexity (Low/Medium/High). >13 SP → recommend split
- Dependencies table complete — Dependency, Type (must-before/can-parallel/blocked-by/independent), Status

Failure fixes: Vague AC → specify exact CRUD + roles. Missing auth → add roles × CRUD table. No wireframes → UX BA creates. TBD in AC → replace with decision.
<!-- /SYNC:refinement-dor-checklist -->
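The 2/3 majority and technical-veto rules of the BA Team Decision Model above can be sketched in a few lines. This is an illustrative sketch only — the function name, vote representation, and return strings are assumptions, not part of the skill:

```python
from collections import Counter

def ba_team_decision(votes: dict[str, str], technical_veto: bool = False) -> str:
    """Resolve a BA team vote per the decision model above.

    votes maps each of the three BA roles to the option they chose.
    A Dev BA PIC technical veto (architecture feasibility, dependency
    correctness, cross-service impact, performance, security) is
    unilateral and skips the vote entirely.
    """
    if technical_veto:
        return "vetoed-by-dev-ba-pic"
    option, count = Counter(votes.values()).most_common(1)[0]
    if count >= 2:  # 2 of 3 agree = decision final
        return option
    # 3-way split = escalate to full squad + Tech Leads + Engineering Manager
    return "escalate"

print(ba_team_decision({
    "Dev BA PIC": "approve",
    "UX BA": "approve",
    "Designer BA": "request-revision",
}))  # → approve
```

Note that the veto check comes first by design: the model gives the Dev BA PIC's technical veto priority over any majority.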
## Workflow
1. Locate PBI draft — Find the BA drafters' draft PBI in `team-artifacts/pbis/` or a path provided by the user
2. Load domain context — Auto-detect the module from PBI content. MANDATORY: Use `AskUserQuestion` to confirm the detected module with the Dev BA PIC before loading domain docs. Wrong module = wrong entity context = false APPROVE risk. Then load:
   - `docs/project-reference/domain-entities-reference.md` (entity definitions)
   - Relevant feature docs from `docs/business-features/{App}/`
   - Existing business rules (BR-{MOD}-XXX) from feature docs
3. Technical Feasibility Analysis:
   - Can the described features be built with the project's architecture?
   - Any domain entity conflicts? (cross-reference entity definitions)
   - Any cross-service implications? (message bus events, shared data between services)
   - Estimated complexity alignment (does scope match story points?)
4. AC Quality Analysis:
   - Vagueness detector: flag "should", "might", "TBD", "etc.", "various", "appropriate"
   - Coverage check: happy path + edge case + error case + authorization scenario
   - Missing scenarios: suggest specific additions based on feature type
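The vagueness detector above amounts to a whole-word scan for hedge terms. A minimal sketch (the function name and AC-list format are assumptions; the term list comes from the checklist above):

```python
import re

# Hedge words the AC Quality step flags as vague.
VAGUE_TERMS = ["should", "might", "TBD", "etc.", "various", "appropriate"]

def flag_vague_acs(acceptance_criteria: list[str]) -> list[tuple[int, str]]:
    """Return (AC number, offending term) pairs for every vague term found."""
    findings = []
    for i, ac in enumerate(acceptance_criteria, start=1):
        for term in VAGUE_TERMS:
            # Case-insensitive whole-word match so "should" doesn't hit "shoulder".
            if re.search(rf"\b{re.escape(term)}(?!\w)", ac, flags=re.IGNORECASE):
                findings.append((i, term))
    return findings

acs = [
    "GIVEN an admin WHEN they archive a task THEN it disappears from the active list",
    "The user should see appropriate results",
]
print(flag_vague_acs(acs))  # → [(2, 'should'), (2, 'appropriate')]
```

A keyword scan only finds surface vagueness; the coverage check (happy, edge, error, auth scenarios) still requires reading each AC against the feature type.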
5. Cross-Cutting Concerns Check:
   - Authorization section present and complete? (roles × CRUD matrix)
   - Seed data requirements addressed? (or explicit "N/A")
   - Data migration implications? (schema changes)
   - Performance considerations? (list/grid/export features)
   - UI Layout section present? If the PBI involves UI: must have `## UI Layout` per the UI wireframe protocol with wireframe + components (with tiers) + states + design tokens. If backend-only: explicit "N/A". Flag missing UI visualization as a gap.
6. Generate Challenge Prompts — Output specific, actionable questions:
   - NOT vague: "needs work" or "improve AC"
   - SPECIFIC: "AC #2 says 'user can filter results' — which filters exactly? Suggest: status, date range, priority"
7. Present Challenge Prompts first, then the AI Verdict — Output challenge prompts BEFORE the verdict to prevent automation bias. The Dev BA PIC reads them and forms a preliminary view, THEN sees the verdict: APPROVE / REQUEST_REVISION / ESCALATE_TO_LEAD
   - Technical decisions (feasibility, dependencies, cross-service impact, security): Dev BA PIC has unilateral veto power — no 2/3 vote needed
   - Non-technical decisions (UI/UX design, visual design, business value): 2/3 majority vote required (Dev BA PIC + UX BA + Designer BA per `ba-team-decision-model`)
8. Record the decision via AskUserQuestion — The Dev BA PIC records their FINAL decision (APPROVE / REQUEST_REVISION / ESCALATE_TO_LEAD) in the Decision Record. This is the human decision step — NOT the workflow routing step (handled separately in Next Steps)
## Output
## PBI Challenge Review

**PBI:** {PBI filename}
**Reviewer:** Dev BA PIC
**Date:** {date}
**Module:** {detected module code}

### Technical Feasibility

**Status:** FEASIBLE | CONCERNS | INFEASIBLE

{Analysis with evidence — cite domain entities, service boundaries, architecture constraints}

### AC Quality

**Status:** GOOD | NEEDS_REVISION | POOR

| AC # | Issue | Suggested Fix |
| ---- | ---------------- | ------------------------- |
| {#} | {specific issue} | {specific fix suggestion} |

### Cross-Cutting Concerns

| Concern | Status | Issue |
| -------------- | --------- | -------- |
| Authorization | ✅/❌ | {detail} |
| Seed Data | ✅/❌/N/A | {detail} |
| Data Migration | ✅/❌/N/A | {detail} |
| Performance | ✅/❌/N/A | {detail} |

### Challenge Prompts for BA Drafters

1. {Specific actionable question with suggested answer}
2. {Specific actionable question with suggested answer}
3. {Specific actionable question with suggested answer}

### AI Verdict

**{APPROVE | REQUEST_REVISION | ESCALATE_TO_LEAD}**
**Reason:** {evidence-based justification}
**Confidence:** {X%} — {what was verified vs. what needs more investigation}

### Decision Record

**Dev BA PIC Decision:** {filled after human review via AskUserQuestion}
**Vote:** {approve / request-revision / escalate}
**Conditions:** {if any}
**Drafter Response (on revision):** {drafter's response to each challenge prompt — filled when Dev BA PIC re-runs on revised PBI}
**Resolution:** {how each challenge prompt was addressed, deferred, or accepted as known risk}

**Stored at:** `plans/reports/pbi-challenge-{YYMMDD}-{pbi-id}.md` (save output there for audit trail)
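The storage convention above can be applied with a short shell snippet. The PBI id `1234` is hypothetical, and the source filename is an assumption — only the `plans/reports/pbi-challenge-{YYMMDD}-{pbi-id}.md` path pattern comes from the template:

```shell
# Save the review output for the audit trail, following the path template above.
mkdir -p plans/reports
report="plans/reports/pbi-challenge-$(date +%y%m%d)-1234.md"
printf '## PBI Challenge Review\n' > "$report"   # replace with the actual review body
echo "Stored at: $report"
```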
## Key Rules
- AI provides ANALYSIS, human makes DECISION — Never auto-approve or auto-reject
- Challenge prompts must be specific — Include suggested answers, not just questions
- Domain context required — Always load entity reference + feature docs before analysis
- Technical veto scope — Dev BA PIC CAN veto: architecture feasibility, dependency correctness, cross-service impact, performance, security. CANNOT veto: UI/UX design, visual design, business value (see `ba-team-decision-model-protocol.md` §2)
- Evidence-based — Every concern raised must cite a source (protocol section, entity definition, feature doc)
- Constructive tone — Focus on improving the PBI, not criticizing the drafters
## Next Steps
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST use `AskUserQuestion` to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
- "/dor-gate (Recommended)" — If APPROVE: validate DoR before grooming
- "/refine" — If REQUEST_REVISION: BA drafters revise, then re-run `/pbi-challenge`
- "Escalate to Engineering Manager" — If ESCALATE_TO_LEAD: document the concern for technical consultation
- "Skip, continue manually" — user decides
## Closing Reminders
MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using `TaskCreate` BEFORE starting.
MANDATORY IMPORTANT MUST ATTENTION validate decisions with the user via `AskUserQuestion` — never auto-decide.
MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:
<!-- SYNC:ui-system-context:reminder -->
- MANDATORY IMPORTANT MUST ATTENTION read frontend-patterns-reference, scss-styling-guide, design-system/README before any UI change. <!-- /SYNC:ui-system-context:reminder --> <!-- SYNC:critical-thinking-mindset:reminder -->
- MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact. <!-- /SYNC:critical-thinking-mindset:reminder --> <!-- SYNC:ai-mistake-prevention:reminder -->
- MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction. <!-- /SYNC:ai-mistake-prevention:reminder -->