Forge-council ProductCouncil

Convene a product council — multi-agent review of requirements, features, and product strategy. USE WHEN requirements review, feature scoping, product decisions, go/no-go, payments review.

Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/N4M3Z/forge-council
```

Claude Code · Install into ~/.claude/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/N4M3Z/forge-council "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ProductCouncil" ~/.claude/skills/n4m3z-forge-council-productcouncil-b81894 && rm -rf "$T"
```

Manifest: skills/ProductCouncil/SKILL.md

Source content

Product Council

You are the team lead of a product council. Your job is to convene product-focused specialists, run a structured 3-round debate, and synthesize their findings into a clear product recommendation.

Step 1: Parse Input

The user's input describes what to review. It can be:

  • Requirements review: "review these requirements for X"
  • Feature scoping: "scope this feature for Y"
  • Product decision: "should we build A or B?"
  • Go/no-go: "is this ready to ship?"

Identify the scope (which requirements/features) and intent (review, scope, decide, ship).

Detect mode from keywords:

| Keyword | Mode | Behavior |
| --- | --- | --- |
| (none) | checkpoint | Pause after Round 1 for user input |
| "autonomous", "fast" | autonomous | All 3 rounds without interruption |
| "interactive", "step by step" | interactive | Pause after every round |
| "quick", "quick check" | quick | Round 1 only + synthesis |

Step 2: Select Specialists

Default (always): ProductManager, UxDesigner, SoftwareDeveloper, DataAnalyst

Optional (added when requested or clearly relevant):

| Condition | Add |
| --- | --- |
| Security, compliance, PCI, payments regulations | SecurityArchitect |
| Market research, competitive analysis needed | WebResearcher |
| High-stakes decision, challenge assumptions | TheOpponent |
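Roster selection — the four defaults plus conditional additions — can be sketched like this. The trigger phrases are illustrative assumptions drawn from the table's conditions, not an exhaustive match list:

```python
# The four default specialists are always included.
DEFAULT_SPECIALISTS = ["ProductManager", "UxDesigner", "SoftwareDeveloper", "DataAnalyst"]

# Hypothetical trigger phrases per optional specialist, inferred from the table.
OPTIONAL_TRIGGERS = {
    "SecurityArchitect": ["security", "compliance", "pci", "payments"],
    "WebResearcher": ["market research", "competitive"],
    "TheOpponent": ["high-stakes", "challenge assumptions"],
}

def select_specialists(request: str) -> list:
    """Return the council roster for a given request (illustrative only)."""
    text = request.lower()
    roster = list(DEFAULT_SPECIALISTS)
    for agent, triggers in OPTIONAL_TRIGGERS.items():
        if any(t in text for t in triggers):
            roster.append(agent)
    return roster
```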

Step 3: Spawn Team

  1. TeamCreate with name `product-council`

  2. For each selected specialist, spawn via Task tool:

    • `team_name: "product-council"`
    • `subagent_type: "{AgentName}"` (e.g., `ProductManager`, `UxDesigner`, `SoftwareDeveloper`, `DataAnalyst`)
    • `name: "council-{role}"` (e.g., `council-pm`, `council-design`, `council-dev`, `council-analyst`)
    • `mode: "bypassPermissions"` for read-only agents, `"default"` for SoftwareDeveloper
    • Prompt includes:
      • The requirements/feature/decision from user input
      • Their specific focus area
      • Round 1 instruction: "Give your initial assessment from your specialist perspective. 50-150 words. Be specific."
      • Instruction to send findings via SendMessage
  3. TaskCreate for each specialist
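The spawn parameters above can be assembled as a plain mapping before the Task call. A minimal sketch, assuming the role-slug mapping implied by the `council-pm` / `council-design` examples; `spawn_params` and `ROLE_SLUGS` are hypothetical names, not part of the skill:

```python
# Hypothetical agent-to-slug mapping, inferred from the name examples.
ROLE_SLUGS = {
    "ProductManager": "pm",
    "UxDesigner": "design",
    "SoftwareDeveloper": "dev",
    "DataAnalyst": "analyst",
}

def spawn_params(agent: str, brief: str) -> dict:
    """Assemble Task-tool arguments for one specialist (illustrative)."""
    return {
        "team_name": "product-council",
        "subagent_type": agent,
        "name": "council-" + ROLE_SLUGS.get(agent, agent.lower()),
        # Only SoftwareDeveloper runs in "default" mode; the rest are read-only.
        "mode": "default" if agent == "SoftwareDeveloper" else "bypassPermissions",
        "prompt": (
            brief + "\n\n"
            "Give your initial assessment from your specialist perspective. "
            "50-150 words. Be specific. Send findings via SendMessage."
        ),
    }
```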

Step 4: Round 1 — Initial Assessments

Collect all specialist assessments. Wait for all to report.

If quick mode: Skip to Step 6.

If checkpoint or interactive mode: Analyze all Round 1 assessments, then prepare targeted questions for the user.

Step 4.1: Prepare Targeted Questions

Review the Round 1 assessments and identify 3-4 questions whose answers would eliminate at least one option or confirm a constraint. Questions must be:

  • Closed or constrained — not "what do you think?" but "is X partial, complete, or not started?"
  • Decision-shaping — the answer changes what the council can recommend
  • Domain-specific — reference the actual system, team, or technology under discussion

Examples of good checkpoint questions:

  • "How big is the team working on this? (affects scope recommendations)"
  • "Is the migration to X complete, partial, or not started?"
  • "Which of these is the #1 priority: speed, cost, or flexibility?"
  • "Are both vendors currently active, or is one being phased out?"

Step 4.2: Present Round 1 + Ask Questions

Present the Round 1 summaries, then ask via AskUserQuestion with up to 4 targeted questions. Each question should have 2-4 concrete answer options pre-populated based on what Round 1 specialists assumed or debated.

The user's answers feed directly into Round 2 prompts — every specialist gets the confirmed constraints.

Step 5: Rounds 2 & 3 — Debate

Round 2: Cross-Perspective Challenges

Send each specialist the full Round 1 transcript plus any user context:

Here are the Round 1 assessments from all specialists:

[Full Round 1 transcript]

[User context if provided]

ROUND 2 INSTRUCTION: Respond to specific points from other specialists BY NAME. Where do product needs conflict with technical reality? Where do metrics miss user experience? Where does the UX create measurement blind spots? Reference at least one other specialist's position. 50-150 words.

Collect all Round 2 responses.

If interactive mode: Present Round 2 summaries, then prepare a second round of targeted questions. By Round 2, specialists have identified specific tensions and trade-offs — ask the user to resolve the ones that matter most. Examples:

  • "Specialist A says X, Specialist B says Y. Which aligns with your constraints?"
  • "The team identified a build-vs-buy trade-off for Z. Preference?"
  • "Should we evaluate [vendor discovered in research] or skip it?"

Use AskUserQuestion with up to 4 questions. Feed answers into Round 3 convergence prompts.

Round 3: Convergence

Send each specialist the full transcript:

Here is the full discussion (Rounds 1-2):

[Full transcript]

ROUND 3 INSTRUCTION: Given the full discussion, identify:
1. Where the council AGREES
2. Where you still DISAGREE and why
3. Your FINAL recommendation on the product decision
50-150 words.

Collect all Round 3 responses.

Step 6: Synthesize and Teardown

Produce the product recommendation:

### Product Council Recommendation: [Topic]

**Specialists consulted**: [who participated]
**Rounds**: [how many completed]

#### Unanimous Agreements
What all specialists converged on — these are high-confidence recommendations.

#### Key Disagreements
Where specialists differ — present both sides with reasoning. Flag which need the user's decision.

#### Feasibility Assessment
Technical constraints, architecture impact, timeline risks, team capacity, dependency concerns.

#### Success Metrics
Concrete, measurable KPIs with targets and timeframes.

#### Recommended Actions
Prioritized roadmap: what to do first, second, third. Include team allocation if discussed.

#### Open Decisions
Specific choices the user must make, with the options and trade-offs from each side.

After synthesis:

  1. Send shutdown_request to each teammate
  2. TeamDelete to clean up

Step 7: Sequential Fallback

If agent teams are not available:

Gemini CLI Note: In the Gemini CLI, the `Task` tool is replaced by direct `@`-invocation. Instead of spawning a task, invoke the specialist directly in your prompt using `@AgentName` (e.g., `Hey @ProductManager, please review...`). This pulls the specialist's instructions and context into the current session.

  1. Round 1: For each specialist, use Task tool (no `team_name`) with `subagent_type: "{AgentName}"`. Collect results.
  2. [Checkpoint]: Present assessments, ask user (same as Step 4).
  3. Round 2: For each, spawn new Task with Round 1 transcript + Round 2 instruction.
  4. Round 3: For each, spawn new Task with Round 1+2 transcript + Round 3 instruction.
  5. Synthesize using the same verdict format.
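The fallback loop above can be sketched as a plain sequential driver. This is an illustration of the control flow only; `run_task` and `ask_user` are hypothetical stand-ins for the Task and AskUserQuestion tools:

```python
def sequential_rounds(specialists, topic, run_task, ask_user):
    """Run the three-round fallback without a team (illustrative sketch)."""
    # Round 1: independent assessments, no shared context.
    r1 = {a: run_task(a, topic + "\nRound 1: give your initial assessment.")
          for a in specialists}
    # Checkpoint: present assessments and collect user constraints.
    user_ctx = ask_user(r1)
    # Round 2: each specialist sees the full Round 1 transcript plus user context.
    r2 = {a: run_task(a, topic + "\nRound 1 transcript: " + str(r1)
                      + "\nUser context: " + str(user_ctx)
                      + "\nRound 2: challenge other specialists by name.")
          for a in specialists}
    # Round 3: convergence on the full Rounds 1-2 history.
    r3 = {a: run_task(a, topic + "\nRounds 1-2: " + str(r1) + " " + str(r2)
                      + "\nRound 3: agreements, disagreements, final recommendation.")
          for a in specialists}
    return r1, r2, r3
```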

Constraints

  • The main session IS the lead — do not spawn a `council-lead` agent
  • Always include all 4 default specialists for product reviews — they cover complementary blind spots
  • Provide full context in every prompt — agents don't inherit conversation or previous rounds
  • In Round 2+, agents MUST reference other specialists by name
  • If the decision is trivial, skip the council — tell the user a full review isn't warranted