Claude-Code-Workflow team-issue

Unified team skill for issue resolution. Uses team-worker agent architecture with role directories for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team issue".

install
source · Clone the upstream repo
git clone https://github.com/catlog22/Claude-Code-Workflow
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/catlog22/Claude-Code-Workflow "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.codex/skills/team-issue" ~/.claude/skills/catlog22-claude-code-workflow-team-issue-7cbb8c && rm -rf "$T"
manifest: .codex/skills/team-issue/SKILL.md
source content

Team Issue Resolution

Orchestrate issue resolution pipeline: explore context -> plan solution -> review (optional) -> marshal queue -> implement. Supports Quick, Full, and Batch pipelines with review-fix cycle.

Architecture

Skill(skill="team-issue", args="<issue-ids> [--mode=<mode>]")
                    |
         SKILL.md (this file) = Router
                    |
     +--------------+--------------+
     |                             |
  no --role flag              --role <name>
     |                             |
  Coordinator                  Worker
  roles/coordinator/role.md    roles/<name>/role.md
     |
     +-- clarify -> dispatch -> spawn workers -> STOP
                                    |
             +-------+-------+-------+-------+
             v       v       v       v       v
          [explor] [plann] [review] [integ] [imple]

Role Registry

| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | roles/coordinator/role.md | (none) | (n/a) |
| explorer | roles/explorer/role.md | EXPLORE-* | false |
| planner | roles/planner/role.md | SOLVE-* | false |
| reviewer | roles/reviewer/role.md | AUDIT-* | false |
| integrator | roles/integrator/role.md | MARSHAL-* | false |
| implementer | roles/implementer/role.md | BUILD-* | false |
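
A coordinator can resolve a task ID to its role via the prefix column above. A minimal sketch (the helper names are illustrative, not part of the skill):

```javascript
// Map task-ID prefixes to role names, mirroring the Role Registry.
const PREFIX_TO_ROLE = {
  EXPLORE: "explorer",
  SOLVE: "planner",
  AUDIT: "reviewer",
  MARSHAL: "integrator",
  BUILD: "implementer",
};

// Resolve a task ID like "SOLVE-001" to { role, roleSpec }.
function resolveRole(taskId) {
  const prefix = taskId.split("-")[0];
  const role = PREFIX_TO_ROLE[prefix];
  if (!role) throw new Error(`Unknown task prefix: ${prefix}`);
  return { role, roleSpec: `roles/${role}/role.md` };
}
```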

Role Router

Parse $ARGUMENTS:

  • Has --role <name> -> Read roles/<name>/role.md, execute Phase 2-4
  • No --role -> roles/coordinator/role.md, execute entry router
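
The router decision can be sketched as a small argument check (a hypothetical helper; the skill's actual argument handling may differ):

```javascript
// Extract the --role flag from a raw argument string, if present.
function parseRole(args) {
  const match = args.match(/--role\s+(\S+)/);
  return match ? match[1] : null; // null -> coordinator path
}

// Pick the role spec file to load based on the flag.
function routeArgs(args) {
  const role = parseRole(args);
  return role
    ? { mode: "worker", spec: `roles/${role}/role.md` }
    : { mode: "coordinator", spec: "roles/coordinator/role.md" };
}
```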

Delegation Lock

Coordinator is a PURE ORCHESTRATOR. It coordinates, it does NOT do.

Before calling ANY tool, apply this check:

| Tool Call | Verdict | Reason |
|-----------|---------|--------|
| spawn_agent, wait_agent, close_agent, send_message, followup_task | ALLOWED | Orchestration |
| list_agents | ALLOWED | Agent health check |
| request_user_input | ALLOWED | User interaction |
| mcp__ccw-tools__team_msg | ALLOWED | Message bus |
| Read/Write on .workflow/.team/ files | ALLOWED | Session state |
| Read on roles/, commands/, specs/ | ALLOWED | Loading own instructions |
| Read/Grep/Glob on project source code | BLOCKED | Delegate to worker |
| Edit on any file outside .workflow/ | BLOCKED | Delegate to worker |
| Bash("ccw cli ...") | BLOCKED | Only workers call CLI |
| Bash running build/test/lint commands | BLOCKED | Delegate to worker |

If a tool call is BLOCKED: STOP. Create a task, spawn a worker.

No exceptions for "simple" tasks. Even a single-file read-and-report MUST go through spawn_agent.
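
The verdict table amounts to an allow-list check applied before each coordinator tool call. A minimal sketch (the function and its rules are illustrative, derived from the table):

```javascript
// Tools the coordinator may always call (orchestration + messaging).
const ALLOWED_TOOLS = new Set([
  "spawn_agent", "wait_agent", "close_agent", "send_message",
  "followup_task", "list_agents", "request_user_input",
  "mcp__ccw-tools__team_msg",
]);

// Returns "ALLOWED" or "BLOCKED" for a coordinator tool call.
function verdict(tool, path = "") {
  if (ALLOWED_TOOLS.has(tool)) return "ALLOWED";
  // Session state: Read/Write under .workflow/.team/
  if ((tool === "Read" || tool === "Write") && path.startsWith(".workflow/.team/"))
    return "ALLOWED";
  // Loading own instructions: Read under roles/, commands/, specs/
  const own = ["roles/", "commands/", "specs/"].some((p) => path.startsWith(p));
  if (tool === "Read" && own) return "ALLOWED";
  // Edit is blocked outside .workflow/
  if (tool === "Edit" && path.startsWith(".workflow/")) return "ALLOWED";
  return "BLOCKED"; // everything else: delegate to a worker
}
```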


Shared Constants

  • Session prefix: TISL
  • Session path: .workflow/.team/TISL-<date>-<slug>/
  • Team name: issue
  • CLI tools: ccw cli --mode analysis (read-only), ccw cli --mode write (modifications)
  • Message bus: mcp__ccw-tools__team_msg(session_id=<session-id>, ...)
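
Putting the constants together, a session folder path might be derived like this (the helper and its slugging rules are assumptions; the skill does not specify them):

```javascript
// Build a session directory path from the TISL prefix, a date, and a slug.
function sessionPath(date, slug) {
  const safeSlug = slug.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  return `.workflow/.team/TISL-${date}-${safeSlug}/`;
}
```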

Worker Spawn Template

Coordinator spawns workers using this template:

spawn_agent({
  agent_type: "team_worker",
  task_name: "<task-id>",
  fork_turns: "none",
  message: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
inner_loop: false

Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.

## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>

## Upstream Context
<prev_context>`
})

After spawning, use wait_agent({ timeout_ms: 1800000 }) to collect results. If result.timed_out, send STATUS_CHECK via followup_task (wait 3 min), then FINALIZE with interrupt (wait 3 min), then mark timed_out and close agents. Use close_agent({ target }) on each worker.

Parallel spawn (Batch mode, N explorer or M implementer instances):

spawn_agent({
  agent_type: "team_worker",
  task_name: "<task-id>",
  fork_turns: "none",
  message: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <task-description>
agent_name: <role>-<N>
inner_loop: false

Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.

## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>

## Upstream Context
<prev_context>`
})

After spawning, use wait_agent({ timeout_ms: 1800000 }) to collect results. If result.timed_out, send STATUS_CHECK via followup_task (wait 3 min), then FINALIZE with interrupt (wait 3 min), then mark timed_out and close agents. Use close_agent({ target }) on each worker.
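
The timeout cascade described above can be modeled as a tiny escalation table. A sketch (the step names and 3-minute waits come from the text; the helper itself is hypothetical):

```javascript
// Escalation steps applied when wait_agent reports timed_out.
const CASCADE = [
  { action: "STATUS_CHECK", waitMs: 3 * 60 * 1000 }, // probe via followup_task
  { action: "FINALIZE", waitMs: 3 * 60 * 1000 },     // finalize with interrupt
  { action: "CLOSE", waitMs: 0 },                    // mark timed_out, close_agent
];

// Given how many escalations have already run, return the next step.
function nextCascadeStep(attempt) {
  return CASCADE[Math.min(attempt, CASCADE.length - 1)];
}
```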

Model Selection Guide

| Role | model | reasoning_effort | Rationale |
|------|-------|------------------|-----------|
| Explorer (EXPLORE-*) | (default) | medium | Context gathering, file reading, less reasoning |
| Planner (SOLVE-*) | (default) | high | Solution design requires deep analysis |
| Reviewer (AUDIT-*) | (default) | high | Code review and plan validation need full reasoning |
| Integrator (MARSHAL-*) | (default) | medium | Queue ordering and dependency resolution |
| Implementer (BUILD-*) | (default) | high | Code generation needs precision |

Override model/reasoning_effort in spawn_agent when cost optimization is needed:

spawn_agent({
  agent_type: "team_worker",
  task_name: "<task-id>",
  fork_turns: "none",
  model: "<model-override>",
  reasoning_effort: "<effort-level>",
  message: "..."
})
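
When applying overrides programmatically, the effort column of the guide could be encoded as a lookup keyed by task prefix (an illustrative helper, not part of the skill):

```javascript
// reasoning_effort per task-ID prefix, mirroring the Model Selection Guide.
const EFFORT = {
  EXPLORE: "medium",
  SOLVE: "high",
  AUDIT: "high",
  MARSHAL: "medium",
  BUILD: "high",
};

// Look up the suggested effort for a task ID like "AUDIT-001".
function effortFor(taskId) {
  return EFFORT[taskId.split("-")[0]] ?? "medium";
}
```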

User Commands

| Command | Action |
|---------|--------|
| check / status | View execution status graph, no advancement |
| resume / continue | Check worker states, advance next step |

Session Directory

.workflow/.team/TISL-<date>-<slug>/
├── session.json                    # Session metadata + pipeline + fix_cycles
├── task-analysis.json              # Coordinator analyze output
├── .msg/
│   ├── messages.jsonl              # Message bus log
│   └── meta.json                   # Session state + cross-role state
├── wisdom/                         # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── explorations/                   # Explorer output
│   └── context-<issueId>.json
├── solutions/                      # Planner output
│   └── solution-<issueId>.json
├── audits/                         # Reviewer output
│   └── audit-report.json
├── queue/                          # Integrator output (also .workflow/issues/queue/)
└── builds/                         # Implementer output

Specs Reference

v4 Agent Coordination

Message Semantics

| Intent | API | Example |
|--------|-----|---------|
| Send exploration context to running planner | send_message | Queue EXPLORE-* findings to SOLVE-* worker |
| Not used in this skill | followup_task | No resident agents -- all workers are one-shot |
| Check running agents | list_agents | Verify parallel explorer/implementer health |

Pipeline Pattern

Pipeline with context passing: explore -> plan -> review (optional) -> marshal -> implement. In Batch mode, N explorers and M implementers run in parallel:

// Batch mode: spawn N explorers in parallel (max 5)
const explorerNames = ["EXPLORE-001", "EXPLORE-002", ..., "EXPLORE-00N"]
for (const name of explorerNames) {
  spawn_agent({ agent_type: "team_worker", task_name: name, ... })
}
wait_agent({ timeout_ms: 1800000 })  // 30 min — apply timeout cascade if timed_out

// After MARSHAL completes: spawn M implementers in parallel (max 3)
const buildNames = ["BUILD-001", "BUILD-002", ..., "BUILD-00M"]
for (const name of buildNames) {
  spawn_agent({ agent_type: "team_worker", task_name: name, ... })
}
wait_agent({ timeout_ms: 1800000 })  // 30 min — apply timeout cascade if timed_out
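
The name lists above follow a zero-padded numeric sequence. Generating them might look like this (the helper is illustrative):

```javascript
// Generate N task names like "EXPLORE-001" ... zero-padded to 3 digits.
function taskNames(prefix, count) {
  return Array.from({ length: count }, (_, i) =>
    `${prefix}-${String(i + 1).padStart(3, "0")}`
  );
}
```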

Review-Fix Cycle

Reviewer (AUDIT-*) may reject plans, triggering fix cycles (max 2). Dynamic SOLVE-fix and AUDIT re-review tasks are created in tasks.json.
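
The two-cycle cap translates to a small convergence rule. A sketch (the function name and verdict values are assumptions):

```javascript
// Decide the next pipeline step after an AUDIT verdict.
// After 2 rejected rounds, force convergence to the integrator (MARSHAL).
function nextAfterAudit(verdict, fixCycles) {
  if (verdict === "approved") return "MARSHAL";
  if (fixCycles >= 2) return "MARSHAL"; // forced convergence
  return "SOLVE-fix";                   // spawn another fix cycle
}
```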

Agent Health Check

Use list_agents({}) in handleResume and handleComplete:

// Reconcile session state with actual running agents
const running = list_agents({})
// Compare with tasks.json active_agents
// Reset orphaned tasks (in_progress but agent gone) to pending
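
The reconciliation comments above might be realized as follows (the data shapes are assumptions; the real tasks.json schema is not shown here):

```javascript
// Reset tasks marked in_progress whose agent is no longer running.
function reconcile(tasks, runningNames) {
  const running = new Set(runningNames);
  return tasks.map((t) =>
    t.status === "in_progress" && !running.has(t.id)
      ? { ...t, status: "pending" } // orphaned: agent gone
      : t
  );
}
```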

Named Agent Targeting

Workers are spawned with task_name: "<task-id>", enabling direct addressing:

  • send_message({ target: "SOLVE-001", message: "..." }) -- queue exploration context to running planner
  • close_agent({ target: "BUILD-001" }) -- cleanup by name after completion

Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| CLI tool fails | Worker fallback to direct implementation |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
| Review rejection exceeds 2 rounds | Force convergence to integrator |
| No issues found for given IDs | Coordinator reports error to user |
| Deferred BUILD count unknown | Defer to MARSHAL callback |