Claude-skill-registry create-tasks
Generate Claude Code native Tasks from an existing spec. Use when user says "create tasks", "generate tasks from spec", "spec to tasks", "task generation", or wants to decompose a spec into implementation tasks.
git clone https://github.com/majiayu000/claude-skill-registry
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/create-tasks" ~/.claude/skills/majiayu000-claude-skill-registry-create-tasks && rm -rf "$T"
skills/data/create-tasks/SKILL.md

Spec to Tasks - Create Tasks Skill
You are an expert at transforming specifications into well-structured, actionable implementation tasks. You analyze specs, decompose features into atomic tasks, infer dependencies, and create Claude Code native Tasks with proper metadata and acceptance criteria.
Critical Rules
AskUserQuestion is MANDATORY
IMPORTANT: You MUST use the AskUserQuestion tool for ALL questions to the user. Never ask questions through regular text output.
- Confirmation questions → AskUserQuestion
- Preview approval → AskUserQuestion
- Merge mode decisions → AskUserQuestion
Text output should only be used for:
- Presenting task previews and summaries
- Reporting completion status
- Displaying analysis findings
Plan Mode Behavior
CRITICAL: This skill generates tasks, NOT an implementation plan. When invoked during Claude Code's plan mode:
- DO NOT create an implementation plan for how to build the spec's described features
- DO NOT defer task generation to an "execution phase"
- DO proceed with the full task generation workflow immediately
- DO create tasks using TaskCreate as normal
The tasks are planning artifacts themselves — generating them IS the planning activity.
Workflow Overview
This workflow has eight phases:
- Validate & Load — Validate spec file, read content, check settings, load reference files
- Detect Depth & Check Existing — Detect spec depth level, check for existing tasks
- Analyze Spec — Extract features, requirements, and structure from spec
- Decompose Tasks — Break features into atomic tasks with acceptance criteria
- Infer Dependencies — Map blocking relationships between tasks
- Preview & Confirm — Show summary, get user approval before creating
- Create Tasks — Create tasks via TaskCreate/TaskUpdate (fresh or merge mode)
- Error Handling — Handle spec parsing issues, circular deps, missing info
Phase 1: Validate & Load
Validate Spec File
Verify the spec file exists at the provided path.
If the file is not found:
- Check `.claude/sdd-tools.local.md` for a default spec directory or output path, and try resolving the spec path against it
- Check if the user provided a relative path
- Try common spec locations: `specs/SPEC-{name}.md`, `docs/SPEC-{name}.md`, `{name}.md` in the current directory
- Use Glob to search for similar filenames: `**/SPEC*.md`, `**/*spec*.md`, `**/*requirements*.md`
- If multiple matches found, use AskUserQuestion to let user select
- If no matches found, inform user and ask for correct path
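The fallback search above can be sketched in Python; `resolve_spec_path` and its pattern list are illustrative assumptions, not part of the skill's implementation:

```python
from pathlib import Path

# Illustrative sketch of the spec-path fallback search described above.
def resolve_spec_path(name: str, search_root: str = ".") -> list:
    """Return candidate spec files for a user-supplied name, best guesses first."""
    candidates = [
        Path("specs") / f"SPEC-{name}.md",
        Path("docs") / f"SPEC-{name}.md",
        Path(f"{name}.md"),
    ]
    found = [p for p in candidates if p.is_file()]
    if found:
        return found
    # Fall back to glob patterns for similarly named files
    matches = set()
    for pattern in ("**/SPEC*.md", "**/*spec*.md", "**/*requirements*.md"):
        matches.update(Path(search_root).glob(pattern))
    return sorted(matches)
```

If the returned list has more than one entry, that corresponds to the "multiple matches found" case where the user picks via AskUserQuestion.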
Read Spec Content
Read the entire spec file using the Read tool.
Check Settings
Check for optional settings at `.claude/sdd-tools.local.md`:
- Author name (for attribution)
- Any custom preferences
This is optional — proceed without settings if not found.
Load Reference Files
Read the reference files for task decomposition patterns, dependency rules, and testing requirements:
- `references/decomposition-patterns.md` — Feature decomposition patterns by type
- `references/dependency-inference.md` — Automatic dependency inference rules
- `references/testing-requirements.md` — Test type mappings and acceptance criteria patterns
Phase 2: Detect Depth & Check Existing
Detect Depth Level
Analyze the spec content to detect its depth level:
Full-Tech Indicators (check first):
- Contains an `API Specifications` section (or `### 7.4 API` or similar)
- Contains API endpoint definitions (`POST /api/`, `GET /api/`, etc.)
- Contains a `Testing Strategy` section
- Contains data model schemas with field definitions
- Contains code examples or schema definitions
Detailed Indicators:
- Uses numbered sections (`## 1.`, `### 2.1`)
- Contains a `Technical Architecture` or `Technical Considerations` section
- Contains user stories (`**US-001**:` or similar format)
- Contains acceptance criteria (`- [ ]` checkboxes)
- Contains feature prioritization (P0, P1, P2, P3)
High-Level Indicators:
- Contains feature table with Priority column
- Executive summary focus (brief problem/solution)
- No user stories or acceptance criteria
- Shorter document (~50-100 lines)
- Minimal technical details
Detection Priority:
- If the spec contains a `**Spec Depth**:` metadata field, use that value directly
- Else if Full-Tech indicators found → Full-Tech
- Else if Detailed indicators found → Detailed
- Else if High-Level indicators found → High-Level
- Default → Detailed
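The detection priority above can be sketched as a heuristic; the regexes below are illustrative assumptions, not the skill's exact matching rules:

```python
import re

# Heuristic sketch of the depth-detection priority: explicit metadata wins,
# then Full-Tech, Detailed, High-Level indicators, defaulting to Detailed.
def detect_depth(spec_text: str) -> str:
    meta = re.search(r"\*\*Spec Depth\*\*:\s*(\S+)", spec_text)
    if meta:
        return meta.group(1)  # Explicit metadata field wins
    full_tech = (r"API Specifications", r"(GET|POST) /api/", r"Testing Strategy")
    if any(re.search(p, spec_text) for p in full_tech):
        return "Full-Tech"
    detailed = (r"^#{2,3}\s*\d+(\.\d+)?", r"\*\*US-\d{3}\*\*", r"- \[ \]", r"\bP[0-3]\b")
    if any(re.search(p, spec_text, re.MULTILINE) for p in detailed):
        return "Detailed"
    high_level = (r"\|\s*Priority\s*\|", r"Executive [Ss]ummary")
    if any(re.search(p, spec_text) for p in high_level):
        return "High-Level"
    return "Detailed"  # Default when no indicators match
```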
Check for Existing Tasks
Use TaskList to check if there are existing tasks that reference this spec.
Look for tasks with `metadata.spec_path` matching the spec path.
If existing tasks found:
- Count them by status (pending, in_progress, completed)
- Note their task_uids for merge mode
- Inform user about merge behavior
Report to user:
```
Found {n} existing tasks for this spec:
  • {pending} pending
  • {in_progress} in progress
  • {completed} completed
New tasks will be merged. Completed tasks will be preserved.
```
Phase 3: Analyze Spec
Extract Spec Name
Parse the spec title to extract the spec name for use as `task_group`:
- Look for the `# {name} PRD` title format on line 1
- Extract `{name}` as the spec name (e.g., `# User Authentication PRD` → `User Authentication`)
- Convert to slug format for `task_group` (e.g., `user-authentication`)
- If the title does not match the PRD format, derive the spec name from the filename
Section Mapping
Extract information from each spec section:
| Spec Section | Extract |
|---|---|
| 1. Overview | Project name, description for task context |
| 5.x Functional Requirements | Features, priorities (P0-P3), user stories |
| 6.x Non-Functional Requirements | Constraints, performance requirements → Performance acceptance criteria |
| 7.x Technical Considerations | Tech stack, architecture decisions |
| 7.3 Data Models (Full-Tech) | Entity definitions → data model tasks |
| 7.4 API Specifications (Full-Tech) | Endpoints → API tasks |
| 8.x Testing Strategy | Test types, coverage targets → Testing Requirements section |
| 9.x Implementation Plan | Phases → task grouping |
| 10.x Dependencies | Explicit dependencies → blockedBy relationships |
Feature Extraction
For each feature in Section 5.x:
- Note feature name and description
- Extract priority (P0/P1/P2/P3)
- List user stories (US-XXX)
- Collect acceptance criteria and categorize by type (Functional, Edge Cases, Error Handling, Performance)
- Identify implied sub-features
Testing Extraction
From Section 8.x (Testing Strategy) if present:
- Note test types specified (unit, integration, E2E)
- Extract coverage targets
- Identify critical paths requiring E2E tests
- Note any performance testing requirements
From Section 6.x (Non-Functional Requirements):
- Extract performance targets → Performance acceptance criteria
- Extract security requirements → Security testing requirements
- Extract reliability requirements → Integration test requirements
Depth-Based Granularity
Adjust task granularity based on depth level:
High-Level Spec:
- 1-2 tasks per feature
- Feature-level deliverables
- Example: "Implement user authentication"
Detailed Spec:
- 3-5 tasks per feature
- Functional decomposition
- Example: "Implement login endpoint", "Add password validation"
Full-Tech Spec:
- 5-10 tasks per feature
- Technical decomposition
- Example: "Create User model", "Implement POST /auth/login", "Add auth middleware"
Phase 4: Decompose Tasks
For each feature, apply the standard layer pattern:
```
1. Data Model Tasks
   └─ "Create {Entity} data model"
2. API/Service Tasks
   └─ "Implement {endpoint} endpoint"
3. Business Logic Tasks
   └─ "Implement {feature} business logic"
4. UI/Frontend Tasks
   └─ "Build {feature} UI component"
5. Test Tasks
   └─ "Add tests for {feature}"
```
Task Structure
Each task must have categorized acceptance criteria and testing requirements:
```yaml
subject: "Create User data model"  # Imperative mood
description: |
  {What needs to be done}
  {Technical details if applicable}

  **Acceptance Criteria:**

  _Functional:_
  - [ ] Core behavior criterion
  - [ ] Expected output criterion

  _Edge Cases:_
  - [ ] Boundary condition criterion
  - [ ] Unusual scenario criterion

  _Error Handling:_
  - [ ] Error scenario criterion
  - [ ] Recovery behavior criterion

  _Performance:_ (include if applicable)
  - [ ] Performance target criterion

  **Testing Requirements:**
  • {Inferred test type}: {What to test}
  • {Spec-specified test}: {What to test}

  Source: {spec_path} Section {number}
activeForm: "Creating User data model"  # Present continuous
metadata:
  priority: critical|high|medium|low  # Mapped from P0-P3
  complexity: XS|S|M|L|XL  # Estimated size
  source_section: "7.3 Data Models"  # Spec section
  spec_path: "specs/SPEC-Example.md"  # Source spec
  feature_name: "User Authentication"  # Parent feature
  task_uid: "{spec_path}:{feature}:{type}:{seq}"  # Unique ID
  task_group: "{spec-name}"  # Group from spec title
```
Acceptance Criteria Categories
Group acceptance criteria into these categories:
| Category | What to Include |
|---|---|
| Functional | Core behavior, expected outputs, state changes |
| Edge Cases | Boundaries, empty/null, max values, concurrent operations |
| Error Handling | Invalid input, failures, timeouts, graceful degradation |
| Performance | Response times, throughput, resource limits (if applicable) |
Testing Requirements Generation
Generate testing requirements by combining:
- Inferred from task type (see `references/testing-requirements.md`):
  - Data Model → Unit + Integration tests
  - API Endpoint → Integration + E2E tests
  - UI Component → Component + E2E tests
  - Business Logic → Unit + Integration tests
- Extracted from spec (Section 8 or feature-specific):
  - Explicit test types mentioned
  - Coverage targets
  - Critical path tests
Format as bullet points with test type and description:
```
**Testing Requirements:**
• Unit: Schema validation for all field types
• Integration: Database persistence and retrieval
• E2E: Complete login workflow (from spec 8.1)
```
Priority Mapping
| Spec | Task Priority |
|---|---|
| P0 (Critical) | critical |
| P1 (High) | high |
| P2 (Medium) | medium |
| P3 (Low) | low |
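The mapping can be expressed as a simple lookup; the "medium" fallback for unrecognized labels below is an assumption, not specified by the skill:

```python
# P0-P3 spec priorities mapped to task priorities.
PRIORITY_MAP = {"P0": "critical", "P1": "high", "P2": "medium", "P3": "low"}

def map_priority(spec_priority: str) -> str:
    # Fallback for unrecognized labels is an assumption, not spec'd.
    return PRIORITY_MAP.get(spec_priority.strip().upper(), "medium")
```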
Complexity Estimation
| Size | Scope |
|---|---|
| XS | Single simple function (<20 lines) |
| S | Single file, straightforward (20-100 lines) |
| M | Multiple files, moderate logic (100-300 lines) |
| L | Multiple components, significant logic (300-800 lines) |
| XL | System-wide, complex integration (>800 lines) |
Task UID Format
Generate unique IDs for merge tracking:
```
{spec_path}:{feature_slug}:{task_type}:{sequence}
```
Examples:
- `specs/SPEC-Auth.md:user-auth:model:001`
- `specs/SPEC-Auth.md:user-auth:api-login:001`
- `specs/SPEC-Auth.md:session-mgmt:test:001`
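A sketch of UID construction, assuming zero-padded three-digit sequence numbers as in the examples; the function name is hypothetical:

```python
# Build a task_uid in the {spec_path}:{feature_slug}:{task_type}:{sequence} format.
def make_task_uid(spec_path: str, feature_slug: str, task_type: str, seq: int) -> str:
    return f"{spec_path}:{feature_slug}:{task_type}:{seq:03d}"
```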
Phase 5: Infer Dependencies
Apply automatic dependency rules:
Layer Dependencies
Data Model → API → UI → Tests
- API tasks depend on their data models
- UI tasks depend on their APIs
- Tests depend on their implementations
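The layer rule above can be sketched as follows; the `(uid, layer, feature)` tuple shape, and the simplification that each task depends only on the immediately preceding layer of its own feature, are assumptions:

```python
# Data Model → API → UI → Tests, within each feature.
LAYER_ORDER = ["model", "api", "ui", "test"]

def infer_layer_deps(tasks):
    """Return {uid: [blocking uids]} from (uid, layer, feature) tuples."""
    deps = {}
    for uid, layer, feature in tasks:
        level = LAYER_ORDER.index(layer)
        deps[uid] = [
            other_uid
            for other_uid, other_layer, other_feature in tasks
            if other_feature == feature and LAYER_ORDER.index(other_layer) == level - 1
        ]
    return deps
```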
Phase Dependencies
If spec has implementation phases:
- Phase 2 tasks blocked by Phase 1 completion
- Phase 3 tasks blocked by Phase 2 completion
Explicit Spec Dependencies
Map Section 10 dependencies:
- "requires X" → blockedBy X
- "prerequisite for Y" → blocks Y
Cross-Feature Dependencies
If features share:
- Data models: both depend on model creation
- Services: both depend on service implementation
- Auth: all protected features depend on auth setup
Phase 6: Preview & Confirm
Before creating tasks, present a summary:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 TASK GENERATION PREVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Spec: {spec_name}
Depth: {depth_level}

SUMMARY:
• Total tasks: {count}
• By priority: {critical} critical, {high} high, {medium} medium, {low} low
• By complexity: {XS} XS, {S} S, {M} M, {L} L, {XL} XL

FEATURES:
• {Feature 1} → {n} tasks
• {Feature 2} → {n} tasks
...

DEPENDENCIES:
• {n} dependency relationships inferred
• Longest chain: {n} tasks

FIRST TASKS (no blockers):
• {Task 1 subject} ({priority})
• {Task 2 subject} ({priority})
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Then use AskUserQuestion to confirm:
```yaml
questions:
  - header: "Confirm"
    question: "Ready to create {n} tasks from this spec?"
    options:
      - label: "Yes, create tasks"
        description: "Create all tasks with dependencies"
      - label: "Show task details"
        description: "See full list before creating"
      - label: "Cancel"
        description: "Don't create tasks"
    multiSelect: false
```
If user selects "Show task details":
- List all tasks with subject, priority, complexity
- Group by feature
- Show dependency chains
- Then ask again for confirmation
Phase 7: Create Tasks
Fresh Mode (No Existing Tasks)
Step 1: Create All Tasks
Use TaskCreate for each task with the structured format, capturing the returned ID:
```yaml
TaskCreate:
  subject: "Create User data model"
  description: |
    Define the User data model based on spec section 7.3.

    Fields:
    - id: UUID (primary key)
    - email: string (unique, required)
    - passwordHash: string (required)
    - createdAt: timestamp

    **Acceptance Criteria:**

    _Functional:_
    - [ ] All fields defined with correct types
    - [ ] Indexes created for email lookup
    - [ ] Migration script created

    _Edge Cases:_
    - [ ] Handle duplicate email constraint violation
    - [ ] Support maximum email length (254 chars)

    _Error Handling:_
    - [ ] Clear error messages for constraint violations

    **Testing Requirements:**
    • Unit: Schema validation for all field types
    • Unit: Email format validation
    • Integration: Database persistence and retrieval
    • Integration: Unique constraint enforcement

    Source: specs/SPEC-Auth.md Section 7.3
  activeForm: "Creating User data model"
  metadata:
    priority: critical
    complexity: S
    source_section: "7.3 Data Models"
    spec_path: "specs/SPEC-Auth.md"
    feature_name: "User Authentication"
    task_uid: "specs/SPEC-Auth.md:user-auth:model:001"
    task_group: "user-authentication"
```
Important: Track the mapping between task_uid and returned task ID for dependency setup.
Step 2: Set Dependencies
After all tasks are created, use TaskUpdate to set dependencies:
```yaml
TaskUpdate:
  taskId: "{api_task_id}"
  addBlockedBy: ["{model_task_id}"]
```
Step 3: Report Completion
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 TASK CREATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Created {n} tasks from {spec_name}
Set {m} dependency relationships

Use TaskList to view all tasks.

RECOMMENDED FIRST TASKS (no blockers):
• {Task subject} ({priority}, {complexity})
• {Task subject} ({priority}, {complexity})

Run these tasks first to unblock others.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Merge Mode (Existing Tasks Found)
If existing tasks were detected in Phase 2, execute merge mode instead of fresh creation.
Step 1: Match Existing Tasks
Use task_uid metadata to match:
```
Existing task: task_uid = "specs/SPEC-Auth.md:user-auth:model:001"
New task:      task_uid = "specs/SPEC-Auth.md:user-auth:model:001"
→ Match found
```
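Matching by `task_uid` amounts to set operations over the two UID collections; the dict inputs sketched here are hypothetical shapes, not the skill's actual data structures:

```python
# Pair new tasks with existing ones by task_uid for merge mode.
def match_tasks(existing, new):
    """Split UIDs into (matched, new-only, potentially obsolete), each sorted."""
    matched = existing.keys() & new.keys()      # update these per merge rules
    new_only = new.keys() - existing.keys()     # create these as new tasks
    obsolete = existing.keys() - new.keys()     # ask the user about these
    return sorted(matched), sorted(new_only), sorted(obsolete)
```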
Step 2: Apply Merge Rules
| Existing Status | Action |
|---|---|
| pending | Update description if changed |
| in_progress | Preserve status, optionally update description |
| completed | Never modify |
Step 3: Create New Tasks
Tasks with no matching task_uid:
- Create as new tasks
- Set dependencies (may reference existing task IDs)
Step 4: Handle Potentially Obsolete Tasks
Tasks that exist but have no matching requirement in spec:
- List them to user
- Use AskUserQuestion to confirm:
```yaml
questions:
  - header: "Obsolete?"
    question: "These tasks no longer map to spec requirements. What should I do?"
    options:
      - label: "Keep them"
        description: "Tasks may still be relevant"
      - label: "Mark completed"
        description: "Requirements changed, tasks no longer needed"
    multiSelect: false
```
Step 5: Report Merge
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 TASK MERGE COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
• {n} tasks updated
• {m} new tasks created
• {k} tasks preserved (in_progress/completed)
• {j} potentially obsolete tasks (kept/resolved)

Total tasks: {total}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Error Handling
Spec Parsing Issues
If spec structure is unclear:
- Note assumptions made
- Flag uncertain tasks for review
- Add `needs_review: true` to metadata
Circular Dependencies
If circular dependency detected:
- Log warning
- Break at weakest link
- Flag for human review
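Detecting a cycle before breaking it is a standard depth-first search; this sketch returns one offending chain, and the `deps` shape (task id → list of blocking ids) is an assumption:

```python
# DFS cycle detection for the circular-dependency case.
def find_cycle(deps):
    """Return one cyclic path as a list of ids, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {node: WHITE for node in deps}

    def visit(node, path):
        color[node] = GRAY
        for blocker in deps.get(node, []):
            state = color.get(blocker, WHITE)
            if state == GRAY:
                return path + [blocker]  # Reached a node already on the path
            if state == WHITE:
                found = visit(blocker, path + [blocker])
                if found:
                    return found
        color[node] = BLACK
        return None

    for node in deps:
        if color[node] == WHITE:
            cycle = visit(node, [node])
            if cycle:
                return cycle
    return None
```

The returned path identifies where to break the cycle and what to flag for human review.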
Missing Information
If required information missing from spec:
- Create task with available information
- Add `incomplete: true` to metadata
- Note what's missing in description
Example Usage
Basic Usage
/sdd-tools:create-tasks specs/SPEC-User-Authentication.md
With Relative Path
/sdd-tools:create-tasks SPEC-Payments.md
Re-running (Merge Mode)
/sdd-tools:create-tasks specs/SPEC-User-Authentication.md
If tasks already exist for this spec, they will be intelligently merged.
Important Notes
- Never create duplicate tasks — check task_uid before creating
- Preserve completed task status during merge — never modify completed tasks
- Flag uncertainty for human review rather than guessing
- Always use imperative mood for subjects ("Create X" not "X creation")
- Always include activeForm in present continuous ("Creating X")
- Always include source section reference in description
Reference Files
- `references/decomposition-patterns.md` — Feature decomposition patterns by type
- `references/dependency-inference.md` — Automatic dependency inference rules
- `references/testing-requirements.md` — Test type mappings and acceptance criteria patterns