# claude-skill-registry: execute-tasks

This skill should be used when executing tasks from `ai-state/active/tasks.yaml` sequentially. It loads tasks, gathers context, implements features with phase-appropriate testing, updates task status in tasks.yaml, organizes tests into `ai-state/regressions/` folders, and logs all operations to operations.log. Use after write-plan creates tasks.yaml or when resuming development work.
Install by cloning the registry, or copy just this skill into `~/.claude/skills`:

```bash
git clone https://github.com/majiayu000/claude-skill-registry

# Or, skill-only install:
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/execute-tasks" ~/.claude/skills/majiayu000-claude-skill-registry-execute-tasks && rm -rf "$T"
```
# Execute-Tasks Skill

Source: `skills/data/execute-tasks/SKILL.md`
## Purpose
Sequentially execute tasks from `ai-state/active/tasks.yaml` with proper context gathering, phase-appropriate testing, automatic test organization, status tracking, and comprehensive logging.
## When to Use
Use this skill when:
- After `/write-plan` creates tasks.yaml
- Starting development work
- Resuming work after breaks
- Processing task backlog
- Executing sprint work
## CRITICAL: Logging Requirements
BEFORE YOU DO ANYTHING ELSE, LOG IT!
Use the unified logging system via `.claude\scripts\log_task.bat` (Windows) or `log_operation.py`:

```bash
# Simple format (Windows):
.claude\scripts\log_task.bat TYPE TASK_ID "message"

# Full format (cross-platform):
python .claude/scripts/log_operation.py --type TYPE --task TASK_ID "message"
```
Required logging for EVERY task:
- Log "Step X: [Step Name]" BEFORE starting each major step (1-8)
- Log narrative BEFORE executing each substep that modifies state
- Log all mandatory operation events (task.started, context.gathered, etc.)
Example logging sequence for Step 2:
```bash
# Log step header
.claude\scripts\log_task.bat narrative task-001-fastapi-setup "Step 2: Gather Context"

# Log substep 2.1
.claude\scripts\log_task.bat narrative task-001-fastapi-setup "Step 2.1: Load relevant files"

# Log substep 2.2
.claude\scripts\log_task.bat narrative task-001-fastapi-setup "Step 2.2: Save context snapshot"

# Log narrative before action (substep 2.3)
.claude\scripts\log_task.bat narrative task-001-fastapi-setup "Gathering context from backend standards and checking existing project files"

# Log completion (substep 2.4)
.claude\scripts\log_task.bat context task-001-fastapi-setup
```
DO NOT skip these logs! They provide visibility into your execution process.
See `.claude/scripts/LOGGING_GUIDE.md` for the complete reference.
## Step-by-Step Execution Process
For each task with `status: pending` in tasks.yaml, follow these steps in exact order:
### Step 1: Load and Prepare Task
FIRST: Log this step header:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 1: Load and Prepare Task"
```
1.1 Read the task and sprint context:
- Read `ai-state/active/tasks.yaml`
- Extract top-level: `phase`
- Navigate into the `sprints` array and find the current sprint
- Extract sprint context: sprint, milestone, complexity_range
- Find the first task with `status: pending` within the sprint's tasks (see the sketch below)
- Extract task fields: id, status, complexity, context, who, where, what, how, goal, check, close
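For illustration only, a minimal Python sketch of this lookup, assuming PyYAML is available and tasks.yaml has the structure described above (top-level `phase`, a `sprints` array, per-sprint `tasks` lists):

```python
import yaml  # PyYAML; assumed available in the environment


def first_pending_task(path="ai-state/active/tasks.yaml"):
    """Return (phase, sprint, task) for the first pending task, or None."""
    with open(path, encoding="utf-8") as f:
        data = yaml.safe_load(f)
    phase = data["phase"]  # top-level phase, e.g. "prototype"
    for sprint in data["sprints"]:
        for task in sprint["tasks"]:
            if task.get("status") == "pending":
                return phase, sprint, task
    return None  # nothing left to execute


found = first_pending_task()
if found:
    phase, sprint, task = found
    print(phase, sprint["sprint"], task["id"])  # e.g. prototype sprint-1 task-001-...
```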
1.2 Update status to `in_progress`:
- Use the Edit tool to change the task status from `pending` to `in_progress` in tasks.yaml
- Save the file
1.3 Log task start:

```bash
.claude\scripts\log_task.bat started {task_id} "{what}"
```
1.4 Create progress tracking:
- Use TodoWrite to create an entry for this task with status `in_progress`
### Step 2: Gather Context
FIRST: Log this step header:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 2: Gather Context"
```
2.1 Load relevant files:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 2.1: Load relevant files"
```

- Read files mentioned in the task's `where` field
- Read standards from `ai-state/standards/{standard}.md` if referenced in `how`
- Search for related code in the codebase
2.2 Save context snapshot (a layout sketch follows this list):

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 2.2: Save context snapshot"
```

- Create `ai-state/contexts/{sprint}/{task_id}.md` with:
  - Full task specification
  - Relevant code snippets
  - Standards being followed
  - Dependencies noted
- NOTE: Contexts are organized by sprint folder for better organization
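As a sketch of the intended layout only (in practice the snapshot is written with the Write tool), assuming a task dict with the fields from Step 1.1:

```python
from pathlib import Path


def save_context_snapshot(sprint_name, task, snippets, standards, dependencies):
    """Write ai-state/contexts/{sprint}/{task_id}.md from the gathered context."""
    out = Path("ai-state/contexts") / sprint_name / f"{task['id']}.md"
    out.parent.mkdir(parents=True, exist_ok=True)  # one folder per sprint
    body = "\n".join([
        f"# Context: {task['id']}",
        "",
        "## Full Task Specification",
        "\n".join(f"- {k}: {v}" for k, v in task.items()),
        "",
        "## Relevant Code Snippets",
        snippets,
        "",
        "## Standards Being Followed",
        standards,
        "",
        "## Dependencies",
        dependencies,
        "",
    ])
    out.write_text(body, encoding="utf-8")
```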
2.3 Log narrative action:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Gathering context from backend standards and checking existing project files"
```

2.4 Log context gathering:

```bash
.claude\scripts\log_task.bat context {task_id}
```
### Step 3: Execute Implementation

FIRST: Log this step header:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 3: Execute Implementation"
```
3.1 Implement the feature:
- Follow the `what` description exactly
- Achieve the `goal` criteria
- Reference standards from the `how` field
- Write files to the locations in the `where` field
3.2 Use appropriate tools:
- Write tool for new files
- Edit tool for existing files
- Follow coding standards from the `how` field
3.3 Log narrative action:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Creating requirements.txt and main.py with FastAPI initialization"
```

3.4 Log implementation:

```bash
.claude\scripts\log_task.bat implemented {task_id}
```
### Step 4: Write Phase-Appropriate Tests

FIRST: Log this step header:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 4: Write Phase-Appropriate Tests"
```
4.1 Determine required test count: Read `.khujta/phase.json` and write tests based on phase (a lookup sketch follows this list):
- Prototype: 2 tests (smoke + happy path)
- MVP: 4 tests (smoke, happy, error, auth)
- Growth: 5 tests (+ edge cases)
- Scale: 6-8 tests as needed
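A hedged sketch of the lookup; the exact schema of `.khujta/phase.json` isn't specified here, so the `"phase"` key is an assumption:

```python
import json

# Minimum test counts per phase, taken from the list above
# (scale is "6-8 as needed", so 6 is used as the floor).
TESTS_PER_PHASE = {"prototype": 2, "mvp": 4, "growth": 5, "scale": 6}

with open(".khujta/phase.json", encoding="utf-8") as f:
    phase = json.load(f)["phase"]  # ASSUMPTION: the file exposes a "phase" key

required = TESTS_PER_PHASE.get(phase, 2)
print(f"Phase '{phase}': write at least {required} tests")
```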
4.2 Create test file:
- Name: `test_{component}.py` (or appropriate for the stack)
- Location: project root directory (it will move later)
- Use the Write tool to create the file
4.3 Write test cases (see the example after this list):
- Test 1 (smoke): basic functionality from `check.valid`
- Test 2 (happy path): main use case from `goal`
- Additional tests per phase requirements
- Cover `check.error` scenarios
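For the prototype phase, the two tests might look like this; `main.py` exposing `app` and the root endpoint are hypothetical names for this FastAPI example, not part of the skill:

```python
# test_fastapi_setup.py - prototype phase: smoke + happy path
from fastapi.testclient import TestClient

from main import app  # ASSUMPTION: the task created main.py exposing `app`

client = TestClient(app)


def test_smoke_server_starts():
    """Smoke test: the app imports and answers requests at all (check.valid)."""
    response = client.get("/")
    assert response.status_code < 500  # any non-crash response proves startup


def test_happy_path_root():
    """Happy path: the main use case from the task's goal."""
    response = client.get("/")  # ASSUMPTION: the root endpoint is the goal
    assert response.status_code == 200
```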
4.4 Log narrative action:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Writing 2 tests (smoke + happy path) for FastAPI server startup validation"
```

4.5 Log test creation:

```bash
.claude\scripts\log_task.bat tests-written {task_id} "2 tests"
```
### Step 5: Run Tests and Verify

FIRST: Log this step header:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 5: Run Tests and Verify"
```
5.1 Execute tests:
- Use the Bash tool: `pytest test_{component}.py -v`
- Capture output
5.2 If tests FAIL:
- Debug the issue
- Fix implementation or test
- Re-run tests
- Repeat until ALL tests pass
- DO NOT proceed until tests pass
5.3 Log test results:

```bash
.claude\scripts\log_task.bat tests-passed {task_id} "2/2 passing"
```
### Step 6: Organize Tests into Regressions

FIRST: Log this step header:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Step 6: Organize Tests into Regressions"
```
6.1 Determine destination:
- Use the task's `context` field (backend/frontend/database/devops)
- Destination: `ai-state/regressions/{context}/`
6.2 Create directory if needed:
- Use Bash: `mkdir -p ai-state/regressions/{context}`
6.3 Log narrative action:

```bash
.claude\scripts\log_task.bat narrative {task_id} "Moving tests to ai-state/regressions/backend/ directory for organization"
```
6.4 Move test file:
- Use Bash: `mv test_{component}.py ai-state/regressions/{context}/`
6.5 Verify tests still work:
- Run: `pytest ai-state/regressions/{context}/test_{component}.py -v`
- Update imports if paths changed (see the sketch below)
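If the moved file imports application modules from the project root, one common fix is to put the root back on `sys.path` at the top of the test file (a sketch; a shared `conftest.py` would be the cleaner long-term home for this):

```python
# Top of ai-state/regressions/backend/test_fastapi_setup.py after the move
import sys
from pathlib import Path

# The test now sits three directories below the project root:
# <root>/ai-state/regressions/backend/test_fastapi_setup.py
PROJECT_ROOT = Path(__file__).resolve().parents[3]
sys.path.insert(0, str(PROJECT_ROOT))

from main import app  # noqa: E402  (import works again after the path fix)
```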
6.6 Log organization:

```bash
.claude\scripts\log_task.bat tests-organized {task_id} "ai-state/regressions/{context}/"
```
### Step 7: Complete Task
7.1 Verify completion criteria:
- Check the task meets the `close` criteria
- Verify all `check` requirements passed
7.2 Update status to `completed`:
- Use the Edit tool to change the status from `in_progress` to `completed` in tasks.yaml
- Save tasks.yaml
7.3 Log completion:

```bash
.claude\scripts\log_task.bat completed {task_id} "{close}"
```
7.4 Update progress tracking:
- Use TodoWrite to mark task as completed
### Step 8: Proceed to Next Task
8.1 Check sprint boundary:
- Read tasks.yaml and identify the current sprint (the sprint with in_progress or just-completed tasks)
- Check if the current sprint has any remaining `status: pending` tasks
- If the current sprint is complete BUT the next sprint has pending tasks:
  - STOP EXECUTION - Do not proceed to the next sprint automatically
  - Show message: "Sprint {N} complete. Run `/core:execute-tasks` again to start Sprint {N+1}"
  - This allows human review of sprint completion before starting the next sprint
8.2 Find next task in CURRENT sprint:
- Look for the next task with `status: pending` in the current sprint only
- Check the task's `when` field for dependencies
- If dependencies are not met, skip to the next available task in the current sprint
8.3 Repeat process:
- If a pending task is found in the current sprint: return to Step 1 with the new task
- If no pending tasks remain in the current sprint: proceed to Step 9

8.4 When the current sprint is complete:
- If no more pending tasks remain in the current sprint, proceed to Step 9: Sprint Completion
- Do NOT automatically continue to the next sprint (the decision reduces to the sketch below)
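A sketch of that mid-run decision, under the same tasks.yaml assumptions as in Step 1 (sprint names under a `sprint` key are an assumption):

```python
import yaml


def sprint_boundary(current_sprint_name, path="ai-state/active/tasks.yaml"):
    """Mid-run check: keep working the current sprint, stop at a boundary, or finish."""
    with open(path, encoding="utf-8") as f:
        sprints = yaml.safe_load(f)["sprints"]
    by_name = {s["sprint"]: s for s in sprints}
    current = by_name[current_sprint_name]
    if any(t.get("status") == "pending" for t in current["tasks"]):
        return "continue"  # Step 8.2: pick the next pending task in this sprint
    later = sprints[sprints.index(current) + 1:]
    if any(t.get("status") == "pending" for s in later for t in s["tasks"]):
        return "stop-for-review"  # human re-runs /core:execute-tasks for the next sprint
    return "all-done"
```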
### Step 9: Sprint Completion (When Current Sprint Done)

TRIGGER: When no more pending tasks remain in the current sprint
9.1 Verify sprint completion:
- Count tasks with `status: completed` in the current sprint
- Count tasks with `status: blocked` or `failed` in the current sprint
- Confirm all tasks in the current sprint have a final status (no pending remain)
9.2 Run sprint completion script:

```bash
python .claude/scripts/complete_sprint.py
```
This automatically generates:
- Test runner scripts: `ai-state/regressions/{context}/run.bat` and `run.sh`
- Sprint report: `ai-state/reports/{sprint}-report.md`
- Test results: `ai-state/reports/{sprint}-test-results.json`
- Human docs: `ai-state/human-docs/{sprint}-summary.md`
- Operations log entry: sprint completion logged
9.3 Verify outputs:
- Check that all regression tests passed
- Review sprint report for completeness
- Confirm human docs are stakeholder-ready
9.4 Check for next sprint:
- If more sprints exist in tasks.yaml with pending tasks:
  - Display message: "✅ Sprint {N} complete! Review reports in ai-state/reports/ and ai-state/human-docs/"
  - Display message: "📋 Sprint {N+1} is ready. Run `/core:execute-tasks` when ready to start."
  - STOP - Do not continue automatically
- If no more sprints:
  - Display message: "🎉 All sprints complete! Project ready for next phase."
NOTE: Sprint completion runs in ALL phases including prototype. It's no longer just for Growth/Scale phases.
## Required Log Entries Per Task

Every task execution MUST create these log entries in `ai-state/operations.log`:

### Mandatory Operation Logs (7 minimum)

```
[timestamp] [phase] [sprint] task.started: {task_id} - {what}
[timestamp] [phase] [sprint] context.gathered: {task_id}
[timestamp] [phase] [sprint] implementation.complete: {task_id}
[timestamp] [phase] [sprint] tests.written: {task_id} - {count} tests
[timestamp] [phase] [sprint] tests.passed: {task_id} - {count}/{count} passing
[timestamp] [phase] [sprint] tests.organized: {task_id} → ai-state/regressions/{context}/
[timestamp] [phase] [sprint] task.completed: {task_id} - {close}
```
### Narrative Logs (4 minimum - logged BEFORE executing each major step)

- Before Step 2: `[timestamp] [phase] [sprint] narrative: {task_id} - {description of what you're about to do}`
- Before Step 3: `[timestamp] [phase] [sprint] narrative: {task_id} - {description of implementation about to execute}`
- Before Step 4: `[timestamp] [phase] [sprint] narrative: {task_id} - {description of tests about to write}`
- Before Step 6: `[timestamp] [phase] [sprint] narrative: {task_id} - {description of test organization}`
Examples of narrative logs:
- "narrative: task-001 - Gathering context from backend standards and checking existing files"
- "narrative: task-001 - Creating requirements.txt and main.py with FastAPI initialization"
- "narrative: task-001 - Writing 2 tests (smoke + happy path) for server startup validation"
- "narrative: task-001 - Moving tests to ai-state/regressions/backend/ directory"
Total minimum log entries per task: 11 (7 mandatory operations + 4 narratives)
Log format (a sketch of a conforming append follows this list):
- `[timestamp]`: ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)
- `[phase]`: Phase from top-level tasks.yaml (e.g., prototype, mvp)
- `[sprint]`: Sprint name from the sprints array (e.g., sprint-1)
- `{task_id}`: Task identifier (e.g., task-001-fastapi-setup)
- Additional context as specified per entry
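`log_operation.py` itself is not reproduced here; a minimal sketch of a conforming append (same field order, UTC ISO 8601 timestamp) would be:

```python
from datetime import datetime, timezone


def append_log(phase, sprint, event, detail, path="ai-state/operations.log"):
    """Append one `[timestamp] [phase] [sprint] event: detail` line."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"[{ts}] [{phase}] [{sprint}] {event}: {detail}\n")


append_log("prototype", "sprint-1", "task.started",
           "task-001-fastapi-setup - Setup FastAPI project structure")
```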
## Phase Requirements Reference
Prototype (0-100 users, 30 min to ship):
- Tests: 2 (smoke + happy path)
- Coverage: 40%
- Quality: 6.0/10
MVP (100-1K users, 1-2 hours):
- Tests: 4 (+ errors + auth)
- Coverage: 60%
- Quality: 7.0/10
Growth (1K-10K users, 2-4 hours):
- Tests: 5 (+ edge cases)
- Coverage: 70%
- Quality: 7.5/10
Scale (10K+ users, 4-8 hours):
- Tests: 6-8 as needed
- Coverage: 80%
- Quality: 8.0/10
## Error Handling
If implementation fails:
- Log error to operations.log: `[timestamp] [phase] [sprint] task.failed: {task_id} - {error}`
- Update status to `blocked` in tasks.yaml
- Create a note in `ai-state/debt/{task_id}-failure.md`
- Skip to the next task
If tests fail after 3 attempts:
- Log: `[timestamp] [phase] [sprint] tests.failed: {task_id} - {failure_details}`
- Mark the task as `blocked`
- Save test output to `ai-state/debt/{task_id}-test-failures.txt`
- Skip to the next task
If dependencies are missing:
- Log: `[timestamp] [phase] [sprint] task.blocked: {task_id} - waiting for {dependency}`
- Update status to `blocked`
- Skip to the next available task (a combined sketch of the blocked-task path follows)
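The blocked-task path is mechanical enough to sketch; this is an illustration only, not the skill's actual implementation:

```python
from datetime import datetime, timezone
from pathlib import Path


def block_task(task_id, reason, phase, sprint):
    """Record a failure: append a task.failed log line and write a debt note."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    with open("ai-state/operations.log", "a", encoding="utf-8") as log:
        log.write(f"[{ts}] [{phase}] [{sprint}] task.failed: {task_id} - {reason}\n")
    note = Path(f"ai-state/debt/{task_id}-failure.md")
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(f"# Failure: {task_id}\n\n{reason}\n", encoding="utf-8")
    # The status change to `blocked` in tasks.yaml is still done via the Edit tool.
```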
## Files Modified

Reads from:
- `ai-state/active/tasks.yaml`
- `.khujta/phase.json`
- `ai-state/standards/{standard}.md`
- Files mentioned in the task's `where`
Writes to:
- `ai-state/operations.log` (append all operations)
- `ai-state/active/tasks.yaml` (update status fields)
- `ai-state/contexts/{sprint}/{task_id}.md` (save context)
- `ai-state/regressions/{context}/` (move test files)
- Implementation files per the task's `where` field
- `ai-state/reports/{sprint}-*.md` (on sprint completion)
- `ai-state/human-docs/{sprint}-*.md` (on sprint completion)
## Tools Required
- Read: Load tasks, context, standards
- Write: Create new implementation files, tests, context docs
- Edit: Modify existing files, update tasks.yaml status
- Bash: Run tests, create directories, move files
- TodoWrite: Track real-time progress
## Success Criteria

A successful execution produces:

✅ All pending tasks now `completed` or `blocked` with a documented reason
✅ All tests passing and organized in ai-state/regressions/{context}/
✅ Complete operations.log with minimum 11 entries per task (7 operations + 4 narratives)
✅ Updated tasks.yaml with all status changes
✅ Context preserved in ai-state/contexts/{sprint}/{task_id}.md for each task
✅ TodoWrite list reflects final state
✅ Sprint completion reports generated (when all tasks done)
## Critical Rules
- Sequential only: Execute one task at a time, never skip ahead
- Status updates required: Always update tasks.yaml before and after each task
- Tests must pass: Never mark task completed if tests are failing
- Tests must be organized: Always move tests to ai-state/regressions/
- Logging is mandatory: Every operation AND narrative must be logged to operations.log (min 11 per task)
- Narrative before action: Always log narrative description BEFORE executing each major step
- TodoWrite for visibility: Use TodoWrite for real-time progress tracking
- Phase/Sprint context: Always extract and include phase/sprint in ALL log entries
## Example Log Entries

For task-001-fastapi-setup in prototype phase, sprint-1:

```
[2025-11-07T01:35:00Z] [prototype] [sprint-1] task.started: task-001-fastapi-setup - Setup FastAPI project structure
[2025-11-07T01:35:10Z] [prototype] [sprint-1] narrative: task-001-fastapi-setup - Gathering context from backend standards and checking existing project files
[2025-11-07T01:35:15Z] [prototype] [sprint-1] context.gathered: task-001-fastapi-setup
[2025-11-07T01:36:00Z] [prototype] [sprint-1] narrative: task-001-fastapi-setup - Creating requirements.txt and main.py with FastAPI initialization
[2025-11-07T01:36:30Z] [prototype] [sprint-1] implementation.complete: task-001-fastapi-setup
[2025-11-07T01:36:45Z] [prototype] [sprint-1] narrative: task-001-fastapi-setup - Writing 2 tests (smoke + happy path) for server startup validation
[2025-11-07T01:37:00Z] [prototype] [sprint-1] tests.written: task-001-fastapi-setup - 2 tests
[2025-11-07T01:37:10Z] [prototype] [sprint-1] tests.passed: task-001-fastapi-setup - 2/2 passing
[2025-11-07T01:37:15Z] [prototype] [sprint-1] narrative: task-001-fastapi-setup - Moving tests to ai-state/regressions/backend/ directory
[2025-11-07T01:37:20Z] [prototype] [sprint-1] tests.organized: task-001-fastapi-setup → ai-state/regressions/backend/
[2025-11-07T01:37:25Z] [prototype] [sprint-1] task.completed: task-001-fastapi-setup - Server running on localhost:8000
```
This format enables easy filtering:
- `grep "prototype" operations.log` - all operations for this phase
- `grep "sprint-1" operations.log` - all operations for this sprint
- `grep "task-001" operations.log` - all operations for a specific task
- `grep "task.completed" operations.log` - all completed tasks
- `grep "narrative:" operations.log` - all narrative descriptions showing what was done