# claude-night-market: project-execution

Execute implementation plans with progress tracking, checkpoint validation, and quality gates.

## Installation

```shell
# Clone the full repository
git clone https://github.com/athola/claude-night-market

# Or install just this skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/athola/claude-night-market "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/attune/skills/project-execution" ~/.claude/skills/athola-claude-night-market-project-execution && rm -rf "$T"
```

Source: `plugins/attune/skills/project-execution/SKILL.md`

## When To Use
- After planning phase completes
- Ready to implement tasks
- Need systematic execution with tracking
- Want checkpoint-based validation
- Executing task lists with dependencies
- Monitoring progress and velocity
## When NOT To Use

- No implementation plan exists (use Skill(attune:project-planning) first)
- Still planning or designing (complete planning phase before execution)
- Single isolated task (execute directly without framework overhead)
- Exploratory coding or prototyping (use focused development instead)
## Integration

With superpowers:

- Uses Skill(superpowers:executing-plans) for systematic execution
- Uses Skill(superpowers:systematic-debugging) for issue resolution
- Uses Skill(superpowers:verification-before-completion) for validation
- Uses Skill(superpowers:test-driven-development) for TDD workflow

Without superpowers:

- Standalone execution framework
- Built-in checkpoint validation
- Progress tracking patterns
## Execution Framework

### Pre-Execution Phase

Actions:
- Load implementation plan
- Validate project initialized
- Check dependencies installed
- Review task dependency graph
- Identify starting tasks (no dependencies)
Validation:
- ✅ Plan file exists and is valid
- ✅ Project structure initialized
- ✅ Git repository configured
- ✅ Development environment ready
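Identifying starting tasks (those with no dependencies) and walking the graph in dependency order amounts to a topological sort. A minimal sketch, assuming the plan's dependency graph is available as a mapping; the task IDs and graph shape are illustrative, not part of any plan format:

```python
from collections import deque

def execution_order(dependencies: dict[str, list[str]]) -> list[str]:
    """Return task IDs in dependency order (Kahn's algorithm).

    `dependencies` maps each task to the tasks it depends on.
    Tasks with zero unmet dependencies are the valid starting tasks.
    """
    # Count unmet dependencies per task
    pending = {task: len(deps) for task, deps in dependencies.items()}
    # Reverse edges: which tasks become unblocked when this one completes
    unblocks: dict[str, list[str]] = {task: [] for task in dependencies}
    for task, deps in dependencies.items():
        for dep in deps:
            unblocks[dep].append(task)

    ready = deque(sorted(t for t, n in pending.items() if n == 0))
    order: list[str] = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in unblocks[task]:
            pending[nxt] -= 1
            if pending[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(dependencies):
        raise ValueError("Cycle detected in task dependency graph")
    return order

# Hypothetical plan: TASK-003 depends on both earlier tasks
plan = {"TASK-001": [], "TASK-002": ["TASK-001"], "TASK-003": ["TASK-001", "TASK-002"]}
print(execution_order(plan))  # ['TASK-001', 'TASK-002', 'TASK-003']
```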
### Task Execution Loop

For each task in dependency order:

1. PRE-TASK
   - Verify dependencies complete
   - Review acceptance criteria
   - Create feature branch (optional)
   - Set up task context
2. IMPLEMENT (TDD Cycle)
   - Write failing test (RED)
   - Implement minimal code (GREEN)
   - Refactor for quality (REFACTOR)
   - Repeat until all criteria met
3. VALIDATE
   - All tests passing?
   - All acceptance criteria met?
   - Code quality checks pass?
   - Documentation updated?
4. CHECKPOINT
   - Mark task complete IMMEDIATELY (do NOT batch)
   - Update execution state
   - Report progress
   - Identify blockers
Task Completion Discipline: Always call TaskUpdate(taskId: "X", status: "completed") right after finishing each task; never defer completions to the end of the session.

Verification: Run `pytest -v` to verify tests pass.
### Post-Execution Phase

Actions:
- Verify all tasks complete
- Run full test suite
- Check code quality metrics
- Generate completion report
- Prepare for deployment/release
### Terminal Phase Notice
This is the final phase of the attune workflow. No auto-continuation occurs after execution completes. The workflow terminates here. Unlike brainstorming, specification, and planning phases, execution does NOT auto-invoke any subsequent phase.
## Task Execution Pattern

### TDD Workflow

RED Phase:
```python
# Write a test that fails
def test_user_authentication():
    user = authenticate("user@example.com", "password")
    assert user.is_authenticated

# Run test → FAILS (feature not implemented)
```
GREEN Phase:
```python
# Implement minimal code to pass
def authenticate(email, password):
    # Simplest implementation
    user = User.find_by_email(email)
    if user and user.check_password(password):
        user.is_authenticated = True
        return user
    return None

# Run test → PASSES
```
REFACTOR Phase:
```python
# Improve code quality
def authenticate(email: str, password: str) -> Optional[User]:
    """Authenticate user with email and password."""
    user = User.find_by_email(email)
    if user is None:
        return None
    if not user.check_password(password):
        return None
    user.mark_authenticated()
    return user

# Run test → STILL PASSES
```
### Checkpoint Validation
Quality Gates:
- [ ] All acceptance criteria met
- [ ] All tests passing (unit + integration)
- [ ] Code linted (no warnings)
- [ ] Type checking passes (if applicable)
- [ ] Documentation updated
- [ ] No regression in other components
Automated Checks:
```shell
# Run quality gates
make lint       # Linting passes
make typecheck  # Type checking passes
make test       # All tests pass
make coverage   # Coverage threshold met
```
## Progress Tracking

### Execution State

Save to `.attune/execution-state.json`:
```json
{
  "plan_file": "docs/implementation-plan.md",
  "started_at": "2026-01-02T10:00:00Z",
  "last_checkpoint": "2026-01-02T14:30:22Z",
  "current_sprint": "Sprint 1",
  "current_phase": "Phase 1",
  "tasks": {
    "TASK-001": {
      "status": "complete",
      "started_at": "2026-01-02T10:05:00Z",
      "completed_at": "2026-01-02T10:50:00Z",
      "duration_minutes": 45,
      "acceptance_criteria_met": true,
      "tests_passing": true
    },
    "TASK-002": {
      "status": "in_progress",
      "started_at": "2026-01-02T14:00:00Z",
      "progress_percent": 60,
      "blocker": null
    }
  },
  "metrics": {
    "tasks_complete": 15,
    "tasks_total": 40,
    "completion_percent": 37.5,
    "velocity_tasks_per_day": 3.2,
    "estimated_completion_date": "2026-02-15"
  },
  "blockers": []
}
```
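A minimal sketch of marking a task complete in this state file and refreshing the completion metric. Field names follow the example state above; the `mark_complete` helper itself is an illustrative assumption, not part of the skill:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_PATH = Path(".attune/execution-state.json")

def mark_complete(state: dict, task_id: str) -> dict:
    """Mark a task complete and recompute the completion metric."""
    task = state["tasks"][task_id]
    task["status"] = "complete"
    task["completed_at"] = datetime.now(timezone.utc).isoformat()

    done = sum(1 for t in state["tasks"].values() if t["status"] == "complete")
    state["metrics"]["tasks_complete"] = done
    state["metrics"]["completion_percent"] = round(
        100 * done / state["metrics"]["tasks_total"], 1
    )
    state["last_checkpoint"] = datetime.now(timezone.utc).isoformat()
    return state

# Usage (assumes the state file shown above exists on disk):
# state = json.loads(STATE_PATH.read_text())
# STATE_PATH.write_text(json.dumps(mark_complete(state, "TASK-002"), indent=2))
```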
### Progress Reports

Daily Standup:
```markdown
# Daily Standup - [Date]

## Yesterday
- ✅ [Task] ([duration])
- ✅ [Task] ([duration])

## Today
- 🔄 [Task] ([progress]%)
- 📋 [Task] (planned)

## Blockers
- [Blocker] or None

## Metrics
- Sprint progress: [X/Y] tasks ([%]%)
- [Status message]
```
Sprint Report:
```markdown
# Sprint [N] Progress Report

**Dates**: [Start] - [End]
**Goal**: [Sprint objective]

## Completed ([X] tasks)
- [Task list]

## In Progress ([Y] tasks)
- [Task] ([progress]%)

## Blocked ([Z] tasks)
- [Task]: [Blocker description]

## Burndown
- Day 1: [N] tasks remaining
- Day 5: [M] tasks remaining ([status])
- Estimated completion: [Date] ([delta])

## Risks
- [Risk] or None identified
```
## Blocker Management

### Blocker Detection
Common Blockers:
- Failing tests that can't be fixed quickly
- Missing dependencies or APIs
- Technical unknowns requiring research
- Resource unavailability
- Scope ambiguity
### Systematic Debugging
When blocked, apply debugging framework:
- Reproduce: Create minimal reproduction case
- Hypothesize: Generate possible causes
- Test: Validate hypotheses one by one
- Resolve: Implement fix or workaround
- Document: Record solution for future
### Escalation
When to escalate:
- Blocker persists > 2 hours
- Requires architecture change
- Impacts critical path
- Needs stakeholder decision
Escalation format:
```markdown
## Blocker: [TASK-XXX] - [Issue]

**Symptom**: [What's happening]
**Impact**: [Which tasks/timeline affected]

**Attempted Solutions**:
1. [Solution 1] - [Result]
2. [Solution 2] - [Result]

**Recommendation**: [Proposed path forward]
**Decision Needed**: [What needs to be decided]
```
## Quality Assurance

### Definition of Done
Task is complete when:
- ✅ All acceptance criteria met
- ✅ All tests written and passing
- ✅ Code reviewed (self or peer)
- ✅ Linting passes with no warnings
- ✅ Type checking passes (if applicable)
- ✅ Documentation updated
- ✅ No known regressions
- ✅ Deployed to staging (if applicable)
### Testing Strategy

Test Pyramid:
```
      /\
     /E2E\          Few, slow, expensive
    /------\
   /  INT   \       Some, moderate speed
  /----------\
 /    UNIT    \     Many, fast, cheap
```
Per Task:
- Unit tests: Test individual functions/classes
- Integration tests: Test component interactions
- E2E tests: Test complete user flows (for user-facing features)
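The unit vs. integration split above can be sketched with plain pytest-style test functions; the `parse_amount` and `apply_discount` functions are hypothetical examples, not part of the skill:

```python
def parse_amount(raw: str) -> int:
    """Parse a money string like '$1,200' into whole cents."""
    return int(round(float(raw.replace("$", "").replace(",", "")) * 100))

def apply_discount(cents: int, percent: int) -> int:
    """Apply a percentage discount, rounding to whole cents."""
    return cents - round(cents * percent / 100)

# Unit test: exercises one function in isolation
def test_parse_amount():
    assert parse_amount("$1,200") == 120000

# Integration test: exercises the components working together
def test_discounted_checkout():
    assert apply_discount(parse_amount("$10"), 25) == 750

test_parse_amount()
test_discounted_checkout()
```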
## Velocity Tracking

### Burndown Metrics

Track daily:
- Tasks remaining
- Story points remaining
- Days left in sprint
- Velocity (tasks or points per day)
Formulas:
```
Velocity = Tasks completed / Days elapsed
Estimated completion = Tasks remaining / Velocity
On track? = Estimated completion <= Sprint end date
```
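The velocity formulas above can be sketched directly; the dates and task counts are illustrative:

```python
from datetime import date, timedelta

def velocity(tasks_completed: int, days_elapsed: int) -> float:
    """Tasks completed per elapsed day."""
    return tasks_completed / days_elapsed

def estimated_completion(start: date, completed: int, remaining: int,
                         days_elapsed: int) -> date:
    """Project the finish date: remaining work divided by current velocity."""
    days_needed = remaining / velocity(completed, days_elapsed)
    return start + timedelta(days=days_elapsed + round(days_needed))

sprint_start = date(2026, 1, 2)
sprint_end = date(2026, 1, 16)
eta = estimated_completion(sprint_start, completed=15, remaining=25, days_elapsed=5)
print(eta, "on track" if eta <= sprint_end else "behind")  # → 2026-01-15 on track
```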
### Velocity Adjustments
If ahead of schedule:
- Pull in stretch tasks
- Add technical debt reduction
- Improve test coverage
- Enhance documentation
If behind schedule:
- Identify causes (blockers, underestimation)
- Reduce scope (drop low-priority tasks)
- Increase focus (reduce distractions)
- Request help or extend timeline
## Related Skills

- Skill(superpowers:executing-plans) - Execution framework (if available)
- Skill(superpowers:systematic-debugging) - Debugging (if available)
- Skill(superpowers:test-driven-development) - TDD (if available)
- Skill(superpowers:verification-before-completion) - Validation (if available)
- Skill(attune:mission-orchestrator) - Full lifecycle orchestration
## Related Agents

- Agent(attune:project-implementer) - Task execution agent

## Related Commands

- /attune:execute - Invoke this skill
- /attune:execute --task [ID] - Execute a specific task
- /attune:execute --resume - Resume from checkpoint
## Mission Report

At mission completion, produce a Mission Report using the template from `references/mission-report.md`. The report documents:
- Mission identification: Links to brief, spec, plan
- Duration: Start, end, total time
- Outcome: success | partial | failed
- Delivered artifacts: Files created/modified/deleted
- Decisions: Key choices with rationale
- Validation evidence: Tests, reviews, demos
- Follow-ups: Recommended next steps
See `references/mission-report.md` for the full template and example reports for successful, partial, and failed missions.

## Examples

See the /attune:execute command documentation for complete examples.