# Agentic-tictactoe: subsection-implementation

Complete workflow for implementing a single subsection from the implementation plan. Orchestrates all project skills to ensure consistent, high-quality implementation with full test coverage. Use when implementing any subsection (e.g., 5.1.1, 5.2.3) to automate the entire workflow from requirements to commit.

Clone the full repository:

```shell
git clone https://github.com/arun-gupta/agentic-tictactoe
```

Or copy just this skill into your local skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/arun-gupta/agentic-tictactoe "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/subsection-implementation" ~/.claude/skills/arun-gupta-agentic-tictactoe-subsection-implementation && rm -rf "$T"
```

Source: `.claude/skills/subsection-implementation/SKILL.md`

# Subsection Implementation Workflow
This skill provides a complete, automated workflow for implementing a single subsection from
docs/implementation-plan.md. It orchestrates all project skills to ensure consistent, high-quality implementation.
## Quick Start

To implement a subsection (e.g., 5.1.1):

```
/subsection-implementation 5.1.1
```

This will execute the entire workflow automatically.
## Complete Workflow

### Step 0: Clear Context (Fresh Start)
Goal: Start with a clean slate to avoid context pollution and ensure focused implementation.
Actions:
- Summarize current conversation state if needed
- Clear conversation history
- Focus only on the subsection to implement
- Load only relevant documentation and code
Why: Long conversations accumulate context that may interfere with focused implementation. Starting fresh ensures clarity.
### Step 1: Read Subsection Requirements
Goal: Understand what needs to be implemented.
Actions:
- Read `docs/implementation-plan.md` for the specified subsection
- Extract:
  - Subsection title and description
  - Implementation notes
  - Files to create/modify
  - Subsection tests (acceptance criteria)
  - Spec references
- Identify dependencies on previous subsections
- Verify prerequisites are complete
Output: Clear understanding of requirements and test expectations.
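Reading the plan can be automated. The sketch below is a minimal, hypothetical parser, assuming subsections appear as Markdown headings whose text starts with the dotted ID (the real plan's layout may differ):

```python
import re


def extract_subsection(plan_text: str, subsection_id: str) -> str:
    """Return the block of plan text for one subsection (e.g., "5.1.1").

    Assumes the subsection starts at a heading containing its ID and runs
    until the next heading of the same or higher level.
    """
    heading = re.compile(
        rf"^(#+)\s*{re.escape(subsection_id)}\b.*$", re.MULTILINE
    )
    match = heading.search(plan_text)
    if match is None:
        raise ValueError(f"Subsection {subsection_id} not found in plan")
    level = len(match.group(1))
    # The block ends at the next heading with this many '#' marks or fewer.
    next_heading = re.compile(rf"^#{{1,{level}}}\s", re.MULTILINE)
    tail = plan_text[match.end():]
    nxt = next_heading.search(tail)
    end = match.end() + (nxt.start() if nxt else len(tail))
    return plan_text[match.start():end]
```

The heading depth and ID placement are assumptions; adjust the patterns to match the actual plan file.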
### Step 2: Create Implementation Plan
Goal: Break down the subsection into actionable tasks.
Actions:
- Use `TaskCreate` to track implementation tasks:
  - Read and understand requirements
  - Implement code changes
  - Write subsection tests
  - Run quality checks
  - Update documentation
  - Commit and push
- Mark tasks as `in_progress` when starting
- Mark tasks as `completed` when done

**Pattern:** Follow the `@skills/phase-implementation/SKILL.md` workflow
### Step 3: Implement Code
Goal: Write production code following project patterns.
Actions:
- Create/modify files as specified in subsection
- Follow project conventions:
  - Type hints on all functions
  - Docstrings (Google style)
  - Error handling with custom error codes (see `@skills/error-handling/SKILL.md`)
  - API endpoints follow the standard pattern (see `@skills/api-endpoint-implementation/SKILL.md`)
- Import required dependencies
- Handle edge cases
- Add logging where appropriate
**Patterns:**
- Domain models: Use Pydantic v2
- Services: Return `Result[T]` types with error codes
- API endpoints: Use FastAPI with proper status codes
- Error handling: Follow `@skills/error-handling/SKILL.md`

**Update task:** Mark "Implement code changes" as `completed`
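As a rough illustration of the service pattern, here is a stdlib-only sketch of a `Result[T]` return type with custom error codes. The names `MoveAnalysis`, `analyze_move`, and `E_INVALID_POSITION` are made up for this example, and in the real project the domain model would be a Pydantic v2 `BaseModel`:

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


@dataclass
class Result(Generic[T]):
    """Service return type: either data on success or an error code."""
    success: bool
    data: Optional[T] = None
    error_code: Optional[str] = None


@dataclass
class MoveAnalysis:
    """Stand-in domain model (a Pydantic v2 BaseModel in the project)."""
    position: int
    confidence: float


def analyze_move(position: int) -> Result[MoveAnalysis]:
    """Validate input and return a Result[T] instead of raising."""
    if not 0 <= position <= 8:  # a 3x3 board has cells 0-8
        return Result(success=False, error_code="E_INVALID_POSITION")
    return Result(success=True, data=MoveAnalysis(position=position, confidence=0.9))
```

Callers branch on `result.success` and surface `result.error_code` to the API layer rather than catching exceptions.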
### Step 4: Write Tests
Goal: Verify implementation meets all subsection test requirements.
Actions:
- Read subsection tests from implementation plan
- Write tests following `@skills/test-writing/SKILL.md`:
  - Test file: `tests/unit/<module>/test_<component>.py`
  - Test class: `class Test<ComponentName>`
  - Test method format: `def test_subsection_X_Y_Z_requirement(self) -> None:`
  - Include type annotations (`-> None`)
- Cover all subsection test cases listed in plan
- Test success paths and error cases
- Use descriptive test names
- Follow Arrange-Act-Assert pattern
**Example:**

```python
def test_subsection_5_1_1_calls_llm_when_enabled(self) -> None:
    """Test subsection 5.1.1: Scout calls LLM when enabled."""
    # Arrange
    scout = ScoutAgent(llm_enabled=True)
    game_state = create_test_game_state()

    # Act
    result = scout.analyze(game_state)

    # Assert
    assert result.success is True
    assert result.data.llm_used is True
```

**Update task:** Mark "Write subsection tests" as `completed`
### Step 5: Run Quality Checks
Goal: Ensure code meets all quality standards before committing.
Actions:
- Run linters: `ruff check src/ tests/` and `black --check src/ tests/`
- Run type checker: `mypy --strict --explicit-package-bases src/`
- Run tests: `pytest tests/ -v`
- Check coverage (if applicable): `pytest tests/ --cov=src --cov-report=term-missing`
CRITICAL: All checks must pass before proceeding. If any fail:
- Fix the issues immediately
- Re-run all checks
- Do not proceed until everything is green ✅
**Pattern:** Follow `@skills/pre-commit-validation/SKILL.md`

**Update task:** Mark "Run quality checks" as `completed`
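The checks above can be chained so one run fails fast at the first broken gate. This is an illustrative stdlib sketch, not a script the project necessarily ships:

```python
import subprocess
import sys

# Quality gates, in order; the run stops at the first failure.
CHECKS: list[list[str]] = [
    ["ruff", "check", "src/", "tests/"],
    ["black", "--check", "src/", "tests/"],
    ["mypy", "--strict", "--explicit-package-bases", "src/"],
    ["pytest", "tests/", "-v"],
]


def run_checks(checks: list[list[str]]) -> bool:
    """Run each command; return False as soon as one exits non-zero."""
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True
```

A wrapper script would call `run_checks(CHECKS)` and exit non-zero on failure so CI treats it as a hard gate.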
### Step 6: Update Documentation
Goal: Mark subsection as complete and document implementation.
Actions:
- Update `docs/implementation-plan.md`:
  - Add ✅ to the subsection title
  - Add an Implementation Notes section with key details:

    ```markdown
    **Implementation Notes:**
    - [Key implementation detail 1]
    - [Key implementation detail 2]
    - [Any deviations from plan]
    ```

  - Add a Subsection Tests section with ✅:

    ```markdown
    **Subsection Tests** ✅:
    - ✅ [Test 1 description]
    - ✅ [Test 2 description]
    ```

  - Add a Test Coverage section:

    ```markdown
    **Test Coverage** ✅:
    - **Subsection Tests**: ✅ X tests implemented and passing
    - **Test File**: ✅ `tests/unit/<module>/test_<component>.py`
    ```

- Update any related documentation (API docs, architecture docs, etc.)
**Update task:** Mark "Update documentation" as `completed`
### Step 7: Commit and Push
Goal: Save work with a clear, descriptive commit message.
Actions:
- Stage all changes: `git add <files>`
- Create a commit following `@skills/commit-format/SKILL.md`:

  ```
  <type>(<scope>): <Subsection X.Y.Z> - <description>

  - [Implementation detail 1]
  - [Implementation detail 2]

  Tests:
  - [List of subsection tests added]

  Files:
  - <file1>: <description>
  - <file2>: <description>
  ```

- Common commit types:
  - `feat`: New feature/functionality
  - `fix`: Bug fix
  - `test`: Test-only changes
  - `refactor`: Code restructuring
  - `docs`: Documentation only

**Example:**

```
feat(agents): Subsection 5.1.1 - Scout LLM enhancement

- Integrate Pydantic AI for board analysis
- Add LLM fallback to rule-based logic
- Implement retry mechanism (3 attempts with exponential backoff)
- Add LLM call metadata logging

Tests:
- ✅ 8 subsection tests covering LLM integration
- ✅ Test LLM calls, fallback, retries, and logging

Files:
- src/agents/scout.py: Add LLM integration with fallback
- tests/unit/agents/test_scout_llm.py: Add subsection tests
- docs/implementation-plan.md: Mark 5.1.1 as complete
```

- Push to remote: `git push origin main`
CRITICAL: Only commit after:
- ✅ All tests pass
- ✅ All linters pass
- ✅ Type checker passes
- ✅ Documentation updated
- ✅ Implementation verified to work
**Update task:** Mark "Commit and push" as `completed`
## Workflow Summary

1. Clear Context → Start fresh
2. Read Requirements → Understand subsection
3. Create Plan → Track with TaskCreate
4. Implement Code → Follow project patterns
5. Write Tests → Verify requirements
6. Run Quality Checks → Ensure standards (MUST PASS)
7. Update Docs → Mark complete with notes
8. Commit & Push → Save work with clear message
## Subsection Format

Subsections are identified by their numbering in the implementation plan:
- Phase.Section.Subsection: e.g., `5.1.1`, `5.2.3`, `6.0.1`
- Title: Descriptive name (e.g., "Scout LLM Enhancement")
- Tests: Listed as "Subsection Tests" in the plan
- Implementation Notes: Key details about implementation
## Quality Gates
Before committing, all of these MUST be ✅:
- ✅ All subsection tests pass
- ✅ All existing tests still pass (no regressions)
- ✅ Linters pass (ruff, black)
- ✅ Type checker passes (mypy --strict)
- ✅ Documentation updated with ✅ markers
- ✅ Implementation notes added to plan
- ✅ Commit message follows format
If any quality gate fails: Fix issues and re-validate. Do not proceed.
## Best Practices
- One subsection at a time: Never implement multiple subsections in one session
- Clear context first: Always start fresh to avoid confusion
- Follow the plan: Implement exactly what's specified in the subsection
- Test thoroughly: Cover all listed subsection tests
- Document clearly: Add implementation notes explaining key decisions
- Verify before commit: Run all quality checks and ensure they pass
- Commit atomically: One subsection = one commit
- Push immediately: Share work with team right away
## Common Edge Cases

### When Subsection Tests Fail
- Review test expectations in implementation plan
- Debug implementation vs. requirements
- Fix code or update requirements (if plan was wrong)
- Document any deviations in Implementation Notes
- Re-run all tests
- Only proceed when all tests pass
### When Implementation Differs from Plan
- Update the implementation plan to match reality
- Document why the change was necessary in Implementation Notes
- Ensure subsection tests still cover the requirements
- Verify acceptance criteria are still met
### When Tests Reveal Design Issues
- Fix the design issue first
- Update implementation plan if requirements change
- Update tests to match new design
- Document lessons learned in Implementation Notes
### When Prerequisites Are Missing
- Stop immediately
- Identify which previous subsections must be completed first
- Implement prerequisites in order
- Return to original subsection after prerequisites are done
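A prerequisite check along these lines could be scripted. The sketch below is hypothetical: it assumes completed subsections carry a ✅ on their plan heading and only considers ordering within the same phase and section, not cross-section dependencies:

```python
import re


def missing_prerequisites(plan_text: str, subsection_id: str) -> list[str]:
    """List earlier subsections in the same section not yet marked ✅."""
    phase, section, index = (int(p) for p in subsection_id.split("."))
    missing = []
    # Scan headings like "#### 5.1.1 Title ✅" (format is illustrative).
    for match in re.finditer(
        r"^#+\s*(\d+)\.(\d+)\.(\d+)([^\n]*)$", plan_text, re.MULTILINE
    ):
        p, s, i = int(match.group(1)), int(match.group(2)), int(match.group(3))
        if (p, s) == (phase, section) and i < index and "✅" not in match.group(4):
            missing.append(f"{p}.{s}.{i}")
    return missing
```

A non-empty result means: stop, implement the listed subsections first, then return to the original one.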
## Skills Integration
This skill orchestrates these existing skills:
- `@skills/phase-implementation/SKILL.md`: Overall implementation workflow
- `@skills/test-writing/SKILL.md`: Test patterns and conventions
- `@skills/commit-format/SKILL.md`: Commit message format
- `@skills/error-handling/SKILL.md`: Error handling patterns
- `@skills/api-endpoint-implementation/SKILL.md`: API endpoint patterns
- `@skills/pre-commit-validation/SKILL.md`: Quality checks before commit
Follow all patterns from these skills during implementation.
## Example: Complete Workflow for Subsection 5.1.1

**User:**

```
/subsection-implementation 5.1.1
```

**AI Response:**
Step 0: Clear Context
"Starting fresh implementation of subsection 5.1.1..."
Step 1: Read Requirements
"Reading docs/implementation-plan.md section 5.1.1..."
- Title: Scout LLM Enhancement
- Files: src/agents/scout.py
- Tests: 8 subsection tests listed
Step 2: Create Plan
"Creating task list..." [Creates TaskCreate with 6 tasks]
Step 3: Implement Code
"Implementing Scout LLM enhancement..." [Modifies src/agents/scout.py with LLM integration]
Step 4: Write Tests
"Writing subsection tests..." [Creates tests/unit/agents/test_scout_llm.py with 8 tests]
Step 5: Run Quality Checks
```shell
ruff check src/ tests/     # ✅ Pass
black --check src/ tests/  # ✅ Pass
mypy --strict src/         # ✅ Pass
pytest tests/ -v           # ✅ 8/8 passed
```
Step 6: Update Documentation
[Updates docs/implementation-plan.md]:
- Adds ✅ to subsection 5.1.1
- Adds Implementation Notes
- Marks Subsection Tests as ✅
Step 7: Commit & Push
```shell
git add src/agents/scout.py tests/unit/agents/test_scout_llm.py docs/implementation-plan.md

git commit -m "feat(agents): Subsection 5.1.1 - Scout LLM enhancement

- Integrate Pydantic AI for board analysis
- Add LLM fallback to rule-based logic
- Implement retry mechanism (3 attempts)
- Add LLM call metadata logging

Tests:
- ✅ 8 subsection tests covering LLM integration

Files:
- src/agents/scout.py: Add LLM integration with fallback
- tests/unit/agents/test_scout_llm.py: Add subsection tests
- docs/implementation-plan.md: Mark 5.1.1 as complete"

git push origin main
```
✅ Subsection 5.1.1 Complete!
## Usage

### Basic Usage

Implement a specific subsection:

```
/subsection-implementation 5.1.1
```

### Advanced Usage

With a specific branch:

```
/subsection-implementation 5.1.1 --branch feature/scout-llm
```

With dry-run (show the plan without executing):

```
/subsection-implementation 5.1.1 --dry-run
```
## Troubleshooting

### "Subsection not found"

- Check that the subsection exists in `docs/implementation-plan.md`
- Ensure proper format (e.g., `5.1.1`, not `5-1-1`)
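A minimal format check (hypothetical helper, not part of the skill itself) that accepts `5.1.1` but rejects `5-1-1`:

```python
import re

# Dotted Phase.Section.Subsection IDs, e.g. "5.1.1"; rejects "5-1-1" or "5.1".
SUBSECTION_ID = re.compile(r"\d+\.\d+\.\d+")


def is_valid_subsection_id(raw: str) -> bool:
    """True only for fully-dotted three-part IDs."""
    return SUBSECTION_ID.fullmatch(raw.strip()) is not None
```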
### "Prerequisites not complete"
- Check that previous subsections are marked ✅
- Implement prerequisites first
### "Tests failing"
- Review subsection test requirements in plan
- Debug implementation vs. requirements
- Fix issues before proceeding
### "Quality checks failing"
- Run checks individually to identify issue
- Fix linting/type errors
- Re-run until all checks pass