AILANG Sprint Executor
Execute approved sprint plans with test-driven development, continuous linting, progress tracking, and pause points. Use when the user says "execute sprint", "start sprint", or wants to implement an approved sprint plan.

Install into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/ailang-sprint-executor-majiayu000" ~/.claude/skills/diegosouzapw-awesome-omni-skill-ailang-sprint-executor && rm -rf "$T"
```

Source: `skills/data-ai/ailang-sprint-executor-majiayu000/SKILL.md` in https://github.com/diegosouzapw/awesome-omni-skill
Execute an approved sprint plan with continuous progress tracking, testing, and documentation updates.
Quick Start
Most common usage:
```bash
# User says: "Execute the sprint plan in design_docs/20251019/M-S1.md"
# This skill will:
# 1. Validate prerequisites (tests pass, linting clean)
# 2. Create TodoWrite tasks for all milestones
# 3. Execute each milestone with test-driven development
# 4. Run checkpoint after each milestone (tests + lint)
# 5. Update CHANGELOG and sprint plan progressively
# 6. Pause after each milestone for user review
```
When to Use This Skill
Invoke this skill when:
- User says "execute sprint", "start sprint", "begin implementation"
- User has an approved sprint plan ready to implement
- User wants guided execution with built-in quality checks
- User needs progress tracking and pause points
Core Principles
- Test-Driven: All code must pass tests before moving to next milestone
- Lint-Clean: All code must pass linting before moving to next milestone
- Document as You Go: Update CHANGELOG.md and sprint plan progressively
- Pause for Breath: Stop at natural breakpoints for review and approval
- Track Everything: Use TodoWrite to maintain visible progress
- DX-First: Improve AILANG development experience as we go - make it easier next time
Multi-Session Continuity (NEW)
Sprint execution can now span multiple Claude Code sessions!
Based on Anthropic's long-running agent patterns, sprint-executor implements the "Coding Agent" pattern:
- Session Startup Routine: Every session starts with `session_start.sh`
  - Checks working directory
  - Reads JSON progress file (`.ailang/state/sprint_<id>.json`)
  - Reviews recent git commits
  - Validates tests pass
  - Prints "Here's where we left off" summary
- Structured Progress Tracking: JSON file tracks state
  - Features with `passes: true/false/null` (follows "constrained modification" pattern)
  - Velocity metrics updated automatically
  - Clear checkpoint messages
  - Session timestamps
- Pause and Resume: Work can be interrupted at any time
  - Status saved to JSON: `not_started`, `in_progress`, `paused`, `completed`
  - Next session picks up exactly where you left off
  - No loss of context or progress
For JSON schema details, see resources/json_progress_schema.md
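To make the state file concrete, here is a hypothetical sketch of what such a progress file might contain, plus a crude jq-free status count of the kind a resume summary could print. The field names (`sprint_id`, `features`, `status`) are assumptions based on this document, not the canonical schema; defer to resources/json_progress_schema.md.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sprint state file; field names are assumptions, not the
# canonical schema from resources/json_progress_schema.md.
state_dir=$(mktemp -d)
state_file="$state_dir/sprint_M-S1.json"
cat > "$state_file" <<'EOF'
{
  "sprint_id": "M-S1",
  "features": [
    {"id": "M-S1.1", "status": "completed",   "passes": true,  "actual_loc": 214},
    {"id": "M-S1.2", "status": "in_progress", "passes": null,  "actual_loc": null},
    {"id": "M-S1.3", "status": "not_started", "passes": null,  "actual_loc": null}
  ]
}
EOF

# Crude grep-based summary (one feature object per line in the fixture)
completed=$(grep -c '"status": "completed"' "$state_file")
total=$(grep -c '"id": "M-S1\.' "$state_file")
echo "M-S1: $completed of $total features complete"
```

In practice the real scripts use jq for this; the sketch only illustrates the shape of the data the "Here's where we left off" summary is derived from.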
Documentation URLs
When adding error messages, help text, or documentation links in code:
Website: https://sunholo-data.github.io/ailang/
Documentation Source: The website documentation lives in this repo at `docs/`
- Markdown files: `docs/docs/` (guides, reference, etc.)
- Static assets: `docs/static/`
- Docusaurus config: `docs/docusaurus.config.js`
Common Documentation Paths:
- Language syntax: `/docs/reference/language-syntax`
- Module system: `/docs/guides/module_execution`
- Getting started: `/docs/guides/getting-started`
- REPL guide: `/docs/guides/getting-started#repl`
- Implementation status: `/docs/reference/implementation-status`
Full URL Example:
https://sunholo-data.github.io/ailang/docs/reference/language-syntax
Best Practices:
- Check that documentation URLs actually exist before using them in error messages or help text
- Look in `docs/docs/` to verify the file exists locally
- Use `ls docs/docs/reference/` or `ls docs/docs/guides/` to find available pages
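That existence check can be scripted. A small sketch, using a throwaway directory to stand in for the repo root (in the real repo, drop the fixture setup and run from the project root; the URL-to-file mapping just follows the `docs/docs/` layout described above):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Stand-in repo root so the sketch runs anywhere
repo=$(mktemp -d)
mkdir -p "$repo/docs/docs/reference"
touch "$repo/docs/docs/reference/language-syntax.md"

# Map a site URL path to its local markdown source and verify it exists
url_path="docs/reference/language-syntax"
local_md="$repo/docs/docs/${url_path#docs/}.md"

if [ -f "$local_md" ]; then
  result="ok to link: https://sunholo-data.github.io/ailang/$url_path"
else
  result="missing page: $local_md - do not link it"
fi
echo "$result"
```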
Available Scripts
scripts/session_start.sh <sprint_id>
NEW: Resume sprint execution across multiple sessions.
Usage:
```bash
# Start or resume a sprint
.claude/skills/sprint-executor/scripts/session_start.sh M-S1
```
What it does:
- Implements "Session Startup Routine" from Anthropic article
- Checks pwd (working directory)
- Loads JSON progress file (`.ailang/state/sprint_<id>.json`)
- Reviews recent git commits (last 3)
- Runs tests to verify clean state
- Shows feature progress summary (complete/in-progress/pending)
- Displays velocity metrics
- Prints "Here's where we left off" message
When to use:
- ALWAYS at the start of EVERY session continuing a sprint
- First thing after user says "continue sprint" or "resume M-S1"
- Provides context for multi-session work
Exit codes:
- 0: Sprint ready to continue
- 1: Progress file not found or tests failing
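These exit codes are meant to be branched on. A minimal caller sketch (the script is stubbed out so the snippet is self-contained; substitute the real `.claude/skills/sprint-executor/scripts/session_start.sh`):

```bash
#!/usr/bin/env bash

# Stub standing in for scripts/session_start.sh: exits 0 when the sprint
# is ready to continue, 1 when the progress file is missing or tests fail.
session_start() { return 0; }

if session_start "M-S1"; then
  msg="resuming sprint: continue with the next pending milestone"
else
  msg="blocked: restore the progress file or fix failing tests first"
fi
echo "$msg"
```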
scripts/validate_prerequisites.sh
Validate prerequisites before starting sprint execution.
Usage:
```bash
.claude/skills/sprint-executor/scripts/validate_prerequisites.sh
```
Output:
```
Validating sprint prerequisites...

1/4 Checking working directory...
✓ Working directory clean

2/4 Checking current branch...
✓ On branch: dev

3/4 Running tests...
✓ All tests pass

4/4 Running linter...
✓ Linting passes

✓ All prerequisites validated!
Ready to start sprint execution.
```
Exit codes:
- 0: All prerequisites pass
- 1: One or more prerequisites fail
scripts/milestone_checkpoint.sh <milestone_name>
Run checkpoint after completing a milestone.
Usage:
```bash
.claude/skills/sprint-executor/scripts/milestone_checkpoint.sh "M-S1.1: Parser foundation"
```
Output:
```
Running checkpoint for: M-S1.1: Parser foundation

1/3 Running tests...
✓ Tests pass

2/3 Running linter...
✓ Linting passes

3/3 Files changed in this milestone...
 internal/parser/parser.go      | 125 ++++++++++++++++++
 internal/parser/parser_test.go |  89 +++++++++++++
 2 files changed, 214 insertions(+)

✓ Milestone checkpoint passed!
Ready to proceed to next milestone.
```
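The checkpoint's gating behaviour can be sketched as a small shell function. The test and lint commands are stubbed with `true` so the sketch runs anywhere; the real script invokes `make test` and `make lint`:

```bash
#!/usr/bin/env bash

run_tests() { true; }  # stand-in for: make test
run_lint()  { true; }  # stand-in for: make lint

# Each gate aborts the checkpoint on failure, mirroring the
# "never proceed with failing tests or lint" rule.
checkpoint() {
  local name="$1"
  echo "Running checkpoint for: $name"
  if ! run_tests; then
    echo "✗ Tests failed - fix before proceeding"
    return 1
  fi
  echo "✓ Tests pass"
  if ! run_lint; then
    echo "✗ Linting failed - fix before proceeding"
    return 1
  fi
  echo "✓ Linting passes"
  echo "✓ Milestone checkpoint passed!"
}

checkpoint "M-S1.1: Parser foundation"
```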
Execution Flow
Phase 0: Session Resumption (for continuing sprints)
If this is NOT the first session for this sprint:
```bash
# ALWAYS run session_start.sh first!
.claude/skills/sprint-executor/scripts/session_start.sh <sprint-id>
```
This will:
- Load JSON progress file
- Show what's complete, in-progress, and pending
- Verify tests pass before continuing
- Print "Here's where we left off" summary
Then skip to Phase 2 to continue with the next milestone.
Phase 1: Initialize Sprint (first session only)
1. Read Sprint Plan
- Parse sprint plan document (e.g., `design_docs/20251019/M-S1.md`)
- Load JSON progress file (`.ailang/state/sprint_<id>.json`)
- Extract all milestones and tasks from JSON
- Note dependencies and acceptance criteria
- Identify estimated LOC and duration
2. Validate Prerequisites
Use the validation script:
.claude/skills/sprint-executor/scripts/validate_prerequisites.sh
Manual checks:
- Working directory clean: `git status --short`
- Current tests pass: `make test`
- Current linting passes: `make lint`
- On correct branch (usually `dev`)
If validation fails:
- Fix issues before starting
- Don't proceed with dirty working directory
- Don't start with failing tests or linting
3. Create Todo List
Use TodoWrite to create tasks:
- Extract all milestones from sprint plan
- Mark first milestone as `in_progress`
- Keep remaining tasks as `pending`
- This provides real-time progress visibility
4. Initial Status Update
- Update sprint plan with "🔄 In Progress" status
- Add start timestamp
- Commit sprint plan update (optional)
5. Initial DX Review
- Review what tasks we're about to do
- Consider what tools/helpers would make this sprint easier
- Small DX improvements (<30 min): Add them to the milestone plan immediately
- Examples: Helper functions, test utilities, debug flags, make targets
- Large DX improvements (>30 min): Create design doc in `design_docs/planned/vX_Y/m-dx*.md`
  - Examples: New skill, major refactor, architectural change
- Document DX improvement decisions in sprint plan
Phase 2: Execute Milestones
For each milestone in the sprint:
Step 1: Pre-Implementation
- Mark milestone as `in_progress` in TodoWrite
- Review milestone goals and acceptance criteria
- Identify files to create/modify
- Estimate LOC if not already specified
Step 2: Implement
During implementation, think about DX:
- Write implementation code following the task breakdown
- Follow design patterns from sprint plan
- Add inline comments for complex logic
- Keep functions small and focused
DX-aware implementation:
- If you're writing boilerplate, could it be a helper function?
- If you're debugging something, could a debug flag help?
- If you're looking things up repeatedly, should it be documented?
- If an error message confused you, would it confuse others?
- If a test is verbose, could test helpers make it cleaner?
Examples:
```go
// ❌ Before DX thinking
if p.Errors() != nil {
    // Manually check each error...
}

// ✅ After DX thinking - Add helper
AssertNoErrors(t, p) // Helper added for reuse

// ❌ Before DX thinking
// Manually inspecting tokens with fmt.Printf
fmt.Printf("cur=%v peek=%v\n", p.curToken, p.peekToken)

// ✅ After DX thinking - Add debug mode
// DEBUG_PARSER=1 automatically traces token flow

// ❌ Before DX thinking
return fmt.Errorf("parse error")

// ✅ After DX thinking - Actionable error
return fmt.Errorf("parse error at line %d: expected RPAREN after argument list, got %s. See docs/guides/parser_development.md#common-issues",
    p.curToken.Line, p.curToken.Type)
```
When to act on DX ideas:
- 🟢 Quick (<15 min): Do it now as part of this milestone
- 🟡 Medium (15-30 min): Note in TODO list, do at end of milestone if time allows
- 🔴 Large (>30 min): Note for design doc in reflection step
Step 3: Write Tests
⚠️ TDD REMINDER (M-TESTING Learning):
Consider writing tests BEFORE or ALONGSIDE implementation for:
- Complex algorithms (shrinking, generators, property evaluation)
- API integration (using unfamiliar packages like internal/eval)
- Error handling paths (multiple failure modes)
Benefits of TDD/Test-First:
- Discover API issues earlier (before writing 500 lines)
- Better design from testability constraints
- Catch bugs in development, not at checkpoint
- Example: Day 7 wrote 530 lines → 23 API errors. Tests first would find these at ~50 lines.
Standard Testing:
- Create/update test files (*_test.go)
- Aim for comprehensive coverage (all acceptance criteria)
- Include edge cases and error conditions
- Test both success and failure paths
Parser tests (M-DX9):
- Use helpers from `internal/parser/test_helpers.go`:
  - `AssertNoErrors(t, p)` - Check for parser errors
  - `AssertLiteralInt(t, expr, 42)` - Check integer literals
  - `AssertIdentifier(t, expr, "name")` - Check identifiers
  - `AssertFuncCall(t, expr)` - Check function calls
  - See full list in internal/parser/test_helpers.go
- Reference docs/guides/parser_development.md for test patterns
- Common gotchas documented in internal/ast/ast.go (e.g., int64 vs int)
Step 4: Verify Quality
Run checkpoint script:
.claude/skills/sprint-executor/scripts/milestone_checkpoint.sh "Milestone name"
Manual verification:
```bash
make test  # MUST PASS
make lint  # MUST PASS
```
CRITICAL: If tests or linting fail, fix immediately before proceeding.
Step 5: Update Documentation
Update CHANGELOG.md:
- What was implemented
- LOC counts (implementation + tests)
- Key design decisions
- Files modified/created
Create/update example files (CRITICAL - ALWAYS REQUIRED):
- Every new language feature MUST have a corresponding example file
- Create `examples/feature_name.ail` for the new feature
- Include comprehensive examples showing all capabilities
- Add comments explaining the behavior and expected output
- ⚠️ Test that examples actually work: `ailang run examples/feature_name.ail`
- ⚠️ Add warning headers to examples that don't work yet (use `make flag-broken`)
- Document example files created in CHANGELOG.md
- See CLAUDE.md "IMPORTANT: Example Files Required" section
Update sprint plan (markdown):
- Mark milestone as ✅
- Add actual LOC vs estimated
- Note any deviations from plan
- List example files created/updated
NEW: Update JSON progress file:
```bash
# Update feature status in .ailang/state/sprint_<id>.json
# Using jq for safe atomic updates:
SPRINT_ID="M-S1"
FEATURE_ID="M-S1.1"
ACTUAL_LOC=214

# Mark feature as passing
jq --arg id "$FEATURE_ID" \
  --argjson passes true \
  --argjson loc "$ACTUAL_LOC" \
  --arg completed "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \
  '(.features[] | select(.id == $id) | .passes) = $passes |
   (.features[] | select(.id == $id) | .actual_loc) = $loc |
   (.features[] | select(.id == $id) | .completed) = $completed' \
  ".ailang/state/sprint_${SPRINT_ID}.json" > /tmp/sprint_update.json
mv /tmp/sprint_update.json ".ailang/state/sprint_${SPRINT_ID}.json"

# Update velocity metrics
# (This can be automated - calculate from completed features)
```
Important: Only update the `passes`, `actual_loc`, `completed`, and `notes` fields. Do NOT modify `description` or `acceptance_criteria` (follows the "constrained modification" pattern).
Step 6: DX Reflection
After each milestone, reflect on the development experience:
Ask yourself:
- What was painful during this milestone?
- What took longer than expected due to tooling gaps?
- What did we have to look up multiple times?
- What errors/bugs could better tooling prevent?
Categorize DX improvements:
🟢 Quick wins (<15 min) - Do immediately:
- Add helper function to reduce boilerplate
- Add debug flag for better visibility
- Improve error message with actionable suggestion
- Add make target for common workflow
- Document pattern in code comments
🟡 Medium improvements (15-30 min) - Add to current sprint if time allows:
- Create test utility package
- Add validation script
- Improve CLI flag organization
- Add comprehensive examples
🔴 Large improvements (>30 min) - Create design doc:
- New skill for complex workflow
- Major architectural change
- New developer tool or subsystem
- Significant codebase reorganization
Document in milestone summary:
```markdown
## DX Improvements (Milestone X)

✅ **Applied**: Added `AssertNoErrors(t, p)` test helper (5 min)
📝 **Deferred**: Created M-DX10 design doc for parser AST viewer tool (estimated 2 hours)
💡 **Considered**: Better REPL error messages (added to backlog)
```
Step 7: Pause for Breath
After each milestone:
- Show summary of what was completed
- Show current sprint progress (X of Y milestones done)
- Show velocity (LOC/day vs planned)
- Show DX improvements made/planned
- Ask user: "Ready to continue to next milestone?" or "Need to review/adjust?"
- If user says "pause" or "stop", save current state and exit gracefully
Phase 3: Finalize Sprint
When all milestones are complete:
1. Final Testing
```bash
make test                 # Full test suite
make lint                 # All linting
make test-coverage-badge  # Coverage check
```
2. Documentation Review
- Verify CHANGELOG.md is complete
- Verify example files created and tested (CRITICAL)
  - Every new feature should have `examples/feature_name.ail`
  - Run `make verify-examples` to check all examples
  - Check that new examples are documented in CHANGELOG.md
- Verify sprint plan shows all milestones as ✅
- Update sprint plan with final metrics:
- Total LOC (actual vs estimated)
- Total time (actual vs estimated)
- Velocity achieved
- Test coverage achieved
- Example files created (list them)
- Any deviations from plan
3. Final Commit
```bash
git commit -m "Complete sprint: <sprint-name>

Milestones completed:
- <Milestone 1>: <LOC>
- <Milestone 2>: <LOC>

Total: <actual-LOC> LOC in <actual-time>
Velocity: <LOC/day>
Test coverage: <percentage>"
```
4. Summary Report
- Show sprint completion summary
- Compare planned vs actual (LOC, time, milestones)
- Highlight any issues or deviations
- Suggest next steps (new sprint, release, etc.)
5. DX Impact Summary
Consolidate all DX improvements made during sprint:
```markdown
## DX Improvements Summary (Sprint M-XXX)

### Applied During Sprint

✅ **Test Helpers** (Day 2, 10 min): Added `AssertNoErrors()` and `AssertLiteralInt()` helpers
- Impact: Reduced test boilerplate by ~30%
- Files: internal/parser/test_helpers.go

✅ **Debug Flag** (Day 4, 5 min): Added `DEBUG_PARSER=1` for token tracing
- Impact: Eliminated 2 hours of token position debugging
- Files: internal/parser/debug.go

✅ **Make Target** (Day 6, 3 min): Added `make update-golden` for parser test updates
- Impact: Simplified golden file workflow
- Files: Makefile

### Design Docs Created

📝 **M-DX10**: Parser AST Viewer Tool (estimated 2 hours)
- Rationale: Spent 45 min manually inspecting AST structures
- Expected ROI: Save ~30 min per future parser sprint
- File: design_docs/planned/v0_4_0/m-dx10-ast-viewer.md

📝 **M-DX11**: Unified Error Message System (estimated 4 hours)
- Rationale: Error messages inconsistent across lexer/parser/type checker
- Expected ROI: Easier debugging for AI and humans
- File: design_docs/planned/v0_4_0/m-dx11-error-system.md

### Considered But Deferred

💡 **REPL history search**: Nice-to-have, low impact vs effort
💡 **Syntax highlighting**: Human-focused, AILANG is AI-first
💡 **Auto-completion**: Deferred until reflection system complete

### Total DX Investment This Sprint

- Time spent: 18 min (quick wins)
- Time saved: ~3 hours (estimated, based on future sprint projections)
- Design docs: 2 (total estimated effort: 6 hours for future sprints)
- **Net impact**: Positive ROI even in current sprint
```
Key Questions for Future:
- Which DX improvements should be prioritized next?
- Are there patterns in pain points (e.g., parser work always needs better debugging)?
- Should any DX improvements be added to "Definition of Done" for future sprints?
DX Improvement Patterns
Common DX improvements to watch for during sprints:
1. Repetitive Boilerplate → Helper Functions
Signals:
- Copying/pasting the same test setup code
- Same validation logic repeated across functions
- Common error handling patterns duplicated
Quick fixes (5-15 min):
- Extract to helper function in same package
- Add to a `*_helpers.go` file
- Document with usage example
- Add tests for helper if complex

Example: M-DX9 added `AssertNoErrors(t, p)` after noticing parser test boilerplate.
2. Hard-to-Debug Issues → Debug Flags
Signals:
- Adding temporary `fmt.Printf()` statements
- Manually tracing execution flow
- Repeatedly inspecting internal state
Quick fixes (5-10 min):
- Add a `DEBUG_<SUBSYSTEM>=1` environment variable check
- Gate debug output behind flag (zero overhead when off)
- Document in CLAUDE.md or code comments
Example: M-DX9 added `DEBUG_PARSER=1` for token flow tracing.
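The same env-gated shape, translated into shell terms (the actual implementation in internal/parser/debug.go is Go; this sketch only shows the pattern of silent-by-default, verbose-on-demand output):

```bash
#!/usr/bin/env bash

# Debug output gated behind an environment variable, mirroring the
# DEBUG_PARSER=1 convention: nothing is printed unless the flag is set.
debug() {
  if [ "${DEBUG_PARSER:-0}" = "1" ]; then
    echo "debug: $*" >&2
  fi
}

debug "cur=LPAREN peek=IDENT"   # silent unless DEBUG_PARSER=1
echo "parse step done"
```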
3. Manual Workflows → Make Targets
Signals:
- Running multi-step commands repeatedly
- Forgetting command flags or order
- Different team members using different commands
Quick fixes (3-5 min):
- Add `make <target>` with a clear name
- Document what it does in `make help`
- Show example usage in relevant docs

Example: `make update-golden` for parser test golden files.
4. Confusing APIs → Documentation
Signals:
- Looking up API signatures multiple times
- Trial-and-error with function arguments
- Grep-diving to understand usage
Quick fixes (10-20 min):
- Add package-level godoc with examples
- Document common patterns in CLAUDE.md
- Add usage examples to function comments
- Create `make doc PKG=<package>` target if missing
Example: M-TESTING documented common API patterns in CLAUDE.md.
5. Poor Error Messages → Actionable Errors
Signals:
- Error doesn't explain what went wrong
- No suggestion for how to fix
- Missing context (line numbers, file names)
Quick fixes (5-15 min):
- Add context to error message
- Suggest fix or workaround
- Link to documentation if relevant
- Include values that triggered error
Example:

```go
// ❌ Before
return fmt.Errorf("parse error")

// ✅ After
return fmt.Errorf("parse error at %s:%d: expected RPAREN, got %s. Did you forget to close the argument list? See: https://sunholo-data.github.io/ailang/docs/guides/parser_development#common-issues",
    p.filename, p.curToken.Line, p.curToken.Type)
```
6. Painful Testing → Test Utilities
Signals:
- Verbose test setup/teardown
- Repeated value construction
- Brittle test assertions
Quick fixes (10-20 min):
- Create test helper package (e.g., `testctx/`)
- Add value constructors (e.g., `MakeString()`, `MakeInt()`)
- Add assertion helpers (e.g., `AssertNoErrors()`)

Example: M-DX1 added the `testctx` package for builtin testing.
DX ROI Calculator
When deciding whether to implement a DX improvement:
```
Time saved per use × Expected uses = Total savings
If Total savings > Implementation time + Maintenance → DO IT
```
Examples:
- Helper function: 2 min × 20 uses = 40 min saved, costs 10 min → ROI = 4x ✅
- Debug flag: 15 min × 5 uses = 75 min saved, costs 8 min → ROI = 9x ✅
- Documentation: 5 min × 30 uses = 150 min saved, costs 20 min → ROI = 7.5x ✅
- New skill: 30 min × 2 uses = 60 min saved, costs 120 min → ROI = 0.5x ❌ (create design doc for later)
Note: ROI compounds over time as more developers/sprints benefit!
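The rule reduces to one line of arithmetic; a sketch as a shell function, using whole multiples (the figures below come from the "helper function" and "debug flag" examples above, all in minutes):

```bash
#!/usr/bin/env bash

# ROI = (time saved per use × expected uses) / implementation cost,
# rounded down to a whole multiple.
roi() {
  local saved_per_use=$1 uses=$2 cost=$3
  echo $(( saved_per_use * uses / cost ))
}

echo "helper function ROI: $(roi 2 20 10)x"   # 2 min × 20 uses, costs 10 min
echo "debug flag ROI: $(roi 15 5 8)x"         # 15 min × 5 uses, costs 8 min
```

Anything that rounds down to 0x (like the "new skill" example) fails the rule and goes to a design doc instead.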
Key Features
Continuous Testing
- Run `make test` after every file change
- Never proceed if tests fail
- Show test output for visibility
- Track test count increase
Parser test best practices (M-DX9):
- Use test helpers from `internal/parser/test_helpers.go` for cleaner assertions
- Print errors BEFORE `t.Fatalf()` or use the `AssertNoErrors(t, p)` helper
- See internal/ast/ast.go comments for AST usage examples
Continuous Linting
- Run `make lint` after implementation
- Fix linting issues immediately
- Use `make fmt` for formatting issues
- Verify with `make fmt-check`
Progress Tracking
- TodoWrite shows real-time progress
- Sprint plan updated at each milestone
- CHANGELOG.md grows incrementally
- Git commits create audit trail
Implementation Status Tracking (M-TESTING Learning):
When creating stubs for progressive development, document them explicitly in milestone summaries:
```markdown
## Implementation Status (Milestone X Complete)

✅ **Complete**: CLI parsing, file walking, reporter integration
⏳ **Stubbed**: Test execution (returns skip for now)
📋 **Next**: Wire up pipeline/eval integration (Day X+1)

**Stub Locations** (for handoff/continuation):
- cmd/ailang/test.go:127 (executeUnitTest) - Returns skip
- cmd/ailang/test.go:139 (executePropertyTest) - Returns skip
```
Why this matters:
- Clear handoff points between milestones
- No surprises about what's functional vs stubbed
- Easy to find what needs wiring in next milestone
- Validates progressive development strategy
Pause Points
- After each milestone completion
- When tests fail (fix before continuing)
- When linting fails (fix before continuing)
- When user requests "pause"
- When encountering unexpected issues
Error Handling
- If tests fail: Show output, ask how to fix, don't proceed
- If linting fails: Show output, ask how to fix, don't proceed
- If implementation unclear: Ask for clarification, don't guess
- If milestone takes much longer than estimated: Pause and reassess
Parser debugging (M-DX9, v0.3.21):
- Use `DEBUG_PARSER=1 ailang run test.ail` to trace token flow
- Use `DEBUG_DELIMITERS=1 ailang run test.ail` to trace delimiter matching (nested braces, match expressions)
- Check docs/guides/parser_development.md for troubleshooting
- Common issues documented in CLAUDE.md "Parser Developer Experience Guide" section
Resources
Parser Development Tools (M-DX9)
For parser-related sprints, use these M-DX9 tools:
- Comprehensive Guide: docs/guides/parser_development.md
- Quick start with example (adding new expression type)
- Token position convention (AT vs AFTER) - prevents 30% of bugs
- Common AST types reference
- Parser patterns (delimited lists, optional sections, precedence)
- Test infrastructure guide
- Debug tools reference
- Common gotchas and troubleshooting
- Test Helpers: internal/parser/test_helpers.go
  - 15 helper functions for cleaner parser tests
  - `AssertNoErrors(t, p)` - Check for parser errors
  - `AssertLiteralInt/String/Bool/Float(t, expr, value)` - Check literals
  - `AssertIdentifier(t, expr, name)` - Check identifiers
  - `AssertFuncCall/List/ListLength(t, expr)` - Check structures
  - `AssertDeclCount/FuncDecl/TypeDecl(t, file, ...)` - Check declarations
  - All helpers call `t.Helper()` for clean stack traces
- Debug Tooling: internal/parser/debug.go, internal/parser/delimiter_trace.go
  - `DEBUG_PARSER=1` environment variable for token flow tracing
  - Shows ENTER/EXIT with cur/peek tokens for parseExpression, parseType
  - Zero overhead when disabled
  - Example: `DEBUG_PARSER=1 ailang run test.ail`
  NEW v0.3.21: Delimiter Stack Tracer
  - `DEBUG_DELIMITERS=1` environment variable for delimiter matching tracing
  - Shows opening/closing of `{` `}` with context (match, block, case, function)
  - Visual indentation shows nesting depth
  - Detects delimiter mismatches and shows expected vs actual
  - Shows stack state on errors
  - Example: `DEBUG_DELIMITERS=1 ailang run test.ail`
  - Use when: Debugging nested match expressions, finding unmatched braces, understanding complex nesting
- Enhanced Error Messages (v0.3.21): internal/parser/parser_error.go
  - Context-aware hints for delimiter errors
  - Shows nesting depth when inside nested constructs
  - Suggests DEBUG_DELIMITERS=1 for deep nesting issues
  - Specific guidance for `}`, `)`, `]` errors
  - Actionable workarounds (simplify nesting, use let bindings)
- AST Usage Examples: internal/ast/ast.go
- Comprehensive documentation on 6 major AST types
- Usage examples for Identifier, Literal, Lambda, FuncCall, List, FuncDecl
- ⚠️ CRITICAL: int64 vs int gotcha prominently documented
- Common parser patterns for each type
- Quick Reference: CLAUDE.md "Parser Developer Experience Guide" section
- Token position convention
- Common AST types
- Quick token lookup
- Parsing optional sections pattern
- Test error printing pattern
When to use these tools:
- ✅ Any sprint touching `internal/parser/` code
- ✅ Any sprint adding new expression/statement/type syntax
- ✅ Any sprint modifying AST nodes
- ✅ When encountering token position bugs
- ✅ When writing parser tests
Impact: M-DX9 tools reduce parser development time by 30% by eliminating token position debugging overhead.
Pattern Matching Pipeline (M-DX10)
For pattern matching sprints (adding/fixing patterns), understand the 4-layer pipeline:
Pattern changes propagate through parser → elaborator → type checker → evaluator. Each layer transforms the pattern representation.
The 4-Layer Pipeline
1. Parser (internal/parser/parser_pattern.go)
- Input: Source syntax (e.g., `::(x, rest)`, `(a, b)`, `[]`)
- Output: AST pattern nodes (`ast.ConstructorPattern`, `ast.TuplePattern`, `ast.ListPattern`)
- Role: Recognize pattern syntax and build AST
- Example: `::(x, rest)` → `ast.ConstructorPattern{Name: "::", Patterns: [x, rest]}`
2. Elaborator (internal/elaborate/patterns.go)
- Input: AST patterns
- Output: Core patterns (`core.ConstructorPattern`, `core.TuplePattern`, `core.ListPattern`)
- Role: Convert surface syntax to core representation
- ⚠️ Special cases: Some AST patterns transform differently in Core!
  - `::` ConstructorPattern → `ListPattern{Elements: [head], Tail: tail}` (M-DX10)
  - Why: Lists are `ListValue` at runtime, not `TaggedValue` with constructors
3. Type Checker (internal/types/patterns.go)
- Input: Core patterns
- Output: Pattern types, exhaustiveness checking
- Role: Infer pattern types, check coverage
- Example: `::(x: int, rest: List[int])` → `List[int]`
4. Evaluator (internal/eval/eval_patterns.go)
- Input: Core patterns + runtime values
- Output: Pattern match success/failure + bindings
- Role: Runtime pattern matching against values
- ⚠️ CRITICAL: Pattern type must match Value type!
  - `ListPattern` matches `ListValue`
  - `ConstructorPattern` matches `TaggedValue`
  - `TuplePattern` matches `TupleValue`
  - Mismatch = pattern never matches!
Cross-References in Code
Each layer has comments pointing to the next layer:
```go
// internal/parser/parser_pattern.go
case lexer.DCOLON:
    // Parses :: pattern syntax
    // See internal/elaborate/patterns.go for elaboration to Core

// internal/elaborate/patterns.go
case *ast.ConstructorPattern:
    if p.Name == "::" {
        // Special case: :: elaborates to ListPattern
        // See internal/eval/eval_patterns.go for runtime matching
    }

// internal/eval/eval_patterns.go
case *core.ListPattern:
    // Matches against ListValue at runtime
    // If pattern type doesn't match value type, match fails
```
Common Pattern Gotchas
1. Two-Phase Fix Required (M-DX10 Lesson)
- Symptom: Parser accepts pattern, but runtime never matches
- Cause: Parser fix alone isn't enough - elaborator also needs fixing
- Solution: Check elaborator transforms pattern correctly for runtime
- Example: `::` parsed as `ConstructorPattern`, but must elaborate to `ListPattern`
2. Pattern Type Mismatch
- Symptom: Pattern looks correct but never matches any value
- Cause: Pattern type doesn't match value type in evaluator
- Debug: Check `matchPattern()` in `eval_patterns.go` - does pattern type match value type?
3. Special Syntax Requires Special Elaboration
- Symptom: Standard elaboration doesn't work for custom syntax
- Solution: Add special case in elaborator (like `::` → `ListPattern`)
- When: Syntax sugar, built-in constructors, or ML-style patterns
When to Use This Guide
Use when:
- ✅ Adding new pattern syntax (e.g., `::`, `@`, guards)
- ✅ Fixing pattern matching bugs
- ✅ Understanding why patterns don't match at runtime
- ✅ Debugging elaboration or evaluation of patterns
Quick checklist for pattern changes:
- Parser: Does `parsePattern()` recognize the syntax?
- Elaborator: Does it transform to correct Core pattern type?
- Type Checker: Does pattern type inference work?
- Evaluator: Does pattern type match value type at runtime?
Impact: Understanding this pipeline prevents two-phase fix discoveries and reduces pattern debugging time by 50%.
Common API Patterns (M-TESTING Learnings)
⚠️ ALWAYS check `make doc PKG=<package>` before grepping or guessing APIs!
Quick API Lookup
```bash
# Find constructor signatures
make doc PKG=internal/testing | grep "NewCollector"
# Output: func NewCollector(modulePath string) *Collector

# Find struct fields
make doc PKG=internal/ast | grep -A 20 "type FuncDecl"
# Shows: Tests []*TestCase, Properties []*Property
```
Common Constructors
| Package | Constructor | Signature | Notes |
|---|---|---|---|
| `internal/testing` | `NewCollector` | Takes module path | M-TESTING |
| | | No arguments | Surface → Core |
| | | Takes Core prog + imports | Type inference |
| | | No arguments | Dictionary linking |
| | | Takes lexer instance | Parser |
| | | Takes EffContext | Core evaluator |
Common API Mistakes
Test Collection (M-TESTING):
```go
// ✅ CORRECT
collector := testing.NewCollector("module/path")
suite := collector.Collect(file)
for _, test := range suite.Tests { ... } // Tests is the slice!

// ❌ WRONG
collector := testing.NewCollector(file, modulePath) // Wrong arg order!
for _, test := range suite.Tests.Cases { ... }      // No .Cases field!
```
String Formatting:
```go
// ✅ CORRECT
name := fmt.Sprintf("test_%d", i+1)

// ❌ WRONG - Produces "\x01" not "1"!
name := "test_" + string(rune(i+1)) // BUG!
```
Field Access:
```go
// ✅ CORRECT
funcDecl.Tests      // []*ast.TestCase
funcDecl.Properties // []*ast.Property

// ❌ WRONG
funcDecl.InlineTests // Doesn't exist! Use .Tests
```
API Discovery Workflow
- `make doc PKG=<package>` (~30 sec) ← Start here!
- Check source file if you know the location (`grep "^func New" file.go`)
- Check test files for usage examples (`grep "NewCollector" *_test.go`)
- Read docs/guides/ for complex workflows
Time savings: 80% reduction (5-10 min → 30 sec per lookup)
Full reference: See CLAUDE.md "Common API Patterns" section
DX Quick Reference
See `resources/dx_quick_reference.md` for a quick reference card on DX improvements. Use during sprint execution to:
- Quickly decide whether to implement a DX improvement (decision matrix)
- Identify common DX patterns and their fixes
- Calculate ROI for improvements
- Use reflection questions after each milestone
- Apply documentation templates
Developer Tools Reference
See `resources/developer_tools.md` for a comprehensive reference of all available make targets, ailang commands, scripts, and workflows. Load this when you need to:
- Know which test targets to use
- Update golden files after parser changes
- Verify stdlib changes
- Run evals or compare baselines
- Troubleshoot build/test/lint issues
- Find the right tool for any development task
Milestone Checklist
See `resources/milestone_checklist.md` for a complete step-by-step checklist per milestone.
Prerequisites
- Working directory should be clean (or have only sprint-related changes)
- Current branch should be `dev` (or specified in sprint plan)
- All existing linting must pass before starting
- Sprint plan must be approved and documented
Failure Recovery
If Tests Fail During Sprint
- Show test failure output
- Ask user: "Tests failing. Options: (a) fix now, (b) revert change, (c) pause sprint"
- Don't proceed until tests pass
If Linting Fails During Sprint
- Show linting output
- Try auto-fix: `make fmt`
- If still failing, ask user for guidance
- Don't proceed until linting passes
If Implementation Blocked
- Show what's blocking progress
- Ask user for guidance or clarification
- Consider simplifying the approach
- Document the blocker in sprint plan
If Velocity Much Lower Than Expected
- Pause and reassess after 2-3 milestones
- Calculate actual velocity
- Propose: (a) continue as-is, (b) reduce scope, (c) extend timeline
- Update sprint plan with revised estimates
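Recomputing actual velocity for that reassessment is simple arithmetic; a sketch with illustrative numbers (none of these figures come from a real sprint):

```bash
#!/usr/bin/env bash

# Illustrative inputs: sum of actual_loc over completed features,
# days elapsed since the sprint started, and the planned rate.
total_loc=640
days_elapsed=4
planned_velocity=250

actual_velocity=$(( total_loc / days_elapsed ))
echo "actual velocity: ${actual_velocity} LOC/day (planned: ${planned_velocity})"
if [ "$actual_velocity" -lt $(( planned_velocity / 2 )) ]; then
  echo "velocity well below plan - pause and propose scope/timeline options"
fi
```

The threshold (half of plan) is an arbitrary choice for the sketch; the skill's actual guidance is simply to pause and reassess after 2-3 milestones.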
Progressive Disclosure
This skill loads information progressively:
- Always loaded: This SKILL.md file (YAML frontmatter + execution workflow)
- Execute as needed: Scripts in the `scripts/` directory (validation, checkpoints)
- Load on demand: `resources/milestone_checklist.md` (detailed checklist)
Scripts execute without loading into context window, saving tokens while ensuring quality.
Notes
- This skill is long-running - expect it to take hours or days
- Pause points are built in - you're not locked into finishing
- Sprint plan is the source of truth - but reality may require adjustments
- Git commits create a reversible audit trail
- TodoWrite provides real-time visibility into progress
- Test-driven development is non-negotiable - tests must pass