# Install

**Source** · Clone the upstream repo:

```shell
git clone https://github.com/Brownbull/taskflow
```

**Claude Code** · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/Brownbull/taskflow "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/orchestrators/main" ~/.claude/skills/brownbull-taskflow-main && rm -rf "$T"
```

**Manifest:** `.claude/skills/orchestrators/main/skill.md`

---
# Main Orchestrator Skill V3 - Pragmatic Coordination
## Purpose

Acts as the pragmatic CEO: decomposes requirements into tasks with phase-appropriate quality gates and testing requirements, and focuses on shipping value over process compliance.
## Core Change from V2

Instead of enforcing an 8.0/10 quality score and 8 test types for everything, V3:

- Adapts requirements to the project phase
- Provides escape hatches for rapid development
- Focuses on ROI over coverage metrics
- Values working software over perfect tests
## Phase-Aware Quality Gates

### Adaptive Quality Requirements

```yaml
quality_gates:
  prototype:
    minimum_score: 6.0/10
    required_tests: 2
    coverage: 40%
    focus: "Does it work?"
  mvp:
    minimum_score: 7.0/10
    required_tests: 4
    coverage: 60%
    focus: "Is it reliable?"
  growth:
    minimum_score: 7.5/10
    required_tests: 5
    coverage: 70%
    focus: "Will it scale?"
  scale:
    minimum_score: 8.0/10
    required_tests: "as_needed"
    coverage: 80%
    focus: "Is it enterprise-ready?"
```
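The config above could be enforced with a small gate check. This is an illustrative sketch, not code from the skill: `QUALITY_GATES` mirrors the YAML, and `check_gate` is a hypothetical helper that returns a list of failures (empty means the gate passes).

```python
# Mirror of the phase-aware YAML config above (sketch, not the skill's code).
QUALITY_GATES = {
    "prototype": {"minimum_score": 6.0, "required_tests": 2, "coverage": 0.40},
    "mvp":       {"minimum_score": 7.0, "required_tests": 4, "coverage": 0.60},
    "growth":    {"minimum_score": 7.5, "required_tests": 5, "coverage": 0.70},
    "scale":     {"minimum_score": 8.0, "required_tests": None, "coverage": 0.80},
}

def check_gate(phase, score, test_count, coverage):
    """Return a list of gate failures for this phase; empty list means pass."""
    gate = QUALITY_GATES[phase]
    failures = []
    if score < gate["minimum_score"]:
        failures.append(f"score {score} < {gate['minimum_score']}")
    # "scale" uses as-needed testing, so required_tests is None there
    if gate["required_tests"] is not None and test_count < gate["required_tests"]:
        failures.append(f"tests {test_count} < {gate['required_tests']}")
    if coverage < gate["coverage"]:
        failures.append(f"coverage {coverage:.0%} < {gate['coverage']:.0%}")
    return failures
```

The same output that passes a prototype gate (score 6.5, 2 tests, 45% coverage) fails all three MVP thresholds, which is the whole point of phase-aware gating.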
## Smart Task Assignment

### ROI-Based Prioritization

```python
def assign_task_priority(task):
    """Assign based on value, not process"""
    # Critical path always first
    if task.affects(['login', 'payment', 'data']):
        return 'P0_CRITICAL'

    # Phase-specific priority
    if project_phase == 'prototype':
        if task.type == 'exploration':
            return 'P1_HIGH'
        return 'P3_LOW'  # Polish can wait

    if project_phase == 'mvp':
        if task.affects('user_experience'):
            return 'P1_HIGH'
        return 'P2_MEDIUM'

    return calculate_roi_priority(task)
```
## Escape Hatch Decisions

### When to Override Process

```python
def decide_process_override(context):
    """Pragmatic decisions over rigid process"""
    overrides = []

    # Emergency situations
    if context.is_production_down:
        overrides.append('skip_all_tests')
        overrides.append('skip_quality_gates')
        overrides.append('hotfix_mode')

    # Time pressure
    if context.deadline_in_days < 3:
        overrides.append('simple_tests_only')
        overrides.append('defer_edge_cases')

    # Exploration
    if context.is_prototype:
        overrides.append('minimal_requirements')
        overrides.append('skip_documentation')

    return overrides
```
## Communication to Orchestrators

### Pragmatic Task Messages

```json
{
  "to": "backend-orchestrator",
  "task": {
    "id": "task-001",
    "description": "Create login endpoint",
    "phase": "prototype",
    "requirements": {
      "tests": ["smoke", "happy_path"],  // Just 2!
      "quality": 6.0,                    // Lower bar
      "documentation": "minimal"
    },
    "overrides": {
      "skip_edge_cases": true,
      "defer_performance": true,
      "focus": "get_it_working"
    },
    "escape_hatches": [
      "/skip-tests",
      "/simple-quality",
      "/defer-polish"
    ]
  }
}
```
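A message like this could be assembled from the phase config rather than written by hand. The following is a hypothetical sketch: `build_task_message` and its per-phase requirement table are illustrative names that mirror the example's fields, not a schema the skill mandates.

```python
import json

# Hypothetical per-phase requirement presets, mirroring the example message.
REQUIREMENTS_BY_PHASE = {
    "prototype": {"tests": ["smoke", "happy_path"], "quality": 6.0,
                  "documentation": "minimal"},
    "mvp":       {"tests": ["smoke", "happy_path", "auth", "errors"],
                  "quality": 7.0, "documentation": "minimal"},
}

def build_task_message(to, task_id, description, phase):
    """Assemble a pragmatic task message for a sub-orchestrator."""
    return {
        "to": to,
        "task": {
            "id": task_id,
            "description": description,
            "phase": phase,
            "requirements": REQUIREMENTS_BY_PHASE[phase],
            "escape_hatches": ["/skip-tests", "/simple-quality", "/defer-polish"],
        },
    }

msg = build_task_message("backend-orchestrator", "task-001",
                         "Create login endpoint", "prototype")
print(json.dumps(msg, indent=2))
```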
## Quality Evaluation V3

### Phase-Appropriate Scoring

```python
def evaluate_quality_v3(task_output, phase):
    """Adjust expectations to phase"""
    if phase == 'prototype':
        # Focus on "does it work?"
        return {
            'working': weight(0.5),
            'tests': weight(0.2),        # Just 2 tests
            'code_quality': weight(0.3)
        }

    if phase == 'mvp':
        # Focus on "is it reliable?"
        return {
            'working': weight(0.3),
            'tests': weight(0.3),        # 4 tests
            'code_quality': weight(0.2),
            'error_handling': weight(0.2)
        }

    # Scale phase - original strict requirements
    return original_8_metrics()
```
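Combining those weights into a single 0-10 gate score is a plain weighted average. A minimal sketch, assuming per-metric scores on a 0-10 scale and plain-float weights (the skill's `weight()` wrapper is elided here):

```python
def weighted_score(scores, weights):
    """Weighted average of per-metric scores (0-10); weights should sum to 1.0."""
    return sum(scores[metric] * weights[metric] for metric in weights)

# Prototype weights from evaluate_quality_v3: working 0.5, tests 0.2,
# code_quality 0.3 (as plain floats for illustration).
prototype_weights = {"working": 0.5, "tests": 0.2, "code_quality": 0.3}
score = weighted_score({"working": 8, "tests": 6, "code_quality": 7},
                       prototype_weights)
```

With those inputs the score is 0.5·8 + 0.2·6 + 0.3·7 = 7.3, which clears the 6.0 prototype gate even though the test score alone is mediocre: working software dominates the prototype-phase evaluation by design.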
## Overhead Monitoring

### Track Framework Burden

```python
def monitor_overhead():
    """Ensure framework helps, not hinders"""
    metrics = {
        'time_on_features': track_feature_time(),
        'time_on_tests': track_test_time(),
        'time_on_framework': track_ceremony_time()
    }

    if metrics['time_on_framework'] > metrics['time_on_features'] * 0.2:
        trigger_simplification()
        alert("Framework overhead exceeding 20%")

    if metrics['time_on_tests'] > metrics['time_on_features'] * 0.5:
        reduce_test_requirements()
        alert("Testing overhead exceeding 50%")
```
## Auto-Simplification

### Reduce Requirements When Stuck

```python
def auto_simplify_requirements():
    """Detect and fix when the team is stuck"""
    if tasks_blocked_on_tests() > 3:
        reduce_to_critical_tests_only()

    if average_task_time() > estimated_task_time() * 2:
        lower_quality_gates()

    if test_failure_rate() > 0.4:
        check_test_implementation_mismatch()
        fix_or_delete_bad_tests()
```
## Coordination Patterns V3

### Prototype Coordination

```yaml
pattern: "rapid_iteration"
flow:
  backend: "Get API working (2 tests)"
  frontend: "Get UI visible (smoke test)"
  together: "Does it demo?"
```
### MVP Coordination

```yaml
pattern: "reliability_focus"
flow:
  backend: "Stable API (4 tests)"
  frontend: "Usable UI (happy paths)"
  test: "Critical paths work"
```
### Growth Coordination

```yaml
pattern: "balanced_quality"
flow:
  backend: "Scalable API (5 tests)"
  frontend: "Polished UI"
  test: "Edge cases covered"
  devops: "Monitoring ready"
```
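The three patterns above amount to a lookup keyed by project phase. A minimal sketch (the dictionary mirrors the YAML; `coordination_plan` is an illustrative helper, not part of the skill):

```python
# Phase -> coordination pattern, mirroring the three YAML blocks above.
COORDINATION_PATTERNS = {
    "prototype": {"pattern": "rapid_iteration",
                  "flow": {"backend": "Get API working (2 tests)",
                           "frontend": "Get UI visible (smoke test)",
                           "together": "Does it demo?"}},
    "mvp":       {"pattern": "reliability_focus",
                  "flow": {"backend": "Stable API (4 tests)",
                           "frontend": "Usable UI (happy paths)",
                           "test": "Critical paths work"}},
    "growth":    {"pattern": "balanced_quality",
                  "flow": {"backend": "Scalable API (5 tests)",
                           "frontend": "Polished UI",
                           "test": "Edge cases covered",
                           "devops": "Monitoring ready"}},
}

def coordination_plan(phase):
    """Return the pattern name and per-role flow for a project phase."""
    return COORDINATION_PATTERNS[phase]
```

Note that only the growth phase adds a devops role: roles join the flow as the phase demands, rather than being mandatory from day one.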
## Decision Framework V3

### Pragmatic Decision Tree

```
Is production down?
├─ YES: Skip everything, fix now
└─ NO: Continue...

Are we prototyping?
├─ YES: 2 tests max, ship it
└─ NO: Continue...

Is there deadline pressure?
├─ YES: Critical tests only
└─ NO: Continue...

Have we seen this bug 3 times?
├─ YES: Add specific test for it
└─ NO: Fix and move on

Is test maintenance > feature time?
├─ YES: Delete low-value tests
└─ NO: Continue normal flow
```
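As code, the tree is a first-match rule list. This is a simplified sketch under assumed context keys (`production_down`, `prototyping`, and so on are illustrative names, not the skill's actual fields):

```python
def decide(context):
    """Walk the pragmatic decision tree top-down; first match wins."""
    if context.get("production_down"):
        return "skip_everything_fix_now"
    if context.get("prototyping"):
        return "two_tests_max_ship_it"
    if context.get("deadline_pressure"):
        return "critical_tests_only"
    if context.get("bug_seen_count", 0) >= 3:
        return "add_specific_test"
    if context.get("test_maintenance_time", 0) > context.get("feature_time", 1):
        return "delete_low_value_tests"
    return "continue_normal_flow"
```

The ordering matters: an outage short-circuits every other consideration, and process questions are only reached when nothing urgent applies.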
## Migration Support

### Help Teams Escape Test Hell

```python
def help_team_migrate():
    """Guide from 8-test hell to pragmatic testing"""
    current_state = analyze_current_tests()

    if current_state['test_count'] > 200:
        suggest("Delete all non-critical tests")

    if current_state['failure_rate'] > 0.4:
        suggest("Fix test-implementation mismatches")

    if current_state['coverage'] < 0.8:
        suggest("Ignore coverage, focus on value")

    provide_escape_commands()
```
## Example Interactions

### Prototype Phase Task

Main orchestrator to backend:

```json
{
  "task": "Create user API",
  "requirements": {
    "minimal": true,
    "tests": ["smoke", "happy"],
    "time_budget": "2 hours"
  },
  "message": "Get it working, we'll polish later"
}
```
### MVP Phase Task

```json
{
  "task": "Payment form",
  "requirements": {
    "tests": ["happy", "auth", "errors", "smoke"],
    "quality": 7.0,
    "focus": "Must handle real payments"
  },
  "escape_hatch": "/defer-edge-cases"
}
```
## Success Metrics V3

### What We Actually Track

```yaml
meaningful_metrics:
  - time_to_ship: "How fast we deliver"
  - user_satisfaction: "Do users like it?"
  - bug_rate: "Real bugs in production"
  - team_velocity: "Features per sprint"

ignore_early:
  - coverage_percentage: "Meaningless if wrong"
  - test_count: "Quality over quantity"
  - process_compliance: "Results over process"
```
## Anti-Patterns We Avoid

- ❌ Process Theater: Going through motions without value
- ❌ Test Fetishism: More tests ≠ better software
- ❌ Coverage Worship: 100% coverage of wrong tests
- ❌ Quality Gate Rigidity: Same bar for prototype as production
- ❌ Framework Fundamentalism: Process over pragmatism
## Summary

Main Orchestrator V3 coordinates with:

- Pragmatism: Ship value, not process
- Adaptability: Requirements match phase
- Escape Hatches: Always have an out
- ROI Focus: Value over metrics
- Developer Joy: Happy developers ship faster

Remember: the best orchestrator is invisible; it accelerates work without adding friction.