# Awesome-omni-skill task-work

Work on a kspec task with proper lifecycle: verify, start, note, submit, PR, complete.

Install the full collection:

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Or copy just this skill:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/tools/task-work-majiayu000" ~/.claude/skills/diegosouzapw-awesome-omni-skill-task-work && rm -rf "$T"
```

Source: `skills/tools/task-work-majiayu000/SKILL.md`
Base directory for this skill: `/home/chapel/Projects/kynetic-spec/.claude/skills/task-work`
# Task Work Session
Structured workflow for working on tasks. Full lifecycle from start through PR merge.
## Quick Start

```shell
# Start the workflow
kspec workflow start @task-work-session
kspec workflow next --input task_ref="@task-slug"
```
## When to Use
- Starting work on a ready task
- Ensuring consistent task lifecycle
- When you need to track progress with notes
## Inherit Existing Work First
Before starting new work, check for existing in-progress or pending_review tasks.
```shell
kspec session start   # Shows active work at the top
```
Priority order:
- pending_review - PR awaiting merge, highest priority
- in_progress - Work already started, continue it
- ready (pending) - New work to start
Always inherit existing work unless user explicitly says otherwise. If there's an in_progress task, pick it up and continue. If there's a pending_review task, check the PR status and push it to completion.
Only start new work when:
- No in_progress or pending_review tasks exist
- User explicitly tells you to work on something else
- User says to ignore the existing work
This prevents orphaned work and ensures tasks get completed.
## Task States

```
pending → in_progress → pending_review → completed
```

- `task start` → `in_progress` (working on it)
- `task submit` → `pending_review` (code done, PR created, awaiting merge)
- `task complete` → `completed` (PR merged)
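The lifecycle above can be sketched as a tiny state machine. This is purely illustrative shell, not part of kspec; the comments map each transition to the kspec command that drives it.

```shell
# Illustrative only: the task lifecycle as a state table.
state=pending
advance() {
  case "$state" in
    pending)        state=in_progress ;;    # kspec task start
    in_progress)    state=pending_review ;; # kspec task submit
    pending_review) state=completed ;;      # kspec task complete
  esac
}
advance; advance; advance
echo "$state"   # completed
```

Note there is no path from `pending_review` back to `in_progress` here; rework after review happens on the same task before it is completed.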
## Workflow Overview
10 steps for full task lifecycle:

1. **Check Existing Work** - Inherit in_progress or pending_review tasks first
2. **Choose Task** - Select from ready tasks (if no existing work)
3. **Verify Not Done** - Check git history, existing code
4. **Start Task** - Mark in_progress
5. **Work & Note** - Add notes during work
6. **Commit** - Ensure changes committed with trailers
7. **Submit Task** - Mark pending_review
8. **Create PR** - Use /pr skill
9. **PR Merged** - Wait for review and merge
10. **Complete Task** - Mark completed after merge
## Key Commands

```shell
# See available tasks
kspec tasks ready

# Get task details
kspec task get @task-slug

# Start working (in_progress)
kspec task start @task-slug

# Add notes as you work
kspec task note @task-slug "What you're doing..."

# Submit for review (pending_review) - code done, PR ready
kspec task submit @task-slug

# Complete after PR merged (completed)
kspec task complete @task-slug --reason "Summary of what was done"
```
## Verification Step
Before starting, check if work might already be done - but always validate yourself:
```shell
# Check git history for related work
git log --oneline --grep="feature-name"
git log --oneline -- path/to/relevant/files

# If code/tests exist, VERIFY they actually work:
npm test -- --grep "relevant-tests"

# Review code against acceptance criteria
# Check coverage is real, not just test.skip()
```
## Notes Are Context, Not Proof
Task notes provide historical context, but never trust notes as proof of completion. If a task is in the queue, there's a reason - validate independently:
- "Already implemented" → Run the tests yourself. Do they pass? Do they cover the ACs?
- "Tests exist but skip in CI" → That's a gap to fix, not a reason to mark complete
- "Work done in PR #X" → Verify the PR was merged AND the work is correct
Treat verification like a code review: check the actual code and tests against the acceptance criteria. Don't rubber-stamp based on notes.
## What "Already Implemented" Actually Requires
To mark a task complete as "Already implemented", you must:
- Run the tests and see them pass (not skip)
- Verify AC coverage - each acceptance criterion has a corresponding test
- Check the implementation matches what the spec requires
If tests are skipped, broken, or missing coverage - the task is NOT done. Fix the gaps.
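A quick heuristic scan for skipped tests helps catch false coverage before claiming "already implemented". This sketch builds a throwaway fixture so it runs standalone; the directory, file, and regex are illustrative and should be adapted to your project's test framework.

```shell
# Throwaway fixture containing a skipped test (illustrative only).
demo=$(mktemp -d)
printf 'it.skip("covers AC-3", () => {})\n' > "$demo/auth.test.js"

# Any hit means coverage is not real and the task is NOT done.
if grep -rnE '(test|it|describe)\.skip' "$demo"; then
  result="found skipped tests: task is NOT done"
else
  result="no skips found: still run the suite to confirm"
fi
echo "$result"
rm -rf "$demo"
```

A clean scan is necessary but not sufficient: still run the suite and check each AC has a passing test.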
## Scope Expansion During Work
Tasks describe expected outcomes, not rigid boundaries. During work, you may discover:
- **Tests need implementation**: A "testing" task may reveal missing functionality. Implementing that functionality is in scope - the goal is verified behavior, not just test files.
- **Implementation needs tests**: An "implementation" task includes proving it works. Add tests.
- **DoD constraints are hard requirements**: If the task notes include Definition of Done criteria, those are not suggestions. Never produce deliverables that violate DoD.
### When to Expand vs Escalate
Expand scope (do it yourself) when:
- The additional work is clearly implied by the goal
- It's proportional to the original task (not 10x larger)
- You have the context to do it correctly
Escalate (ask user) when:
- Scope expansion is major (testing task becomes architecture redesign)
- You're uncertain about the right approach
- DoD is ambiguous and requires judgment calls
### Anti-patterns to Avoid

- **`test.skip()` as a deliverable**: Never use `test.skip()` to document missing functionality unless explicitly approved by the user. Skipped tests give false coverage and fail the goal of verification.
- **Literal task title interpretation**: "Add tests for X" means "ensure X is verified." If X doesn't exist, implement it first.
- **Checkbox completion**: Completing something is not the goal. Completing the right thing is. If you can't achieve the actual goal, ask for guidance rather than delivering a hollow artifact.
- **Trusting notes without validation**: Notes saying "already done" or "tests exist" are not proof. Run the tests. Check the code. Verify against ACs. If a task is in the ready queue, assume there's unfinished work until you prove otherwise.
- **"Skipped in CI" as acceptable**: Tests that skip in CI are gaps, not completed work. Either fix the CI issue or document why it's acceptable (with user approval).
- **Automation mode shortcuts**: Automation mode means "make good decisions autonomously" - the same decisions a skilled human would make. It does NOT mean take shortcuts, skip hard problems, or produce placeholder deliverables.
## Notes Best Practices
Add notes during work, not just at the end:
- When you discover something unexpected
- When you make a design decision
- When you encounter a blocker
- When you complete a significant piece
Good notes help future sessions understand context:
```shell
# Good: explains decision
kspec task note @task "Using retry with exponential backoff. Chose 3 max retries based on API rate limits."

# Bad: no context
kspec task note @task "Done"
```
## Commit Format

Include task trailer in commits:

```
feat: add user authentication

Implemented JWT-based auth with refresh tokens. Sessions expire after 24h.

Task: @task-add-auth
Spec: @auth-feature
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
```
This enables `kspec log @task` to find related commits.
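The mechanism behind trailer-based lookup can be seen with plain git in a throwaway repo. This is an illustration only; whether `kspec log` uses `git log --grep` internally is an assumption.

```shell
# Throwaway repo with one commit carrying a Task: trailer.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "feat: add user authentication" -m "Task: @task-add-auth"

# Find commits for the task by matching the trailer in the message body.
git -C "$repo" log --oneline --grep='Task: @task-add-auth'
```

Because the trailer is ordinary commit-message text, consistent formatting (`Task: @task-slug` on its own line) is what makes the lookup reliable.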
## Submit vs Complete

**Submit** (`task submit`):

- Use when code is done and you're creating a PR
- Task moves to `pending_review`
- Indicates "ready for review, not yet merged"

**Complete** (`task complete`):

- Use only after PR is merged to main
- Task moves to `completed`
- Indicates "work is done and shipped"
- Use `--force` to complete from any state (for cleanup or stuck tasks)

**Why this matters**:

- Tracks tasks awaiting merge separately from done tasks
- `kspec tasks ready` won't show pending_review tasks as available
- Gives an accurate picture of what's in progress vs awaiting review
## After Completion

After completing a task:

- Check if other tasks were unblocked: `kspec tasks ready`
- Consider starting the next task
- If work revealed new tasks/issues, add to inbox
## Integration with Other Workflows

- Before submit: Consider `/local-review` for quality check
- After submit: Use `/pr` to create PR
- For merge: Use `@pr-review-merge` workflow
- After merge: Complete the task
## Loop Mode
You are running in autonomous loop mode. Every action is logged and audited. Work quality matters - automation mode means making the same decisions a skilled human would make, not taking shortcuts.
```shell
kspec workflow start @task-work-loop
```
### Accountability
Loop mode is NOT a free pass to:
- Skip hard problems because "they require human interaction"
- Mark tasks as needs_review without actually attempting the work
- Estimate time and bail before writing code
- Assume something won't work without trying it
You are accountable for real progress. All notes, commands, and decisions are recorded. A human will review this session. Do the work properly.
### Workflow Steps

1. **Get eligible tasks**

   ```shell
   kspec tasks ready --eligible
   ```

2. **Select task** (priority order):
   - First: any `in_progress` task (continue existing work)
   - Then: tasks that unblock others (high impact)
   - Finally: highest priority ready task (lowest number)

3. **Verify work is needed**
   - Check git history for related commits
   - Read existing implementation if files exist
   - If already done: `kspec task complete @task --reason "Already implemented"` and EXIT

4. **Start and implement**

   ```shell
   kspec task start @task
   # Do the work
   kspec task note @task "What you did..."
   ```

5. **Commit and submit**

   ```shell
   git add <files> && git commit -m "feat/fix: description

   Task: @task-slug"
   kspec task submit @task
   ```

6. **Create PR and stop responding**

   Use `/pr`. After the PR is created, stop responding (do NOT call any more commands). Ralph automatically:
   - Sends the reflection prompt to you
   - Processes pending_review tasks via subagent
   - Continues to the next iteration with remaining tasks

   Do NOT call `end-loop` - creating a PR completes ONE task, not the loop.
### Tasks Requiring Services
Many tasks require running services (daemons, servers, databases). You can and must handle these. See AGENTS.md for project-specific commands.
The pattern is always:

1. Start the service
2. Wait for it to be ready
3. Do the work
4. Clean up
Do NOT mark tasks as needs_review just because they require a running service. Start it. Run the tests. Fix failures. That's the job.
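The four-step pattern can be sketched in shell. Here the "service" is a stand-in background job that signals readiness via a file; replace it with your project's actual daemon and health check from AGENTS.md.

```shell
ready_file=$(mktemp -u)   # path only; the "service" creates it when ready

# 1. Start the service (stand-in: background job that signals readiness)
( sleep 0.2; : > "$ready_file" ) &
svc_pid=$!

# 4. Clean up on exit, whether the work succeeds or fails
trap 'kill "$svc_pid" 2>/dev/null; rm -f "$ready_file"' EXIT

# 2. Wait for it to be ready (for a real server, poll its health endpoint)
until [ -e "$ready_file" ]; do sleep 0.1; done

# 3. Do the work (run the task's tests here)
echo "service ready: running tests"
```

Registering cleanup with `trap ... EXIT` before doing the work ensures the service is torn down even if a test command fails partway through.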
### When Tests Fail
Test failures are part of the work, not a reason to stop.
1. **Read the full error** - not just the assertion, the whole output
2. **Check logs** - server logs, daemon logs, build output
3. **Isolate** - run just the failing test to iterate faster
4. **Fix and retry** - make changes, run again
5. **Repeat** until tests pass
After 3 genuine attempts with different fixes, add a note documenting what you tried and what you learned. Then continue to the next task.
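The iterate-until-green loop can be sketched generically. `run_failing_test` is a placeholder for your project's single-test command (for example `npm test -- --grep "name"`); it is simulated here to pass on the third attempt so the sketch runs standalone.

```shell
attempts=0

# Placeholder for the real isolated test command (simulated: passes on try 3).
run_failing_test() { [ "$attempts" -ge 3 ]; }

until run_failing_test; do
  attempts=$((attempts + 1))
  echo "attempt $attempts failed: read the full error, check logs, adjust, retry"
done
echo "passed after $attempts attempts"
```

If three genuinely different fixes all fail, that is when you note what you tried and move on, rather than looping indefinitely.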
### Exit Conditions
Normal exit: Stop responding. After creating a PR (or blocking a task), simply stop responding. Ralph continues automatically — it checks for remaining eligible tasks at the start of each iteration and exits the loop itself when none remain. You do not need to manage loop termination.
Important: "No eligible tasks" means ALL eligible tasks are done or blocked, not just the one you're working on. If one task is blocked, check for others before stopping.
### Turn Completion vs Loop End
**Turn Completion (normal)**: Stop responding. Ralph continues automatically. Your turn ends when you stop sending tool calls. Ralph then:
- Sends the reflection prompt
- Processes pending_review tasks via subagent
- Continues to the next iteration
**Loop End (explicit signal)**: `kspec ralph end-loop` ends ALL remaining iterations. Only use it when `kspec tasks ready --eligible` returns empty AND you've verified no more work is possible.

**Common mistake**: calling `end-loop` after creating a PR.

- Creating a PR = one task's code done
- Other ready tasks may still exist
- Ralph continues automatically to work on them
### Blocking vs Ending the Loop
When you hit a genuine blocker on a task, the correct pattern is:
1. **Attempt the work first** - actually try to solve the problem
2. **Block the specific task** - not the whole loop
3. **MUST run `kspec tasks ready --eligible`** - its output is authoritative
4. If tasks remain: work on the next one. If empty: stop responding — ralph exits automatically.
Trust the YAML state. If a task's `depends_on` is empty, it has no dependencies. If `kspec tasks ready --eligible` lists a task, it IS eligible and ready to work on. Do not invent blocking relationships based on perceived connections between tasks, PRs in CI, or other inferred state. The command output is the source of truth.
| Situation | Action | Correct Response |
|---|---|---|
| Task needs external input | `kspec task block` | Block task, continue to next |
| Task has spec gap | `kspec task block` | Block task, continue to next |
| Task requires architectural decision | `kspec task block` | Block task, mark needs_review, continue |
| Task is complex/difficult | DO THE WORK | Complexity is not a blocker |
| Tests are failing | FIX THEM | Debug and resolve |
| Service needs to be running | START IT | See "Tasks Requiring Services" |
| No more eligible tasks exist | Stop responding | Ralph auto-exits when no tasks remain |
```shell
# Pattern: When you hit a genuine blocker mid-task
kspec task note @task "Attempted X, Y, Z. Blocked because: [external reason]"
kspec task block @task --reason "Requires architectural decision on X"
kspec task set @task --automation needs_review

# MUST check for other work — command output is authoritative
kspec tasks ready --eligible

# If tasks exist: pick one and continue
# If empty: stop responding — ralph exits the loop automatically
```
### Explicit Loop End Signal (Rare Escape Hatch)
Ralph automatically exits the loop when no eligible tasks remain — you do not need to signal this. In almost all cases, simply stop responding and let ralph manage loop termination.
end-loop exists only for situations where work is stalling across multiple iterations with no productive progress — something ralph's automatic task checks cannot detect. If you are uncertain whether to end, default to stopping rather than calling end-loop.
```shell
# Only if work is genuinely stalling across iterations
kspec ralph end-loop --reason "No progress across N iterations: [description]"
```
Before calling `end-loop`, you MUST:

1. Run `kspec tasks ready --eligible` and confirm it returns empty
2. Verify the stall is real, not just one difficult task

Do NOT call `end-loop`:

- After creating a PR → ralph continues automatically
- When one task is blocked → block it and check for others
- When a task is hard → do the work
- When `kspec tasks ready --eligible` still shows tasks → work on them
### What Is NOT an Exit Condition
These are NOT reasons to exit, block, or mark needs_review:
- "Task seems complex" - do it anyway
- "Might take multiple iterations" - that's fine, iterate
- "Requires daemon" - start the daemon
- "E2E tests need running" - run them
- "Estimated time exceeds iteration" - estimates are not work, do the work
What to do instead: Push through. Complexity and difficulty are expected. Block tasks only for genuine external blockers (human decision needed, spec gap, external dependency). If you can theoretically solve it with more effort, it's not blocked - keep working.
### Key Behaviors
- Only `automation: eligible` tasks are considered. Automation eligibility is determined solely by the `automation` field, not by task type, title, or description. If a task is marked `eligible`, work on it — do not re-triage based on whether it looks like a "design task" or any other category.
- Decisions auto-resolve without prompts
- PR review handled externally by ralph (not this workflow)
- All actions are logged - work as if being watched