Commonly-used-high-value-skills hermes-graphify-gsd-nonintrusive-workflow
Use when integrating Hermes Agent, graphify, and GSD into a local development workflow without modifying upstream repositories, especially when the user wants upgrade-safe wrappers, project-level workflow scripts, graph-aware planning, and a reusable setup that survives future upstream updates.
git clone https://github.com/seaworld008/Commonly-used-high-value-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/seaworld008/Commonly-used-high-value-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-agent-platform/hermes-graphify-gsd-nonintrusive-workflow" ~/.claude/skills/seaworld008-commonly-used-high-value-skills-hermes-graphify-gsd-nonintrusive-wor-56003c && rm -rf "$T"
skills/ai-agent-platform/hermes-graphify-gsd-nonintrusive-workflow/SKILL.md

Hermes + graphify + GSD Non-Intrusive Workflow
Overview
Use this skill to build an upgrade-safe local AI development workflow that combines:
- Hermes Agent for orchestration, memory, and execution
- graphify for codebase graph recall and low-cost refresh
- GSD for planning, phase management, and execution cadence
If the repo already has this workflow and the task is now about runtime diagnosis, writer ownership, stale cron/state/lease cleanup, or handoff/operator recovery, switch to `hermes-graphify-gsd-runtime-operator`.
Core rule: do not modify upstream Hermes, graphify, or GSD repository code unless the user explicitly wants to contribute upstream. Prefer thin wrappers, project-local scripts, and repo documentation.
Important prerequisite:
- This skill assumes Hermes is already installed in an online-capable environment.
- If `hermes` is missing, stop and ask the user to install Hermes first.
- Do not auto-install Hermes from this skill.
- However, on first-time bootstrap, this skill should automatically install or upgrade the latest graphify and GSD, then configure them globally.
When to Use
Use this skill when the user asks for any of these:
- "wire Hermes + graphify + GSD together"
- non-intrusive integration
- upgrade-safe local AI coding workflow
- reusable project bootstrap for graph-aware planning
- wrappers around Hermes / graphify / GSD instead of patching upstream
- project-level workflow entrypoints like `ai-workflow.sh`
Do not use this skill when the user wants to directly change Hermes, graphify, or GSD upstream source behavior. In that case, work in the relevant upstream repo instead.
Design Principles
- Non-intrusive first
- Do not patch Hermes upstream repo for local workflow glue
- Do not patch installed graphify package for convenience
- Do not patch GSD installer/source just to fit one project
- Depend on stable entrypoints. Prefer these interfaces:
  - `hermes`
  - `python -m graphify`
  - `node <get-shit-done>/sdk/dist/cli.js`
- Keep adaptation thin. Allowed adaptation layers:
  - shell wrappers in `~/.local/bin/`
  - repo-local scripts under `scripts/`
  - docs in `README.md`, `AGENTS.md`, `docs/`
  - local gitignored workflow state like `.planning/` and `graphify-out/`
- Make upgrade cost local. If upstream changes, update wrappers/templates first. Avoid spreading compatibility logic across many repos.
Recommended Architecture
Layer 1 — upstream tools
- Hermes installation and config
- graphify Python package / CLI entrypoint
- GSD runtime and SDK installation
First-time bootstrap policy
When this skill is used on a fresh machine:
- Check `command -v hermes`
- If Hermes is missing, stop and instruct the user to install Hermes manually first
- If Hermes exists, automatically install or upgrade graphify and GSD to latest stable upstream entrypoints
- Configure graphify globally for Hermes
- Configure GSD globally for the target runtime, defaulting to Codex unless the user specifies another runtime
Recommended commands:
```sh
# graphify — latest package, then global Hermes integration
# prefer a Python that actually has pip available; Hermes venv python is a valid fallback
PY_BIN="${PYTHON_BIN:-$HOME/.hermes/hermes-agent/venv/bin/python3}"
[ -x "$PY_BIN" ] || PY_BIN="$(command -v python3)"
if "$PY_BIN" -c 'import sys; print(int(sys.prefix != sys.base_prefix))' 2>/dev/null | grep -q '^1$'; then
  "$PY_BIN" -m pip install -U graphifyy
else
  "$PY_BIN" -m pip install --user -U graphifyy
fi
~/.local/bin/graphify install --platform hermes || graphify install --platform hermes

# GSD — latest global runtime + SDK
npx -y get-shit-done-cc@latest --codex --global --sdk
```
Notes:
- graphify's current PyPI package name is `graphifyy`, while the CLI remains `graphify`
- graphify version warnings are global across installed platforms, not Hermes-only; if `graphify --help` still warns after updating Hermes, also check other installed platform copies such as `~/.claude/skills/graphify/.graphify_version` and rerun `graphify install --platform <platform>` there
- for GSD, `--codex --global --sdk` is the default global baseline for this workflow; choose another runtime only when the user explicitly wants it
- if a repo also needs local `.codex/` files, do a second repo-local install later during project integration
Layer 2 — local wrappers
Create thin wrappers when needed:
- `~/.local/bin/graphify`
- `~/.local/bin/gsd-sdk`
Purpose:
- normalize invocation
- discover interpreters/paths
- avoid upstream edits
Layer 3 — project integration
Per repo, add:
- `scripts/graphify-sync.sh`
- optional `scripts/ai-workflow.sh`
- `AGENTS.md` guidance
- `README.md` workflow section
- `.gitignore` entries for `.planning/` and `graphify-out/`
Optional Layer 4 — autonomous continuation loop
When the user wants the repo to keep advancing with minimal manual prompting, add a repo-local auto-continue layer:
- lightweight Git hooks for event triggers (`post-commit`, optionally `post-merge`)
- a periodic reconciler (`cron` or systemd timer) that re-checks progress every N minutes
- a single-runner lock to prevent concurrent agent executions
- a project-level completion sentinel written only after full verification succeeds
- evidence docs that record the final verification command and output
Recommended responsibilities:
- hook: only enqueue or trigger; keep it lightweight
- cron/timer: watchdog + periodic reconciliation; restart the runner if needed
- runner: read planning/graph context, continue work, update docs, run focused verification, and only attempt final completion when the project is actually done
- per-trigger loop: let one trigger run multiple internal passes before giving up, so the workflow does not stop after one small task when more scoped work remains
- completion gate: a dedicated script that runs the full verification command and writes the sentinel/evidence only on success
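The lock and pass-budget responsibilities above can be sketched in a few lines of portable shell. Everything here is illustrative: the lock path, the placeholder pass body, and the fallback strategy are assumptions of this sketch, not upstream Hermes behavior; only `HERMES_AUTO_CONTINUE_MAX_PASSES_PER_TRIGGER` comes from the env vars listed below.

```shell
#!/usr/bin/env sh
# Illustrative single-runner lock with a per-trigger pass budget.
# LOCK_FILE is an assumed convention of this sketch.
LOCK_FILE="${LOCK_FILE:-.planning/auto-continue.lock}"
MAX_PASSES="${HERMES_AUTO_CONTINUE_MAX_PASSES_PER_TRIGGER:-3}"
mkdir -p "$(dirname "$LOCK_FILE")"

run_passes() {
  pass=1
  while [ "$pass" -le "$MAX_PASSES" ]; do
    echo "pass $pass"
    # real runner work goes here; break early on completion or active handoff
    pass=$((pass + 1))
  done
}

if command -v flock >/dev/null 2>&1; then
  # flock(1) gives real mutual exclusion; -n makes a second trigger exit quietly
  exec 9>"$LOCK_FILE"
  flock -n 9 || { echo "runner already active, skipping"; exit 0; }
  run_passes
else
  # portable fallback: mkdir is atomic, so it can stand in for flock
  if mkdir "$LOCK_FILE.d" 2>/dev/null; then
    trap 'rmdir "$LOCK_FILE.d"' EXIT
    run_passes
  else
    echo "runner already active, skipping"
  fi
fi
```

A periodic reconciler can then treat a lock whose owning process is gone as stale and clear it, which is the recovery path the cron/timer responsibility above describes.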
Recommended repo-local files:
- `scripts/hermes-auto-continue-config.sh`
- `scripts/hermes-auto-continue-status.sh`
- `scripts/hermes-auto-continue-trigger.sh`
- `scripts/hermes-auto-continue-checkpoint.sh` (manual no-commit checkpoint trigger)
- `scripts/hermes-auto-continue-summary.sh` (generate the last-run summary artifact)
- `scripts/hermes-auto-continue-task-board-init.sh`
- `scripts/hermes-auto-continue-task-board-status.sh`
- `scripts/hermes-auto-continue-task-board-update.sh`
- `scripts/hermes-auto-continue-task-board-complete-if-ready.sh`
- `scripts/hermes-auto-continue-task-board-sync-docs.sh`
- `scripts/hermes-auto-continue-resume-if-ready.sh`
- `scripts/hermes-auto-continue-notify.sh`
- `scripts/hermes-gsd-phase-engine-status.sh`
- `scripts/hermes-gsd-next-state.sh`
- `scripts/hermes-gsd-sync-runtime-mirror.sh`
- `scripts/hermes-graphify-strategy-hints.sh`
- `scripts/hermes-auto-continue-learnings.sh`
- `scripts/hermes-auto-continue-mark-complete.sh`
- `scripts/install-hermes-auto-continue-cron.sh`
- `.husky/post-commit`
- optional `.husky/post-merge`
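A hook body consistent with the "hooks only enqueue" rule above can be sketched like this. The queue directory name is an assumption of this sketch, not a contract of Hermes or husky; the commented nudge line shows where a trigger script from the list above would be invoked.

```shell
#!/usr/bin/env sh
# Illustrative .husky/post-commit body: record the event and exit fast.
QUEUE_DIR=".planning/auto-continue-queue"   # assumed convention
mkdir -p "$QUEUE_DIR"

# Record the event; a runner or cron reconciler consumes it later.
printf 'event=post-commit\nhead=%s\nat=%s\n' \
  "$(git rev-parse HEAD 2>/dev/null || echo unknown)" \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  > "$QUEUE_DIR/$(date +%s).event"

# Optionally nudge the runner in the background so the hook never blocks:
# [ -x scripts/hermes-auto-continue-trigger.sh ] && \
#   nohup scripts/hermes-auto-continue-trigger.sh >/dev/null 2>&1 &
```

The hook itself stays under a few milliseconds of work, which keeps commits fast even when the autonomous loop is busy.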
Recommended optional relay artifacts:
- `.planning/auto-continue-last-summary.md`
- `.planning/task-board.json`
- `.planning/auto-gsd-next-state.json`
- `.planning/notifications/`
- `.planning/learnings/`
- `.planning/skill-candidates/`
- optional explicit delivery env vars such as:
  - `HERMES_AUTO_CONTINUE_NOTIFY_DELIVER`
  - `HERMES_AUTO_CONTINUE_NOTIFY_SCHEDULE`
  - `HERMES_AUTO_CONTINUE_NOTIFY_COMMAND`
  - `HERMES_AUTO_CONTINUE_NOTIFY_EVENTS`
- recommended runtime tuning env vars such as:
  - `HERMES_AUTO_CONTINUE_MAX_PASSES_PER_TRIGGER`
  - `HERMES_AUTO_CONTINUE_PASS_IDLE_SECONDS`
  - `HERMES_AUTO_CONTINUE_CRON_SCHEDULE`
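The outbox-first notification pattern behind `.planning/notifications/` and `HERMES_AUTO_CONTINUE_NOTIFY_COMMAND` can be sketched as follows. The file naming and summary contents are illustrative; only the directory and env var names come from the lists above.

```shell
#!/usr/bin/env sh
# Illustrative outbox-first notification: always write locally first, then
# deliver externally only when an explicit command is configured.
OUTBOX=".planning/notifications"
mkdir -p "$OUTBOX"
msg_file="$OUTBOX/$(date +%s)-run-summary.md"

{
  echo "# Auto-continue run summary"
  echo "- finished: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
} > "$msg_file"

# Delivery is optional and fully explicit; no chat origin is ever guessed.
if [ -n "${HERMES_AUTO_CONTINUE_NOTIFY_COMMAND:-}" ]; then
  sh -c "$HERMES_AUTO_CONTINUE_NOTIFY_COMMAND" < "$msg_file" \
    || echo "delivery failed; summary kept in $msg_file" >&2
fi
echo "notification written: $msg_file"
```

Because the local write always succeeds first, a failed or unconfigured delivery never loses the summary; it simply stays in the outbox for later inspection.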
Reality-tested runtime contract
When this workflow grows into an autonomous repo-local runtime, prefer these additional constraints by default:
- main project repo = primary writer execution surface
- do not assume a separate sandbox/worktree should be the canonical writer
- treat extra worktrees as read-only analysis or temporary experiments unless they are rebuilt into a complete project environment and explicitly promoted
Recommended operator contract:
- maintain a project-level writer lease, state file, handoff file, and planning mirror under a shared state dir
- expose a repo-local doctor/operator surface such as:
  - `./scripts/ai-workflow.sh doctor`
  - `./scripts/ai-workflow.sh gsd-doctor`
  - `./scripts/ai-workflow.sh gsd-next-state`
  - `./scripts/ai-workflow.sh gsd-skill-show <name>`
  - `./scripts/ai-workflow.sh gsd-workflow-show <name>`
  - `./scripts/ai-workflow.sh graphify-hints`
  - `./scripts/ai-workflow.sh auto-learnings <event> [title] [detail]`
  - `./scripts/ai-workflow.sh auto-status`
  - `./scripts/ai-workflow.sh auto-progress`
  - `./scripts/ai-workflow.sh auto-runner-show`
  - `./scripts/ai-workflow.sh auto-execution-surface-show`
  - `./scripts/ai-workflow.sh auto-workflow-state-show`
  - `./scripts/ai-workflow.sh auto-handoff-show`
  - `./scripts/ai-workflow.sh auto-resume-if-ready`
- the bundled `templates/ai-workflow.sh` in this skill now exposes those subcommands and delegates to the repo-local auto-continue scripts
- use the operator commands as the primary fact source before assuming the runtime is healthy
Recommended execution-surface guard for any repo allowed to write:
- require at least:
  - `package.json`
  - `pnpm-lock.yaml`
  - `src-tauri/` or the repo's real backend root
  - `.planning/STATE.md`
  - executable `scripts/graphify-sync.sh`
- `doctor`, `trigger`, and cron/timer install paths should all refuse to proceed on an incomplete execution surface
- if temporary experiments must bypass the guard, require an explicit override env var and treat it as an exception path, not the normal workflow
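A minimal guard matching those bullets might look like this. The required-file list mirrors the bullets above and should be adjusted per repo; `HERMES_ALLOW_INCOMPLETE_SURFACE` is a hypothetical name for the explicit override env var, chosen for this sketch.

```shell
#!/usr/bin/env sh
# Illustrative execution-surface guard: refuse to act as a writer when the
# checkout is missing the files a real writer surface would have.
surface_ok=1
for required in package.json pnpm-lock.yaml .planning/STATE.md; do
  [ -e "$required" ] || { echo "missing: $required"; surface_ok=0; }
done
[ -x scripts/graphify-sync.sh ] \
  || { echo "missing executable: scripts/graphify-sync.sh"; surface_ok=0; }

if [ "$surface_ok" -ne 1 ] && [ "${HERMES_ALLOW_INCOMPLETE_SURFACE:-0}" != "1" ]; then
  echo "refusing to run: incomplete execution surface" >&2
  exit_code=1
else
  exit_code=0
fi
echo "surface_ok=$surface_ok"
```

Sourcing this at the top of `doctor`, `trigger`, and the cron install script gives all three paths one consistent refusal behavior, and the override stays a single greppable exception.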
Recommended writer-surface contract:
- define a primary root for the project
- compute and expose:
  - `writer_eligible`
  - `primary_root_match`
  - `writer_recommended`
- only allow runtime-binding commands or cron installation on `writer_recommended=yes`
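The three facts can be computed together in one place so every operator command reports them identically. This is a sketch: the `PRIMARY_ROOT` default path and the eligibility heuristics are hypothetical and would come from the project's real config.

```shell
#!/usr/bin/env sh
# Illustrative writer-surface facts, derived in one place.
PRIMARY_ROOT="${PRIMARY_ROOT:-$HOME/work/myproject}"   # hypothetical default
current_root="$(pwd -P)"

# writer_eligible: this checkout has the files needed to act as a writer
if [ -e package.json ] && [ -e .planning/STATE.md ]; then
  writer_eligible=yes
else
  writer_eligible=no
fi

# primary_root_match: we are in the canonical main repo, not a worktree copy
if [ "$current_root" = "$PRIMARY_ROOT" ]; then
  primary_root_match=yes
else
  primary_root_match=no
fi

# writer_recommended: only when both facts agree
if [ "$writer_eligible" = yes ] && [ "$primary_root_match" = yes ]; then
  writer_recommended=yes
else
  writer_recommended=no
fi
printf 'writer_eligible=%s\nprimary_root_match=%s\nwriter_recommended=%s\n' \
  "$writer_eligible" "$primary_root_match" "$writer_recommended"
```

Gating cron installation on the printed `writer_recommended=yes` line then prevents a secondary worktree from silently becoming the writer.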
Core rule:
- Do not stop because one small task or one local checklist is done.
- Stop only when a project-level completion sentinel exists and still matches the current HEAD/worktree state.
Recommended machine-readable planning contract:
- keep human-readable docs (`PROJECT.md`, `REQUIREMENTS.md`, `ROADMAP.md`, `STATE.md`)
- also keep a machine-readable task board at `.planning/task-board.json`
- also keep a machine-readable GSD lifecycle mirror at `.planning/auto-gsd-next-state.json`
- sync that GSD lifecycle mirror into the broader runtime mirror so operator views and notifications share one coherent state model
- if local GSD is installed under `.codex/get-shit-done/`, treat GSD workflow files as the lifecycle source of truth and use the task board as execution cache
- the autonomous runner should prefer:
  - GSD phase truth (`gsd-next`, discuss / plan / execute / verify workflow docs when available)
  - the `in_progress` task
  - highest-priority executable `todo`
  - documented fallback to `REQUIREMENTS.md` / `ROADMAP.md` only when the board is missing or stale
- expose task-board operator commands for:
- initialization
- current/next task inspection
- claiming the next task
- status transitions (`todo`, `in_progress`, `blocked`, `done`, `dropped`)
- appending notes and acceptance evidence
- `complete-if-ready` evaluation before marking a task done
- syncing the machine task board back into managed sections of `STATE.md` and `ROADMAP.md`
- every task should ideally include:
  `id`, `title`, `status`, `priority`, `depends_on`, `acceptance`, `artifacts`, `blocked_by`, `last_updated`
- a strong default is: tasks should become `done` only through a lightweight completion gate that checks dependencies, acceptance evidence, and artifact existence
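A concrete shape for one board entry, plus a minimal artifact check of the kind `complete-if-ready` performs, might look like this. The task contents are invented for illustration; the field names mirror the list above, and the `jq` usage assumes `jq` is available.

```shell
#!/usr/bin/env sh
# Illustrative .planning/task-board.json entry plus a minimal readiness check.
mkdir -p .planning
cat > .planning/task-board.json <<'EOF'
{
  "tasks": [
    {
      "id": "T1",
      "title": "Wire graphify sync into CI",
      "status": "in_progress",
      "priority": 1,
      "depends_on": [],
      "acceptance": "graphify-sync.sh smart exits 0",
      "artifacts": ["scripts/graphify-sync.sh"],
      "blocked_by": [],
      "last_updated": "2024-01-01T00:00:00Z"
    }
  ]
}
EOF

# complete-if-ready sketch: refuse to mark done while a listed artifact is missing
if command -v jq >/dev/null 2>&1; then
  missing=$(jq -r '.tasks[] | select(.id=="T1") | .artifacts[]' \
      .planning/task-board.json | while read -r f; do
    [ -e "$f" ] || echo "$f"
  done)
  if [ -n "$missing" ]; then
    echo "T1 not ready: missing artifacts: $missing"
  else
    echo "T1 ready to complete"
  fi
fi
```

Keeping the gate this small makes it cheap to run on every status transition, which is what stops a task from drifting to `done` without evidence.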
Trigger semantics note:
- The default repo-local auto-continue loop described here is code-event driven + periodic reconciliation, not chat-turn driven.
- Typical immediate triggers are: `post-commit`, optional `post-merge`, explicit manual checkpoint scripts, and periodic `cron`/timer reconciliation.
- A normal assistant reply ending does not automatically create a new trigger event unless your wrapper explicitly does so.
- If the user expects the loop to continue the moment an assistant reply ends, add a lightweight non-commit checkpoint trigger (for example `scripts/hermes-auto-continue-checkpoint.sh`) at agreed milestone boundaries rather than assuming message completion will fire hooks.
- If the user also expects autonomous run summaries to return to chat, do not assume the local shell knows the current conversation origin. Repo-local scripts run without current-chat delivery context, so reliable auto-delivery requires an explicit target (for example `discord:chat_id`, `telegram:chat_id:thread_id`, or another concrete deliver string).
- A practical pattern is: write `.planning/auto-continue-last-summary.md` after each run, then create a one-shot Hermes cron notification job only when an explicit deliver target is configured.
- A robust default is: always write notifications to a local outbox under `.planning/notifications/`, then optionally invoke an external delivery command when `HERMES_AUTO_CONTINUE_NOTIFY_COMMAND` is configured.
- Hermes cron runs in fresh sessions, so the trigger prompt itself should be self-contained and explicitly tell the runner which local files to read first (`GRAPH_REPORT.md`, `PROJECT.md`, `REQUIREMENTS.md`, `ROADMAP.md`, `STATE.md`, and runtime summary/mirror files when present).
- If a machine-readable task board exists, the trigger prompt should explicitly tell the runner to use that board as the canonical next-task selector and to update it after each meaningful step.
- If local GSD skills/workflows are installed, the trigger prompt should read `gsd-next` first and let GSD decide whether the system currently needs discuss, plan, execute, or verify.
- If graphify is installed, prefer `query`, `path`, and `explain` before guessing at cross-module structure.
- A useful next step is to auto-surface graphify hints from recent file changes so the agent can tell when `path`, `query`, `explain`, or `wiki` are especially valuable.
- If handoff is meant to auto-resume later, prefer machine-readable `resume_condition` probes such as:
  - `file_exists:<path>`
  - `file_missing:<path>`
  - `task_done:<task_id>`
  - `task_status:<task_id>:<status>`
  - `ready_to_complete:<task_id>`
  - `writer_recommended`
  - `board_has_next_task`
  - `repo_lock_free`
  - `project_lock_free`
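A probe evaluator for those `resume_condition` strings can be a single `case` dispatch. This sketch implements only the file probes and one lock probe; the task and board probes would query the task board in a real script, and the assumed lock path is an invention of this sketch.

```shell
#!/usr/bin/env sh
# Illustrative resume_condition evaluator: return 0 when the condition holds.
check_resume_condition() {
  cond="$1"
  case "$cond" in
    file_exists:*)  [ -e "${cond#file_exists:}" ] ;;
    file_missing:*) [ ! -e "${cond#file_missing:}" ] ;;
    repo_lock_free) [ ! -e .planning/auto-continue.lock.d ] ;;  # assumed lock path
    *) echo "probe not implemented: $cond" >&2; return 2 ;;
  esac
}

touch /tmp/resume-demo-marker
check_resume_condition "file_exists:/tmp/resume-demo-marker" && echo "resume: yes"
check_resume_condition "file_missing:/tmp/resume-demo-marker" || echo "resume: no"
```

Because every probe is a plain string, handoff files stay diffable and a reconciler can re-evaluate them on each tick without parsing anything richer than `key:value`.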
Project-Level Completion Gate
For autonomous continuation loops, use a completion sentinel instead of guessing from partial task lists.
Recommended design:
- The normal runner keeps going by default.
- When the agent believes the whole scoped project is finished, it runs a dedicated completion script.
- That script executes the repo's full verification command.
- Only if verification succeeds and the worktree is clean does it write:
  - a sentinel file such as `.planning/auto-continue-complete.json`
  - an evidence doc such as `docs/auto-continue-completion-evidence.md`
- Status checks should return `COMPLETE` only when:
  - sentinel exists
  - sentinel says `complete`
  - sentinel HEAD matches current HEAD
  - worktree is clean
This avoids the common failure mode where automation stops after a subtask, a phase checklist, or a focused test subset passes.
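The gate described above can be sketched as one small script. `VERIFY_CMD` is a placeholder for the repo's real full verification command, and the sentinel/evidence paths follow the conventions named earlier; nothing here is an upstream feature.

```shell
#!/usr/bin/env sh
# Illustrative completion gate: verify fully, then write sentinel + evidence.
VERIFY_CMD="${VERIFY_CMD:-true}"   # replace with the repo's full verification command
SENTINEL=".planning/auto-continue-complete.json"
EVIDENCE="docs/auto-continue-completion-evidence.md"
mkdir -p .planning docs

head_sha="$(git rev-parse HEAD 2>/dev/null || echo unknown)"
dirty="$(git status --porcelain 2>/dev/null | head -n 1)"

if sh -c "$VERIFY_CMD" && [ -z "$dirty" ]; then
  # record the HEAD so later status checks can detect staleness
  printf '{"status":"complete","head":"%s","verified_with":"%s"}\n' \
    "$head_sha" "$VERIFY_CMD" > "$SENTINEL"
  {
    echo "# Completion evidence"
    echo "- command: $VERIFY_CMD"
    echo "- head: $head_sha"
  } > "$EVIDENCE"
  echo "COMPLETE"
else
  echo "NOT COMPLETE: verification failed or worktree dirty"
fi
```

A status check then compares the sentinel's recorded `head` against the current `git rev-parse HEAD`; any mismatch means work happened after the last verified completion, so the loop keeps going.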
Minimum Verification Checklist
Run these before claiming success:
```sh
command -v hermes
hermes --version
command -v graphify
graphify --help
command -v gsd-sdk
gsd-sdk --version
./scripts/graphify-sync.sh status
./scripts/graphify-sync.sh smart
```
For first-time bootstrap, also verify:
- `hermes` existed before any automation began
- graphify was installed or upgraded via the selected Python + pip flow
- graphify global Hermes integration was applied with `graphify install --platform hermes`
- if graphify still warns about an older skill version, inspect other installed platform targets (for example `~/.claude/skills/graphify/.graphify_version`) and update those too
- GSD global install was applied with `npx -y get-shit-done-cc@latest --codex --global --sdk`
Also verify:
- `.planning/` exists when planning context is expected
- `.codex/` exists if local GSD runtime is used
- `graphify-out/graph.json` and `graphify-out/GRAPH_REPORT.md` exist after graph build
- git hooks exist if graphify hook automation is expected
- if the repo uses an autonomous writer runtime, `doctor` / `auto-progress` / `auto-runner-show` all report the same primary writer facts
- if the repo uses a primary-root contract, `auto-execution-surface-show` reports `writer_recommended=yes` only for the intended main repo
Standard Project Operating Loop
- Run `./scripts/graphify-sync.sh smart`
- Read `graphify-out/GRAPH_REPORT.md`
- Read `.planning/STATE.md` and `.planning/ROADMAP.md`
- Use GSD phase / plan / execute workflow
- Implement changes
- Re-run `./scripts/graphify-sync.sh smart`
- Update planning context if the phase meaning changed
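The `smart` mode of `scripts/graphify-sync.sh` can be sketched as a change-detecting wrapper. This deliberately avoids assuming any specific graphify CLI flags: `GRAPHIFY_BUILD_CMD` is a placeholder for whatever command actually regenerates `graphify-out/` in your setup, and the stamp-file approach is a convention of this sketch.

```shell
#!/usr/bin/env sh
# Illustrative "smart" sync: skip the rebuild when nothing changed since the
# last successful build, keeping the refresh low-cost.
STAMP="graphify-out/.last-sync"
mkdir -p graphify-out

newer=""
if [ -f "$STAMP" ]; then
  # any file newer than the stamp (ignoring graphify-out itself) forces a rebuild
  newer="$(find . -path ./graphify-out -prune -o -type f -newer "$STAMP" -print \
    | head -n 1)"
else
  newer="first-run"
fi

if [ -n "$newer" ]; then
  echo "rebuilding graph (changed: $newer)"
  sh -c "${GRAPHIFY_BUILD_CMD:-true}"   # delegate to the real build command
  touch "$STAMP"
else
  echo "graph up to date, skipping rebuild"
fi
```

Because the expensive work hides behind the stamp check, the operating loop can run `smart` before and after every change without paying a full rebuild each time.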
Best division of labor:
- Hermes = orchestration and persistence
- graphify = architecture recall and code graph refresh
- GSD = planning cadence and execution structure
Upgrade Contract
Always preserve these constraints:
- wrappers may change
- repo-local scripts may change
- project docs may change
- user-level installed platform copies of a skill may need resync across multiple runtimes
- upstream repos should remain untouched unless upstream contribution is the actual task
If something breaks after upstream updates, fix in this order:
- wrapper path detection
- wrapper invocation contract
- project-local script assumptions
- only then consider upstream changes
Common Pitfalls
- Treating one repo's path layout as universal
- make wrapper paths configurable with env vars where reasonable
- Depending on one pip install mode everywhere
- virtualenv Python may reject `--user`
- system Python may need `--user`
- detect whether the chosen interpreter is in a venv, then choose `pip install -U ...` or `pip install --user -U ...` accordingly
- Depending on graphify outputs that are no longer stable across versions
- current graphify versions reliably produce `graphify-out/graph.json` and `graphify-out/GRAPH_REPORT.md`
- do not require `manifest.json` unless you have verified that a specific version still emits it
- wrappers and sync scripts should treat manifest as optional
- Misreading graphify version warnings as Hermes-only failures
- graphify scans multiple installed platform skill directories when checking installed skill versions
- an outdated `~/.claude/skills/graphify/.graphify_version` can trigger a warning even when `~/.hermes/skills/graphify/` is current
- Mixing project-specific guidance into the generic skill body
- put reusable logic in this skill
- put project-specific facts in repo docs or AGENTS.md
- Stopping on partial completion
- do not treat a single phase checklist, one small task, or a focused test subset as project completion
- require a project-level completion sentinel written by a dedicated verification script
- prefer default continue, not default stop
- Letting hooks do long-running work
- hooks should stay lightweight and should not run long autonomous sessions inline
- use hooks to enqueue/trigger and let a runner or cron/timer do the heavy work
- Missing concurrency control in auto-continue loops
- use a single-runner lock (`flock` or equivalent)
- periodic reconciliation should recover from stale locks or crashed runners
- Letting stale cron/state/lease metadata redefine the writer by accident
- `hermes cron list --all` being empty does not prove there is no system cron entry
- check system `crontab -l` when the observed writer and the intended writer disagree
- if state says `running` but kernel locks are already free, treat it as stale metadata and reconcile it explicitly instead of trusting the stale file forever
- Using one generic cron tag for every repo
- cron install/uninstall should be keyed by project, not by one shared tag string
- otherwise one repo's install step can silently overwrite another repo's autonomous loop
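A per-project cron tag that avoids that overwrite problem can be derived mechanically. The tag format below is a convention invented for this sketch, not a Hermes feature; the point is only that both the name and the absolute path feed into the key.

```shell
#!/usr/bin/env sh
# Illustrative per-project cron tag: key install/uninstall by repo identity so
# one repo's installer cannot clobber another repo's entry.
project_name="$(basename "$(pwd -P)")"
# a short stable hash of the absolute path disambiguates same-named checkouts
path_hash="$(pwd -P | cksum | awk '{print $1}')"
CRON_TAG="hermes-auto-continue:${project_name}:${path_hash}"
echo "$CRON_TAG"

# install/uninstall would then match system crontab lines by exactly this tag:
# crontab -l 2>/dev/null | grep -F "$CRON_TAG"
```

Uninstall becomes a filter that drops only lines containing this exact tag, so two checkouts of the same project, or two different projects, never touch each other's entries.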
Files to Reuse
Load these bundled files when implementing:
- `templates/bootstrap-toolchain.sh`
- `templates/graphify-wrapper.sh`
- `templates/gsd-sdk-wrapper.sh`
- `templates/ai-workflow.sh`
- `templates/hermes-auto-continue-config.sh`
- `templates/hermes-auto-continue-status.sh`
- `templates/hermes-auto-continue-trigger.sh`
- `templates/hermes-auto-continue-checkpoint.sh`
- `templates/hermes-auto-continue-summary.sh`
- `templates/hermes-auto-continue-task-board-init.sh`
- `templates/hermes-auto-continue-task-board-status.sh`
- `templates/hermes-auto-continue-task-board-update.sh`
- `templates/hermes-auto-continue-task-board-complete-if-ready.sh`
- `templates/hermes-auto-continue-task-board-sync-docs.sh`
- `templates/hermes-auto-continue-resume-if-ready.sh`
- `templates/hermes-auto-continue-notify.sh`
- `templates/hermes-gsd-phase-engine-status.sh`
- `templates/hermes-gsd-next-state.sh`
- `templates/hermes-gsd-sync-runtime-mirror.sh`
- `templates/hermes-graphify-strategy-hints.sh`
- `templates/hermes-auto-continue-learnings.sh`
- `templates/hermes-auto-continue-mark-complete.sh`
- `templates/install-hermes-auto-continue-cron.sh`
- `templates/husky-post-commit-auto-continue.sh`
- `templates/husky-post-merge-auto-continue.sh`
- `references/first-install.md`
- `references/upgrade-contract.md`
- `references/auto-continue-best-practices.md`
- `references/ai-workflow-auto-continue-snippet.md`
Execution Pattern
When using this skill:
- Audit live tool availability first
- If Hermes is missing, stop and instruct manual Hermes installation — do not auto-install it
- If Hermes exists, automatically install or upgrade latest graphify and GSD globally
- Add wrappers only if the native commands are missing or inconsistent after install
- Add project-local scripts second
- If the user wants autonomous continuation, add the repo-local auto-continue layer with hook + cron/timer + lock + completion gate
- Verify the full loop with real commands
- Document the contract so future upgrades stay safe
- For auto-continue setups, explicitly verify that partial completion does not stop the loop and that only the completion sentinel can stop it
- Verify that the bundled `ai-workflow.sh` surface, auto-continue scripts, and operator docs all expose the same command names before packaging the workflow for teammates
- Verify that Hermes runner failures become explicit blocked/operator state instead of being silently treated as ordinary incomplete runs