# Awesome-omni-skill nereid

Collaborate in Nereid Mermaid sessions via MCP using AST-first, probe-refine workflows for sequence diagrams, flowcharts, xrefs, routes, and walkthroughs. Use when exploring or editing diagrams with a human watching live in the TUI, and when coordinating attention through `attention.*`, `follow_ai.*`, and `selection.*`.

Install:

```shell
# clone the whole collection
git clone https://github.com/diegosouzapw/awesome-omni-skill

# or copy just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/nereid" ~/.claude/skills/diegosouzapw-awesome-omni-skill-nereid && rm -rf "$T"
```

From `skills/data-ai/nereid/SKILL.md`:

# Nereid MCP Collaboration

Collaborate on Mermaid-backed diagrams in a live, shared TUI/MCP session. Keep context small, edits structured, and attention explicit.
## Collaboration Contract
Assume co-presence by default:
- The user can see diagram updates in real time.
- The user can see where the agent is focused.
- The agent should steer attention visually, then speak briefly.
Drive collaboration with this state model:

- `attention.human.read`: read the human cursor/attention in the TUI.
- `attention.agent.read`: read the agent spotlight object.
- `attention.agent.set`: move the agent spotlight to one object.
- `attention.agent.clear`: clear the agent spotlight.
- `follow_ai.read` / `follow_ai.set`: read or toggle whether the TUI follows the agent spotlight.
- `selection.read` / `selection.update`: read or update the shared working-set selection (multi-object).
Treat these as separate concerns:
- Human attention: what the person is looking at.
- Agent attention: what the agent wants to emphasize now.
- Selection: short-term working set for batch reasoning/edits.
- Follow-AI: whether the TUI camera/cursor tracks agent spotlight.
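The four concerns above are independent slots of state: changing one must not disturb the others. A minimal Python sketch of that separation (the class and method names are illustrative, not part of the Nereid API):

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class CollabState:
    """Toy model of the four independent collaboration concerns."""
    human_attention: Optional[str] = None  # what the person is looking at
    agent_spotlight: Optional[str] = None  # what the agent emphasizes now
    selection: Set[str] = field(default_factory=set)  # short-term working set
    follow_ai: bool = False  # does the TUI camera track the agent spotlight?

    def agent_set(self, ref: str) -> None:  # mirrors attention.agent.set
        self.agent_spotlight = ref

    def agent_clear(self) -> None:  # mirrors attention.agent.clear
        self.agent_spotlight = None

state = CollabState()
state.agent_set("d:d-auth-flow/flow/node/n:authorize")
state.selection = {"d:d-auth-flow/flow/node/n:start",
                   "d:d-auth-flow/flow/node/n:authorize"}
# Clearing the spotlight leaves the selection alone: they are separate concerns.
state.agent_clear()
```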
## Core Principles
- Treat AST as source of truth; rendered text and Mermaid text are projections.
- Session files (`nereid-session.meta.json`, `diagrams/*.mmd`, `walkthroughs/*.wt.json`) are app-managed snapshots and can be rewritten frequently while Nereid runs.
- Use the canonical `ObjectRef` form everywhere: `d:<diagram_id>/<seq|flow>/<participant|message|node|edge>/<object_id>`.
- Prefer small reads first (`diagram.stat`, `diagram.get_slice`, `diagram.diff`, `walkthrough.diff`).
- Use typed query tools (`seq.*`, `flow.*`, `xref.*`, `route.find`) before large snapshots.
- Gate edits with `base_rev` and keep ops minimal.
- Record evidence as refs (xrefs and walkthrough nodes) so reasoning is resumable.
- Keep dangling xrefs visible as TODO artifacts unless asked to clean them.
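The canonical `ObjectRef` format (`d:<diagram_id>/<kind>/<category>/<object_id>`) is mechanical to take apart. A sketch of a parser (the `ParsedRef`/`parse_ref` names are hypothetical helpers, not Nereid API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParsedRef:
    diagram_id: str
    kind: str       # "seq" or "flow"
    category: str   # participant | message | node | edge (object.read also exposes block/section)
    object_id: str  # may itself contain ":" (e.g. "n:authorize")

def parse_ref(ref: str) -> ParsedRef:
    """Split 'd:<diagram_id>/<kind>/<category>/<object_id>' into parts."""
    if not ref.startswith("d:"):
        raise ValueError(f"not a canonical ObjectRef: {ref!r}")
    parts = ref[2:].split("/", 3)
    if len(parts) != 4:
        raise ValueError(f"malformed ObjectRef: {ref!r}")
    return ParsedRef(*parts)

ref = parse_ref("d:d-auth-flow/flow/node/n:authorize")
```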
## Execution Discipline
- Use MCP tools as the only source of truth.
- Do not inspect `src/`, `tests/`, `docs/`, or `data/` to answer runtime collaboration questions.
- Make the first relevant MCP call immediately after reading the user prompt.
- Prefer direct MCP execution over schema/code exploration.
- If payload shape is unclear, call the tool once and adapt from validation errors.
- Use shell/file probing only when the user explicitly asks for file-level inspection or storage debugging.
## Tool Groups

- Diagram lifecycle and target: `diagram.list`, `diagram.open`, `diagram.create_from_mermaid`, `diagram.delete`, `diagram.current`
- Diagram reads: `diagram.stat`, `diagram.get_slice`, `diagram.diff`, `diagram.read`, `diagram.get_ast`, `diagram.render_text`
- Diagram mutation: `diagram.propose_ops`, `diagram.apply_ops`
- Walkthrough lifecycle and target: `walkthrough.list`, `walkthrough.open`, `walkthrough.current`
- Walkthrough reads: `walkthrough.stat`, `walkthrough.diff`, `walkthrough.read`, `walkthrough.get_node`, `walkthrough.render_text`
- Walkthrough mutation: `walkthrough.apply_ops`
- Collaboration state: `attention.human.read`, `attention.agent.read`, `attention.agent.set`, `attention.agent.clear`, `follow_ai.read`, `follow_ai.set`, `selection.read`, `selection.update`, `view.read_state`
- Cross-diagram mapping: `xref.list`, `xref.neighbors`, `xref.add`, `xref.remove`
- Object inspection: `object.read`
- Query helpers (route): `route.find`
- Query helpers (sequence): `seq.messages`, `seq.search`, `seq.trace`
- Query helpers (flow): `flow.reachable`, `flow.paths`, `flow.cycles`, `flow.unreachable`, `flow.dead_ends`, `flow.degrees`
## Default Operating Loop

- Resolve target: `diagram.current` -> `diagram.list` -> `diagram.open`; `walkthrough.current` -> `walkthrough.list` -> `walkthrough.open`.
  - If no diagram exists yet, bootstrap with `diagram.create_from_mermaid`.
- Read live collab state: `attention.human.read`, `attention.agent.read`, `follow_ai.read`, `selection.read`.
- Probe local context: `diagram.stat`, `diagram.get_slice`, then one or two typed queries.
- Steer visual attention: `attention.agent.set` on the object currently being discussed.
  - Keep chat short while the attention marker carries micro-guidance.
- Propose and apply minimal edits: `diagram.propose_ops` -> `diagram.apply_ops`; `walkthrough.apply_ops` for walkthrough refinement.
- Refresh with deltas: `diagram.diff` / `walkthrough.diff` (avoid full re-reads unless needed).
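The target-resolution step of this loop can be sketched as code. The `call(tool, payload)` transport and the output shapes of `diagram.current` and `diagram.list` are assumptions (only `diagram.create_from_mermaid` has a documented contract in this file), so treat this as a shape, not the real client API:

```python
def resolve_target(call):
    """Resolve or bootstrap the active diagram (hypothetical output shapes)."""
    current = call("diagram.current", {})
    if current.get("diagram_id"):
        return current["diagram_id"]
    listed = call("diagram.list", {})
    if listed.get("diagrams"):
        first = listed["diagrams"][0]["diagram_id"]
        call("diagram.open", {"diagram_id": first})
        return first
    # Nothing exists yet: bootstrap per the documented contract.
    created = call("diagram.create_from_mermaid", {
        "mermaid": "flowchart TD\n A --> B",
        "diagram_id": "d-bootstrap",
        "make_active": True,
    })
    return created["active_diagram_id"]

# Exercise with a stub transport (no Nereid server needed):
def stub(tool, payload):
    return {
        "diagram.current": {"diagram_id": None},
        "diagram.list": {"diagrams": []},
        "diagram.create_from_mermaid": {
            "diagram": {"diagram_id": payload.get("diagram_id"), "rev": 0},
            "active_diagram_id": payload.get("diagram_id"),
        },
    }.get(tool, {})

target = resolve_target(stub)  # bootstraps, since the stub has no diagrams
```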
## Live Co-Presence Rules

- Use spotlight-first communication: call `attention.agent.set` before explaining a local change.
- Move the spotlight when changing topic; clear it when complete.
- Keep narration compact when the user is actively watching the diagram.
- Skip spotlight changes for quick query-only answers unless orientation is needed.
- Use `selection.update` for temporary multi-object working sets, not as a focus proxy.
- For create/switch-only requests, stop after `diagram.create_from_mermaid` (and an optional `attention.agent.set`); avoid extra `diagram.stat`, `diagram.render_text`, or `flow.*` probes unless the user asks for inspection/debugging.
## Tool Contracts (Input/Output)

### diagram.create_from_mermaid

Input:

```json
{ "mermaid": "flowchart TD\n A --> B", "diagram_id": "d-my-flow", "name": "My Flow", "make_active": true }
```

Output:

```json
{ "diagram": { "diagram_id": "d-my-flow", "name": "My Flow", "kind": "flowchart", "rev": 0 }, "active_diagram_id": "d-my-flow" }
```

### diagram.delete

Input:

```json
{ "diagram_id": "d-my-flow" }
```

Output:

```json
{ "deleted_diagram_id": "d-my-flow", "active_diagram_id": "d-next" }
```

### diagram.get_slice

Input:

```json
{ "diagram_id": "d-flow", "center_ref": "d:d-flow/flow/node/n:a", "radius": 1, "depth": 1, "filters": { "include_categories": ["flow/node", "flow/edge"], "exclude_categories": [] } }
```

Output:

```json
{ "objects": ["d:d-flow/flow/node/n:a", "d:d-flow/flow/node/n:b"], "edges": ["d:d-flow/flow/edge/e:ab"] }
```

### diagram.apply_ops

Input:

```json
{ "diagram_id": "d-seq", "base_rev": 3, "ops": [] }
```

Output:

```json
{ "new_rev": 4, "applied": 1, "delta": { "added": [], "removed": [], "updated": [] } }
```

### walkthrough.apply_ops

Input:

```json
{ "walkthrough_id": "w:1", "base_rev": 0, "ops": [] }
```

Output:

```json
{ "new_rev": 1, "applied": 1, "delta": { "added": [], "removed": [], "updated": [] } }
```

### object.read

Input:

```json
{ "object_ref": "d:d-seq/seq/block/b:0000" }
```

Output:

```json
{ "objects": [ { "object_ref": "d:d-seq/seq/block/b:0000", "object": { "type": "seq_block", "kind": "alt", "header": "guard", "section_ids": ["sec:0000:00", "sec:0000:01"], "child_block_ids": [] } }, { "object_ref": "d:d-seq/seq/section/sec:0000:00", "object": { "type": "seq_section", "kind": "main", "header": "ok", "message_ids": ["m:0000"] } } ], "context": {} }
```

### attention.human.read

Input:

```json
{}
```

Output:

```json
{ "object_ref": "d:d-auth-flow/flow/node/n:authorize", "diagram_id": "d-auth-flow" }
```

### attention.agent.read

Input:

```json
{}
```

Output:

```json
{ "object_ref": "d:d-auth-flow/flow/node/n:authorize", "diagram_id": "d-auth-flow" }
```

### attention.agent.set

Input:

```json
{ "object_ref": "d:d-auth-flow/flow/node/n:authorize" }
```

Output:

```json
{ "object_ref": "d:d-auth-flow/flow/node/n:authorize", "diagram_id": "d-auth-flow" }
```

### attention.agent.clear

Input:

```json
{}
```

Output:

```json
{ "cleared": 1 }
```

### follow_ai.read

Input:

```json
{}
```

Output:

```json
{ "enabled": true }
```

### follow_ai.set

Input:

```json
{ "enabled": true }
```

Output:

```json
{ "enabled": true }
```

### selection.update

Input:

```json
{ "object_refs": [ "d:d-auth-flow/flow/node/n:start", "d:d-auth-flow/flow/node/n:authorize" ], "mode": "replace" }
```

Output:

```json
{ "applied": [ "d:d-auth-flow/flow/node/n:authorize", "d:d-auth-flow/flow/node/n:start" ], "ignored": [] }
```
## Probe-and-Refine on Charts

Use a shallow, typed exploration loop:

- Anchor on one object (`attention.human.read` or an explicit `object_ref`).
- Pull local structure with `diagram.get_slice`.
- Ask one typed question (`seq.*`, `flow.*`, `xref.*`, `route.find`).
- Shift the agent spotlight to the next object if needed.
- Repeat until ambiguity is resolved.

Escalate to global reads (`diagram.read`, `diagram.get_ast`, `diagram.render_text`) only when local probes are insufficient.
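The anchor-slice-hop part of this loop can be sketched as a bounded breadth-first probe. The `diagram.get_slice` payload and output follow the contract in this file; the `call` transport and the `probe` helper itself are illustrative:

```python
def probe(call, anchor_ref, max_hops=3):
    """Hop outward from an anchor, one get_slice at a time, spotlight first."""
    diagram_id = anchor_ref.split("/", 1)[0][2:]  # "d:d-flow/..." -> "d-flow"
    seen, frontier = set(), [anchor_ref]
    for _ in range(max_hops):
        if not frontier:
            break
        ref = frontier.pop(0)
        if ref in seen:
            continue
        seen.add(ref)
        call("attention.agent.set", {"object_ref": ref})  # steer the spotlight
        found = call("diagram.get_slice", {
            "diagram_id": diagram_id, "center_ref": ref, "radius": 1, "depth": 1,
        })
        frontier.extend(o for o in found["objects"] if o not in seen)
    return seen

# Stub transport returning a fixed two-node slice, so the probe visits both.
def stub(tool, payload):
    if tool == "diagram.get_slice":
        return {"objects": ["d:d-flow/flow/node/n:a", "d:d-flow/flow/node/n:b"],
                "edges": ["d:d-flow/flow/edge/e:ab"]}
    return {}

visited = probe(stub, "d:d-flow/flow/node/n:a")
```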
## Mutation Discipline

- Use `diagram.propose_ops` before `diagram.apply_ops` for non-trivial edits.
- Keep op batches minimal and scoped to one local intent.
- Use stable IDs for all new objects.
- Re-read `diagram.stat` or `diagram.diff` after apply to confirm the resulting rev/state.
## Walkthrough and Evidence Artifacts

Build walkthroughs as resumable breadcrumbs:

- Add concise nodes with evidence refs (`refs`).
- Keep node titles short; put detail in `body_md`.
- Link nodes incrementally as understanding grows.
- Update walkthroughs after edits instead of re-deriving from scratch.

Use xrefs to preserve cross-diagram semantics:

- `xref.add` for implementation/expansion links.
- `xref.list` and `xref.neighbors` for mapping and traversal.
- Surface dangling xrefs explicitly for follow-up.
## Conflict Recovery

When a mutation fails due to a stale `base_rev`:

- call `diagram.diff` or `walkthrough.diff` from the last known rev,
- rebase the ops on `current_rev`,
- retry with the updated `base_rev`.

If the diff history window is exhausted, fetch a fresh snapshot (`diagram.read` or `walkthrough.read`) and resume the delta-first flow.
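Those recovery steps fit a small retry loop. The documented `diagram.apply_ops` output covers only success; the stale-`base_rev` error shape (an `error` field carrying the server's `current_rev`) and the `diagram.diff` input are assumptions here, so this is a sketch of the control flow only:

```python
def apply_with_rebase(call, diagram_id, ops, base_rev, max_retries=2):
    """Apply ops; on a stale base_rev, diff, rebase, and retry."""
    for _ in range(max_retries + 1):
        result = call("diagram.apply_ops", {
            "diagram_id": diagram_id, "base_rev": base_rev, "ops": ops,
        })
        if "new_rev" in result:  # success per the documented contract
            return result
        # Stale: read what changed since our rev (a real rebase would adjust
        # `ops` against this delta before retrying; diff input shape assumed).
        call("diagram.diff", {"diagram_id": diagram_id, "from_rev": base_rev})
        base_rev = result["current_rev"]  # assumed error-payload field
    raise RuntimeError("still conflicting; fall back to a fresh diagram.read")

# Stub server: rejects any base_rev below 5 as stale, then accepts.
def stub(tool, payload):
    if tool == "diagram.apply_ops":
        if payload["base_rev"] < 5:
            return {"error": "stale_base_rev", "current_rev": 5}
        return {"new_rev": 6, "applied": len(payload["ops"]),
                "delta": {"added": [], "removed": [], "updated": []}}
    return {}

result = apply_with_rebase(stub, "d-seq", ops=[], base_rev=3)
```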
## Reporting Discipline

Report only what the user needs:

- include the target (`diagram_id`/`walkthrough_id`) and the revision movement (`base_rev` -> `new_rev`),
- include concise deltas and key refs,
- avoid full AST/text dumps unless requested,
- in live sessions, avoid narrating every micro-step already visible in the TUI.
## MCP Playbook Companion

Use `references/mcp-playbooks.md` for extra payload templates. Keep this file as the primary protocol and behavior contract.