Swarmclaw swarmvault
Use when working with a SwarmVault knowledge vault (raw/, wiki/, swarmvault.schema.md). Establishes schema-first conventions and prefers graph queries over broad search.
install
source · Clone the upstream repo
git clone https://github.com/swarmclawai/swarmclaw
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/swarmclawai/swarmclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/swarmvault" ~/.claude/skills/swarmclawai-swarmclaw-swarmvault && rm -rf "$T"
manifest: skills/swarmvault/SKILL.md
SwarmVault
Use when the agent has a SwarmVault MCP server enabled (transport
stdio, command npx -y @swarmvaultai/cli mcp) pointed at a vault directory.
A SwarmVault workspace is a three-layer knowledge system:
raw/ — immutable source inputs (PDFs, transcripts, code, emails, URLs, sheets). Never edit.
wiki/ — generated markdown owned by the agent and the SwarmVault compiler. Pages carry frontmatter (page_id, source_ids, node_ids, freshness, source_hashes).
state/ — generated indexes, graphs, and approvals. Treat as opaque output of compile.
The vault contract lives in swarmvault.schema.md at the workspace root. The vault config lives in swarmvault.config.json.
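The MCP wiring described above can be sketched as a client config entry. This is a minimal sketch assuming a Claude-style `mcpServers` block; the transport and command come from this skill, but how the server is pointed at the vault directory (typically the working directory) depends on your client:

```json
{
  "mcpServers": {
    "SwarmVault": {
      "command": "npx",
      "args": ["-y", "@swarmvaultai/cli", "mcp"]
    }
  }
}
```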
Rules
- Read swarmvault.schema.md before any compile or query work. It defines categories, naming, freshness rules, and grounding conventions for this specific vault.
- Read wiki/graph/report.md before broad file searching when it exists; otherwise start with wiki/index.md. Both summarize the vault structure so you don't re-scan everything.
- Treat raw/ as immutable. Never edit, rename, or delete files there. New sources go through ingest.
- Treat wiki/ as compiler-owned. Edits should preserve frontmatter fields exactly: page_id, source_ids, node_ids, freshness, source_hashes. If those drift, the next compile will overwrite or flag the page.
- Prefer graph queries over grep/glob for "how does X relate to Y" or "what depends on Z" questions. The vault's typed graph is more reliable than text search.
- Save high-value answers to wiki/outputs/ (use the query or explore tools) instead of leaving them only in chat. That way they become first-class vault content for next time.
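To illustrate the frontmatter contract in the rules above, a compiled wiki page might open like this. All values here are hypothetical; only the field names come from the vault contract:

```yaml
---
page_id: concepts/billing-pipeline    # stable page identifier (hypothetical value)
source_ids: [src_0042, src_0047]      # raw/ sources this page was derived from
node_ids: [n_billing, n_invoice_job]  # graph nodes grounded in this page
freshness: 2025-01-15                 # when the page was last reconciled with its sources
source_hashes:                        # content hashes of the backing sources
  - sha256:9f2c0d...
  - sha256:a11b37...
---
```

An agent edit that rewrites the body but leaves these fields intact survives the next compile; one that drops or mangles them gets overwritten or flagged.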
Tool Palette
The SwarmVault MCP server exposes the following tools (names are prefixed by SwarmClaw with
mcp_<sanitized server name>_, e.g. mcp_SwarmVault_query_vault). Match the user's intent to the closest tool:
Vault inspection:
workspace_info — return current vault paths and high-level counts. Use this first when you've never seen this vault.
list_sources — list source manifests under raw/.
search_pages — full-text search across compiled wiki pages.
read_page — read a specific wiki page by its wiki/-relative path.
Graph (prefer over grep for relational questions):
graph_report — machine-readable graph report and trust artifact. Read this before broad searching.
query_graph — traverse the graph from search seeds without calling an LLM provider.
get_node — explain a graph node, its page, community, neighbors, and group patterns.
get_neighbors — neighbors of a node or page target.
get_hyperedges — list graph hyperedges, optionally filtered.
shortest_path — shortest path between two graph targets.
god_nodes — highest-connectivity nodes (the vault's hubs).
blast_radius — impact analysis: what depends on this file or module?
Question answering:
query_vault — natural-language question against the vault. Returns grounded citations. Pass save: true to persist the answer to wiki/outputs/.
Ingest and maintenance:
ingest_input — add a file path or URL to raw/ and register it as a managed source.
compile_vault — re-derive wiki/ pages, graph, and search index. Run after ingest, after schema changes, or when freshness is stale.
lint_vault — anti-drift and vault health checks.
If the MCP server is unavailable but the agent has a
shell or execute tool, the same operations are available via swarmvault <subcommand> (or npx -y @swarmvaultai/cli <subcommand>) with the working directory set to the vault root.
Workflow
For a fresh question against the vault:
- Call workspace_info if you haven't already, then read swarmvault.schema.md. If wiki/graph/report.md or wiki/index.md exists, skim it.
- Use query_vault (or query_graph / get_node / shortest_path for relational questions). Cite returned source_ids and node_ids.
- If the answer reveals a gap, propose ingest_input for the missing source, then compile_vault.
- Save the final answer with query_vault save: true so it becomes vault content under wiki/outputs/.
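On the wire, the final save step is an ordinary MCP tools/call request. The sketch below shows its likely shape: the method and envelope follow the MCP specification, and save: true is the documented flag, but the question argument name is an assumption, and SwarmClaw's mcp_SwarmVault_ prefix applies only on the client side:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "query_vault",
    "arguments": {
      "question": "What depends on the billing pipeline?",
      "save": true
    }
  }
}
```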
For a new source the user mentions:
- ingest_input the file/URL.
- compile_vault to derive new wiki pages, graph, and search index.
- lint_vault to check frontmatter and links.
- Skim the new pages in wiki/sources/ and confirm provenance.
Boundaries
- Don't run compile against an unreviewed change to swarmvault.schema.md — lint first.
- Don't promote candidate pages (wiki/candidates/) to wiki/concepts/ or wiki/entities/ without the user's confirmation; the approval flow exists for a reason.
- Don't push the vault graph to Neo4j or export to Obsidian without an explicit ask.