# Understand-Anything — `/understand`

Analyze a codebase to produce an interactive knowledge graph for understanding architecture, components, and relationships.

Repository: https://github.com/Lum1104/Understand-Anything

Install the skill:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/Lum1104/Understand-Anything "$T" && mkdir -p ~/.claude/skills && cp -r "$T/understand-anything-plugin/skills/understand" ~/.claude/skills/lum1104-understand-anything-understand && rm -rf "$T"
```

Skill source: `understand-anything-plugin/skills/understand/SKILL.md`
Analyze the current codebase and produce a `knowledge-graph.json` file in `.understand-anything/`. This file powers the interactive dashboard for exploring the project's architecture.
## Options

`$ARGUMENTS` may contain:

- `--full` — Force a full rebuild, ignoring any existing graph.
- `--auto-update` — Enable automatic graph updates on commit (writes `autoUpdate: true` to `.understand-anything/config.json`).
- `--no-auto-update` — Disable automatic graph updates (writes `autoUpdate: false` to `.understand-anything/config.json`).
- `--review` — Run the full LLM graph-reviewer instead of inline deterministic validation.
- A directory path (e.g. `/path/to/repo` or `./other-project`) — Analyze the given directory instead of the current working directory.
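The flag/path split above can be sketched as follows (an illustrative helper, not part of the skill — `parse_options` is a hypothetical name):

```python
def parse_options(arguments: str):
    """Split $ARGUMENTS into flags (tokens starting with --) and an
    optional target directory (the first non-flag token)."""
    flags, target_dir = set(), None
    for token in arguments.split():
        if token.startswith("--"):
            flags.add(token)
        elif target_dir is None:
            target_dir = token  # first non-flag token wins
    return flags, target_dir
```

Anything left in `flags` that is not a recognized option can simply be ignored or reported.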
## Phase 0 — Pre-flight

Determine whether to run a full analysis or incremental update.

1. Resolve `PROJECT_ROOT`:
   - Parse `$ARGUMENTS` for a non-flag token (any argument that does not start with `--`). If found, treat it as the target directory path.
   - If the path is relative, resolve it against the current working directory.
   - Verify the resolved path exists and is a directory (run `test -d <path>`). If it does not exist or is not a directory, report an error to the user and STOP.
   - Set `PROJECT_ROOT` to the resolved absolute path.
   - If no directory path argument is found, set `PROJECT_ROOT` to the current working directory.

1.5. Ensure the plugin is built. Later phases invoke Node scripts that import `@understand-anything/core`. On a fresh install `packages/core/dist/` does not exist yet — build once. This skill file lives at `<PLUGIN_ROOT>/skills/understand/SKILL.md`, so the plugin root is two directories above it.

```bash
PLUGIN_ROOT="<two directories above this SKILL.md>"
if [ ! -f "$PLUGIN_ROOT/packages/core/dist/index.js" ]; then
  cd "$PLUGIN_ROOT" && (pnpm install --frozen-lockfile 2>/dev/null || pnpm install) && pnpm --filter @understand-anything/core build
fi
```

If `pnpm` is missing, report to the user: "Install Node.js ≥ 22 and pnpm ≥ 10, then re-run `/understand`."

2. Get the current git commit hash: `git rev-parse HEAD`

3. Create the intermediate and temp output directories:

```bash
mkdir -p $PROJECT_ROOT/.understand-anything/intermediate
mkdir -p $PROJECT_ROOT/.understand-anything/tmp
```
3.5. Auto-update configuration:
   - If `--auto-update` is in `$ARGUMENTS`: write `{"autoUpdate": true}` to `$PROJECT_ROOT/.understand-anything/config.json`.
   - If `--no-auto-update` is in `$ARGUMENTS`: write `{"autoUpdate": false}` to `$PROJECT_ROOT/.understand-anything/config.json`.
   - These flags only set the config — analysis proceeds normally regardless.
4. Check for subdomain knowledge graphs to merge: list all `*knowledge-graph*.json` files in `$PROJECT_ROOT/.understand-anything/`, excluding `knowledge-graph.json` itself (e.g. `frontend-knowledge-graph.json`, `backend-knowledge-graph.json`). If any subdomain graphs exist, run the merge script bundled with this skill (located next to this SKILL.md file — use the skill directory path, not the project root):

```bash
python <SKILL_DIR>/merge-subdomain-graphs.py $PROJECT_ROOT
```

The script discovers subdomain graphs, loads the existing `knowledge-graph.json` as a base (if present), and merges everything into `knowledge-graph.json` (deduplicating nodes and edges). Report the merge summary to the user, then continue with the merged graph.
5. Check if `$PROJECT_ROOT/.understand-anything/knowledge-graph.json` exists. If it does, read it.

6. Check if `$PROJECT_ROOT/.understand-anything/meta.json` exists. If it does, read it to get `gitCommitHash`.
7. Decision logic:

| Condition | Action |
|---|---|
| `--full` flag in `$ARGUMENTS` | Full analysis (all phases) |
| No existing graph or meta | Full analysis (all phases) |
| `--review` flag + existing graph + unchanged commit hash | Skip to Phase 6 (review-only — reuse existing assembled graph) |
| Existing graph + unchanged commit hash | Ask the user: "The graph is up to date at this commit. Would you like to: (a) run a full rebuild (`--full`), (b) run the LLM graph reviewer (`--review`), or (c) do nothing?" Then follow their choice. If they pick (c), STOP. |
| Existing graph + changed files | Incremental update (re-analyze changed files only) |

Review-only path: copy the existing `knowledge-graph.json` to `$PROJECT_ROOT/.understand-anything/intermediate/assembled-graph.json`, then jump directly to Phase 6 step 3.

For incremental updates, get the changed file list:

```bash
git diff <lastCommitHash>..HEAD --name-only
```

If this returns no files, report "Graph is up to date" and STOP.
8. Collect project context for subagent injection:
   - Read `README.md` (or `README.rst`, `readme.md`) from `$PROJECT_ROOT` if it exists. Store as `$README_CONTENT` (first 3000 characters).
   - Read the primary package manifest (`package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `pom.xml`) if it exists. Store as `$MANIFEST_CONTENT`.
   - Capture the top-level directory tree: `find $PROJECT_ROOT -maxdepth 2 -type f -not -path '*/node_modules/*' -not -path '*/.git/*' -not -path '*/dist/*' | head -100`. Store as `$DIR_TREE`.
   - Detect the project entry point by checking for common patterns (in order): `src/index.ts`, `src/main.ts`, `src/App.tsx`, `index.js`, `main.py`, `manage.py`, `app.py`, `wsgi.py`, `asgi.py`, `run.py`, `__main__.py`, `main.go`, `cmd/*/main.go`, `src/main.rs`, `src/lib.rs`, `src/main/java/**/Application.java`, `Program.cs`, `config.ru`, `index.php`. Store the first match as `$ENTRY_POINT`.
## Phase 0.5 — Ignore Configuration

Set up and verify the `.understandignore` file before scanning.

1. Check if `$PROJECT_ROOT/.understand-anything/.understandignore` exists.

2. If it does NOT exist, generate a starter file. Run the following Node.js one-liner in `$PROJECT_ROOT` (it reads `.gitignore` and deduplicates against built-in defaults):

```bash
node -e "
const fs = require('fs');
const path = require('path');
const root = process.cwd();
const defaults = ['node_modules/','node_modules','.git/','vendor/','venv/','.venv/','__pycache__/','dist/','dist','build/','build','out/','coverage/','coverage','.next/','.cache/','.turbo/','target/','obj/','*.lock','package-lock.json','yarn.lock','pnpm-lock.yaml','*.png','*.jpg','*.jpeg','*.gif','*.svg','*.ico','*.woff','*.woff2','*.ttf','*.eot','*.mp3','*.mp4','*.pdf','*.zip','*.tar','*.gz','*.min.js','*.min.css','*.map','*.generated.*','.idea/','.vscode/','LICENSE','.gitignore','.editorconfig','.prettierrc','.eslintrc*','*.log'];
const norm = p => p.replace(/\/+$/, '');
const defaultSet = new Set(defaults.map(norm));
const header = '# .understandignore — patterns for files/dirs to exclude from analysis\n# Syntax: same as .gitignore (globs, # comments, ! negation, trailing / for dirs)\n# Lines below are suggestions — uncomment to activate.\n# Use ! prefix to force-include something excluded by defaults.\n#\n# Built-in defaults (always excluded unless negated):\n# node_modules/, .git/, dist/, build/, obj/, *.lock, *.min.js, etc.\n#\n';
let body = '';
const gitignorePath = path.join(root, '.gitignore');
if (fs.existsSync(gitignorePath)) {
  const gi = fs.readFileSync(gitignorePath, 'utf-8').split('\n').map(l => l.trim()).filter(l => l && !l.startsWith('#')).filter(p => !defaultSet.has(norm(p)));
  if (gi.length) { body += '# --- From .gitignore (uncomment to exclude) ---\n\n' + gi.map(p => '# ' + p).join('\n') + '\n\n'; }
}
const dirs = ['__tests__','test','tests','fixtures','testdata','docs','examples','scripts','migrations','.storybook'];
const found = dirs.filter(d => fs.existsSync(path.join(root, d)));
if (found.length) { body += '# --- Detected directories (uncomment to exclude) ---\n\n' + found.map(d => '# ' + d + '/').join('\n') + '\n\n'; }
body += '# --- Test file patterns (uncomment to exclude) ---\n\n# *.test.*\n# *.spec.*\n# *.snap\n';
const outDir = path.join(root, '.understand-anything');
if (!fs.existsSync(outDir)) fs.mkdirSync(outDir, { recursive: true });
fs.writeFileSync(path.join(outDir, '.understandignore'), header + body);
"
```

   Then report to the user: "Generated `.understand-anything/.understandignore` with suggested exclusions based on your project structure. Please review it and uncomment any patterns you'd like to exclude from analysis. When ready, confirm to continue." Wait for user confirmation before proceeding.

3. If it already exists, report: "Found `.understand-anything/.understandignore`. Review it if needed, then confirm to continue." Wait for user confirmation before proceeding.

4. After confirmation, proceed to Phase 1.
## Phase 1 — SCAN (Full analysis only)

Dispatch a subagent using the project-scanner agent definition (at `agents/project-scanner.md`). Append the following additional context:

```
Additional context from main session:
Project README (first 3000 chars):
$README_CONTENT
Package manifest:
$MANIFEST_CONTENT
Use this context to produce more accurate project name, description, and framework detection. The README and manifest are authoritative — prefer their information over heuristics.
```

Pass these parameters in the dispatch prompt:

```
Scan this project directory to discover all project files (including non-code files like configs, docs, infrastructure), detect languages and frameworks.
Project root: $PROJECT_ROOT
Write output to: $PROJECT_ROOT/.understand-anything/intermediate/scan-result.json
```

After the subagent completes, read `$PROJECT_ROOT/.understand-anything/intermediate/scan-result.json` to get:

- Project name, description
- Languages, frameworks
- File list with line counts and `fileCategory` per file (`code`, `config`, `docs`, `infra`, `data`, `script`, `markup`)
- Complexity estimate
- Import map (`importMap`): pre-resolved project-internal imports per file (non-code files have empty arrays)

Store `importMap` in memory as `$IMPORT_MAP` for use in Phase 2 batch construction. Store the file list as `$FILE_LIST` with `fileCategory` metadata for use in Phase 2 batch construction.

Gate check: if there are more than 100 files, inform the user and suggest scoping the analysis with a subdirectory argument. Proceed only if the user confirms, and note that a large analysis may take a while.

If the scan result includes `filteredByIgnore > 0`, report: "Excluded {filteredByIgnore} files via `.understandignore`."
## Phase 2 — ANALYZE

### Full analysis path

Batch the file list from Phase 1 into groups of 20-30 files each (aim for ~25 files per batch for balanced sizes).

Batching strategy for non-code files:

- Group related non-code files together in the same batch when possible:
  - Dockerfile + docker-compose.yml + .dockerignore → same batch
  - SQL migration files → same batch (ordered by filename)
  - CI/CD config files (`.github/workflows/*`) → same batch
  - Documentation files (`docs/*.md`) → same batch
- This allows the file-analyzer to create cross-file edges (e.g., a docker-compose `depends_on` Dockerfile edge)
- Non-code files can be mixed with code files in the same batch if batch sizes are small
- Each file's `fileCategory` from Phase 1 must be included in the batch file list
For each batch, dispatch a subagent using the file-analyzer agent definition (at `agents/file-analyzer.md`). Run up to 5 subagents concurrently using parallel dispatch. Append the following additional context:

```
Additional context from main session:
Project: <projectName> — <projectDescription>
Languages: <languages from Phase 1>
```

Before dispatching each batch, construct `batchImportData` from `$IMPORT_MAP`:

```
batchImportData = {}
for each file in this batch:
  batchImportData[file.path] = $IMPORT_MAP[file.path] ?? []
```

Fill in batch-specific parameters below and dispatch:

```
Analyze these files and produce GraphNode and GraphEdge objects.
Project root: $PROJECT_ROOT
Project: <projectName>
Languages: <languages>
Batch index: <batchIndex>
Skill directory (for bundled scripts): <SKILL_DIR>
Write output to: $PROJECT_ROOT/.understand-anything/intermediate/batch-<batchIndex>.json
Pre-resolved import data for this batch (use this for all import edge creation — do NOT re-resolve imports from source):
<batchImportData JSON>
Files to analyze in this batch:
<path> (<sizeLines> lines, fileCategory: <fileCategory>)
<path> (<sizeLines> lines, fileCategory: <fileCategory>)
...
```
After ALL batches complete, run the merge-and-normalize script bundled with this skill (located next to this SKILL.md file — use the skill directory path, not the project root):

```bash
python <SKILL_DIR>/merge-batch-graphs.py $PROJECT_ROOT
```

This script reads all `batch-*.json` files from `$PROJECT_ROOT/.understand-anything/intermediate/`, then in one pass:

- Combines all nodes and edges across batches
- Normalizes node IDs (strips double prefixes, project-name prefixes, adds missing prefixes)
- Normalizes complexity values (`simple` → `low`, `moderate` → `medium`, `complex` → `high`, etc.)
- Rewrites edge references to match corrected node IDs
- Deduplicates nodes by ID (keeps last occurrence) and edges by `(source, target, type)`
- Drops dangling edges referencing missing nodes
- Logs all corrections and dropped items to stderr

Output: `$PROJECT_ROOT/.understand-anything/intermediate/assembled-graph.json`

Include the script's warnings in `$PHASE_WARNINGS` for the reviewer.
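The deduplication rules above can be illustrated with a minimal sketch (the real merge-batch-graphs.py additionally rewrites IDs, normalizes complexity, and logs corrections):

```python
def merge(batches):
    """Combine batch graphs: dedupe nodes by id (last occurrence wins),
    dedupe edges by (source, target, type), drop dangling edges."""
    nodes, edges = {}, {}
    for batch in batches:
        for n in batch["nodes"]:
            nodes[n["id"]] = n  # later batches overwrite earlier ones
        for e in batch["edges"]:
            edges[(e["source"], e["target"], e["type"])] = e
    # an edge survives only if both endpoints exist in the merged node set
    kept = [e for e in edges.values()
            if e["source"] in nodes and e["target"] in nodes]
    return {"nodes": list(nodes.values()), "edges": kept}
```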
Incremental update path
Use the changed files list from Phase 0. Batch and dispatch file-analyzer subagents using the same process as above (20-30 files per batch, up to 5 concurrent, with batchImportData constructed from $IMPORT_MAP), but only for changed files.
After batches complete:
- Remove old nodes whose
matches any changed file from the existing graphfilePath - Remove old edges whose
orsource
references a removed nodetarget - Write the pruned existing nodes/edges as
in the intermediate directorybatch-existing.json - Run the same merge script — it will combine
with the freshbatch-existing.json
files:batch-*.jsonpython <SKILL_DIR>/merge-batch-graphs.py $PROJECT_ROOT
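The pruning steps (1 and 2 above) amount to the following sketch:

```python
def prune(graph, changed_files):
    """Drop nodes belonging to changed files, then drop any edge that
    touches a removed node."""
    changed = set(changed_files)
    kept_nodes = [n for n in graph["nodes"]
                  if n.get("filePath") not in changed]
    kept_ids = {n["id"] for n in kept_nodes}
    kept_edges = [e for e in graph["edges"]
                  if e["source"] in kept_ids and e["target"] in kept_ids]
    return {"nodes": kept_nodes, "edges": kept_edges}
```

The pruned result is what gets written to `batch-existing.json`.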
## Phase 3 — ASSEMBLE REVIEW

Dispatch a subagent using the assemble-reviewer agent definition (at `agents/assemble-reviewer.md`).

Pass these parameters in the dispatch prompt:

```
Review the assembled graph at $PROJECT_ROOT/.understand-anything/intermediate/assembled-graph.json.
Project root: $PROJECT_ROOT
Batch files are at: $PROJECT_ROOT/.understand-anything/intermediate/batch-*.json
Write review output to: $PROJECT_ROOT/.understand-anything/intermediate/assemble-review.json
Merge script report:
<paste the full stderr output from merge-batch-graphs.py>
Import map for cross-batch edge verification:
$IMPORT_MAP
```

After the subagent completes, read `$PROJECT_ROOT/.understand-anything/intermediate/assemble-review.json` and add any notes to `$PHASE_WARNINGS`.
## Phase 4 — ARCHITECTURE

Build the combined prompt template:

1. Use the architecture-analyzer agent definition (at `agents/architecture-analyzer.md`).
2. Language context injection: for each language detected in Phase 1 (e.g., `python`, `markdown`, `dockerfile`, `yaml`, `sql`, `terraform`, `graphql`, `protobuf`, `shell`, `html`, `css`), read the file at `./languages/<language-id>.md` (e.g., `./languages/python.md`, `./languages/dockerfile.md`) and append its content after the base template under a `## Language Context` header. If the file does not exist for a detected language, skip it silently and continue. These files are in the `languages/` subdirectory next to this SKILL.md file. Include non-code language snippets — they provide edge patterns and summary styles for non-code files.
3. Framework addendum injection: for each framework detected in Phase 1 (e.g., `Django`), read the file at `./frameworks/<framework-id-lowercase>.md` (e.g., `./frameworks/django.md`) and append its full content after the language context. If the file does not exist for a detected framework, skip it silently and continue. These files are in the `frameworks/` subdirectory next to this SKILL.md file.

Append the language/framework context and the following additional context to the agent's prompt:

```
Additional context from main session:
Frameworks detected: <frameworks from Phase 1>
Directory tree (top 2 levels):
$DIR_TREE
Use the directory tree, language context, and framework addendums (appended above) to inform layer assignments. Directory structure is strong evidence for layer boundaries. Non-code files (config, docs, infrastructure, data) should be assigned to appropriate layers — see the prompt template for guidance.
```
Pass these parameters in the dispatch prompt:

```
Analyze this codebase's structure to identify architectural layers.
Project root: $PROJECT_ROOT
Project: <projectName> — <projectDescription>
Write output to: $PROJECT_ROOT/.understand-anything/intermediate/layers.json
File nodes (all node types — includes code files, config, document, service, pipeline, table, schema, resource, endpoint):
[list of {id, type, name, filePath, summary, tags} for ALL file-level nodes — omit complexity, languageNotes]
Import edges:
[list of edges with type "imports"]
All edges (for cross-category analysis — includes configures, documents, deploys, triggers, etc.):
[list of ALL edges — include all edge types]
```
After the subagent completes, read `$PROJECT_ROOT/.understand-anything/intermediate/layers.json` and normalize it into a final layers array. Apply these steps in order:

1. Unwrap envelope: if the file contains `{ "layers": [...] }` instead of a plain array, extract the inner array. (The prompt requests a plain array, but LLMs may still produce an envelope.)
2. Rename legacy fields: if any layer object has a `nodes` field instead of `nodeIds`, rename `nodes` → `nodeIds`. If `nodes` entries are objects with an `id` field rather than plain strings, extract just the `id` values into `nodeIds`.
3. Synthesize missing IDs: if any layer is missing an `id`, generate one as `layer:<kebab-case-name>`.
4. Convert file paths: if `nodeIds` entries are raw file paths without a known prefix (`file:`, `config:`, `document:`, `service:`, `pipeline:`, `table:`, `schema:`, `resource:`, `endpoint:`), convert them to `file:<relative-path>`.
5. Drop dangling refs: remove any `nodeIds` entries that do not exist in the merged node set.
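The five normalization steps can be sketched as one pass (a hedged illustration only — `kebab` is a minimal slug helper, and `PREFIXES` mirrors the known node-ID prefixes listed above):

```python
import re

PREFIXES = ("file:", "config:", "document:", "service:", "pipeline:",
            "table:", "schema:", "resource:", "endpoint:")

def kebab(name):
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def normalize_layers(raw, node_ids):
    # 1. unwrap { "layers": [...] } envelope if present
    layers = raw.get("layers", raw) if isinstance(raw, dict) else raw
    for layer in layers:
        # 2. rename legacy "nodes" field; flatten {id: ...} objects
        ids = layer.pop("nodes", layer.get("nodeIds", []))
        ids = [x["id"] if isinstance(x, dict) else x for x in ids]
        # 3. synthesize a missing id from the layer name
        layer.setdefault("id", "layer:" + kebab(layer["name"]))
        # 4. convert raw file paths to file:<relative-path>
        ids = [i if i.startswith(PREFIXES) else "file:" + i for i in ids]
        # 5. drop refs that are not in the merged node set
        layer["nodeIds"] = [i for i in ids if i in node_ids]
    return layers
```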
Each element of the final layers array MUST have this shape:

```json
[
  {
    "id": "layer:<kebab-case-name>",
    "name": "<layer name>",
    "description": "<what belongs in this layer>",
    "nodeIds": ["file:src/App.tsx", "config:tsconfig.json", "document:README.md"]
  }
]
```

All four fields (`id`, `name`, `description`, `nodeIds`) are required.

For incremental updates: always re-run architecture analysis on the full merged node set, since layer assignments may shift when files change. When re-running, also inject the previous layer definitions:

```
Previous layer definitions (for naming consistency):
[previous layers from existing graph]
Maintain the same layer names and IDs where possible. Only add/remove layers if the file structure has materially changed.
```
## Phase 5 — TOUR

Dispatch a subagent using the tour-builder agent definition (at `agents/tour-builder.md`). Append the following additional context:

```
Additional context from main session:
Project README (first 3000 chars):
$README_CONTENT
Project entry point: $ENTRY_POINT
Use the README to align the tour narrative with the project's own documentation. Start the tour from the entry point if one was detected. The tour should tell the same story the README tells, but through the lens of actual code structure.
```

Pass these parameters in the dispatch prompt:

```
Create a guided learning tour for this codebase.
Project root: $PROJECT_ROOT
Project: <projectName> — <projectDescription>
Languages: <languages>
Write output to: $PROJECT_ROOT/.understand-anything/intermediate/tour.json
Nodes (all file-level nodes — includes code files, config, document, service, pipeline, table, schema, resource, endpoint):
[list of {id, name, filePath, summary, type} for ALL file-level nodes — do NOT include function or class nodes]
Layers:
[list of {id, name, description} for each layer — omit nodeIds]
Edges (all types — includes imports, calls, configures, documents, deploys, triggers, etc.):
[list of ALL edges — include all edge types for complete graph topology analysis]
```
After the subagent completes, read `$PROJECT_ROOT/.understand-anything/intermediate/tour.json` and normalize it into a final tour array. Apply these steps in order:

1. Unwrap envelope: if the file contains `{ "steps": [...] }` instead of a plain array, extract the inner array. (The prompt requests a plain array, but LLMs may still produce an envelope.)
2. Rename legacy fields: if any step has `nodesToInspect` instead of `nodeIds`, rename it → `nodeIds`. If any step has `whyItMatters` instead of `description`, rename it → `description`.
3. Convert file paths: if `nodeIds` entries are raw file paths without a known prefix (`file:`, `config:`, `document:`, `service:`, `pipeline:`, `table:`, `schema:`, `resource:`, `endpoint:`), convert them to `file:<relative-path>`.
4. Drop dangling refs: remove any `nodeIds` entries that do not exist in the merged node set.
5. Sort by `order` before saving.
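The tour normalization pass follows the same pattern; a minimal sketch under the field-name rules above:

```python
PREFIXES = ("file:", "config:", "document:", "service:", "pipeline:",
            "table:", "schema:", "resource:", "endpoint:")

def normalize_tour(raw, node_ids):
    # 1. unwrap { "steps": [...] } envelope if present
    steps = raw.get("steps", raw) if isinstance(raw, dict) else raw
    for step in steps:
        # 2. rename legacy fields
        if "nodesToInspect" in step:
            step["nodeIds"] = step.pop("nodesToInspect")
        if "whyItMatters" in step:
            step["description"] = step.pop("whyItMatters")
        # 3. convert raw paths to file:<relative-path>
        ids = [i if i.startswith(PREFIXES) else "file:" + i
               for i in step.get("nodeIds", [])]
        # 4. drop refs not in the merged node set
        step["nodeIds"] = [i for i in ids if i in node_ids]
    # 5. sort by order
    return sorted(steps, key=lambda s: s["order"])
```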
Each element of the final tour array MUST have this shape:

```json
[
  {
    "order": 1,
    "title": "Project Overview",
    "description": "Start with the README to understand the project's purpose and architecture.",
    "nodeIds": ["document:README.md"]
  },
  {
    "order": 2,
    "title": "Application Entry Point",
    "description": "This step explains how the frontend boots and mounts.",
    "nodeIds": ["file:src/main.tsx", "file:src/App.tsx"]
  }
]
```

Required fields: `order`, `title`, `description`, `nodeIds`. Preserve the optional `languageLesson` field when present.
## Phase 6 — REVIEW

Assemble the full KnowledgeGraph JSON object:

```json
{
  "version": "1.0.0",
  "project": {
    "name": "<projectName>",
    "languages": ["<languages>"],
    "frameworks": ["<frameworks>"],
    "description": "<projectDescription>",
    "analyzedAt": "<ISO 8601 timestamp>",
    "gitCommitHash": "<commit hash from Phase 0>"
  },
  "nodes": [<all nodes from assembled-graph.json after Phase 3 review>],
  "edges": [<all edges from assembled-graph.json after Phase 3 review>],
  "layers": [<layers from Phase 4>],
  "tour": [<steps from Phase 5>]
}
```
1. Before writing the assembled graph, validate that:
   - `layers` is an array of objects with these required fields: `id`, `name`, `description`, `nodeIds`
   - `tour` is an array of objects with these required fields: `order`, `title`, `description`, `nodeIds`
   - `tour[*].languageLesson` is allowed as an optional string field
   - Every `layers[*].nodeIds` entry exists in the merged node set
   - Every `tour[*].nodeIds` entry exists in the merged node set

   If validation fails, automatically normalize and rewrite the graph into this shape before saving. If the graph still fails final validation after the normalization pass, save it with warnings but mark dashboard auto-launch as skipped.
2. Write the assembled graph to `$PROJECT_ROOT/.understand-anything/intermediate/assembled-graph.json`.

3. Check `$ARGUMENTS` for the `--review` flag, then run the appropriate validation path:
### Default path (no `--review`): inline deterministic validation

Write the following Node.js script to `$PROJECT_ROOT/.understand-anything/tmp/ua-inline-validate.cjs`:

```js
#!/usr/bin/env node
const fs = require('fs');
const graphPath = process.argv[2];
const outputPath = process.argv[3];
try {
  const graph = JSON.parse(fs.readFileSync(graphPath, 'utf8'));
  const issues = [], warnings = [];
  if (!Array.isArray(graph.nodes)) { issues.push('graph.nodes is missing or not an array'); graph.nodes = []; }
  if (!Array.isArray(graph.edges)) { issues.push('graph.edges is missing or not an array'); graph.edges = []; }
  const nodeIds = new Set();
  const seen = new Map();
  graph.nodes.forEach((n, i) => {
    if (!n.id) { issues.push(`Node[${i}] missing id`); return; }
    if (!n.type) issues.push(`Node[${i}] '${n.id}' missing type`);
    if (!n.name) issues.push(`Node[${i}] '${n.id}' missing name`);
    if (!n.summary) issues.push(`Node[${i}] '${n.id}' missing summary`);
    if (!n.tags || !n.tags.length) issues.push(`Node[${i}] '${n.id}' missing tags`);
    if (seen.has(n.id)) issues.push(`Duplicate node ID '${n.id}' at indices ${seen.get(n.id)} and ${i}`);
    else seen.set(n.id, i);
    nodeIds.add(n.id);
  });
  graph.edges.forEach((e, i) => {
    if (!nodeIds.has(e.source)) issues.push(`Edge[${i}] source '${e.source}' not found`);
    if (!nodeIds.has(e.target)) issues.push(`Edge[${i}] target '${e.target}' not found`);
  });
  const fileLevelTypes = new Set(['file', 'config', 'document', 'service', 'pipeline', 'table', 'schema', 'resource', 'endpoint']);
  const fileNodes = graph.nodes.filter(n => fileLevelTypes.has(n.type)).map(n => n.id);
  const assigned = new Map();
  if (!Array.isArray(graph.layers)) { if (graph.layers) warnings.push('graph.layers is not an array'); graph.layers = []; }
  if (!Array.isArray(graph.tour)) { if (graph.tour) warnings.push('graph.tour is not an array'); graph.tour = []; }
  graph.layers.forEach(layer => {
    (layer.nodeIds || []).forEach(id => {
      if (!nodeIds.has(id)) issues.push(`Layer '${layer.id}' refs missing node '${id}'`);
      if (assigned.has(id)) issues.push(`Node '${id}' appears in multiple layers`);
      assigned.set(id, layer.id);
    });
  });
  fileNodes.forEach(id => {
    if (!assigned.has(id)) issues.push(`File node '${id}' not in any layer`);
  });
  graph.tour.forEach((step, i) => {
    (step.nodeIds || []).forEach(id => {
      if (!nodeIds.has(id)) issues.push(`Tour step[${i}] refs missing node '${id}'`);
    });
  });
  const withEdges = new Set([...graph.edges.map(e => e.source), ...graph.edges.map(e => e.target)]);
  graph.nodes.forEach(n => {
    if (!withEdges.has(n.id)) warnings.push(`Node '${n.id}' has no edges (orphan)`);
  });
  const stats = {
    totalNodes: graph.nodes.length,
    totalEdges: graph.edges.length,
    totalLayers: graph.layers.length,
    tourSteps: graph.tour.length,
    nodeTypes: graph.nodes.reduce((a, n) => { a[n.type] = (a[n.type] || 0) + 1; return a; }, {}),
    edgeTypes: graph.edges.reduce((a, e) => { a[e.type] = (a[e.type] || 0) + 1; return a; }, {})
  };
  fs.writeFileSync(outputPath, JSON.stringify({ issues, warnings, stats }, null, 2));
  process.exit(0);
} catch (err) {
  process.stderr.write(err.message + '\n');
  process.exit(1);
}
```
Execute it:

```bash
node $PROJECT_ROOT/.understand-anything/tmp/ua-inline-validate.cjs \
  "$PROJECT_ROOT/.understand-anything/intermediate/assembled-graph.json" \
  "$PROJECT_ROOT/.understand-anything/intermediate/review.json"
```

If the script exits non-zero, read stderr, fix the script, and retry once.
### `--review` path: full LLM reviewer

If `--review` IS in `$ARGUMENTS`, dispatch a subagent using the graph-reviewer agent definition (at `agents/graph-reviewer.md`). Append the following additional context:

```
Additional context from main session:
Phase 1 scan results (file inventory):
[list of {path, sizeLines} from scan-result.json]
Phase warnings/errors accumulated during analysis:
- [list any batch failures, skipped files, or warnings from Phases 2-5]
Cross-validate: every file in the scan inventory should have a corresponding node in the graph (node ID prefixes may vary: file:, config:, document:, service:, pipeline:, table:, schema:, resource:, endpoint:). Flag any missing files. Also flag any graph nodes whose filePath doesn't appear in the scan inventory.
```

Pass these parameters in the dispatch prompt:

```
Validate the knowledge graph at $PROJECT_ROOT/.understand-anything/intermediate/assembled-graph.json.
Project root: $PROJECT_ROOT
Read the file and validate it for completeness and correctness.
Write output to: $PROJECT_ROOT/.understand-anything/intermediate/review.json
```
4. Read `$PROJECT_ROOT/.understand-anything/intermediate/review.json`.

5. If the `issues` array is non-empty:
   - Review the `issues` list.
   - Apply automated fixes where possible:
     - Remove edges with dangling references
     - Fill missing required fields with sensible defaults (e.g., empty `tags` → `["untagged"]`, empty `summary` → `"No summary available"`)
     - Remove nodes with invalid types
   - Re-run the final graph validation after automated fixes.
   - If critical issues remain after one fix attempt, save the graph anyway, include the warnings in the final report, and mark dashboard auto-launch as skipped.

6. If the `issues` array is empty, proceed to Phase 7.
## Phase 7 — SAVE

1. Write the final knowledge graph to `$PROJECT_ROOT/.understand-anything/knowledge-graph.json`.

2. Write metadata to `$PROJECT_ROOT/.understand-anything/meta.json`:

```json
{
  "lastAnalyzedAt": "<ISO 8601 timestamp>",
  "gitCommitHash": "<commit hash>",
  "version": "1.0.0",
  "analyzedFiles": <number of files analyzed>
}
```

2.5. Generate structural fingerprints for all analyzed files and save to `$PROJECT_ROOT/.understand-anything/fingerprints.json`. This creates the baseline for future automatic incremental updates. Write and execute a Node.js script that uses the core fingerprint module (tree-sitter-based, not regex):

```js
import { buildFingerprintStore, saveFingerprints } from '@understand-anything/core';

const store = await buildFingerprintStore('<PROJECT_ROOT>', sourceFilePaths);
saveFingerprints('<PROJECT_ROOT>', store);
```

Here `sourceFilePaths` is the list of all analyzed source file paths from Phase 1. This uses the same tree-sitter analysis pipeline as the main fingerprint engine, ensuring the baseline matches the comparison logic used during auto-updates.
3. Clean up intermediate files:

```bash
rm -rf $PROJECT_ROOT/.understand-anything/intermediate
rm -rf $PROJECT_ROOT/.understand-anything/tmp
```

4. Report a summary to the user containing:
   - Project name and description
   - Files analyzed / total files (with breakdown by fileCategory: code, config, docs, infra, data, script, markup)
   - Nodes created (broken down by type: file, function, class, config, document, service, table, endpoint, pipeline, schema, resource)
   - Edges created (broken down by type)
   - Layers identified (with names)
   - Tour steps generated (count)
   - Any warnings from the reviewer
   - Path to the output file: `$PROJECT_ROOT/.understand-anything/knowledge-graph.json`

5. Only launch the dashboard automatically (by invoking the `/understand-dashboard` skill) if final graph validation passed after normalization/review fixes. If final validation did not pass, report that the graph was saved with warnings and dashboard launch was skipped.
## Error Handling

- If any subagent dispatch fails, retry once with the same prompt plus additional context about the failure. If it fails a second time, skip that phase and continue with partial results.
- Track all warnings and errors from each phase in a `$PHASE_WARNINGS` list. When using `--review`, pass this list to the graph-reviewer in Phase 6. On the default path, include accumulated warnings in the Phase 7 final report.
- ALWAYS save partial results — a partial graph is better than no graph.
- Report any skipped phases or errors in the final summary so the user knows what happened.
- NEVER silently drop errors. Every failure must be visible in the final report.
## Reference: KnowledgeGraph Schema

### Node Types (13 total)

| Type | Description | ID Convention |
|---|---|---|
| `file` | Source code file | `file:<relative-path>` |
| `function` | Function or method | |
| `class` | Class, interface, or type | |
| `module` | Logical module or package | |
| `concept` | Abstract concept or pattern | |
| `config` | Configuration file (YAML, JSON, TOML, env) | `config:<relative-path>` |
| `document` | Documentation file (Markdown, RST, TXT) | `document:<relative-path>` |
| `service` | Deployable service definition (Dockerfile, K8s) | `service:` prefix |
| `table` | Database table or migration | `table:` prefix |
| `endpoint` | API endpoint or route definition | `endpoint:` prefix |
| `pipeline` | CI/CD pipeline configuration | `pipeline:` prefix |
| `schema` | Schema definition (GraphQL, Protobuf, Prisma) | `schema:` prefix |
| `resource` | Infrastructure resource (Terraform, CloudFormation) | `resource:` prefix |
### Edge Types (26 total)
| Category | Types |
|---|---|
| Structural | , , , , |
| Behavioral | , , , |
| Data flow | , , , |
| Dependencies | , , |
| Semantic | , |
| Infrastructure | , , , |
| Schema/Data | , , , |
### Edge Weight Conventions
| Edge Type | Weight |
|---|---|
| 1.0 |
, | 0.9 |
, , | 0.8 |
, , | 0.7 |
, , | 0.6 |
, , , , | 0.5 |
| All others | 0.5 (default) |