# dotnet-graphify-dotnet
Use `graphify-dotnet` to generate codebase knowledge graphs, architecture snapshots, and exportable repository maps from .NET or polyglot source trees, with optional AI-enriched semantic relationships.
## Install

### Source · Clone the upstream repo

```shell
git clone https://github.com/managedcode/dotnet-skills
```

### Claude Code · Install into `~/.claude/skills/`

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/managedcode/dotnet-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/catalog/Tools/Graphify/skills/dotnet-graphify-dotnet" ~/.claude/skills/managedcode-dotnet-skills-dotnet-graphify-dotnet && rm -rf "$T"
```

Manifest: `catalog/Tools/Graphify/skills/dotnet-graphify-dotnet/SKILL.md`
# graphify-dotnet

## Trigger On

- running `graphify`, `graphify run`, `graphify watch`, `graphify benchmark`, or `graphify config`
- generating `graph.json`, `graph.html`, `graph.svg`, `graph.cypher`, `GRAPH_REPORT.md`, or `obsidian/`/`wiki/` exports
- building onboarding maps, architecture snapshots, or dependency-discovery artifacts from a repository
- choosing between AST-only extraction and AI-enriched semantic extraction
- pushing graph output into Neo4j, Obsidian, wiki docs, or CI artifacts
## Workflow

- Confirm the problem is structural discovery, architecture review, onboarding, or graph export. If the user only needs one symbol lookup, one bug fix, or one dependency trace, normal repo search and tests are cheaper than a full graph run.
- Install and verify the tool before doing anything else:

  ```shell
  dotnet --version
  dotnet tool install -g graphify-dotnet
  graphify --version
  ```

- Start with a bounded AST-only run so the first output is fast and deterministic:

  ```shell
  graphify run ./src --format json,html,report --provider none --verbose
  ```

- Review outputs in this order:
  - `GRAPH_REPORT.md` for quick signal
  - `graph.html` for visual exploration
  - `graph.json` for scripting and downstream tooling
- Add AI enrichment only when inferred relationships or conceptual grouping matter more than strict syntax-only structure.
- Expand export formats for the real consumer:
  - `svg` for static docs and PRs
  - `neo4j` for graph queries
  - `obsidian`/`wiki` for knowledge-base or onboarding flows
- Use `watch` for iterative architecture work, but rerun a clean `run` periodically because deletes and renames can leave stale references behind.
- Run `benchmark` only after you already trust the generated `graph.json`; its value is comparative token-reduction evidence, not billing-grade accounting.
## Architecture

```mermaid
flowchart LR
    A["Repository or subtree"] --> B["graphify run / watch"]
    B --> C{"AI provider configured?"}
    C -->|No| D["AST extraction only"]
    C -->|Yes| E["AST + semantic extraction"]
    D --> F["Knowledge graph + Louvain communities"]
    E --> F
    F --> G{"Output target"}
    G -->|Human review| H["graph.html + GRAPH_REPORT.md"]
    G -->|Automation| I["graph.json"]
    G -->|Static docs| J["graph.svg"]
    G -->|Knowledge base| K["obsidian/ or wiki/"]
    G -->|Graph queries| L["graph.cypher for Neo4j"]
```
## Practical Recipes

### Write a quick architecture snapshot

```shell
graphify run . --format html,report --output ./artifacts/graph
```

Use this when you need a fast human-readable map of the current repo. Read `./artifacts/graph/GRAPH_REPORT.md` first, then open `./artifacts/graph/graph.html`.

### Write queryable and documentation exports

```shell
graphify run ./src --format json,neo4j,svg,obsidian,wiki --output ./graphify-out
```

Use this when the graph will be consumed by scripts, Neo4j, docs, or knowledge-base tooling instead of only a browser.

### Read and benchmark an existing graph

```shell
graphify benchmark ./graphify-out/graph.json
```

Treat this as a heuristic efficiency check for AI-context workflows after the graph already exists.
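The benchmark report is built on heuristic token estimation, so a rough mental model helps when reading its numbers. The chars-divided-by-four rule of thumb below shows the flavor of such an estimate; `estimate_tokens` is a hypothetical stand-in, not graphify's actual estimator.

```shell
# Hypothetical stand-in for a heuristic token estimator (~4 chars per
# token); graphify's real estimator is not specified here and may differ.
estimate_tokens() { printf %s "$1" | wc -c | awk '{print int($1 / 4)}'; }

estimate_tokens "public class OrderService { }"   # prints 7
```

Numbers produced this way are only comparable against each other, which is exactly why the benchmark output is directional rather than billing-grade.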
## Provider Choice

- `none`: best first run, deterministic, fast, no external dependencies
- `ollama`: local and privacy-friendly; good for sensitive code or low-cost experimentation
- `azureopenai`: enterprise-hosted semantic extraction with explicit endpoint, key, and deployment
- `copilotsdk`: lowest-friction option for teams that already authenticate with GitHub Copilot

Choose the provider by operational constraint first, not by model hype:

- privacy or offline requirements: `ollama`
- enterprise Azure governance: `azureopenai`
- fastest setup for existing subscribers: `copilotsdk`
- no semantic extraction required: `none`
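The constraint-to-provider mapping above can be captured as a tiny helper for scripts that wrap graphify. `pick_provider` and its constraint labels are illustrative only, not part of the graphify CLI.

```shell
# Illustrative helper (not part of graphify): map an operational
# constraint to the --provider value suggested above.
pick_provider() {
  case "$1" in
    privacy|offline) echo "ollama" ;;
    azure)           echo "azureopenai" ;;
    copilot)         echo "copilotsdk" ;;
    *)               echo "none" ;;
  esac
}

pick_provider privacy   # prints: ollama
```

A wrapper script could then run, for example, `graphify run ./src --provider "$(pick_provider privacy)"`.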
## Configuration Patterns

graphify resolves settings in this priority order:

- CLI arguments
- user secrets
- environment variables
- `appsettings.local.json`
- `appsettings.json`

Use `graphify config` for the interactive wizard and `graphify config show` to inspect the resolved effective settings.

Common environment-variable patterns:

```shell
# AST-only explicit override
export GRAPHIFY__Provider=None

# Ollama
export GRAPHIFY__Provider=Ollama
export GRAPHIFY__Ollama__Endpoint=http://localhost:11434
export GRAPHIFY__Ollama__ModelId=llama3.2

# Azure OpenAI
export GRAPHIFY__Provider=AzureOpenAI
export GRAPHIFY__AzureOpenAI__Endpoint=https://myresource.openai.azure.com/
export GRAPHIFY__AzureOpenAI__ApiKey=...
export GRAPHIFY__AzureOpenAI__DeploymentName=gpt-4o

# GitHub Copilot SDK
export GRAPHIFY__Provider=CopilotSdk
export GRAPHIFY__CopilotSdk__ModelId=gpt-4.1
```
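The double underscores in those variable names follow the standard .NET configuration convention: `__` in an environment-variable name maps to the `:` hierarchy separator in the resolved configuration key. A one-liner makes the mapping visible:

```shell
# .NET config treats "__" in env-var names as the ":" section separator,
# so this variable resolves as the hierarchical key GRAPHIFY:Ollama:Endpoint.
echo "GRAPHIFY__Ollama__Endpoint" | sed 's/__/:/g'   # prints GRAPHIFY:Ollama:Endpoint
```

This is why the variables above line up with nested sections you would otherwise write in `appsettings.json`.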
## Tradeoffs And Constraints
- AST-only mode is reliable for structural facts such as files, classes, methods, and imports, but it will not infer conceptual links that are absent from syntax.
- AI enrichment produces richer graphs but adds latency, provider setup, quota or subscription concerns, and privacy review.
- `watch` mode is an inner-loop accelerator, not a perfect source of truth. Deleted files are not fully removed from the graph until a clean rebuild, and renames can temporarily duplicate nodes.
- `graph.html` is great for quick inspection, but large graphs can render slowly and some browsers block `file://` loading. Serve the output folder locally if the page renders blank.
- graphify respects `.gitignore`, so an empty graph can be a path-selection problem instead of a parser failure.
- `benchmark` is approximate. The source uses heuristic token estimation, so treat the numbers as directional rather than invoice-grade.
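Because `.gitignore` filtering can silently empty a graph, a cheap non-empty check is worth running before scripting against `graph.json`. The schema below is a made-up minimal shape for illustration; inspect the real export before relying on any field names.

```shell
# Made-up minimal graph.json shape for illustration; the real export
# schema may differ. Crude check that at least one node was extracted.
cat > /tmp/graph.json <<'EOF'
{"nodes": [{"id": "OrderService"}, {"id": "OrderRepository"}],
 "edges": [{"from": "OrderService", "to": "OrderRepository"}]}
EOF

grep -o '"id"' /tmp/graph.json | wc -l   # rough node count: 2
```

A count of zero usually points at path selection or `.gitignore`, not at a parser failure.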
## Deliver

- a justified choice of AST-only vs AI-enriched extraction
- concrete `graphify` commands for the repo, folder, or output consumer
- the right export-format set for humans, docs, scripts, or graph databases
- configuration guidance that fits the chosen provider and operating model
- a validation path for the produced graph artifacts
## Validate

- `dotnet --version` shows a .NET 10 SDK
- `graphify --version` resolves after installation
- `graphify run <path> --format json,html,report -v` completes without provider or path errors
- the output folder contains the expected artifacts for the selected formats
- `graphify config show` reflects the intended provider configuration when AI enrichment is enabled
- `graphify benchmark <graph.json>` runs only after a real graph file exists
## Load References

- `references/source-map.md`: upstream repository and docs map with direct links to the README, CLI docs, provider setup guides, sample project, and export-format docs
- `references/usage-and-operations.md`: practical commands, provider setup patterns, export selection, watch-mode behavior, troubleshooting, and benchmark caveats