Skills llm-wiki

Install

Source · Clone the upstream repo:
git clone https://github.com/infranodus/skills

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/infranodus/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skill-llm-wiki" ~/.claude/skills/infranodus-skills-llm-wiki && rm -rf "$T"

Manifest: skill-llm-wiki/SKILL.md

Source content

LLM Wiki Setup Assistant

A guided, multi-phase workflow for designing and scaffolding a personal LLM-maintained wiki. The core idea: instead of re-deriving knowledge from raw documents on every query (like RAG), the LLM incrementally builds a persistent wiki — extracting, cross-referencing, and synthesizing knowledge once, then keeping it current as new sources arrive. The wiki compounds over time. The human curates sources and asks questions; the LLM does all the bookkeeping.

Preamble (run first)

_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"

Architecture (for context)

Every LLM Wiki has four layers:

  1. Raw sources — immutable collection of source documents (articles, papers, transcripts, images). The LLM reads but never modifies these. Establish with the user what the scope of the raw sources is for a particular project.
  2. The wiki — LLM-generated markdown files (summaries, entity pages, concept pages, comparisons, synthesis). The LLM owns this layer entirely. Create the /wiki folder (or a user-defined one) in the project's folder if it doesn't exist yet.
  3. The output — where you store the output of your interactions with the user for a particular project. Create it in the project's /output folder (or a user-defined one) if it doesn't exist yet.
  4. The schema — a configuration document (CLAUDE.md for Claude, AGENTS.md for Codex) that tells the LLM how the wiki is structured, what conventions to follow, and what workflows to use. Co-evolved by the user and the LLM over time. Create both files so the folder is compatible with most LLMs.

Phase Overview

1. DISCOVER    -> What domain? What's the goal? What sources?
2. SCOPE       -> How big? How deep? What outputs matter?
3. STRUCTURE   -> Directory layout, page types, naming conventions - see architecture above
4. SCHEMA      -> Write CLAUDE.md and AGENTS.md in the local folder where the skill is invoked
5. WORKFLOWS   -> Define ingest, query, and lint operations
6. TOOLING     -> Obsidian plugins, InfraNodus tools for gap analysis, research, and text optimization, CLI tools, search, git
7. SCAFFOLD    -> Create the directory structure and starter files
8. FIRST RUN   -> Ingest the first source together as a test drive
9. PLAN        -> Analyze gaps, prioritize research directions, create actionable todos

Phase 1: DISCOVER — What Are You Building This For?

Start by understanding the user's domain and motivation. Ask conversationally — 2-3 questions max per message using the AskUserQuestion tool.

Core Questions

  • What domain or topic is this wiki for? Get specific. Not just "research" but "competitive analysis of AI coding tools" or "tracking my health and psychology over time" or "reading notes for a political philosophy course."

  • What kinds of sources will you be feeding it? Examples:

    • Academic papers (PDFs, arXiv links)
    • Web articles and blog posts
    • YouTube videos / podcast transcripts
    • Meeting notes / Slack threads
    • Books (chapter by chapter)
    • Journal entries / personal notes
    • Data files (CSVs, JSON)
    • Images, screenshots, diagrams
  • What's your end goal? What does success look like?

    • "I want to deeply understand topic X and develop an original thesis"
    • "I want a living reference I can query months from now"
    • "I want to track how my understanding evolves over time"
    • "I want to produce a report / paper / presentation at the end"
    • "I want a structured record of everything I've read on this topic"
  • Are you starting fresh or do you already have sources? If they have existing material, understand the volume and format.

  • Who else will use this? Just the user, or a team? This affects structure and access conventions.

Contextual Probes

Based on the domain, ask domain-specific questions using the AskUserQuestion tool:

  • Personal/self-improvement: What aspects are you tracking? (health, goals, psychology, habits, relationships) Do you journal regularly? What format?
  • Research: What's your current level of expertise? Are you exploring broadly or going deep on a specific question? Is there a deadline?
  • Book reading: One book or a reading list? Fiction or non-fiction? What do you want to get out of it?
  • Business/team: What's the knowledge problem you're solving? Who generates the sources? Who consumes the wiki?
  • Course/learning: What course? What's the structure? Lectures, readings, problem sets?

Don't overwhelm. Gather enough to move to Phase 2. You can refine as you go.


Phase 2: SCOPE — How Big and How Deep?

Now calibrate the wiki's scale and depth. This determines how much structure to build.

Scale Assessment

Ask the user to estimate:

  • Source volume: How many sources do you expect to add? (5-10? 50-100? 500+?)
  • Timeframe: Over what period? (one weekend sprint? months of ongoing work?)
  • Session frequency: How often will you work with it? (daily? weekly? sporadic bursts?)

Depth Assessment

  • Entity tracking: Do you need pages for individual entities (people, organizations, products, concepts)? Or is topic-level granularity enough?
  • Chronological tracking: Does time matter? (e.g., tracking how a company's strategy evolved, or how your health changed over months)
  • Contradictions and debates: Is tracking disagreement between sources important? (critical for research, less so for course notes)
  • Quantitative data: Will there be numbers, metrics, data to track? Or is it primarily qualitative?

Output Needs

  • What formats will you want to extract from the wiki?
    • Markdown pages (default — always)
    • Comparison tables
    • Slide decks (Marp)
    • Charts / visualizations
    • Structured data (YAML frontmatter, Dataview queries)
    • Exportable reports

Tier Classification

Based on answers, classify the wiki into a tier (share this with the user):

| Tier   | Sources | Entities | Duration     | Example |
|--------|---------|----------|--------------|---------|
| Light  | 5-20    | Few/none | Days-weeks   | Reading a single book, trip planning |
| Medium | 20-100  | Dozens   | Weeks-months | Research project, course notes, competitive analysis |
| Heavy  | 100+    | Hundreds | Months-years | Ongoing team wiki, long-term research program, personal life wiki |

The tier determines how much indexing infrastructure, how many page types, and how formal the schema needs to be.


Phase 3: STRUCTURE — Design the Directory Layout

Based on Phases 1-2, propose a directory structure. Present it to the user and iterate.

Base Template

Every wiki has at least:

wiki-name/
  raw/                    # Immutable source documents
    assets/               # Downloaded images, PDFs
  wiki/                   # LLM-generated pages (the wiki itself)
    index.md              # Content catalog — what's in the wiki
    log.md                # Chronological record of operations
    overview.md           # High-level synthesis of everything
  output/                 # Folder for output of the interactions
  todos/                  # Research priorities and actionable task lists
  CLAUDE.md               # Schema — instructions for the LLM
  AGENTS.md               # Schema - instructions for the LLM (Codex-compatible)

Knowledge Graphs

All ontology/knowledge-graph files are stored in a single infranodus/ folder at the project root (a sibling of wiki/, raw/, etc.). This folder has no subfolders: all graph files live flat in infranodus/. This is a core part of the wiki workflow, not optional.

Ontology Generation Workflow

  1. When to generate: After creating or significantly updating pages in any wiki folder (systems/, concepts/, connections/, sources/, questions/, etc.)

  2. How to generate: Use the ontology-creator skill (invoke via /ontology-creator or the Skill tool) to generate an ontology from the content of all files in that folder. The ontology must use [[wikilinks]] syntax with [relationCode] tags as specified by the skill.

  3. What to feed: Read all .md files in the folder, combine their content (stripping YAML frontmatter), and pass the combined text to the ontology-creator skill. The skill will extract entities and relationships in [[wikilinks]] format.

  4. Where to save: Save the generated ontology as <folder-name>-ontology.md inside the infranodus/ folder at the project root. For example:

    • infranodus/systems-ontology.md
    • infranodus/concepts-ontology.md
    • infranodus/connections-ontology.md
    • infranodus/sources-ontology.md
    • infranodus/full-wiki-ontology.md (for the whole wiki combined)
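Step 3 (combining a folder's pages and stripping frontmatter) can be sketched in shell. This is a minimal version that assumes frontmatter is delimited by `---` lines at the top of each file; the function name is illustrative, not part of the skill:

```shell
#!/usr/bin/env sh
# Combine all .md files in a wiki folder into one text blob for the
# ontology-creator skill, dropping each file's leading YAML frontmatter.
combine_for_ontology() {
  dir="$1"; out="$2"
  : > "$out"
  for f in "$dir"/*.md; do
    [ -f "$f" ] || continue
    # Skip a leading frontmatter block (--- ... ---); keep everything else.
    awk 'NR==1 && /^---$/ {skip=1; next} skip && /^---$/ {skip=0; next} !skip' "$f" >> "$out"
    printf '\n' >> "$out"
  done
}

combine_for_ontology wiki/concepts concepts-combined.txt
```

The combined file can then be passed to the ontology-creator skill as described above.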

CRITICAL: Incremental Updates, Never Full Rewrites

NEVER regenerate ontology files from scratch. Ontology files are curated artifacts that accumulate human-reviewed knowledge over time. They contain specific phrasings, relationship nuances, and domain-specific insights that cannot be automatically reconstructed from source pages alone.

Adding new relations

When updating ontologies after new sources are ingested:

  • READ the existing ontology file FIRST — understand its format, style, and content
  • APPEND new lines at the end — add only lines covering genuinely new content from the new sources
  • Match the existing format exactly — same casing conventions, same
    [relationCode]
    tag style, same entity naming patterns
  • If delegating to sub-agents: include the existing file content (or its path) in the prompt, explicitly instruct "READ FIRST, then APPEND ONLY, do not rewrite", and verify the diff afterward

Removing or modifying existing relations

Removal and modification of existing lines IS allowed when there is a clear reason:

  • Factually wrong: A relation contradicts the current wiki content (e.g., a source was reinterpreted, a claim was debunked by newer evidence)
  • Superseded: A newer, more precise relation replaces a vague or incomplete one — remove the old line and add the improved version
  • Duplicate: Two lines say the same thing with slightly different wording — keep the better one
  • Stale: A relation references content that was removed from the wiki (e.g., a source was deleted, a concept was merged into another)

When removing or modifying, briefly note the reason in the commit message or log so the change is traceable.

What is NOT allowed: wholesale regeneration that replaces all lines with freshly generated content. The default operation is always append. Removal is a deliberate, line-by-line editorial decision.

Why this matters

A full rewrite loses:

  • Relationship type tags ([isA], [causes], etc.) that carry semantic meaning
  • Specific nuanced phrasings (e.g., "[[choreographed routine]] is still [[periodic]] even on complex terrain")
  • Entity casing and naming conventions established by the ontology-creator skill
  • Content that came from personal observations not derivable from wiki pages alone

  5. InfraNodus analysis: After generating each ontology, feed it to InfraNodus using the generate_knowledge_graph tool with modifyAnalyzedText: 'none' (since entities are already marked with [[wikilinks]]). This returns cluster structure, content gaps, key concepts, and diversity metrics.

  6. Save analysis results: Save the InfraNodus analysis output (clusters, gaps, key concepts, diversity score) to the output/ folder as <folder-name>-knowledge-graph-analysis.md. Include:

    • Graph statistics (nodes, edges, modularity, diversity)
    • Topical clusters with their influence percentages
    • Content gaps between clusters
    • Key concepts and gateway nodes
    • Recommendations for improving coverage

  7. Act on gaps: Use the identified content gaps to create new question pages, suggest missing sources, or flag areas where the wiki needs development.

If the ontology-creator skill is not available, ask the user to install it from https://github.com/infranodus/skills.

Page Types to Consider

Propose page types based on the domain. Common ones:

| Page Type | When to Include | Example |
|---|---|---|
| Source summaries | Always | sources/article-name.md — summary + key takeaways |
| Entity pages | Medium+ tier, or when tracking people/orgs/products | entities/company-name.md |
| Concept pages | When building conceptual understanding | concepts/market-efficiency.md |
| Comparison pages | When comparing things is core to the domain | comparisons/tool-a-vs-tool-b.md |
| Timeline pages | When chronology matters | timelines/project-history.md |
| Question pages | Research-heavy wikis | questions/why-did-x-happen.md |
| Thesis/argument pages | When developing original analysis | thesis/main-argument.md |
| Data pages | When tracking quantitative information | data/metrics-dashboard.md |
| Log entries | Always (append-only) | Entries in log.md |

Naming Conventions

Propose and confirm with the user:

  • File naming: kebab-case (market-analysis.md) vs other conventions
  • Wikilinks: [[page-name]] style for cross-references (Obsidian-compatible)
  • Frontmatter: What YAML fields? (title, date, tags, source_count, status)
  • Date format: ISO 8601 (2026-04-08) recommended
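To make these conventions concrete, here is a hedged example of a source-summary stub following them (the filename, field names, and values are illustrative and should be agreed with the user), e.g. sources/market-analysis.md:

```markdown
---
title: Market Analysis
date: 2026-04-08
tags: [source]
status: ingested
---

# Market Analysis

Summary and key takeaways, with cross-references like [[market-efficiency]].
```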

Present and Iterate

Show the proposed structure as a tree diagram. Ask:

  • "Does this capture the categories you need?"
  • "Any page types missing for your domain?"
  • "Do you want to add/remove any directories?"

Phase 4: SCHEMA — Write the Configuration Document

This is the most important phase. The schema (CLAUDE.md / AGENTS.md) is what turns a generic LLM into a disciplined wiki maintainer.

Determine Which Schema File

  • Claude Code: CLAUDE.md
  • OpenAI Codex: AGENTS.md
  • Other agents: Ask the user what their agent uses for system instructions

Schema Sections to Include

Write the schema document with these sections, tailored to the user's domain:

1. Project Overview

  • One paragraph describing what this wiki is, what domain it covers, and its purpose.

2. Directory Structure

  • Document the agreed structure from Phase 3. Explain what goes where.

3. Page Templates

  • For each page type, provide a template with:
    • Required YAML frontmatter fields
    • Section structure (what headings to use)
    • Content guidelines (what to include, what level of detail)
    • Cross-referencing rules (when to create wikilinks)

4. Ingest Workflow

  • Step-by-step instructions for when a new source is added:
    1. Read the source
    2. Discuss key takeaways with the user (optional — based on user preference)
    3. Create a source summary page
    4. Update or create entity/concept pages
    5. Update the index
    6. Update the overview if the new source significantly changes the picture
    7. Append to the log
    8. Flag any contradictions with existing wiki content

5. Query Workflow

  • How to answer questions against the wiki:
    1. Read the index to find relevant pages
    2. Read the relevant pages
    3. Synthesize an answer with citations to wiki pages
    4. Optionally: file the answer as a new wiki page if it's valuable

6. Lint Workflow

  • Periodic health checks:
    • Find contradictions between pages
    • Find stale claims superseded by newer sources
    • Find orphan pages (no inbound links)
    • Find concepts mentioned but lacking their own page
    • Find missing cross-references
    • Suggest new questions to investigate
    • Suggest sources to look for
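The orphan-page check can be approximated with grep. A sketch, assuming kebab-case filenames match [[wikilink]] targets (a heuristic, not a full link resolver; the function name is illustrative):

```shell
#!/usr/bin/env sh
# List wiki pages with no inbound [[wikilinks]] from other pages.
find_orphans() {
  dir="$1"
  for f in "$dir"/*.md; do
    [ -f "$f" ] || continue
    name=$(basename "$f" .md)
    # A page is an orphan when no *other* page links to it.
    if ! grep -rlF -- "[[$name]]" "$dir" | grep -qv "^$f\$"; then
      echo "orphan: $f"
    fi
  done
}

find_orphans wiki
```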

7. Conventions

  • Tone and voice (academic? casual? technical?)
  • Citation style (inline links? footnotes? source page references?)
  • How to handle uncertainty and contradictions
  • When to create a new page vs update an existing one
  • When to flag something for user review vs handle autonomously

Present and Iterate

Show the user the draft schema. This is the document they'll live with, so get it right. Ask:

  • "Does the ingest workflow match how you want to work? Some people prefer to stay involved at every step; others want to batch-ingest with minimal supervision."
  • "Any conventions you want to add or change?"
  • "How much autonomy should the LLM have? Should it create new entity pages automatically, or always ask first?"

Phase 5: WORKFLOWS — Define the Operations

Flesh out the three core operations based on user preferences.

Ingest Preferences

Ask with AskUserQuestion tool:

  • Interactive or batch? "Do you want to discuss each source as it's ingested, or just tell me to process a batch and review the results?"
  • Depth of summaries: "How detailed should source summaries be? A paragraph? A full page? Depends on the source?"
  • Auto-create entities? "Should I automatically create pages for new entities I encounter, or ask you first?"
  • Image handling: "Will your sources contain images? Should I download them locally?" (If yes, configure Obsidian's attachment folder)

Query Preferences

Ask using the AskUserQuestion tool:

  • Filing answers: "When you ask a question and get a good answer, should I automatically file it as a wiki page, ask first, or never?"
  • Output formats: "Do you want answers as plain text, as new markdown pages, as tables, or should I ask each time?"
  • Citation style: "How should I cite sources in answers? Link to the wiki summary page? Link to the original source? Both?"

Lint Preferences

Ask using the AskUserQuestion:

  • Frequency: "Should I suggest a lint pass after every N ingests? Or only when you ask?"
  • Scope: "Should lint be comprehensive (check everything) or focused (only check recently changed pages)?"
  • Auto-fix: "Should I fix minor issues (broken links, missing cross-refs) automatically, or list them for your review?"

Document the Workflows

Add the agreed workflows to the schema document with enough detail that the LLM can follow them in future sessions without re-asking these questions.


Phase 6: TOOLING — Set Up the Environment

Based on the user's setup, recommend and configure tools.

Essential: File Viewer

  • Obsidian (recommended): Markdown editor with graph view, wikilinks, and plugins.

    • Configure: Attachment folder path for images
    • Recommend plugins based on needs:
      • Dataview: If using YAML frontmatter for structured queries
      • Marp Slides: If generating presentations
      • Graph View: Built-in, but call attention to it for wiki navigation
      • Obsidian Web Clipper: Browser extension for capturing web articles as markdown
      • InfraNodus AI Graph View: Advanced knowledge graph visualization and analysis of the pages' content and connections between the pages
  • VS Code / other editor: Works fine, just loses graph view and wikilink navigation.

  • InfraNodus: Content gap analysis, insight generation, and knowledge-graph analysis and optimization via the InfraNodus MCP server tools, or via MCPorter as described at https://infranodus.com/mcp/deploy-mcporter. Ask the user to set up an InfraNodus API key and configure your environment so you can access that key when needed without saving it to the conversation or the wiki.

Optional: Search

Assess search needs based on tier:

  • Light tier: Index file is sufficient. No additional tooling needed.
  • Medium tier: Index file works, but suggest they revisit if it gets slow. Mention qmd as an option.
  • Heavy tier: Recommend setting up qmd or a similar local search tool from the start. Offer to help configure it.

Optional: Version Control

  • Git: Recommend initializing the wiki as a git repo. Free version history, branching, collaboration. Do it for the user if they agree.
  • Offer to set up .gitignore (exclude .obsidian/workspace.json and other ephemeral Obsidian files).
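A starter .gitignore along those lines. The first entry is the one named above; the others are illustrative additions (the .plan/ entry anticipates the optional /actionize integration in Phase 9):

```
.obsidian/workspace.json
.obsidian/workspace-mobile.json
.plan/
.DS_Store
```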

Optional: CLI Tools

For power users or heavy-tier wikis, offer to build simple helper scripts:

  • Search script (grep/ripgrep wrapper for the wiki)
  • Stats script (page count, word count, orphan detection)
  • Ingest helper (moves a file to raw/ and kicks off the ingest workflow)
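A minimal stats helper along these lines might look like the following sketch (page and word counts only; the function name is illustrative):

```shell
#!/usr/bin/env sh
# Quick wiki stats: page count and total word count for a folder.
wiki_stats() {
  pages=$(find "$1" -name '*.md' 2>/dev/null | wc -l)
  words=$(find "$1" -name '*.md' -exec cat {} + 2>/dev/null | wc -w)
  echo "pages: $pages"
  echo "words: $words"
}

wiki_stats wiki
```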

Ask what the user already has installed and what they're comfortable with. Don't over-engineer the tooling for light-tier wikis.


Phase 7: SCAFFOLD — Create the Directory Structure

Now build it. Create the agreed directory structure with starter files.

Create directories and files:

  1. Directory tree — create all agreed directories
  2. CLAUDE.md / AGENTS.md — the schema document from Phase 4
  3. index.md — empty index with the agreed format and section headers
  4. log.md — initialized with a first entry:
    ## [YYYY-MM-DD] init | Wiki created
  5. overview.md — a placeholder noting the wiki's purpose and that it will be populated as sources are ingested
  6. Page templates — optionally create example template files in a _templates/ directory for reference
  7. .gitignore — if git was chosen
  8. Initialize git repo — if git was chosen
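Steps 1-8 can be sketched as a shell script, assuming the base template from Phase 3 and a placeholder wiki name my-wiki (the schema content itself is drafted in Phase 4; starter text here is minimal):

```shell
#!/usr/bin/env sh
# Scaffold the base wiki structure. "my-wiki" is a placeholder name.
WIKI="my-wiki"
mkdir -p "$WIKI/raw/assets" "$WIKI/wiki" "$WIKI/output" "$WIKI/todos" "$WIKI/infranodus"

# Starter files with minimal content.
printf '# Index\n\nPopulated as sources are ingested.\n' > "$WIKI/wiki/index.md"
printf '## [%s] init | Wiki created\n' "$(date +%F)" > "$WIKI/wiki/log.md"
printf '# Overview\n\nPlaceholder; populated as sources are ingested.\n' > "$WIKI/wiki/overview.md"

# Schema files (content drafted with the user in Phase 4).
touch "$WIKI/CLAUDE.md" "$WIKI/AGENTS.md"

# Optional: version control, if git was chosen.
command -v git >/dev/null && git init -q "$WIKI"
printf '.obsidian/workspace.json\n' > "$WIKI/.gitignore"
```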

Present the Result

Show the user the created structure. Walk through each file briefly. Ask:

  • "Does this look right?"
  • "Want to adjust anything before we do the first ingest?"

Phase 8: FIRST RUN — Test Drive with a Real Source

The best way to validate the setup is to use it. Guide the user through their first ingest.

If they have a source ready:

  1. Ask them if they want to copy it to the raw/ folder
  2. Run the full ingest workflow as defined in the schema
  3. Show them the results: the source summary, any entity/concept pages created, the updated index and log
  4. Ask for feedback: "Is this the right level of detail? Too much? Too little? Want me to adjust the format?"

If they don't have a source yet:

  • Suggest they grab a web article related to their domain using Obsidian Web Clipper or by pasting a URL
  • Offer to fetch a relevant article via web search as a demo source
  • Walk through what the ingest would produce

Iterate on the Schema

The first ingest almost always reveals adjustments needed:

  • Page format tweaks
  • Frontmatter field changes
  • Cross-referencing rules that need refining
  • Workflow steps to add or remove

Update the schema based on feedback. This is the beginning of the co-evolution process — the schema will keep improving with use.

Handoff

Close with:

  • A summary of what was built
  • Quick reference for the three core operations (ingest, query, lint)
  • Reminder that the schema is a living document — they should update it whenever they discover a better convention
  • Encourage them to ingest a few more sources to build momentum
  • Suggest running Phase 9 (Plan) once they have 10+ sources ingested

Phase 9: PLAN — Research Direction and Todo Planning

After the wiki has accumulated enough content (typically 10+ sources, or after a significant round of ingestion), help the user step back and plan what to research next. This phase analyzes the wiki's current state — using InfraNodus gap analysis and the wiki's own structure — to produce a prioritized todo list that lives in a todos/ folder at the project root.

This phase can be run at any time, not just during initial setup. It's the natural follow-up whenever the user asks "what should I work on next?" or after a batch of new sources has been ingested.

Step 9.1: Assess Current State

Read the wiki's structural health:

  1. Read wiki/index.md to understand what exists
  2. Read wiki/overview.md for the current synthesis
  3. Check for existing InfraNodus analyses in output/*-knowledge-graph-analysis.md — these contain identified content gaps, cluster structure, and recommendations
  4. Check wiki/questions/ for open research questions
  5. Check wiki/data/ for personal data pages (empty = a gap worth flagging)
  6. Check todos/ for existing todo files (to avoid duplicating or contradicting prior plans)

Summarize the state back to the user: how many sources, what's well-covered, what's thin.

Step 9.2: Identify Priorities

Using the InfraNodus analyses and wiki structure, identify the highest-value work to do next. Prioritize by convergence — gaps flagged by multiple analyses are more important than one-off mentions.

Common priority types:

| Priority Type | Description | Example |
|---|---|---|
| Content gap | Two clusters in the knowledge graph are disconnected — a bridging concept or source is needed | "Criticality ↔ Metastability — no source connects these two frameworks" |
| Weak coverage | A topic has few sources relative to its importance | "Only 1 intervention study across 48 sources" |
| Empty section | A wiki section exists but has no content | "wiki/data/ has no personal data pages" |
| Naming/framework gap | A framework is partially built — some systems have labels/states, others don't | "HRV and movement states named, breathing states missing" |
| Source to find | A specific paper or source type is needed to fill a gap | "Need breathing-specific fractal variability studies" |
| Synthesis needed | Enough raw material exists but no synthesis page connects it | "Three connection pages mention trauma but no unified framework" |

Present the identified priorities as a ranked list. Ask the user via AskUserQuestion:

Here are the top priorities I see. Which ones do you want to work on?

Offer the priorities as multi-select options so the user can pick which ones matter to them. Include an option to add their own priorities.

Step 9.3: Create Todo Files

For each selected priority, create a markdown file in todos/ at the project root.

mkdir -p todos

Todo file format (todos/<priority-slug>.md):

# <Priority Title>

Deadline: <YYYY-MM-DD>

## Tasks

- [ ] <Task description>
      - <Sub-details, context, specific files to update>
      - Deadline: <YYYY-MM-DD>

- [ ] <Task description>
      - <Sub-details>
      - Deadline: <YYYY-MM-DD>

Guidelines for writing todos:

  • Checkboxes (- [ ]) for every actionable item — these render as clickable checkboxes in Obsidian
  • Sub-bullets for context, specific files to touch, or implementation notes
  • Deadlines on each task if the user provided an overall timeline
  • Group by workstream — each todo file is one coherent workstream, not a grab-bag of unrelated tasks
  • Reference wiki pages using [[wikilinks]] where relevant so the user can navigate from the todo to the related content
  • Keep tasks at the right altitude — specific enough to act on ("Find and ingest Chialvo 2010"), not so granular that it's busywork ("Open browser, search for Chialvo 2010, download PDF, move to raw/papers/")
  • Include the "why" — a brief note on why this priority matters (which gap it fills, which analysis flagged it)

Step 9.4: Timeline (Optional)

If the user wants deadlines, ask via AskUserQuestion:

What timeframe are you working with for these priorities?

  • A) 2 weeks — aggressive, daily milestones
  • B) 1 month — comfortable, weekly milestones
  • C) No deadlines — I'll work through these at my own pace
  • D) Custom — I'll specify

If they choose a timeframe, distribute deadlines across the period, respecting task dependencies (e.g., "ingest sources" must come before "write framework that synthesizes them").

Step 9.5: Connect to Actionize (Optional)

Check if the /actionize skill is available (listed in available skills). If it is, ask via AskUserQuestion:

Want to turn these priorities into a tracked plan with Telegram reminders? The /actionize skill can set up daily deadline nudges and progress tracking.

  • A) Yes — run /actionize with these priorities
  • B) No — the todo files are enough

If yes, invoke the /actionize skill (via the Skill tool) and pass a summary of the selected priorities as input. Format the input as:

Priorities from wiki gap analysis:

1. **{Priority title}** — {description}. Tasks: {task list from todo file}. Deadline: {deadline if set}.
2. **{Priority title}** — ...
...

Priority order: #1 → #2 → #3. Schedule with reminders.

The /actionize skill will handle: co-designing the plan with the user, creating .plan/ with status tracking, setting up the Telegram bot + daily cron reminders, and installing the done.sh CLI for marking tasks complete from the terminal.

The two systems are complementary — both should exist:

  • todos/ = the visible, Obsidian-browsable research plan (committed to git, checkboxes in markdown)
  • .plan/ = the reminder/tracking engine with Telegram integration (gitignored, personal, machine-readable status)

If /actionize is not available, mention that the user can install it for Telegram reminders and deadline tracking. The todo files work standalone without it.

Step 9.6: Present the Result

Show the user what was created:

RESEARCH PLAN CREATED
════════════════════════════════════════
Priorities:  {count} workstreams
Todo files:  todos/{list filenames}
Timeline:    {date range or "open-ended"}
════════════════════════════════════════

List each todo file with its task count. Remind the user:

  • Checkboxes are clickable in Obsidian
  • Run Phase 9 again after the next batch of ingestion to refresh priorities
  • Use wiki/questions/ for individual research questions vs todos/ for planned workstreams

Adaptation Rules

For Personal / Journal Wikis

  • Emphasize privacy and local-only storage
  • Suggest lighter structure — fewer page types, less formal conventions
  • Focus on the compounding benefit: "In 6 months you'll have a structured picture of patterns you can't see day-to-day"
  • Consider chronological organization alongside topical

For Academic Research

  • Emphasize citation tracking, contradiction detection, and thesis development
  • Suggest more formal page templates with methodology sections
  • Recommend tracking evidence strength (how well-supported is each claim?)
  • Consider a dedicated "open questions" page

For Book Reading

  • Structure around the book's own organization (parts, chapters)
  • Entity pages for characters, places, themes
  • A "threads" or "themes" section for tracking cross-chapter connections
  • Consider a timeline page for complex narratives

For Business / Team Use

  • Emphasize access control and review workflows
  • Suggest human-in-the-loop for sensitive updates
  • Consider integrating with existing tools (Slack, meeting recorders)
  • Focus on keeping the wiki current — staleness is the #1 failure mode for team wikis

For Quick / Light Projects

  • Compress phases 3-6. Use a minimal structure: raw/, wiki/, CLAUDE.md
  • Skip tooling discussion — Obsidian or any markdown viewer is fine
  • Get to the first ingest fast

User Wants More Structure

  • Expand page types, add more frontmatter fields, suggest Dataview queries
  • Consider multiple index files (by topic, by date, by entity type)
  • Suggest periodic "state of the wiki" synthesis pages

User Wants Less Structure

  • Pare down to essentials: source summaries, one flat wiki directory, minimal frontmatter
  • Let structure emerge organically — start simple and add page types only when needed
  • "You can always add structure later. You can't easily remove it."

AskUserQuestion Format

ALWAYS follow this structure for every AskUserQuestion call:

  1. Re-ground: State the project, the current branch (use the _BRANCH value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
  2. Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
  3. Recommend: RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
  4. Options: Lettered options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.

Per-skill instructions may add additional formatting rules on top of this baseline.

---

Important Principles

Throughout all phases, keep these in mind:

  1. The user never writes the wiki. The LLM writes and maintains all wiki pages. The user curates sources, asks questions, and directs the analysis.

  2. Start simple, add complexity as needed. Don't build a heavy-tier structure for a light-tier project. The user can always add page types and conventions later.

  3. The schema is a living document. It will evolve as the user discovers what works. Encourage experimentation.

  4. Knowledge should compound. Good answers to questions should be filed back into the wiki. Explorations should become pages. The wiki should get richer with every interaction.

  5. The wiki replaces chat history. Insights that emerge in conversation should be captured in the wiki, not lost when the chat window closes.

  6. Make it concrete. Don't just describe what pages could look like — create actual examples during scaffolding so the user can see and react to real content.

  7. Obsidian (or a similar markdown file viewer) is the IDE, the LLM is the programmer, the wiki is the codebase, and InfraNodus is the researcher. Frame it this way to help the user understand the workflow.