# opendirectory claude-md-generator
Reads your codebase and writes a CLAUDE.md file that gives Claude Code the context it needs: build commands, test patterns, code style, architecture notes, and gotchas. Supports three modes: create (new file), update (improve existing), audit (score all CLAUDE.md files A-F and report quality). Keeps output under 100 lines. Use when asked to create or improve CLAUDE.md, audit Claude configuration, write AI coding context, or set up Claude for a new project. Trigger when a user says "create a CLAUDE.md", "generate CLAUDE.md", "audit my CLAUDE.md", "help Claude understand my project", or "set up Claude for this repo".
```sh
git clone https://github.com/Varnan-Tech/opendirectory
```

Or install the skill directly into `~/.claude/skills`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/Varnan-Tech/opendirectory "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/packages/cli/.claude/skills/claude-md-generator" ~/.claude/skills/varnan-tech-opendirectory-claude-md-generator \
  && rm -rf "$T"
```
`packages/cli/.claude/skills/claude-md-generator/SKILL.md`

# CLAUDE.md Generator
Read the codebase. Write a CLAUDE.md that tells Claude exactly what it needs: no more, no less.
**Critical rule:** A good CLAUDE.md is under 100 lines. It contains only information Claude cannot derive from reading the code itself. Do not auto-write the file: always show the draft and wait for user approval first.
**Code snippet rule:** Never include inline code examples in CLAUDE.md. Use `file.ts:42` references instead. Code in CLAUDE.md wastes tokens and goes stale.
## Step 1: Detect Mode
Determine which of three modes to run:
- **create**: No CLAUDE.md exists. Write one from scratch.
- **update**: A CLAUDE.md exists. Improve it without discarding custom content.
- **audit**: Score all CLAUDE.md files in the project A-F and output a quality report.

If the user says "audit", "check", "review", or "grade" my CLAUDE.md, run audit mode.
```sh
# Discover ALL CLAUDE.md locations
find . -name "CLAUDE.md" -not -path "*/node_modules/*" -not -path "*/.git/*" 2>/dev/null
ls ~/.claude/CLAUDE.md 2>/dev/null && echo "Global CLAUDE.md found"
ls .claude.local.md 2>/dev/null && echo ".claude.local.md found"
```
If multiple CLAUDE.md files are found: list them. Ask: "Found CLAUDE.md in [locations]. Should I update all of them or just [root]?"
## Step 2: Audit Mode (skip to Step 3 if create/update)
For each CLAUDE.md found, score it A-F using this rubric:
| Criterion | What to check |
|---|---|
| Commands | Build/test/lint commands present and runnable? |
| Architecture | Non-obvious structure explained? |
| Non-obvious patterns | Gotchas, generated files, env var order documented? |
| Conciseness | Under 100 lines? No obvious filler? |
| Currency | Commands still match current package.json/Makefile? |
| Actionability | Can a new contributor follow this without asking questions? |
Score: 90-100 = A, 70-89 = B, 50-69 = C, 30-49 = D, 0-29 = F
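The score-to-grade thresholds above can be expressed as a small helper (a sketch; the function name is illustrative, not part of the skill):

```sh
# Map a 0-100 score to the letter grades defined above
grade() {
  if   [ "$1" -ge 90 ]; then echo A
  elif [ "$1" -ge 70 ]; then echo B
  elif [ "$1" -ge 50 ]; then echo C
  elif [ "$1" -ge 30 ]; then echo D
  else echo F
  fi
}
grade 72   # prints B
```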
Present as a table:
```
## CLAUDE.md Audit Report

| File | Score | Grade | Top Issues |
|------|-------|-------|------------|
| ./CLAUDE.md | 72 | B | Missing gotchas section, test command outdated |
| ./packages/api/CLAUDE.md | 45 | D | No commands, 340 lines (too long), stale arch notes |

**Overall: B (72/100)**

Issues found:
- ./packages/api/CLAUDE.md: 340 lines: well over the 100-line target
- ./packages/api/CLAUDE.md: Test command references `jest` but package.json uses `vitest`
- ./CLAUDE.md: No Gotchas section: most valuable section is missing
```
After the report, ask: "Want me to fix any of these? (all / just root / specify)"
If user says yes, continue to Step 3 for each file they want fixed.
## Step 3: Scan Project Structure
```sh
# Project type and package manager
ls package.json yarn.lock pnpm-lock.yaml bun.lockb requirements.txt pyproject.toml Cargo.toml go.mod 2>/dev/null

# Top-level directory structure
find . -maxdepth 2 -type d \
  | grep -v node_modules | grep -v .git | grep -v __pycache__ \
  | grep -v ".next" | grep -v dist | grep -v build | sort
```
## Step 4: Extract Build and Test Commands
```sh
# npm/yarn/pnpm/bun scripts
cat package.json 2>/dev/null \
  | python3 -c "
import sys, json
d = json.load(sys.stdin)
for name, cmd in d.get('scripts', {}).items():
    print(f'{name}: {cmd}')
"

# Python, Go, Rust Makefiles
cat Makefile 2>/dev/null | grep -E "^[a-z].*:" | head -20

# Go
cat go.mod 2>/dev/null | head -5

# Rust
cat Cargo.toml 2>/dev/null | grep -E "^\[" | head -10
```
Identify the exact commands for: build, test (all), test (single file/name), dev server, lint/typecheck. Note any env vars required to run them.
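Before writing the commands into CLAUDE.md, it can help to verify each script name actually exists. A sketch; the stub `package.json` below is created only so the snippet runs standalone, and the script names checked are an assumption:

```sh
# Sanity-check that extracted script names exist before documenting them.
# The stub project is illustrative; run this against the real repo root instead.
cd "$(mktemp -d)"
printf '{"scripts":{"build":"tsc","test":"vitest run"}}' > package.json
for s in build test lint; do
  if python3 -c "import json,sys; sys.exit(0 if '$s' in json.load(open('package.json')).get('scripts', {}) else 1)"; then
    echo "ok: npm run $s"
  else
    echo "missing: $s"
  fi
done
```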
## Step 5: Find Code Style and Gotchas
```sh
# Import aliases (most commonly missed)
python3 -c "
import json, sys
try:
    d = json.load(open('tsconfig.json'))
    paths = d.get('compilerOptions', {}).get('paths', {})
    if paths:
        print('Import aliases:', json.dumps(paths, indent=2))
except Exception:
    pass
" 2>/dev/null

# Environment variables required
cat .env.example 2>/dev/null | grep -v "^#" | grep -v "^$" | head -20

# Auto-generated files (must not be edited)
find . -path "*/node_modules" -prune -o -name "*.ts" -print \
  | xargs grep -l "DO NOT EDIT\|@generated\|Generated by" 2>/dev/null | head -5

# Test setup requirements
cat jest.config.js jest.config.ts vitest.config.ts 2>/dev/null | head -30

# Database/migration setup
ls migrations/ prisma/ drizzle/ db/ 2>/dev/null
```
What counts as a Gotcha (include these, skip everything else):
- Files that are auto-generated (must not edit)
- Env vars required BEFORE tests run
- Non-default import alias mappings
- Test commands that require a running service
- Known intentional quirks (workarounds, not bugs)
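Intentional quirks can often be surfaced mechanically from comment markers. A sketch: the marker words (`HACK`, `WORKAROUND`, `XXX`) are an assumption to tune per repo, and the demo file is created only so the snippet runs standalone:

```sh
# Surface candidate gotcha comments. Marker words are a guess; adjust per repo.
# The demo directory exists only to make this snippet runnable on its own.
mkdir -p /tmp/gotcha-demo
printf '// HACK: retry twice, upstream drops the first call\n' > /tmp/gotcha-demo/client.ts
grep -rn --include='*.ts' -E 'HACK|WORKAROUND|XXX' /tmp/gotcha-demo | head -10
```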
## Step 6: Generate CLAUDE.md Draft with Gemini
Compile all findings and generate the draft:
```sh
cat > /tmp/claude-md-request.json << 'ENDJSON'
{
  "system_instruction": {
    "parts": [{
      "text": "Write a CLAUDE.md file for a software project. Rules: (1) Under 100 lines total. (2) Only include what Claude cannot derive from reading the code. (3) No inline code examples: use file.ts:42 references instead. (4) Sections: Commands, Code Style (only non-defaults), Testing (only if setup needed), Gotchas (required: what trips people up). Skip any section that has nothing non-obvious to say. (5) All commands in code blocks. (6) Preferred order: short Project Overview (1-2 sentences, only if non-obvious), Commands, Architecture (only non-obvious structure), Code Style, Testing, Gotchas. (7) Do not use em dashes. (8) Output only the CLAUDE.md content, no commentary."
    }]
  },
  "contents": [{ "parts": [{ "text": "PROJECT_ANALYSIS_HERE" }] }],
  "generationConfig": { "temperature": 0.3, "maxOutputTokens": 2048 }
}
ENDJSON

curl -s -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @/tmp/claude-md-request.json \
  | python3 -c "import sys,json; d=json.load(sys.stdin); print(d['candidates'][0]['content']['parts'][0]['text'])"
```
Replace `PROJECT_ANALYSIS_HERE` with the findings from Steps 3-5.
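Splicing the analysis into the JSON with shell string substitution risks quoting breakage; one safer way is to rewrite the file with python3. A sketch: the stub files are created only so the snippet runs standalone, and `/tmp/project-analysis.txt` is an assumed scratch path, not part of the skill:

```sh
# Inject the project analysis into the request JSON without shell-quoting hazards.
# Stub files are created here only to make the snippet self-contained.
printf '{"contents":[{"parts":[{"text":"PROJECT_ANALYSIS_HERE"}]}]}' > /tmp/claude-md-request.json
printf 'build: npm run build\ntest: vitest run\n' > /tmp/project-analysis.txt
python3 - <<'PY'
import json
req = json.load(open('/tmp/claude-md-request.json'))
req['contents'][0]['parts'][0]['text'] = open('/tmp/project-analysis.txt').read()
json.dump(req, open('/tmp/claude-md-request.json', 'w'))
PY
```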
For large projects (50+ files): add an `@path` pointer at the bottom of CLAUDE.md instead of inline detail:

```
## Extended Reference
See @docs/ai-context/architecture.md for full module map.
See @docs/ai-context/testing.md for integration test setup details.
```

Write the referenced files to `docs/ai-context/` with the detail that would not fit in 100 lines.
## Step 7: Self-QA
Before presenting the draft, check:
- Under 100 lines (count: `echo "$CONTENT" | wc -l`)
- No inline code examples (only `file.ts:42` references or shell commands)
- All commands in code blocks and runnable as-is
- Gotchas section present with at least one real entry
- No section that says only things obvious from the files
- No em dashes
- No marketing words or filler phrases ("This project uses React to...")
- Import aliases documented if they exist
- Auto-generated files marked "do not edit" if they exist
If any check fails, revise before presenting.
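Several of the checks above can be run mechanically. A sketch, assuming the draft was saved to `/tmp/claude-md-draft.md` (an assumed scratch path); the stand-in draft is written only so the snippet runs standalone:

```sh
# Mechanical self-QA sketch. The stand-in draft below exists only to make this runnable.
DRAFT=/tmp/claude-md-draft.md
printf '## Commands\n\n## Gotchas\n- Set DATABASE_URL before running tests\n' > "$DRAFT"

[ "$(wc -l < "$DRAFT")" -le 100 ] || echo "FAIL: over 100 lines"
grep -q '^## Gotchas' "$DRAFT" || echo "FAIL: Gotchas section missing"
# em dash check without typing the character itself
grep -q "$(printf '\xe2\x80\x94')" "$DRAFT" && echo "FAIL: em dash found"
echo "QA checks done"
```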
## Step 8: Present Draft and Wait for Approval
**Never write the file without user approval.**
Present the draft in a code block:
```
## Draft CLAUDE.md ([N] lines)

[full draft content here]

---

Write this to CLAUDE.md? (yes / edit first / cancel)
```
If user says yes: write the file, then confirm: "CLAUDE.md written ([N] lines). Sections: [list of ## headers]."
If user says edit first: apply their edits, re-show the draft.
If user says cancel: stop.
## What NOT to Include
- Language/framework version ("This is a TypeScript project")
- How the framework works (Claude already knows React, FastAPI, etc.)
- List of all dependencies
- Style rules the linter already enforces (indent size, quote style)
- Content that duplicates README.md
- Inline code examples or multi-line snippets
- Anything that would be identical for any project using the same stack