Dotforge audit-project
Audits the Claude Code configuration of a project against the dotforge template. Generates a report with score and gaps.
git clone https://github.com/luiseiman/dotforge
T=$(mktemp -d) && git clone --depth=1 https://github.com/luiseiman/dotforge "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/audit-project" ~/.claude/skills/luiseiman-dotforge-audit-project && rm -rf "$T"
skills/audit-project/SKILL.md

Audit Project
Run a full audit of the Claude Code configuration for the current project.
Step 1: Detect stack
Use detection rules from $DOTFORGE_DIR/stacks/detect.md.
Step 1b: Detect project tier
Auto-detect project tier based on signals:
- simple (<5K LOC, 1 stack, no CI config): recommended items are relaxed (items 8-10 don't penalize)
- standard (5K-50K LOC, 1-2 stacks): default behavior
- complex (>50K LOC, 3+ stacks, monorepo indicators like packages/ or apps/): recommended items 8-10 become semi-obligatory (each worth 0-2 instead of 0-1)
Detection signals:
- LOC: count non-empty lines in source files (find . -name '*.py' -o -name '*.ts' -o -name '*.js' -o -name '*.go' -o -name '*.java' -o -name '*.swift' | xargs wc -l)
- Stack count: number of stacks detected in step 1
- CI: presence of .github/workflows/, .gitlab-ci.yml, Jenkinsfile, .circleci/
- Monorepo: presence of packages/, apps/, lerna.json, pnpm-workspace.yaml, turbo.json
Save the detected tier in the project's registry entry. A shell sketch of this detection follows.
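A minimal sketch, assuming the stack count from Step 1 is available as $STACKS and treating the thresholds above literally:

```bash
#!/usr/bin/env bash
# Tier detection sketch; $STACKS (from Step 1) is assumed to be set by the caller.

# Non-empty LOC across recognized source files
loc=$(find . \( -name '*.py' -o -name '*.ts' -o -name '*.js' -o -name '*.go' \
       -o -name '*.java' -o -name '*.swift' \) -print0 \
      | xargs -0 cat /dev/null 2>/dev/null | grep -cv '^[[:space:]]*$')

# CI config present?
has_ci=false
{ [ -d .github/workflows ] || [ -f .gitlab-ci.yml ] || [ -f Jenkinsfile ] || [ -d .circleci ]; } && has_ci=true

# Monorepo indicators present?
mono=false
for f in packages apps lerna.json pnpm-workspace.yaml turbo.json; do
  [ -e "$f" ] && mono=true
done

if   [ "$loc" -gt 50000 ] || [ "${STACKS:-1}" -ge 3 ] || [ "$mono" = true ]; then tier=complex
elif [ "$loc" -lt 5000 ]  && [ "${STACKS:-1}" -eq 1 ] && [ "$has_ci" = false ]; then tier=simple
else tier=standard
fi
echo "tier=$tier (loc=$loc, stacks=${STACKS:-1}, ci=$has_ci, monorepo=$mono)"
```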
Step 1c: Config coherence check
Before scoring, validate internal coherence. Run $DOTFORGE_DIR/tests/test-config.sh <project-dir> or perform equivalent checks inline:
- Hooks referenced in settings.json exist and are executable
- Rules have valid globs: or paths: frontmatter (with alwaysApply: false for lazy loading)
- Rule globs match at least 1 real file in the project
- settings.json is valid JSON with deny list covering .env, *.key, *.pem
- CLAUDE.md has minimum required sections (stack, build/test, architecture)
- No contradictory allow+deny patterns in settings.json
- No prompt injection patterns in rules or CLAUDE.md
If the coherence check finds critical failures (missing hooks, invalid JSON), report them in a ── COHERENCE ── section BEFORE the score. These are configuration bugs, not gaps.
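If the test script is unavailable, the inline equivalent of the first few checks might look like this (jq is assumed to be available, and extracting .command values recursively is an assumption about the settings.json hook schema):

```bash
#!/usr/bin/env bash
# Partial coherence sketch; covers JSON validity, hook wiring, and the deny list.
fail=0

# settings.json must parse as JSON
jq empty .claude/settings.json 2>/dev/null \
  || { echo "CRITICAL: .claude/settings.json is not valid JSON"; fail=1; }

# every hook script referenced in settings.json must exist and be executable
# (pulling .command values out recursively is a schema assumption)
for h in $(jq -r '.. | .command? // empty' .claude/settings.json 2>/dev/null \
           | grep -o '\.claude/hooks/[^ ]*'); do
  [ -x "$h" ] || { echo "CRITICAL: hook $h missing or not executable"; fail=1; }
done

# deny list should cover secret files
for pat in '.env' '*.key' '*.pem'; do
  grep -qF -- "$pat" .claude/settings.json || echo "WARN: deny list does not mention $pat"
done

exit "$fail"
```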
Step 2: Load checklist
Read $DOTFORGE_DIR/audit/checklist.md for evaluation criteria.
Read $DOTFORGE_DIR/audit/scoring.md for weights and caps.
Step 3: Evaluate
For each checklist item, verify existence and quality:
Obligatory (0-10 points)
- CLAUDE.md — Does it exist? Verify it has key sections:
  - Stack/technologies mentioned explicitly
  - At least 1 exact build/test command
  - Project structure or architecture
  - Do NOT judge by line count alone — a 50-line boilerplate file still scores 1
- settings.json — Does it exist in .claude/? Does it have explicit permissions? Does it have a deny list?
- Rules — Is there at least 1 rule in .claude/rules/? Does it have frontmatter with globs: or paths:?
- Hook block-destructive — Verify (see the sketch after this list):
  - Does .claude/hooks/block-destructive.sh exist?
  - Is it executable? (test -x or check permissions)
  - Is it referenced in .claude/settings.json under hooks?
- Build/test commands — Are they in CLAUDE.md? Do they match the detected stack?
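The three block-destructive checks reduce to a few shell tests; a sketch:

```bash
# Verify the block-destructive hook: exists, executable, wired into settings.
hook=.claude/hooks/block-destructive.sh
[ -f "$hook" ] && echo "exists: yes" || echo "exists: NO"
[ -x "$hook" ] && echo "executable: yes" || echo "executable: NO"
grep -q 'block-destructive' .claude/settings.json 2>/dev/null \
  && echo "wired in settings.json: yes" || echo "wired in settings.json: NO"
```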
Recommended (0-7 bonus points)
- CLAUDE_ERRORS.md — Does it exist, in table format with a Type column?
- Hook lint — Does it exist? Is it executable? (verify chmod +x)
- Custom commands — Are there files in .claude/commands/?
?.claude/commands/ - Memory — Are there project memory files?
- Agents — Is there .claude/agents/ + an agents.md rule in rules?
- .gitignore — Does it protect .env, *.key, *.pem, credentials?
- Prompt injection scan — Are rules/CLAUDE.md free of suspicious patterns?
Tier adjustments:
- simple: items 8-10 scored 0 don't penalize (treated as N/A)
- complex: items 8-10 become semi-obligatory (each 0-2 instead of 0-1)
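One way to approximate the prompt injection scan from the recommended list above; the pattern set here is illustrative, not the template's canonical list:

```bash
# Illustrative injection scan over rules and CLAUDE.md; patterns are assumptions.
if grep -rniE 'ignore (all|previous) instructions|disregard .*(rules|policy)|curl[^|]*[|][[:space:]]*(ba)?sh' \
     .claude/rules/ CLAUDE.md 2>/dev/null; then
  echo "WARN: suspicious patterns found above"
else
  echo "prompt injection scan: clean"
fi
```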
Step 4: Calculate score
Use weights from $DOTFORGE_DIR/audit/scoring.md:
- score_obligatory = sum(items 1-5) — maximum 10
- score_recommended = sum(items 6-12) — maximum 7
- score_total = score_obligatory * 0.7 + score_recommended * (3.0 / 7) — max 7.0 + 3.0 = 10.0
- Apply tier adjustments before calculating (see Step 1b)
- score_normalized = min(score_total, 10)
Security cap: If item 2 (settings.json) or item 4 (block-destructive) is 0, maximum score = 6.0.
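The arithmetic, including normalization and the security cap, as a shell sketch (OBL and REC are the raw sums from Step 3; CAP=1 when item 2 or item 4 scored 0):

```bash
# Score calculation sketch; OBL (0-10), REC (0-7), CAP (0|1) come from Step 3.
score=$(awk -v obl="$OBL" -v rec="$REC" -v cap="$CAP" 'BEGIN {
  total = obl * 0.7 + rec * (3.0 / 7)        # max 7.0 + 3.0 = 10.0
  if (total > 10) total = 10                 # score_normalized
  if (cap == 1 && total > 6.0) total = 6.0   # security cap
  printf "%.1f", total
}')
echo "Score: $score/10"
```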
Step 5: Generate report
Format:
```
═══ AUDIT dotforge: {{project}} ═══
Date: {{YYYY-MM-DD}}
Detected stack: {{stacks}}
Tier: {{simple|standard|complex}}
dotforge version: {{version from last bootstrap/sync if detectable}}

Score: {{X.X}}/10 {{level}}

── OBLIGATORY ──
{{✅|⚠️|❌}} CLAUDE.md ({{0-2}}) — {{detail: which sections exist/missing}}
{{✅|⚠️|❌}} settings.json ({{0-2}}) — {{detail: deny list yes/no, permissions}}
{{✅|⚠️|❌}} Rules ({{0-2}}) — {{detail: N rules, globs yes/no}}
{{✅|⚠️|❌}} Hook block-destructive ({{0-2}}) — {{detail: executable yes/no, wired yes/no}}
{{✅|⚠️|❌}} Build/test commands ({{0-2}}) — {{detail: which ones and whether they match the stack}}

── RECOMMENDED ──
{{✅|⚠️}} CLAUDE_ERRORS.md — {{detail}}
{{✅|⚠️}} Hook lint — {{detail: executable yes/no}}
{{✅|⚠️}} Custom commands — {{detail: N commands}}
{{✅|⚠️}} Memory — {{detail}}
{{✅|⚠️}} Agents — {{detail}}
{{✅|⚠️}} .gitignore — {{detail}}
{{✅|⚠️}} Prompt injection scan — {{detail}}

── DOMAIN KNOWLEDGE ──
Role defined: {{✓ if ## Role exists in CLAUDE.md with content | ✗ otherwise}}
Domain rules: {{N files in .claude/rules/domain/ | "none"}}
Stale (>90 days): {{N files with last_verified older than 90 days | "none"}}
Coverage: {{list glob patterns from domain rules → cross-reference with git log --name-only -30 to estimate % of recent edits covered}}
Note: Domain knowledge is informational only — does not affect the audit score.
If no domain rules exist and the project has business logic, suggest: /forge domain extract

── CRITICAL GAPS ──
1. {{what is missing}} → {{recommended action}}
2. ...

── NEXT STEP ──
Run `/forge sync` to apply the dotforge template and close the gaps.
```
Step 6: Cross-project error promotion
If the project has CLAUDE_ERRORS.md, scan it for recurring patterns:
- Read CLAUDE_ERRORS.md and group errors by the Area column
- If any Area has 3+ entries with similar root causes, it's a candidate for promotion
- Check $DOTFORGE_DIR/practices/inbox/ and active/ for existing practices covering that pattern
- If no existing practice covers it, create a new practice in practices/inbox/ using the capture format:
  - source_type: cross-project
  - tags: [error-promotion, <area>]
  - Description: the recurring pattern and derived rule
- Report promotions in the audit output under ── ERROR PATTERNS ──
This closes the Memory → Learning synergy: recurring project errors feed the practices pipeline.
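Assuming CLAUDE_ERRORS.md is a pipe-delimited markdown table with Area in the second table column (a layout assumption), the grouping can be approximated like this:

```bash
# Count entries per Area and flag promotion candidates (3+ entries).
# Field $3 == second table column when splitting on '|'; this is a layout assumption.
awk -F'|' 'NR > 2 && NF > 2 { a = $3; gsub(/^ +| +$/, "", a); count[a]++ }
           END { for (k in count) if (count[k] >= 3)
                   print k ": " count[k] " entries → promotion candidate" }' \
  CLAUDE_ERRORS.md
```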
Step 7: Audit gaps as practices
For each obligatory item scored 0 or 1, and each recommended item scored 0:
- Check if a practice already exists in practices/inbox/ or active/ for that gap
- If not, create a practice in practices/inbox/:
  - source_type: audit-gap
  - tags: [audit-gap, <item-name>]
  - Description: what's missing and the recommended fix
- Only create practices for gaps that reflect a template/stack issue (not project-specific misconfigurations)
- Report in the audit output under ── CAPTURED GAPS ──
This closes the Audit → Learning synergy: detected gaps feed back into the practices pipeline.
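A sketch of the capture step using the fields listed above; the file naming scheme and body layout are assumptions about the capture format:

```bash
# Capture an audit gap as a practice (frontmatter keys from the list above;
# the filename convention is hypothetical).
item="settings-json"   # example: gap detected on item 2
cat > "$DOTFORGE_DIR/practices/inbox/audit-gap-$item.md" <<EOF
---
source_type: audit-gap
tags: [audit-gap, $item]
---
Missing deny list in .claude/settings.json; add deny entries for .env, *.key, *.pem.
EOF
```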
Step 8: Update registry
If $DOTFORGE_DIR/registry/projects.yml exists, update the project entry:
- score: the calculated score
- last_audit: the current date
- dotforge_version: the VERSION value if the project was bootstrapped
- last_sync: preserve the existing value (do not modify here)
- notes: brief summary of the audit
- history: append a new entry {date: YYYY-MM-DD, score: X.X, version: <dotforge_version>}. Never overwrite previous entries — this enables score trending over time.
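For illustration, a registry entry after a couple of audits might look like the following (the exact projects.yml schema is an assumption, and all values are made up):

```bash
# Illustrative shape of an updated projects.yml entry (schema and values assumed).
cat <<'EOF'
my-project:
  score: 8.5
  last_audit: 2025-06-01
  dotforge_version: 1.4.0
  last_sync: 2025-05-20        # preserved, not modified by the audit
  notes: "Solid obligatory coverage; lint hook and agents rule still missing."
  history:
    - {date: 2025-05-01, score: 7.3, version: 1.3.0}
    - {date: 2025-06-01, score: 8.5, version: 1.4.0}   # appended, never overwritten
EOF
```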