# code-refinement

## Install

Source: clone the upstream repo.

```bash
git clone https://github.com/openclaw/skills
```

Claude Code: install into `~/.claude/skills/`.

```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/openclaw/skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/athola/nm-pensive-code-refinement" \
       ~/.claude/skills/clawdbot-skills-code-refinement \
  && rm -rf "$T"
```

Manifest: `skills/athola/nm-pensive-code-refinement/SKILL.md`
Night Market Skill — ported from claude-night-market/pensive. For the full experience with agents, hooks, and commands, install the Claude Code plugin.
## Table of Contents
- Quick Start
- When to Use
- Analysis Dimensions
- Progressive Loading
- Required TodoWrite Items
- Workflow
- Tiered Analysis
- Cross-Plugin Dependencies
# Code Refinement Workflow

Analyze and improve living code quality across six dimensions.
## Quick Start

```bash
/refine-code
/refine-code --level 2 --focus duplication
/refine-code --level 3 --report refinement-plan.md
```
## When To Use
- After rapid AI-assisted development sprints
- Before major releases (quality gate)
- When code "works but smells"
- Refactoring existing modules for clarity
- Reducing technical debt in living code
## When NOT To Use
- Removing dead/unused code (use `conserve:bloat-detector`)
## Analysis Dimensions
| # | Dimension | Module | What It Catches |
|---|---|---|---|
| 1 | Duplication & Redundancy | duplication-analysis | Near-identical blocks, similar functions, copy-paste |
| 2 | Algorithmic Efficiency | algorithm-efficiency | O(n^2) where O(n) works, unnecessary iterations |
| 3 | Clean Code Violations | clean-code-checks | Long methods, deep nesting, poor naming, magic values |
| 4 | Architectural Fit | architectural-fit | Paradigm mismatches, coupling violations, leaky abstractions |
| 5 | Anti-Slop Patterns | clean-code-checks | Premature abstraction, enterprise cosplay, hollow patterns |
| 6 | Error Handling | clean-code-checks | Bare excepts, swallowed errors, happy-path-only |
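A couple of these signals can be approximated with quick grep heuristics. The patterns below are illustrative assumptions for this sketch, not the skill's actual detectors:

```shell
# Dimension 6 signal: bare excepts in Python sources
# (|| true because grep exits 1 when nothing matches)
grep -rn --include='*.py' -E '^[[:space:]]*except[[:space:]]*:' . || true

# Dimension 3 signal: magic values in comparisons (very rough heuristic)
grep -rn --include='*.py' -E '(==|<=|>=|<|>)[[:space:]]*[0-9]{2,}' . || true
```

A real scan would use AST-aware tooling; grep only surfaces candidates for human review.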
## Progressive Loading

Load modules based on refinement focus:

- `modules/duplication-analysis.md` (~400 tokens): Duplication detection and consolidation
- `modules/algorithm-efficiency.md` (~400 tokens): Complexity analysis and optimization
- `modules/clean-code-checks.md` (~450 tokens): Clean code, anti-slop, error handling
- `modules/architectural-fit.md` (~400 tokens): Paradigm alignment and coupling
Load all for comprehensive refinement. For focused work, load only relevant modules.
## Required TodoWrite Items

- `refine:context-established` — Scope, language, framework detection
- `refine:scan-complete` — Findings across all dimensions
- `refine:prioritized` — Findings ranked by impact and effort
- `refine:plan-generated` — Concrete refactoring plan with before/after
- `refine:evidence-captured` — Evidence appendix per `imbue:proof-of-work`
## Workflow
### Step 1: Establish Context (`refine:context-established`)

Detect project characteristics:

```bash
# Language detection
find . -not -path "*/.venv/*" -not -path "*/__pycache__/*" \
  -not -path "*/node_modules/*" -not -path "*/.git/*" \
  \( -name "*.py" -o -name "*.ts" -o -name "*.rs" -o -name "*.go" \) \
  | head -20

# Framework detection
ls package.json pyproject.toml Cargo.toml go.mod 2>/dev/null

# Size assessment
find . -not -path "*/.venv/*" -not -path "*/__pycache__/*" \
  -not -path "*/node_modules/*" -not -path "*/.git/*" \
  \( -name "*.py" -o -name "*.ts" -o -name "*.rs" \) \
  | xargs wc -l 2>/dev/null | tail -1
```
### Step 2: Dimensional Scan (`refine:scan-complete`)

Load relevant modules and execute analysis per tier level.
### Step 3: Prioritize (`refine:prioritized`)

Rank findings by:
- Impact: How much quality improves (HIGH/MEDIUM/LOW)
- Effort: Lines changed, files touched (SMALL/MEDIUM/LARGE)
- Risk: Likelihood of introducing bugs (LOW/MEDIUM/HIGH)
Priority = HIGH impact + SMALL effort + LOW risk first.
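The ranking rule above can be sketched as a numeric penalty. An assumption of this sketch: HIGH impact, SMALL effort, and LOW risk each cost 0, so the lowest total sorts first; the findings F1–F3 and the TSV layout are hypothetical:

```shell
# Hypothetical findings, tab-separated: id, impact, effort, risk
printf 'F1\tMEDIUM\tSMALL\tLOW\nF2\tHIGH\tSMALL\tLOW\nF3\tHIGH\tLARGE\tHIGH\n' |
awk -F'\t' 'BEGIN {
  imp["HIGH"]=0;  imp["MEDIUM"]=1; imp["LOW"]=2     # impact: HIGH is best
  eff["SMALL"]=0; eff["MEDIUM"]=1; eff["LARGE"]=2   # effort: SMALL is best
  rsk["LOW"]=0;   rsk["MEDIUM"]=1; rsk["HIGH"]=2    # risk: LOW is best
} { print imp[$2]+eff[$3]+rsk[$4] "\t" $0 }' |
sort -n | cut -f2-
# F2 (HIGH/SMALL/LOW) sorts to the top
```

Any monotone scoring works here; the point is that impact, effort, and risk collapse into a single sortable key.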
### Step 4: Generate Plan (`refine:plan-generated`)

For each finding, produce:
- File path and line range
- Current code snippet
- Proposed improvement
- Rationale (which principle/dimension)
- Estimated effort
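A hypothetical plan entry with those fields filled in (file names, line ranges, and the helper name are invented for illustration):

```
Finding 3: Duplicated retry loop (Duplication & Redundancy)

- File: src/client.py:41-63 (also src/worker.py:18-40)
- Current: two near-identical retry/backoff loops
- Proposed: extract a shared retry(fn, attempts, backoff) helper
- Rationale: dimension 1 (copy-paste); DRY
- Effort: SMALL (2 files, ~30 lines)
```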
### Step 5: Evidence Capture (`refine:evidence-captured`)

Document with `imbue:proof-of-work` (if available):

- `[E1]`, `[E2]` references for each finding
- Metrics before/after where measurable
- Principle violations cited

Fallback: If `imbue` is not installed, capture evidence inline in the report using the same `[E1]` reference format without TodoWrite integration.
## Tiered Analysis
| Tier | Time | Scope |
|---|---|---|
| 1: Quick (default) | 2-5 min | Complexity hotspots, obvious duplication, naming, magic values |
| 2: Targeted | 10-20 min | Algorithm analysis, full duplication scan, architectural alignment |
| 3: Deep | 30-60 min | All above + cross-module coupling, paradigm fitness, comprehensive plan |
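Tier 1's quick pass can be approximated by surfacing the longest source files, under the rough assumption (for this sketch only) that file length correlates with complexity hotspots:

```shell
# Tier 1 sketch: list the ten longest Python files as hotspot candidates.
# Swap '*.py' for your project's language.
find . -name '*.py' \
  -not -path '*/.venv/*' -not -path '*/node_modules/*' -not -path '*/.git/*' \
  -exec wc -l {} + | sort -rn | head -10
```

When `wc` receives multiple files it appends a combined `total` line, which sorts to the top; ignore it or filter it out with `grep -v ' total$'`.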
## Cross-Plugin Dependencies

| Dependency | Required? | Fallback |
|---|---|---|
| | Yes | Core review patterns |
| imbue | Optional | Inline evidence in report |
| conserve | Optional | Built-in KISS/YAGNI/SOLID checks |
| archetypes | Optional | Principle-based checks only (no paradigm detection) |
## Supporting Modules
- Code quality analysis - duplication detection commands and consolidation strategies
When optional plugins are not installed, the skill degrades gracefully:
- Without `imbue`: Evidence captured inline, no TodoWrite proof-of-work
- Without `conserve`: Uses built-in clean code checks (subset)
- Without `archetypes`: Skips paradigm-specific alignment, uses coupling/cohesion principles only