# Fix LLM Artifacts

Applies fixes from a prior `review-llm-artifacts` run, with safe/risky classification.

Clone the skills repository:

```bash
git clone https://github.com/openclaw/skills
```

Or install just this skill:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/anderskev/fix-llm-artifacts" ~/.claude/skills/clawdbot-skills-fix-llm-artifacts && rm -rf "$T"
```

Source: `skills/anderskev/fix-llm-artifacts/SKILL.md`
Apply fixes from a previous `review-llm-artifacts` run with automatic safe/risky classification.
## Usage

```
/beagle-core:fix-llm-artifacts [--dry-run] [--all] [--category <name>]
```

Flags:

- `--dry-run`: Show what would be fixed without changing files
- `--all`: Fix the entire codebase (runs review with `--all` first)
- `--category <name>`: Only fix a specific category: `tests|dead-code|abstraction|style`
## Instructions

### 1. Parse Arguments

Extract flags from `$ARGUMENTS`:

- `--dry-run`: Preview mode only
- `--all`: Full codebase scan
- `--category <name>`: Filter to a specific category
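The flag handling above can be sketched in shell. This is a minimal sketch: `parse_args` is a hypothetical helper, and `$ARGUMENTS` is assumed to arrive as a plain space-separated flag string.

```bash
# Minimal sketch of flag parsing. parse_args is a hypothetical helper;
# $ARGUMENTS is assumed to be a space-separated flag string.
parse_args() {
  DRY_RUN=false
  ALL=false
  CATEGORY=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --dry-run)  DRY_RUN=true ;;
      --all)      ALL=true ;;
      --category) CATEGORY="$2"; shift ;;
    esac
    shift
  done
}

parse_args $ARGUMENTS
```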
### 2. Pre-flight Safety Checks

```bash
# Check for uncommitted changes
git status --porcelain
```

If the working directory is dirty, warn:

```
Warning: You have uncommitted changes. Creating a git stash before proceeding.
Run `git stash pop` to restore if needed.
```

Create the stash if dirty:

```bash
git stash push -m "beagle-core: pre-fix-llm-artifacts backup"
```
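The dirty-check and stash steps above can be combined into one guard. A sketch, assuming the helper name `stash_if_dirty` (not part of the skill itself):

```bash
# Sketch: stash uncommitted changes before applying fixes so they can be
# restored later with `git stash pop`.
stash_if_dirty() {
  if [ -n "$(git status --porcelain 2>/dev/null)" ]; then
    echo "Warning: You have uncommitted changes. Creating a git stash before proceeding."
    git stash push -m "beagle-core: pre-fix-llm-artifacts backup"
  fi
}

stash_if_dirty
```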
### 3. Load Review Results

Check for an existing review file:

```bash
cat .beagle/llm-artifacts-review.json 2>/dev/null
```

If the file is missing:

- If the `--all` flag is set: run `review-llm-artifacts --all --json` first
- Otherwise: fail with: "No review results found. Run `/beagle-core:review-llm-artifacts` first."

If the file exists, validate freshness:

```bash
# Get stored git HEAD from JSON
stored_head=$(jq -r '.git_head' .beagle/llm-artifacts-review.json)
current_head=$(git rev-parse HEAD)
if [ "$stored_head" != "$current_head" ]; then
  echo "Warning: Review was run at commit $stored_head, but HEAD is now $current_head"
fi
```

If stale, prompt: "Review results are stale. Re-run review? (y/n)"
### 4. Partition Findings by Safety

Parse findings from the JSON and classify by the `fix_safety` field:

Safe fixes (auto-apply):

- `unused_import`: Unused imports
- `todo_comment`: Stale TODO/FIXME comments
- `dead_code_obvious`: Obviously unreachable code
- `verbose_comment`: Overly verbose LLM-style comments
- `redundant_type`: Redundant type annotations

Risky fixes (require confirmation):

- `test_refactor`: Test structure changes
- `abstraction_change`: Class/function extraction
- `code_removal`: Removing functional code
- `mock_boundary`: Test mock scope changes
- `logic_change`: Any behavioral modification
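The partition can be sketched with `jq`. The sample JSON shape here is illustrative: only the `fix_safety` field comes from the review format described above; the `findings` array name and the other keys are assumptions.

```bash
# Sketch: split findings into safe and risky sets by fix_safety.
review=$(mktemp)
cat > "$review" <<'EOF'
{"findings":[
  {"file":"src/api.py","line":15,"type":"unused_import","fix_safety":"safe"},
  {"file":"src/auth.py","line":156,"type":"code_removal","fix_safety":"risky"}
]}
EOF

jq '[.findings[] | select(.fix_safety == "safe")]'  "$review" > safe.json
jq '[.findings[] | select(.fix_safety == "risky")]' "$review" > risky.json
```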
### 5. Apply Safe Fixes

If `--dry-run`:

```markdown
## Safe Fixes (would apply automatically)

| File | Line | Type | Description |
|------|------|------|-------------|
| src/api.py | 15 | unused_import | Remove `from typing import List` |
| src/models.py | 42 | verbose_comment | Remove 23-line docstring |
...
```

Otherwise, spawn parallel agents per category with the Task tool:

```
Task: Apply safe fixes for category "{category}"
Files: [list of files with findings in this category]
Instructions: Apply each fix, preserving surrounding code. Report success/failure per fix.
```

Categories to parallelize:

- `style`: Comments, formatting
- `dead-code`: Imports, unreachable code
- `tests`: Test-related safe fixes
- `abstraction`: Safe refactors
### 6. Handle Risky Fixes

For each risky fix, prompt interactively:

```
[src/services/auth.py:156] Remove seemingly unused authenticate_legacy() method?
This method has no callers in the codebase but may be used externally.
(y)es / (n)o / (s)kip all risky:
```

Track user choices:

- `y`: Apply this fix
- `n`: Skip this fix
- `s`: Skip all remaining risky fixes
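The choice handling can be sketched as a small helper. `decide` is a hypothetical name; in the real flow its answers would come from `read -r` at the prompt above.

```bash
# Sketch: map a user answer to apply/skip; "s" also latches SKIP_ALL so
# remaining risky fixes are skipped without further prompting.
SKIP_ALL=false
decide() {
  case "$1" in
    y) return 0 ;;                 # apply this fix
    s) SKIP_ALL=true; return 1 ;;  # skip this and all remaining
    *) return 1 ;;                 # skip this fix
  esac
}

# Example run; answers would normally come from `read -r answer`.
for answer in y n s; do
  if [ "$SKIP_ALL" = true ] || ! decide "$answer"; then
    echo "skip"
  else
    echo "apply"
  fi
done
```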
### 7. Post-Fix Verification

Detect the project type and run the appropriate linters:

Python:

```bash
# Check if ruff config exists
if [ -f "pyproject.toml" ] || [ -f "ruff.toml" ]; then
  ruff check --fix .
  ruff format .
fi

# Check if mypy config exists
if [ -f "pyproject.toml" ] || [ -f "mypy.ini" ]; then
  mypy .
fi
```

TypeScript/JavaScript:

```bash
# Check for eslint
if [ -f "eslint.config.js" ] || [ -f ".eslintrc.json" ]; then
  npx eslint --fix .
fi

# Check for TypeScript
if [ -f "tsconfig.json" ]; then
  npx tsc --noEmit
fi
```

Go:

```bash
if [ -f "go.mod" ]; then
  go vet ./...
  go build ./...
fi
```
### 8. Run Tests

```bash
# Python
if [ -f "pyproject.toml" ] || [ -f "pytest.ini" ]; then
  pytest
fi

# JavaScript/TypeScript
if [ -f "package.json" ]; then
  npm test 2>/dev/null || yarn test 2>/dev/null || true
fi

# Go
if [ -f "go.mod" ]; then
  go test ./...
fi
```
### 9. Report Results

````markdown
## Fix Summary

### Applied Fixes
- [x] src/api.py:15 - Removed unused import `List`
- [x] src/models.py:42-64 - Removed verbose docstring
- [x] src/auth.py:156-189 - Removed dead method (user confirmed)

### Skipped Fixes
- [ ] src/services/cache.py:23 - User declined risky fix
- [ ] tests/test_api.py:45 - Test refactor skipped

### Verification Results
- Linter: PASSED
- Type check: PASSED
- Tests: PASSED (42 passed, 0 failed)

### Diff Summary
```bash
git diff --stat
```
````
## Cleanup

On successful completion (all verifications pass):

```bash
rm .beagle/llm-artifacts-review.json
```

If any verification fails, keep the file and report:

```
Review file preserved at .beagle/llm-artifacts-review.json
Fix issues and re-run, or restore with: git stash pop
```
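The cleanup rule can be sketched as a helper. `cleanup_review` and the pass/fail flag are illustrative names; the flag would reflect the verification results from steps 7 and 8.

```bash
# Sketch: delete the review file only when verification passed; otherwise
# preserve it and tell the user how to recover.
cleanup_review() {
  review="$1"
  verify_ok="$2"   # "true" if linters, type checks, and tests all passed
  if [ "$verify_ok" = true ]; then
    rm -f "$review"
  else
    echo "Review file preserved at $review"
    echo "Fix issues and re-run, or restore with: git stash pop"
  fi
}

cleanup_review .beagle/llm-artifacts-review.json true
```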
## Example

```bash
# Preview all fixes without applying
/beagle-core:fix-llm-artifacts --dry-run

# Fix only dead code issues
/beagle-core:fix-llm-artifacts --category dead-code

# Full codebase scan and fix
/beagle-core:fix-llm-artifacts --all

# Fix style issues only, preview first
/beagle-core:fix-llm-artifacts --category style --dry-run
```