Claude-night-market review-core
Reusable scaffolding for review workflows with context establishment, evidence capture, and structured output.
install
source · Clone the upstream repo
git clone https://github.com/athola/claude-night-market
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/athola/claude-night-market "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/imbue/skills/review-core" ~/.claude/skills/athola-claude-night-market-review-core && rm -rf "$T"
manifest: plugins/imbue/skills/review-core/SKILL.md
Core Review Workflow
Table of Contents
- When to Use
- When NOT to Use
- Activation Patterns
- Required TodoWrite Items
- Step 1 – Establish Context
- Step 2 – Inventory Scope
- Step 3 – Capture Evidence
- Step 4 – Structure Deliverables
- Step 5 – Contingency Plan
- Exit Criteria
When to Use
- Use this skill at the beginning of any detailed review workflow (e.g., for architecture, math, or an API).
- It provides a consistent structure for capturing context, logging evidence, and formatting the final report, which makes the findings of different reviews comparable.
When NOT to Use
- Diff-focused analysis: use the `diff-analysis` skill instead.
Activation Patterns
Trigger Keywords: review, audit, analysis, assessment, evaluation, inspection
Contextual Cues:
- "review this code/design/architecture"
- "conduct an audit of"
- "analyze this for issues"
- "evaluate the quality of"
- "perform an assessment"
Auto-Load When: Any review-specific workflow is detected or when analysis methodologies are requested.
Required TodoWrite Items
- `review-core:context-established`
- `review-core:scope-inventoried`
- `review-core:evidence-captured`
- `review-core:deliverables-structured`
- `review-core:contingencies-documented`
Step 1 – Establish Context (`review-core:context-established`)
- Confirm `pwd`, repo, branch, and upstream base (e.g., `git status -sb`, `git rev-parse --abbrev-ref HEAD`).
- Note comparison target (merge base, release tag) so later diffs reference a concrete range.
- Summarize the feature/bug/initiative under review plus stakeholders and deadlines.
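The context bullets above can be captured with a short shell sketch. The `NOTES` file name and the choice of `origin/main` as the upstream base are illustrative assumptions, not part of the skill:

```shell
# Sketch: record working directory, branch, and comparison base into a notes file.
# NOTES is a hypothetical scratch file; adjust to your review workspace.
NOTES="${NOTES:-review-context.md}"
{
  echo "## Review context ($(date -u +%Y-%m-%dT%H:%MZ))"
  echo "- cwd: $(pwd)"
  # Guard the git calls so the sketch degrades gracefully outside a repo.
  if git rev-parse --git-dir >/dev/null 2>&1; then
    echo "- branch: $(git rev-parse --abbrev-ref HEAD)"
    echo "- status: $(git status -sb | head -n 1)"
    # Comparison target: merge base against the assumed upstream default branch.
    echo "- merge-base vs origin/main: $(git merge-base HEAD origin/main 2>/dev/null || echo 'n/a')"
  else
    echo "- git: not a repository"
  fi
} >> "$NOTES"
```

Appending (rather than overwriting) keeps one notes file per review even when context is re-checked mid-way.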
Step 2 – Inventory Scope (`review-core:scope-inventoried`)
- List relevant artifacts for this review: source files, configs, docs, specs, generated assets (OpenAPI, Makefiles, ADRs, notebooks, etc.).
- Record how you enumerated them (commands like `rg --files -g '*.mk'`, `ls docs`, `cargo metadata`).
- Capture assumptions or constraints inherited from the plan/issue so the domain-specific analysis can cite them.
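A minimal inventory sketch, using portable `find`/`ls` as stand-ins for the `rg`/`cargo` commands mentioned above (the `INVENTORY` file name and the glob patterns are examples only):

```shell
# Sketch: enumerate review artifacts and record how they were enumerated.
# INVENTORY is an illustrative file name; adapt the sections to your review.
INVENTORY="scope-inventory.txt"
{
  echo "# enumerated with: ls docs; find . -name '*.mk'"
  echo "## docs"
  ls docs 2>/dev/null || echo "(no docs/ directory)"
  echo "## make fragments"
  find . -name '*.mk' ! -path './.git/*' 2>/dev/null
} > "$INVENTORY"
wc -l "$INVENTORY"
```

Recording the enumeration commands alongside their results lets a later reader reproduce the scope exactly.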
Step 3 – Capture Evidence (`review-core:evidence-captured`)
- Log every command/output that informs the review (e.g., `git diff --stat`, `make -pn`, `cargo doc`, `web.run` citations). Keep snippets or line numbers for later reference.
- Track open questions or variances found during preflight; if they block progress, record owners/timelines now.
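One way to keep command and output together is a small logging wrapper. `log_evidence` and the `EVIDENCE` file name are hypothetical helpers sketched here, not part of the skill:

```shell
# Sketch: log each evidence-gathering command and its output to an appendix file.
# "log_evidence" is a hypothetical helper; EVIDENCE is an illustrative file name.
EVIDENCE="evidence.log"
log_evidence() {
  {
    printf '\n$ %s\n' "$*"   # record the command line itself
    "$@" 2>&1                # then its combined stdout/stderr
  } >> "$EVIDENCE"
}
log_evidence uname -s
log_evidence ls -1 .
```

The resulting file doubles as the evidence appendix required in Step 4.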
Step 4 – Structure Deliverables (`review-core:deliverables-structured`)
- Prepare the reporting skeleton shared by all reviews:
  - Summary (baseline, scope, recommendation)
  - Ordered findings (severity, file:line, principle violated, remediation)
  - Follow-up tasks (owner + due date)
  - Evidence appendix (commands, URLs, notebooks)
- Validate that the domain-specific checklist will populate each section before concluding.
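The skeleton can be stamped out as a starting file; the `REPORT` file name and the exact placeholder wording are illustrative:

```shell
# Sketch: emit the shared reporting skeleton described above.
# REPORT is an illustrative file name.
REPORT="review-report.md"
cat > "$REPORT" <<'EOF'
# Review Report

## Summary
- Baseline:
- Scope:
- Recommendation:

## Findings (ordered by severity)
1. [SEVERITY] file:line – principle violated – remediation

## Follow-up Tasks
- [ ] task (owner, due date)

## Evidence Appendix
- Commands:
- URLs / notebooks:
EOF
```

Generating the skeleton before analysis starts makes it obvious at the end if any section was left unpopulated.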
Step 5 – Contingency Plan (`review-core:contingencies-documented`)
- If a required tool or skill is unavailable (e.g., `web.run`), document the alternative steps that will be taken and any limitations this introduces. This helps reviewers understand any gaps in coverage.
- Note any outstanding approvals or data needed to complete the review.
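For CLI tools, the "document the fallback" step can be automated with a small check. `note_gap`, the `CONTINGENCIES` file name, and the fallback wording are all hypothetical:

```shell
# Sketch: append a contingency entry whenever a required tool is missing.
# "note_gap" is a hypothetical helper; CONTINGENCIES is an illustrative file name.
CONTINGENCIES="contingencies.md"
note_gap() { # usage: note_gap <tool> <fallback> <limitation>
  command -v "$1" >/dev/null 2>&1 && return 0
  printf -- '- %s unavailable; fallback: %s; limitation: %s\n' \
    "$1" "$2" "$3" >> "$CONTINGENCIES"
}
note_gap rg "grep -rn" "slower, no .gitignore filtering"
note_gap definitely-missing-tool "manual inspection" "reduced coverage"
```

Only missing tools produce entries, so the file stays empty when coverage is complete.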
Exit Criteria
- All TodoWrite items complete with concrete notes (commands run, files listed, evidence paths).
- Domain-specific review can now assume consistent context/evidence/deliverable scaffolding and focus on specialized analysis.