<!-- Generated by harness generate-slash-commands. Do not edit. -->
```sh
# Clone the full repository
git clone https://github.com/Intense-Visions/harness-engineering

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/Intense-Visions/harness-engineering "$T" && mkdir -p ~/.claude/skills && cp -r "$T/agents/commands/codex/harness/harness-impact-analysis" ~/.claude/skills/intense-visions-harness-engineering-harness-impact-analysis && rm -rf "$T"
```
`agents/commands/codex/harness/harness-impact-analysis/SKILL.md`

# Harness Impact Analysis
Graph-based impact analysis. Answers: "if I change X, what breaks?"
## When to Use
- Before merging a PR — understand the blast radius of changes
- When planning a refactoring — know what will be affected
- When a test fails — trace backwards to find what change caused it
- When `on_pr` triggers fire
- NOT for understanding code (use harness-onboarding or harness-code-review)
- NOT for finding dead code (use cleanup-dead-code)
## Prerequisites
A knowledge graph at `.harness/graph/` enables full analysis. If no graph exists, the skill uses static analysis fallbacks (see the Graph Availability section). Run `harness scan` to enable graph-enhanced analysis.
## Graph Availability
Before starting, check if `.harness/graph/graph.json` exists.

- **If graph exists:** Check staleness — compare the `scanTimestamp` in `.harness/graph/metadata.json` against `git log -1 --format=%ct` (the latest commit timestamp). If the graph is more than 2 commits behind (`git log --oneline <scanTimestamp>..HEAD | wc -l`), run `harness scan` to refresh before proceeding. (Staleness sensitivity: High)
- **If graph exists and is fresh (or refreshed):** Use graph tools as the primary strategy.
- **If no graph exists:** Output "Running without graph (run `harness scan` to enable full analysis)" and use fallback strategies for all subsequent steps.
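The availability rules above boil down to a small decision function. A minimal sketch (function names are ours; in practice the commit timestamps would come from `git log --format=%ct` and the scan timestamp from `metadata.json`):

```python
def commits_behind(commit_timestamps: list[int], scan_timestamp: int) -> int:
    """Count commits newer than the last graph scan."""
    return sum(1 for ts in commit_timestamps if ts > scan_timestamp)


def graph_strategy(graph_exists: bool, behind: int, threshold: int = 2) -> str:
    """Pick an analysis strategy per the Graph Availability rules."""
    if not graph_exists:
        return "fallback"            # static analysis only
    if behind > threshold:
        return "rescan-then-graph"   # run `harness scan` first, then query
    return "graph"                   # graph is fresh enough to trust
```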
## Process
### Phase 1: IDENTIFY — Determine Changed Files
- From diff: If a git diff is available, parse it to extract changed file paths.
- From input: If file paths are provided directly, use those.
- From git: If neither, use `git diff --name-only HEAD~1` to get recent changes.
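For the "from diff" case, extracting paths can be as simple as reading the `+++ b/<path>` headers of unified diff output. An illustrative helper, not part of the skill's tooling:

```python
def paths_from_diff(diff_text: str) -> list[str]:
    """Pull changed file paths out of `git diff` output via '+++ b/' headers."""
    prefix = "+++ b/"
    return [line[len(prefix):] for line in diff_text.splitlines()
            if line.startswith(prefix)]
```

Deleted files show `+++ /dev/null` and are skipped, which is usually what impact analysis wants.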
### Phase 2: ANALYZE — Query Graph for Impact
For each changed file:
- **Direct dependents:** Use the `get_impact` MCP tool to find all files that import or call the changed file.

  ```
  get_impact(filePath="src/services/auth.ts")
  → tests: [auth.test.ts, integration.test.ts]
  → docs: [auth-guide.md]
  → code: [routes/login.ts, middleware/verify.ts, ...]
  ```

- **Transitive dependents:** Use `query_graph` with depth 3 to find indirect consumers.

  ```
  query_graph(rootNodeIds=["file:src/services/auth.ts"], maxDepth=3, includeEdges=["imports", "calls"])
  ```

- **Documentation impact:** Use `get_relationships` to find `documents` edges pointing to changed nodes.

- **Test coverage:** Identify test files connected via `imports` edges. Flag changed files with no test coverage.

- **Design token impact:** When the graph contains `DesignToken` nodes, use `query_graph` with `USES_TOKEN` edges to find components that consume changed tokens.

  ```
  query_graph(rootNodeIds=["designtoken:color.primary"], maxDepth=2, includeEdges=["uses_token"])
  → components: [Button.tsx, Card.tsx, Header.tsx, ...]
  ```

  If a changed file is `design-system/tokens.json`, identify ALL tokens that changed and trace each to its consuming components. This reveals the full design blast radius of a token change.

- **Design constraint impact:** When the graph contains `DesignConstraint` nodes, check if changed code introduces new `VIOLATES_DESIGN` edges.
### Fallback (without graph)
When no graph is available, use static analysis to approximate impact:
- **Parse imports:** For each changed file, grep all source files for `import.*from.*<changed-file>` and `require.*<changed-file>` patterns to find direct dependents.
- **Follow imports 2 levels deep:** For each direct dependent found, repeat the import grep to find second-level dependents. Stop at 2 levels (the fallback cannot reliably trace deeper).
- **Find test files by naming convention:** For each changed file `foo.ts`, search for `foo.test.ts` and `foo.spec.ts` (same directory and `__tests__/` directory), plus `*.test.*` and `*.spec.*` files that import the changed file (from step 1).
- **Find docs by path matching:** Grep the `docs/` directory for references to the changed module name (filename without extension).
- **Group results the same as the graph version:** tests, docs, code, other. Note the count of files found.

Fallback completeness: ~70% — misses transitive deps beyond 2 levels.
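The first fallback step (import parsing) can be approximated in a few lines. The regexes below mirror the grep patterns above, but are assumptions for illustration, since real import syntax varies by language:

```python
import re


def direct_dependents(changed_path: str, sources: dict[str, str]) -> list[str]:
    """Approximate direct dependents of `changed_path` by pattern matching.

    `sources` maps file path -> file contents (e.g. from walking the repo).
    Matches `import ... from '<module>'` and `require('<module>')` forms.
    """
    # "auth" from "src/services/auth.ts"
    name = changed_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    pattern = re.compile(
        rf"import\s.*from\s*['\"][^'\"]*{re.escape(name)}['\"]"
        rf"|require\(\s*['\"][^'\"]*{re.escape(name)}['\"]"
    )
    return sorted(path for path, text in sources.items() if pattern.search(text))
```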
### Phase 3: ASSESS — Risk Assessment and Report
- **Impact score:** Calculate based on:
  - Number of direct dependents (weight: 3x)
  - Number of transitive dependents (weight: 1x)
  - Whether affected code includes entry points (weight: 5x)
  - Whether tests exist for the changed code (no tests = higher risk)
  - Whether design tokens are affected (weight: 2x — token changes cascade to all consumers)
- **Risk tiers:**
  - Critical (score > 50): Changes affect entry points or >20 downstream files
  - High (score 20-50): Changes affect multiple modules or shared utilities
  - Medium (score 5-20): Changes affect a few files within the same module
  - Low (score < 5): Changes are isolated with minimal downstream impact
- **Output report:**

  ```markdown
  ## Impact Analysis Report

  ### Changed Files
  - src/services/auth.ts (modified)
  - src/types/user.ts (modified)

  ### Impact Summary
  - Direct dependents: 8 files
  - Transitive dependents: 23 files
  - Affected tests: 5 files
  - Affected docs: 2 files
  - Risk tier: HIGH

  ### Affected Tests (must run)
  1. tests/services/auth.test.ts (direct)
  2. tests/routes/login.test.ts (transitive)
  3. tests/integration/auth-flow.test.ts (transitive)

  ### Affected Documentation (may need update)
  1. docs/auth-guide.md → documents src/services/auth.ts
  2. docs/api-reference.md → documents src/types/user.ts

  ### Downstream Consumers
  1. src/routes/login.ts — imports auth.ts
  2. src/middleware/verify.ts — imports auth.ts
  3. src/routes/signup.ts — imports user.ts (transitive via auth.ts)

  ### Affected Design Tokens (when tokens change)
  1. color.primary → used by 12 components
  2. typography.body → used by 8 components
  ```
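One possible reading of the weights and tiers above as code. The skill does not specify an exact combination formula, so treat both the additive scoring and the +10 no-tests penalty as assumptions:

```python
def impact_score(direct: int, transitive: int, entry_points: int,
                 has_tests: bool, tokens: int = 0) -> int:
    """Combine the Phase 3 weights: 3x direct, 1x transitive,
    5x per affected entry point, 2x per affected design token.
    The +10 no-tests penalty is an illustrative assumption."""
    score = 3 * direct + transitive + 5 * entry_points + 2 * tokens
    if not has_tests:
        score += 10
    return score


def risk_tier(score: int, downstream: int = 0) -> str:
    """Map a score to the tiers above; Critical also triggers on
    more than 20 downstream files regardless of score."""
    if score > 50 or downstream > 20:
        return "Critical"
    if score >= 20:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"
```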
## Harness Integration
- `harness scan` — Recommended before this skill for full graph-enhanced analysis. If the graph is missing, the skill uses static analysis fallbacks.
- `harness validate` — Run after acting on findings to verify project health.
- Graph tools — This skill uses the `query_graph`, `get_impact`, and `get_relationships` MCP tools.
## Success Criteria
- Impact report generated with a risk tier (Critical / High / Medium / Low)
- All affected test files listed with direct vs transitive classification
- All affected documentation files listed with relationship context
- Report follows the structured output format
- All findings are backed by graph query evidence (with graph) or systematic static analysis (without graph)
## Examples

### Example: Analyzing a Change to auth.ts
```
Input: git diff shows src/services/auth.ts modified

1. IDENTIFY — Extract changed file: src/services/auth.ts
2. ANALYZE —
   get_impact(filePath="src/services/auth.ts")
   query_graph(rootNodeIds=["file:src/services/auth.ts"], maxDepth=3)
   Results: 8 direct dependents, 23 transitive, 5 tests, 2 docs
3. ASSESS — Impact score: 34 (High tier)
   - Entry points affected: no
   - Tests exist: yes (5 files)

Output:
Risk tier: HIGH
Must-run tests: auth.test.ts, login.test.ts, auth-flow.test.ts
Docs to update: auth-guide.md, api-reference.md
Downstream consumers: 8 files across 3 modules
```
## Rationalizations to Reject
| Rationalization | Reality |
|---|---|
| "The change is small so the blast radius must be low -- I can skip the transitive dependent check" | Small changes to shared utilities can have outsized blast radius. A one-line change to auth.ts can affect 23 transitive dependents. |
| "The graph is a few commits behind but it is close enough for this analysis" | If the graph is more than 2 commits behind, the skill requires a refresh before proceeding. Recent commits may have added new consumers. |
| "No graph exists so I cannot produce a useful impact analysis" | The fallback strategy using import parsing and naming conventions achieves ~70% completeness. Missing the graph does not mean stopping. |
## Gates
- Graph preferred, fallback available. If no graph exists, use fallback strategies (import parsing, naming conventions, path matching). Do not stop — produce the best analysis possible with available tools.
- No risk assessment without data. Use graph queries when available; use import parsing and naming conventions when not. If neither approach yields data, state what is missing.
## Escalation
- When graph is stale: If the graph's last scan timestamp is older than the most recent commit, warn that results may be incomplete and suggest re-scanning.
- When impact is critical: If risk tier is Critical, recommend a thorough code review and full test suite run before merging.