
Node.js Performance Profiling

install
source · Clone the upstream repo
git clone https://github.com/Intense-Visions/harness-engineering
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/Intense-Visions/harness-engineering "$T" && mkdir -p ~/.claude/skills && cp -r "$T/agents/skills/codex/node-performance-profiling" ~/.claude/skills/intense-visions-harness-engineering-node-performance-profiling-a53956 && rm -rf "$T"
manifest: agents/skills/codex/node-performance-profiling/SKILL.md
source content

Node.js Performance Profiling

Profile Node.js applications using --prof, clinic.js, heap snapshots, and event loop lag monitoring

When to Use

  • Diagnosing slow API endpoints or high CPU usage
  • Finding memory leaks in long-running Node.js processes
  • Identifying event loop blocking and lag
  • Optimizing startup time and request throughput

Instructions

  1. Built-in CPU profiling with --prof:
node --prof app.js
# Generate human-readable output
node --prof-process isolate-*.log > profile.txt
  2. Chrome DevTools profiling:
node --inspect app.js
# Open chrome://inspect in Chrome
# Click "inspect" on the Node.js target
# Use the Performance and Memory tabs
  3. Measure event loop lag:
import { monitorEventLoopDelay } from 'node:perf_hooks';

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  console.log({
    min: histogram.min / 1e6, // ms
    max: histogram.max / 1e6,
    mean: histogram.mean / 1e6,
    p99: histogram.percentile(99) / 1e6,
  });
  histogram.reset();
}, 5000);
  4. Performance timing API:
import { performance, PerformanceObserver } from 'node:perf_hooks';

// Log each measure as it is recorded
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration}ms`);
  }
});
observer.observe({ entryTypes: ['measure'] });

performance.mark('start');
await processData(); // placeholder for the work being measured
performance.mark('end');

performance.measure('processData', 'start', 'end');

// Measures can also be queried directly
const [measure] = performance.getEntriesByName('processData');
console.log(`Duration: ${measure.duration}ms`);
  5. Memory usage monitoring:
function logMemory() {
  const { heapUsed, heapTotal, external, rss } = process.memoryUsage();
  const mb = (bytes) => `${(bytes / 1024 / 1024).toFixed(1)} MB`;
  console.log({
    heapUsed: mb(heapUsed),
    heapTotal: mb(heapTotal),
    external: mb(external),
    rss: mb(rss),
  });
}

setInterval(logMemory, 10_000);
  6. Heap snapshots for memory leak detection:
import v8 from 'node:v8';

function takeHeapSnapshot() {
  const filename = `heap-${Date.now()}.heapsnapshot`;
  // writeHeapSnapshot blocks the event loop and returns the filename it wrote
  const written = v8.writeHeapSnapshot(filename);
  console.log(`Heap snapshot written to ${written}`);
}

// Take snapshots at intervals to compare in Chrome DevTools
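In long-running services it is common to trigger snapshots on demand rather than on a timer. One approach, assuming a POSIX platform where the process can receive user signals, is a signal handler:

```javascript
import v8 from 'node:v8';

// Write a heap snapshot whenever the process receives SIGUSR2:
//   kill -USR2 <pid>
// Note: writeHeapSnapshot blocks the event loop while it runs, so avoid
// triggering it on a latency-sensitive process during peak load.
process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot(`heap-${Date.now()}.heapsnapshot`);
  console.log(`Heap snapshot written to ${file}`);
});
```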
  7. Clinic.js for automated profiling:
npx clinic doctor -- node app.js
# Diagnoses overall health (event loop, CPU, memory, handles)
npx clinic flame -- node app.js
# Generates a flamegraph for CPU profiling
npx clinic bubbleprof -- node app.js
# Visualizes async operations
  8. Common performance patterns:
// Cache expensive computations
const cache = new Map<string, Result>();

async function getCachedResult(key: string): Promise<Result> {
  if (cache.has(key)) return cache.get(key)!;
  const result = await expensiveComputation(key);
  cache.set(key, result);
  return result;
}

// Use setImmediate to yield to the event loop during CPU-bound work
async function processLargeArray(items: Item[]) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);
    if (i % 1000 === 0) {
      await new Promise((resolve) => setImmediate(resolve));
    }
  }
}
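Note that the Map cache in the pattern above is unbounded, which is itself the first leak pattern listed under Details below. A minimal sketch of a size-capped variant, exploiting the fact that Map preserves insertion order (BoundedCache is an illustrative name, not a library API):

```javascript
// Minimal size-capped cache: Map preserves insertion order, so the first
// key returned by keys() is the oldest entry.
class BoundedCache {
  constructor(maxSize = 1000) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    return this.map.get(key);
  }

  set(key, value) {
    if (this.map.size >= this.maxSize && !this.map.has(key)) {
      // Evict the oldest entry to cap memory usage
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
    this.map.set(key, value);
  }
}

const cache = new BoundedCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.set('c', 3); // evicts 'a'
console.log(cache.get('a'), cache.get('c')); // undefined 3
```

For true LRU behavior (evicting the least recently *read* entry), get() would also need to delete and re-insert the key; the sketch above only evicts by insertion age.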

Details

Node.js performance profiling identifies CPU bottlenecks, memory leaks, and event loop blocking that cause high latency or resource exhaustion.

Key metrics:

  • Event loop lag — how long event loop turns are delayed. Sustained lag above ~100ms indicates blocking synchronous code
  • Heap used — current memory allocation. Steadily growing indicates a memory leak
  • RSS — resident set size (total process memory). Includes heap, stack, and native objects
  • CPU time — user and system CPU time. High system time may indicate excessive I/O
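Of these metrics, CPU time is the only one not covered by the snippets above; it can be sampled with process.cpuUsage(), which reports user and system time in microseconds (the busy loop here is an illustrative workload):

```javascript
// Sample user/system CPU time consumed since a previous reading
const before = process.cpuUsage();

let sum = 0;
for (let i = 0; i < 1e6; i++) sum += Math.sqrt(i); // stand-in CPU work

const delta = process.cpuUsage(before); // values are in microseconds
console.log({
  userMs: delta.user / 1000,
  systemMs: delta.system / 1000,
});
```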

Flamegraphs: Visual representations of CPU time across call stacks. Wide bars indicate functions consuming significant CPU. Available through the Chrome DevTools Performance tab or clinic flame.

Memory leak patterns:

  • Growing Maps or Sets that are never cleared
  • Event listeners added in loops without removal
  • Closures capturing large scopes
  • Uncleared timers (setInterval without clearInterval)
  • Streams not properly destroyed
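As a sketch of the uncleared-timer pattern and its fix (MetricsReporter is a hypothetical class invented for illustration):

```javascript
// Leak: each reporter starts an interval that is never cleared, so the
// timer (and the large buffer its callback closes over) outlives the object.
class MetricsReporter {
  constructor() {
    this.buffer = new Array(100_000).fill(0); // large captured state
    this.timer = setInterval(() => this.flush(), 1000);
  }

  flush() {
    // ... periodically ship this.buffer somewhere ...
  }

  stop() {
    // Fix: clear the timer so the closure (and this.buffer) can be GC'd
    clearInterval(this.timer);
  }
}

const reporter = new MetricsReporter();
reporter.stop(); // without this, the interval keeps the process alive forever
```

The same shape applies to event listeners and streams: whatever a long-lived callback closes over is retained until the callback itself is deregistered.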

Trade-offs:

  • --prof is built-in — but produces hard-to-read output without processing
  • Chrome DevTools is powerful — but adds overhead that can affect results
  • clinic.js automates analysis — but is a development-only tool
  • Event loop monitoring reveals blocking — but cannot identify which function is blocking

Source

https://nodejs.org/en/docs/guides/simple-profiling

Process

  1. Read the instructions and examples in this document.
  2. Apply the patterns to your implementation, adapting to your specific context.
  3. Verify your implementation against the details and edge cases listed above.

Harness Integration

  • Type: knowledge — this skill is a reference document, not a procedural workflow.
  • No tools or state — consumed as context by other skills and agents.

Success Criteria

  • The patterns described in this document are applied correctly in the implementation.
  • Edge cases and anti-patterns listed in this document are avoided.