Skillshub nodejs-performave-with-flame

install
source · Clone the upstream repo
git clone https://github.com/ComeOnOliver/skillshub
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/Harmeet10000/skills/nodejs-performave-with-flame" ~/.claude/skills/comeonoliver-skillshub-nodejs-performave-with-flame && rm -rf "$T"
manifest: skills/Harmeet10000/skills/nodejs-performave-with-flame/SKILL.md
source content

Skill Name: Node.js Performance Architect (LLM-Friendly Profiling)

Description

The agent can ingest, interpret, and act upon pprof-based Markdown analysis generated by tools like `@platformatic/flame`. It bridges the gap between low-level CPU/heap profiles and high-level architectural code fixes.

Contextual Knowledge (from Platformatic Blog)

  • The Problem: Traditional flamegraphs are hard to search and require human expertise to prioritize hotspots.
  • The Solution: The `pprof-to-md` format provides a structured, text-based representation of stack frames, "Self Time," and "Total Time" that LLMs can parse natively.
  • Efficiency Gains: Systematic evals show that LLMs using this data achieve up to 144x throughput improvements (e.g., moving JSON parsing out of hot paths) and massive latency reductions (e.g., fixing O(n²) loops).

Agent Capabilities & Instructions

1. Profile Ingestion & Triage

When provided with a `.md` performance profile (Summary, Detailed, or Adaptive format), the agent must:

  • Identify Top Hotspots by ranking "Self Time" (time spent in the function itself) vs. "Total Time" (time spent in the function + its children).
  • Distinguish between CPU Bottlenecks (heavy computation, regex, JSON parsing) and Heap/Memory Churn (excessive object allocation, large intermediate arrays).
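The triage step above can be sketched in a few lines. Note that the table layout below is an illustrative assumption, not the exact pprof-to-md output; adapt the column indices to the real file:

```javascript
// Hypothetical sketch of hotspot triage over a Markdown profile table.
// The column layout is an assumption for illustration only.
function parseHotspots(markdown) {
  return markdown
    .split('\n')
    // keep only data rows, skipping the header and the |---| separator
    .filter((line) => line.startsWith('|') && !line.includes('---') && !line.includes('Function'))
    .map((line) => {
      const [, fn, self, total] = line.split('|').map((cell) => cell.trim());
      return { fn, selfMs: parseFloat(self), totalMs: parseFloat(total) };
    })
    // rank by Self Time: time spent in the function body itself
    .sort((a, b) => b.selfMs - a.selfMs);
}

const table = [
  '| Function | Self (ms) | Total (ms) |',
  '| --- | --- | --- |',
  '| JSON.parse | 120.5 | 120.5 |',
  '| handleRequest | 4.2 | 180.0 |',
].join('\n');

const hotspots = parseHotspots(table);
// JSON.parse tops the ranking: high Self Time marks a CPU bottleneck,
// while handleRequest's high Total Time points at its children instead.
```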

2. Pattern Recognition

The agent should specifically look for these "Platformatic-Verified" anti-patterns:

  • The Middleware Trap: Parsing static config files or expensive JSON inside request handlers (Fix: Move to startup/singleton).
  • The N+1 Async Loop: Sequential `await` calls in a loop (Fix: Use `Promise.all()`).
  • Hidden Latency: Using expensive abstractions like the `URL` constructor or spread operators inside hot loops (Fix: Use simpler primitives or a `Set` for O(1) lookups).
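Two of the anti-patterns above can be sketched in a few lines; `fetchOne` and the ids are hypothetical stand-ins, not part of any real API:

```javascript
// The N+1 Async Loop: each iteration waits for the previous request.
async function fetchAllSequential(ids, fetchOne) {
  const results = [];
  for (const id of ids) {
    results.push(await fetchOne(id)); // total latency = sum of all calls
  }
  return results;
}

// The fix: start every request at once and await them together.
async function fetchAllParallel(ids, fetchOne) {
  return Promise.all(ids.map(fetchOne)); // total latency ~= slowest call
}

// Hidden Latency: prefer Set membership over Array.includes in hot loops.
const allowedMethods = new Set(['GET', 'POST', 'HEAD']); // built once
function isAllowed(method) {
  return allowedMethods.has(method); // O(1) per lookup
}
```

Note that `Promise.all()` rejects as soon as any input rejects; use `Promise.allSettled()` when partial failure must not abort the batch.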

3. Actionable Optimization Workflow

Upon identifying a bottleneck, the agent must:

  1. Locate the Source: Use the file paths and line numbers provided in the Markdown table.
  2. Hypothesize & Patch: Propose a code change (e.g., "Memoize this result," "Move this regex outside the function").
  3. Verify: Instruct the user to re-run `flame run` to confirm the fix actually shifted the hotspots in the next profile.
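The patch suggestions in step 2 can be illustrated with a minimal sketch; the function names here are hypothetical examples, not references to real code:

```javascript
// Patch idea 1: hoist a regex out of the hot function so the engine
// compiles it once at module load instead of on every call.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/;
function isUuid(value) {
  return UUID_RE.test(value);
}

// Patch idea 2: memoize a pure, single-argument function so repeated
// calls with the same input skip the expensive computation.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

const expensiveSquare = memoize((n) => n * n); // stand-in for real work
```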

Example Prompt for Triggering this Skill

"I've attached a `cpu-profile.md` generated by Platformatic Flame. Based on the Top Hotspots, analyze my `src/handler.js` and provide a prioritized list of fixes. Specifically, look for any O(n²) operations or redundant I/O that could be moved to the initialization phase."


Technical Requirements for Execution

  • Environment: Node.js 22.6.0+ (for ESM interoperability).
  • Tooling: `@platformatic/flame`, latest version.
  • API Usage: Use `generateMarkdown('profile.pb', 'analysis.md', { format: 'detailed' })` for programmatic analysis.
