Skillshub nodejs-performave-with-flame
Install

Source · Clone the upstream repo:

git clone https://github.com/ComeOnOliver/skillshub

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/Harmeet10000/skills/nodejs-performave-with-flame" ~/.claude/skills/comeonoliver-skillshub-nodejs-performave-with-flame && rm -rf "$T"
Manifest: skills/Harmeet10000/skills/nodejs-performave-with-flame/SKILL.md

Source content
Skill Name: Node.js Performance Architect (LLM-Friendly Profiling)
Description
The agent can ingest, interpret, and act upon pprof-based Markdown analysis generated by tools like @platformatic/flame, bridging the gap between low-level CPU/heap profiles and high-level architectural code fixes.
Contextual Knowledge (from Platformatic Blog)
- The Problem: Traditional flamegraphs are hard to search and require human expertise to prioritize hotspots.
- The Solution: The pprof-to-md format provides a structured, text-based representation of stack frames, "Self Time," and "Total Time" that LLMs can parse natively.
- Efficiency Gains: Systematic evals show that LLMs using this data achieve up to 144x throughput improvements (e.g., moving JSON parsing out of hot paths) and massive latency reductions (e.g., fixing O(n²) loops).
Agent Capabilities & Instructions
1. Profile Ingestion & Triage
When provided with a .md performance profile (Summary, Detailed, or Adaptive formats), the agent must:
- Identify Top Hotspots by ranking "Self Time" (time spent in the function itself) vs. "Total Time" (time spent in the function + its children).
- Distinguish between CPU Bottlenecks (heavy computation, regex, JSON parsing) and Heap/Memory Churn (excessive object allocation, large intermediate arrays).
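The triage step above can be sketched in plain Node.js. The row shape here (fn, selfMs, totalMs) is illustrative, not the real pprof-to-md schema, and the sample numbers are invented:

```javascript
// Hypothetical parsed profile rows (field names are assumptions for
// illustration, not the actual pprof-to-md table columns).
const hotspots = [
  { fn: 'JSON.parse', selfMs: 420, totalMs: 430 },
  { fn: 'handleRequest', selfMs: 15, totalMs: 900 },
  { fn: 'buildIndex', selfMs: 310, totalMs: 310 },
];

// High Self Time => the function itself is hot (likely CPU bottleneck).
// High Total Time with low Self Time => the cost lives in its children.
const bySelfTime = [...hotspots].sort((a, b) => b.selfMs - a.selfMs);
const topHotspot = bySelfTime[0].fn;
```

Here `handleRequest` would rank first by Total Time but last by Self Time, which is exactly why the two metrics must be read together.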
2. Pattern Recognition
The agent should specifically look for these "Platformatic-Verified" anti-patterns:
- The Middleware Trap: Parsing static config files or expensive JSON inside request handlers (Fix: Move to startup/singleton).
- The N+1 Async Loop: Sequential await calls in a loop (Fix: Use Promise.all()).
- Hidden Latency: Using expensive abstractions like the URL constructor or spread operators inside hot loops (Fix: Use simpler primitives or a Set for O(1) lookups).
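The N+1 async anti-pattern and its fix can be sketched as follows; fetchUser is a hypothetical stand-in for any async call (DB query, HTTP request):

```javascript
// Stand-in async call for illustration.
const fetchUser = async (id) => ({ id, name: `user-${id}` });

// Anti-pattern: each await blocks before the next call even starts,
// so total latency grows linearly with the number of ids.
async function sequential(ids) {
  const users = [];
  for (const id of ids) users.push(await fetchUser(id));
  return users;
}

// Fix: start all calls up front and await them together.
async function parallel(ids) {
  return Promise.all(ids.map(fetchUser));
}
```

Note that Promise.all rejects fast on the first failure; Promise.allSettled is the alternative when partial results are acceptable.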
3. Actionable Optimization Workflow
Upon identifying a bottleneck, the agent must:
- Locate the Source: Use the file paths and line numbers provided in the Markdown table.
- Hypothesize & Patch: Propose a code change (e.g., "Memoize this result," "Move this regex outside the function").
- Verify: Instruct the user to re-run flame run to confirm the fix actually shifted the hotspots in the next profile.
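A minimal sketch of the "memoize this result" and "move this regex outside the function" patches from the workflow above (memoize and isEmail are illustrative names, not part of any library):

```javascript
// Hoisted once at module load instead of being recompiled per call.
const EMAIL_RE = /^[^@\s]+@[^@\s]+$/;

// Generic memoizer for pure single-argument functions (illustrative).
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

// Repeated calls with the same input now hit the cache instead of
// re-running the regex.
const isEmail = memoize((s) => EMAIL_RE.test(s));
```

Memoization only helps when the function is pure and inputs repeat; after applying it, the re-profile step should show the hotspot's Self Time shrink.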
Example Prompt for Triggering this Skill
"I've attached a cpu-profile.md generated by Platformatic Flame. Based on the Top Hotspots, analyze my src/handler.js and provide a prioritized list of fixes. Specifically, look for any O(n²) operations or redundant I/O that could be moved to the initialization phase."
Technical Requirements for Execution
- Environment: Node.js 22.6.0+ (for ESM interoperability).
- Tooling: @platformatic/flame, latest version.
- API Usage: Use generateMarkdown('profile.pb', 'analysis.md', { format: 'detailed' }) for programmatic analysis.