Agent-skills-standard nestjs-performance
Optimize NestJS throughput with Fastify adapter, singleton scope enforcement, compression, and query projections. Use when switching to Fastify, diagnosing request-scoped bottlenecks, or profiling API overhead. (triggers: main.ts, FastifyAdapter, compression, SINGLETON, REQUEST scope)
install
source · Clone the upstream repo
git clone https://github.com/HoangNguyen0403/agent-skills-standard
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/HoangNguyen0403/agent-skills-standard "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/nestjs/nestjs-performance" ~/.claude/skills/hoangnguyen0403-agent-skills-standard-nestjs-performance && rm -rf "$T"
manifest:
skills/nestjs/nestjs-performance/SKILL.md
Performance Tuning
Priority: P1 (OPERATIONAL)
Workflow: Performance Audit
- Switch to Fastify — Replace Express with `FastifyAdapter` for ~2x throughput.
- Enable compression — Add Gzip/Brotli middleware.
- Audit provider scopes — Ensure no unintended `REQUEST` scope chains.
- Add query projections — Use `select: []` on all repository queries.
- Profile overhead — Benchmark Total Duration, DB Execution, and API Overhead.
Fastify + Compression Setup
- Keep-Alive: Configure `http.Agent` keep-alive settings to reuse TCP connections for upstream services.
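The adapter swap and compression registration above fit in a few lines of `main.ts`. A minimal sketch, assuming the official `@nestjs/platform-fastify` and `@fastify/compress` packages (verify versions against your NestJS release); the `keepAliveTimeout` value is an illustrative choice, not a prescribed one:

```typescript
import { NestFactory } from '@nestjs/core';
import {
  FastifyAdapter,
  NestFastifyApplication,
} from '@nestjs/platform-fastify';
import compression from '@fastify/compress';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    // Options are forwarded to fastify(); keepAliveTimeout lets clients
    // reuse TCP connections instead of paying a handshake per request.
    new FastifyAdapter({ keepAliveTimeout: 65_000 }),
  );

  // Negotiates Brotli/Gzip from the request's Accept-Encoding header.
  await app.register(compression, { encodings: ['br', 'gzip'] });

  await app.listen(3000, '0.0.0.0');
}
bootstrap();
```

Registering compression at the adapter level keeps it out of the Nest interceptor chain, so static and streamed responses are compressed too.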
Scope & Dependency Injection
- Default Scope: Adhere to `SINGLETON` scope (default).
- Request Scope: AVOID `REQUEST` scope unless absolutely necessary.
- Pro Tip: A single request-scoped service makes its entire injection chain request-scoped.
- Solution: Use Durable Providers (`durable: true`) for multi-tenancy.
- Lazy Loading: Use `LazyModuleLoader` for heavyweight modules (e.g., Admin panels).
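The three scope choices above map to one decorator option. A sketch with hypothetical service names; the `@Injectable` options shown (`Scope.REQUEST`, `durable: true`) are the real NestJS API, and durable providers additionally require registering a `ContextIdStrategy` that maps requests to tenants:

```typescript
import { Injectable, Scope } from '@nestjs/common';

// Default: one shared instance for the whole app (SINGLETON) — no option needed.
@Injectable()
export class PricingService {}

// Request scope: a fresh instance per request. Everything that injects this
// service becomes request-scoped too, so opt in deliberately.
@Injectable({ scope: Scope.REQUEST })
export class AuditContext {}

// Durable provider: instances are cached per context (e.g., per tenant) and
// reused across requests instead of being rebuilt every time.
@Injectable({ scope: Scope.REQUEST, durable: true })
export class TenantConnection {}
```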
Caching Strategy
- Application Cache: Use `@nestjs/cache-manager` for computation results.
- Deep Dive: See Caching & Redis for L1/L2 strategies and invalidation patterns.
- HTTP Cache: Set `Cache-Control` headers for client-side caching (CDN/Browser).
- Distributed: In microservices, use a Redis store, not the memory store.
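Both cache layers above can live on the same controller. A sketch with hypothetical route names; `CacheInterceptor`/`CacheTTL` are real exports of `@nestjs/cache-manager`, but note the TTL unit changed between cache-manager versions (seconds in v4, milliseconds in v5), so check yours:

```typescript
import { Controller, Get, Header, UseInterceptors } from '@nestjs/common';
import { CacheInterceptor, CacheTTL } from '@nestjs/cache-manager';

@Controller('reports')
export class ReportsController {
  // Application cache: the interceptor stores the response in cache-manager
  // (in-memory locally; swap in a Redis store when running multiple instances).
  @Get('summary')
  @UseInterceptors(CacheInterceptor)
  @CacheTTL(30) // unit depends on your cache-manager version — verify!
  getSummary() {
    return { computedAt: Date.now() };
  }

  // HTTP cache: let the browser/CDN reuse the response for 60 seconds.
  @Get('public')
  @Header('Cache-Control', 'public, max-age=60')
  getPublic() {
    return { items: [] };
  }
}
```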
Queues & Async Processing
- Offloading: Never block an HTTP request for long-running tasks (emails, reports, webhooks).
- Tool: Use `@nestjs/bull` (BullMQ) or RabbitMQ (`@nestjs/microservices`).
- Pattern: Producer (Controller) -> Queue -> Consumer (Processor).
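The Producer -> Queue -> Consumer pattern in `@nestjs/bull` terms, as a sketch — the queue name `reports`, job name `generate`, and DTO shape are hypothetical:

```typescript
import { InjectQueue, Processor, Process } from '@nestjs/bull';
import { Controller, Post, Body } from '@nestjs/common';
import { Queue, Job } from 'bull';

// Producer: the controller enqueues the job and returns immediately,
// keeping the HTTP request fast.
@Controller('reports')
export class ReportsController {
  constructor(@InjectQueue('reports') private readonly reportsQueue: Queue) {}

  @Post()
  async requestReport(@Body() dto: { userId: string }) {
    await this.reportsQueue.add('generate', dto, { attempts: 3 });
    return { status: 'queued' };
  }
}

// Consumer: runs outside the HTTP request lifecycle, retried on failure.
@Processor('reports')
export class ReportsProcessor {
  @Process('generate')
  async handleGenerate(job: Job<{ userId: string }>) {
    // long-running work (render report, send email, call webhook) goes here
  }
}
```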
Serialization
- Warning: `class-transformer` is CPU-expensive.
- Optimization: For high-throughput READ endpoints, consider manual mapping or `fast-json-stringify` (Fastify's built-in serialization) instead of interceptors.
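Manual mapping, the first option above, is just a plain function: no decorators, no reflection, and fields not copied are never serialized. A sketch with a hypothetical `User` entity (the same idea applies to `fast-json-stringify`, which compiles a JSON schema into a serializer once at startup):

```typescript
interface UserEntity {
  id: number;
  name: string;
  passwordHash: string; // must never leak into responses
}

// Manual mapping instead of class-transformer: explicit property picks,
// no prototype walking or metadata lookups on the hot path.
function toUserDto(user: UserEntity): { id: number; name: string } {
  return { id: user.id, name: user.name };
}
```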
Database Tuning
- Projections: Always use `select: []` to fetch only needed columns.
- N+1: Prevent N+1 queries by using `relations` carefully, or `DataLoader` for GraphQL field resolvers.
- Connection Pooling: Configure pool size (e.g., `pool: { min: 2, max: 10 }`) in config to match DB limits.
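A TypeORM-flavored sketch of the projection bullet, assuming a hypothetical `User` entity and repository; note that the pool-option key varies by ORM/driver (TypeORM + Postgres takes it via `extra`, which is handed to the underlying `pg` pool, while Knex-style configs use a top-level `pool` key as shown above):

```typescript
import { Repository } from 'typeorm';

interface User {
  id: number;
  email: string;
  name: string;
}

// Projection: ask the database for only the columns this endpoint serializes,
// instead of SELECT * followed by dropping fields in JavaScript.
async function listUsers(repo: Repository<User>): Promise<Partial<User>[]> {
  return repo.find({
    select: ['id', 'email'],
    take: 50,
  });
}

// Pool sizing sketch (TypeORM + Postgres):
// TypeOrmModule.forRoot({ type: 'postgres', extra: { min: 2, max: 10 }, /* ... */ })
```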
Profiling & Scaling
- API Overhead vs DB Execution: Use an "Execution Bucket" strategy to continuously benchmark Total Duration, DB Execution Time, and API Overhead.
- Total Baseline: Excellent (< 50ms), Acceptable (< 200ms), Poor (> 500ms). Exception: Authentication routes (e.g., bcrypt/argon2) should intentionally take 300-500ms.
- DB Execution Baseline: Excellent (< 5ms), Acceptable (< 30ms), Poor (> 100ms - implies missing index or N+1 problem).
- API Overhead Baseline: Excellent (< 20ms), Poor (> 100ms - implies heavy synchronous processing or serialization blocking Node's event loop).
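The baselines above translate directly into a small classifier for the "Execution Bucket" strategy. A sketch using this document's thresholds (the 200-500ms gray zone for Total Duration is collapsed into `poor` here for simplicity; tune per service):

```typescript
type Bucket = 'excellent' | 'acceptable' | 'poor';

// Total Duration baseline: < 50ms excellent, < 200ms acceptable, else poor.
function bucketTotalDuration(ms: number): Bucket {
  if (ms < 50) return 'excellent';
  if (ms < 200) return 'acceptable';
  return 'poor';
}

// DB Execution baseline: < 5ms excellent, < 30ms acceptable, else poor
// (poor usually means a missing index or an N+1 problem).
function bucketDbExecution(ms: number): Bucket {
  if (ms < 5) return 'excellent';
  if (ms < 30) return 'acceptable';
  return 'poor';
}

// API Overhead = everything that is not the database: routing, guards,
// serialization, synchronous work on the event loop.
function apiOverhead(totalMs: number, dbMs: number): number {
  return totalMs - dbMs;
}
```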
- Offloading: Move CPU-heavy tasks (image processing, crypto) to `worker_threads`.
- Clustering: For non-containerized environments, use `ClusterModule` to utilize all CPU cores. In K8s, prefer ReplicaSets.
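Offloading to `worker_threads` can be sketched with Node's standard library alone. A minimal example using a hypothetical CPU-heavy function (naive Fibonacci as a stand-in); the worker body is inlined via `eval: true` for brevity, whereas a real app would point `Worker` at a separate compiled file:

```typescript
import { Worker } from 'node:worker_threads';

// Run a CPU-bound computation off the main thread so the event loop
// stays free to serve HTTP requests.
function fibInWorker(n: number): Promise<number> {
  const src = `
    const { parentPort, workerData } = require('node:worker_threads');
    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    parentPort.postMessage(fib(workerData));
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(src, { eval: true, workerData: n });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}
```

For sustained load, a worker pool (e.g., the `piscina` package) avoids paying thread startup cost per task.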
Anti-Patterns
- No REQUEST scope without evaluation: One REQUEST-scoped provider makes the entire chain request-scoped.
- No CPU tasks in HTTP handlers: Offload image/crypto work to `worker_threads` or BullMQ.
- No unprojected queries: Always `select: []` the needed columns to avoid fetching and serializing unused data.