Skillshub nestjs-performance
Fastify adapter, Scope management, and Compression. Use when optimizing NestJS performance with Fastify, request-scoped providers, or compression. (triggers: main.ts, FastifyAdapter, compression, SINGLETON, REQUEST scope)
install
source · Clone the upstream repo
git clone https://github.com/ComeOnOliver/skillshub
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/HoangNguyen0403/agent-skills-standard/nestjs-performance" ~/.claude/skills/comeonoliver-skillshub-nestjs-performance && rm -rf "$T"
manifest:
skills/HoangNguyen0403/agent-skills-standard/nestjs-performance/SKILL.md · source content
Performance Tuning
Priority: P1 (OPERATIONAL)
High-performance patterns and optimization techniques for NestJS applications.
- Adapter: Use `FastifyAdapter` instead of Express (2x throughput).
- Compression: Enable Gzip/Brotli compression. With Fastify, register `@fastify/compress`; the Express-style `app.use(compression())` middleware does not apply.
- Keep-Alive: Configure `http.Agent` keep-alive settings to reuse TCP connections for upstream services.
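A minimal bootstrap for the adapter and compression bullets might look like the sketch below, assuming `@nestjs/platform-fastify` and `@fastify/compress` are installed and `AppModule` is your root module:

```typescript
// main.ts — bootstrap sketch (assumes @nestjs/platform-fastify
// and @fastify/compress are installed)
import { NestFactory } from '@nestjs/core';
import {
  FastifyAdapter,
  NestFastifyApplication,
} from '@nestjs/platform-fastify';
import compression from '@fastify/compress';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter(),
  );
  // Prefer Brotli when the client supports it, fall back to gzip.
  await app.register(compression, { encodings: ['br', 'gzip'] });
  await app.listen(3000, '0.0.0.0');
}
bootstrap();
```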
Scope & Dependency Injection
- Default Scope: Adhere to `SINGLETON` scope (the default).
- Request Scope: AVOID `REQUEST` scope unless absolutely necessary.
- Pro Tip: A single request-scoped service makes its entire injection chain request-scoped.
- Solution: Use Durable Providers (`durable: true`) for multi-tenancy.
- Lazy Loading: Use `LazyModuleLoader` for heavyweight modules (e.g., Admin panels).
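The scope options above can be sketched with plain `@nestjs/common` decorators; the service names here are hypothetical:

```typescript
import { Injectable, Scope } from '@nestjs/common';

// Default: one instance shared across the whole app — prefer this.
@Injectable() // equivalent to @Injectable({ scope: Scope.DEFAULT })
export class CatalogService {}

// Durable request scope for multi-tenancy: instances are memoized per
// tenant sub-tree instead of re-created on every request. Requires
// registering a ContextIdStrategy that maps requests to tenant context ids.
@Injectable({ scope: Scope.REQUEST, durable: true })
export class TenantService {}
```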
Caching Strategy
- Application Cache: Use `@nestjs/cache-manager` for computation results.
- Deep Dive: See Caching & Redis for L1/L2 strategies and invalidation patterns.
- HTTP Cache: Set `Cache-Control` headers for client-side caching (CDN/Browser).
- Distributed: In microservices, use the Redis store, not the in-memory store.
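The application-cache bullet is the cache-aside pattern. A framework-free sketch of it — `MemoryCache` and `cached` are illustrative stand-ins for the `Cache` instance you would inject via `CACHE_MANAGER` from `@nestjs/cache-manager`:

```typescript
// Cache-aside: check the cache, compute on miss, store with a TTL.
class MemoryCache {
  private store = new Map<string, { value: unknown; expires: number }>();

  get<T>(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expires < Date.now()) return undefined;
    return entry.value as T;
  }

  set(key: string, value: unknown, ttlMs: number): void {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }
}

async function cached<T>(
  cache: MemoryCache,
  key: string,
  ttlMs: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = cache.get<T>(key);
  if (hit !== undefined) return hit; // cache hit: skip the expensive work
  const value = await compute();     // cache miss: compute once
  cache.set(key, value, ttlMs);
  return value;
}
```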
Queues & Async Processing
- Offloading: Never block the HTTP request for long-running tasks (Emails, Reports, webhooks).
- Tool: Use `@nestjs/bull` (BullMQ) or RabbitMQ (`@nestjs/microservices`).
- Pattern: Producer (Controller) -> Queue -> Consumer (Processor).
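The Producer -> Queue -> Consumer pattern might be sketched as follows. This sketch assumes the `@nestjs/bullmq` wrapper (the BullMQ-native successor to `@nestjs/bull`); the `mail` queue name and job payload are illustrative:

```typescript
import { Injectable } from '@nestjs/common';
import { InjectQueue, Processor, WorkerHost } from '@nestjs/bullmq';
import { Job, Queue } from 'bullmq';

@Injectable()
export class MailProducer {
  constructor(@InjectQueue('mail') private readonly queue: Queue) {}

  // A controller calls this and can return 202 Accepted immediately.
  async enqueueWelcome(email: string): Promise<void> {
    await this.queue.add('welcome', { email }, { attempts: 3 });
  }
}

@Processor('mail')
export class MailConsumer extends WorkerHost {
  async process(job: Job<{ email: string }>): Promise<void> {
    // Long-running work (sending the email) happens here,
    // entirely off the HTTP request path.
  }
}
```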
Serialization
- Warning: `class-transformer` is CPU-expensive.
- Optimization: For high-throughput READ endpoints, consider manual mapping or `fast-json-stringify` (Fastify's built-in serialization) instead of interceptors.
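Manual mapping for a hot read path can be a plain function, with no reflection or decorators; `UserRow` and `UserDto` are illustrative shapes, not part of the skill:

```typescript
interface UserRow {
  id: number;
  email: string;
  passwordHash: string;
}

interface UserDto {
  id: number;
  email: string;
}

// Explicit field-by-field copy: no class-transformer reflection overhead,
// and secrets like passwordHash can never leak by accident.
function toUserDto(row: UserRow): UserDto {
  return { id: row.id, email: row.email };
}
```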
Database Tuning
- Projections: Always use `select: []` to fetch only needed columns.
- N+1: Prevent N+1 queries by using `relations` carefully, or `DataLoader` for GraphQL field resolvers.
- Connection Pooling: Configure pool size (e.g., `pool: { min: 2, max: 10 }`) in config to match DB limits.
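The `DataLoader` idea, batching every id requested in one tick into a single `WHERE id IN (...)` query, can be sketched without any library; this `TinyLoader` is a toy stand-in for the real `dataloader` package, and `batchFetch` stands in for the actual database call:

```typescript
// Collects keys requested synchronously and resolves them with ONE batch
// fetch per microtask tick, instead of one query per resolver call.
class TinyLoader<K, V> {
  private pending = new Map<K, ((v: V | undefined) => void)[]>();
  private scheduled = false;

  constructor(private batchFetch: (keys: K[]) => Promise<Map<K, V>>) {}

  load(key: K): Promise<V | undefined> {
    return new Promise((resolve) => {
      const resolvers = this.pending.get(key) ?? [];
      resolvers.push(resolve);
      this.pending.set(key, resolvers);
      if (!this.scheduled) {
        this.scheduled = true;
        queueMicrotask(() => this.flush()); // batch everything from this tick
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.pending;
    this.pending = new Map();
    this.scheduled = false;
    const results = await this.batchFetch([...batch.keys()]);
    for (const [key, resolvers] of batch) {
      resolvers.forEach((resolve) => resolve(results.get(key)));
    }
  }
}
```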
Profiling & Scaling
- API Overhead vs DB Execution: Use an "Execution Bucket" strategy to continuously benchmark `Total Duration`, `DB Execution Time`, and `API Overhead`.
- Total Baseline: Excellent (< 50ms), Acceptable (< 200ms), Poor (> 500ms). Exception: authentication routes (e.g. bcrypt/argon2) should take 300-500ms intentionally.
- DB Execution Baseline: Excellent (< 5ms), Acceptable (< 30ms), Poor (> 100ms; implies a missing index or an N+1 problem).
- API Overhead Baseline: Excellent (< 20ms), Poor (> 100ms; implies heavy synchronous processing or serialization blocking Node's event loop).
- Offloading: Move CPU-heavy tasks (image processing, crypto) to `worker_threads`.
- Clustering: For non-containerized environments, use `ClusterModule` to utilize all CPU cores. In K8s, prefer ReplicaSets.
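Offloading to `worker_threads` might look like the sketch below. It runs the worker from an inline script (`eval: true`) purely for self-containment; a real app would point `Worker` at a compiled worker file, and `fibInWorker` with its naive Fibonacci is just an illustrative CPU-heavy task:

```typescript
import { Worker } from 'node:worker_threads';

// Runs a CPU-heavy computation on a worker thread so the main
// event loop stays free to serve HTTP requests.
function fibInWorker(n: number): Promise<number> {
  const script = `
    const { parentPort, workerData } = require('node:worker_threads');
    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    parentPort.postMessage(fib(workerData));
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(script, { eval: true, workerData: n });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}
```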
Anti-Patterns
- No REQUEST scope without evaluation: One REQUEST-scoped provider makes the entire chain request-scoped.
- No CPU tasks in HTTP handler: Offload image/crypto work to `worker_threads` or BullMQ.
- No unprojected queries: Always use `select: []` for the needed columns to avoid serializing unused data.