claude-code-plugins-plus-skills · replit-performance-tuning

Install

Clone the upstream repo:

```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Or install the skill into ~/.claude/skills/ for Claude Code:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/replit-pack/skills/replit-performance-tuning" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-replit-performance-tuning && rm -rf "$T"
```

Manifest: plugins/saas-packs/replit-pack/skills/replit-performance-tuning/SKILL.md
Replit Performance Tuning
Overview
Optimize Replit app performance across the entire lifecycle: cold start reduction, Nix environment caching, build speed, runtime memory management, and deployment configuration. Replit containers have resource limits — efficient usage is critical.
Prerequisites
- Replit app deployed or running in Workspace
- Understanding of `.replit` and `replit.nix`
- Access to deployment monitoring
Instructions
Step 1: Reduce Cold Start Time
Autoscale deployments scale to zero when idle. First request triggers a cold start (10-30s). Minimize it:
```typescript
import express from 'express';
import { Pool } from 'pg';

const app = express();
const PORT = parseInt(process.env.PORT || '3000', 10);

// 1. Lazy-load heavy modules — only import when needed
// BAD: imports everything at startup
// import { heavyAnalytics } from './analytics'; // 500ms
// import { imageProcessor } from './images';    // 300ms

// GOOD: import on demand
app.get('/api/analyze', async (req, res) => {
  const { heavyAnalytics } = await import('./analytics');
  res.json(await heavyAnalytics.process(req.query));
});

// 2. Defer non-critical initialization
let dbPool: Pool | null = null;
function getDB(): Pool {
  if (!dbPool) {
    dbPool = new Pool({
      connectionString: process.env.DATABASE_URL,
      ssl: { rejectUnauthorized: false },
      max: 5, // Keep pool small for faster init
    });
  }
  return dbPool;
}

// 3. Start server immediately, initialize after
app.listen(PORT, '0.0.0.0', () => {
  console.log(`Server ready in ${process.uptime().toFixed(1)}s`);
  // Warm up in background after server is accepting requests
  warmup().catch(console.error);
});

async function warmup() {
  await getDB().query('SELECT 1'); // Pre-connect
}
```
Step 2: Optimize Nix Environment
```nix
# replit.nix — only include what you actually need

# BAD: kitchen-sink approach
# { pkgs }: {
#   deps = [
#     pkgs.nodejs-20_x pkgs.python311 pkgs.go pkgs.rustc
#     pkgs.cargo pkgs.postgresql pkgs.redis pkgs.imagemagick
#   ];
# }

# GOOD: minimal deps for a Node.js app
{ pkgs }: {
  deps = [
    pkgs.nodejs-20_x
    pkgs.nodePackages.typescript-language-server
  ];
  # Only add postgresql if you need the psql CLI:
  # pkgs.postgresql
}
```
```toml
# .replit — pin Nix channel for cache hits
[nix]
channel = "stable-24_05"
# Changing channel invalidates all Nix caches
# Only upgrade when needed
```
Step 3: Optimize Build Step
```toml
# .replit — fast production builds
[deployment]
build = ["sh", "-c", "npm ci --production && npm run build"]
run = ["sh", "-c", "node dist/index.js"]
```
```jsonc
// package.json — optimize build scripts
{
  "scripts": {
    "build": "tsc --incremental",
    "start": "node dist/index.js",
    "dev": "tsx watch src/index.ts"
  }
}
```
```jsonc
// tsconfig.json — incremental builds
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": ".tsbuildinfo",
    "skipLibCheck": true
  }
}
```
Tips for faster builds:
- Use `npm ci` (not `npm install`) — deterministic, faster
- Add `--production` to skip devDependencies
- Use TypeScript `--incremental` for rebuild caching
- Avoid `postinstall` scripts that compile native addons
Step 4: Memory Management
Replit containers have memory limits (512 MB to 16 GiB depending on plan/tier):
```typescript
// Monitor memory usage
function logMemory() {
  const usage = process.memoryUsage();
  const mb = (bytes: number) => Math.round(bytes / 1024 / 1024);
  console.log({
    heapUsed: `${mb(usage.heapUsed)} MB`,
    heapTotal: `${mb(usage.heapTotal)} MB`,
    rss: `${mb(usage.rss)} MB`,
    external: `${mb(usage.external)} MB`,
  });
}

// Check every 60 seconds
setInterval(logMemory, 60000);

// Expose via health endpoint
app.get('/health', (req, res) => {
  const mem = process.memoryUsage();
  res.json({
    status: 'ok',
    uptime: process.uptime(),
    memoryMB: Math.round(mem.heapUsed / 1024 / 1024),
    memoryPercent: ((mem.heapUsed / mem.heapTotal) * 100).toFixed(1),
  });
});
```
Memory optimization patterns:
```typescript
// Stream large files instead of loading into memory
import { createReadStream } from 'fs';

app.get('/download/:file', (req, res) => {
  const stream = createReadStream(`/tmp/${req.params.file}`);
  stream.pipe(res);
});

// Paginate database queries
app.get('/api/items', async (req, res) => {
  const page = parseInt(req.query.page as string) || 1;
  const limit = 50;
  const offset = (page - 1) * limit;
  const { rows } = await pool.query(
    'SELECT * FROM items ORDER BY id LIMIT $1 OFFSET $2',
    [limit, offset]
  );
  res.json({ items: rows, page, hasMore: rows.length === limit });
});

// Clear caches when memory is high
const cache = new Map<string, any>();
setInterval(() => {
  if (process.memoryUsage().heapUsed > 400 * 1024 * 1024) {
    cache.clear();
    console.log('Cache cleared due to high memory');
  }
}, 30000);
```
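Wholesale `cache.clear()` drops the entire working set at once. A bounded LRU cache caps memory growth continuously instead. A minimal sketch (the `LruCache` class is illustrative, not a Replit or Node API), exploiting `Map`'s insertion-order iteration:

```typescript
// A bounded LRU cache: once `max` entries are stored, inserting a new
// entry evicts the least-recently-used one, so memory stays capped.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private max: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark as most recently used
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Oldest entry is the first key in iteration order
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }

  get size(): number {
    return this.map.size;
  }
}
```

Sizing the cache by entry count is a rough proxy for bytes; pick `max` based on your typical entry size and the container's memory limit.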
Step 5: Database Connection Efficiency
```typescript
import { Pool } from 'pg';

// PostgreSQL pool tuning for Replit
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
  max: 5,                        // Small pool — containers are limited
  idleTimeoutMillis: 30000,      // Close idle connections after 30s
  connectionTimeoutMillis: 5000,
});

// Use connection pooling, never create per-request connections
// BAD: new Pool() per request
// GOOD: single pool, shared across requests
```
Step 6: Deployment Type Selection
| Scenario | Best Type | Why |
|---|---|---|
| < 100 daily requests | Autoscale | Free when idle |
| Consistent traffic | Reserved VM | No cold starts |
| Static frontend | Static | Fastest, cheapest |
| Latency-sensitive API | Reserved VM | Always warm |
| Cron jobs / webhooks | Reserved VM | Must be always-on |
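The choice above lands in `.replit`. A sketch for an Autoscale service (the `deploymentTarget` key and its accepted values are assumptions here; confirm against current Replit deployment docs):

```toml
# .replit deployment configuration (deploymentTarget values assumed)
[deployment]
deploymentTarget = "autoscale"  # "vm" for Reserved VM, "static" for static sites
build = ["sh", "-c", "npm ci --production && npm run build"]
run = ["sh", "-c", "node dist/index.js"]
```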
Error Handling
| Issue | Cause | Solution |
|---|---|---|
| Cold start > 15s | Heavy imports | Lazy-load, defer init |
| OOM killed | Exceeding memory limit | Stream data, reduce pool size |
| Build timeout | Slow `npm install` | Use `npm ci --production` |
| Slow first query | DB cold connection | Pre-connect in warmup() |
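For the "slow first query" row, the warmup pre-connect can fail if the database is not yet reachable after a cold start. A small retry-with-backoff wrapper makes it resilient; a sketch (`withRetry` is an illustrative helper, not a Replit or pg API):

```typescript
// Retry an async operation with exponential backoff: 200ms, 400ms, 800ms, ...
// Useful around the first database query after a cold start.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait before the next attempt, doubling the delay each time
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage with the warmup from Step 1:
// await withRetry(() => getDB().query('SELECT 1'));
```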
Next Steps
For cost optimization, see replit-cost-tuning.