**claude-skill-registry / deploy-render** — Provides comprehensive Render.com deployment standards covering environment configuration, database migrations, cron jobs, health checks, log management, and production best practices for web services.

Installation:

```bash
# Clone the full registry
git clone https://github.com/majiayu000/claude-skill-registry

# Or install just this skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/deploy-render" ~/.claude/skills/majiayu000-claude-skill-registry-deploy-render && rm -rf "$T"
```

Source: `skills/data/deploy-render/SKILL.md`

# Render.com Deployment Standards
This skill provides complete guidelines for deploying applications to Render.com, covering all aspects from initial setup to production monitoring.
## Pre-Deployment Checklist

### Repository Requirements

- Code pushed to GitHub/GitLab/Bitbucket
- `package.json` with correct start script
- Build command configured (if using a build step)
- `.gitignore` includes `.env`, `node_modules`, build artifacts
- Dependencies properly listed (not in `devDependencies` if needed for production)
- Database migrations ready (if applicable)
- Health check endpoint implemented
### Environment Preparation
- Production environment variables documented
- Secrets stored securely (not in repo)
- Database connection strings prepared
- Third-party API keys obtained
- Domain/subdomain configured (if using custom domain)
## Service Configuration

### Web Service Setup
Basic Configuration:
```yaml
# render.yaml (Infrastructure as Code - optional but recommended)
services:
  - type: web
    name: my-app
    env: node
    region: oregon
    plan: starter
    buildCommand: npm run build
    startCommand: npm start
    envVars:
      - key: NODE_ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres-db
          property: connectionString
      - key: API_KEY
        sync: false # Secret - set manually in dashboard
    healthCheckPath: /api/health
    autoDeploy: true
```
Key Settings:
| Setting | Recommended Value | Notes |
|---|---|---|
| Environment | Node, Docker, Python, etc. | Based on your stack |
| Region | oregon, frankfurt, singapore | Choose closest to users |
| Plan | starter → standard → pro | Scale based on traffic |
| Build Command | `npm run build` | Empty if no build step |
| Start Command | `npm start` | Must be defined |
| Auto-Deploy | `true` | Deploy on git push |
| Health Check | `/api/health` | Critical for zero-downtime |
### Environment Variables
Required Variables for Next.js:
```bash
# Core
NODE_ENV=production
PORT=10000  # Render assigns this automatically

# Application
NEXT_PUBLIC_SITE_URL=https://your-app.onrender.com
NEXTAUTH_URL=https://your-app.onrender.com
NEXTAUTH_SECRET=your-secret-key-min-32-chars

# Database (if using Render PostgreSQL)
DATABASE_URL=${DATABASE_URL}  # Auto-populated from database connection

# Supabase (example)
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

# External APIs
STRIPE_SECRET_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
SENDGRID_API_KEY=SG...
```
Setting Variables:
Via Dashboard:
- Go to your service → Environment
- Add environment variables
- Mark sensitive ones as "Secret"
Via render.yaml:
```yaml
envVars:
  - key: NODE_ENV
    value: production
  - key: DATABASE_URL
    fromDatabase:
      name: my-postgres-db
      property: connectionString
  - key: API_SECRET
    generateValue: true # Auto-generate random value
  - key: STRIPE_KEY
    sync: false # Must set manually (secret)
```
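However the variables are set, it is worth failing fast at boot if a required one is missing, rather than crashing later with an opaque error. A minimal sketch (the helper name and variable list are our own illustration, not part of Render):

```javascript
// config/env.js - fail fast at startup if required variables are missing.
// The variable list below is illustrative; adjust to your app.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => env[name]);
}

module.exports = { requireEnv };

// At boot, before anything else:
// requireEnv(['NODE_ENV', 'DATABASE_URL', 'NEXTAUTH_SECRET']);
```

A failed check surfaces immediately in the Render deploy logs instead of as intermittent runtime errors.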
### Build & Start Commands
Next.js:
```yaml
buildCommand: npm install && npm run build
startCommand: npm start
```
Vite/React:
```yaml
buildCommand: npm install && npm run build
startCommand: npx serve -s dist -l $PORT
```
Node.js/Express:
```yaml
buildCommand: npm install
startCommand: npm start
```
Docker:
```yaml
dockerfilePath: ./Dockerfile
dockerCommand: npm start
```
TypeScript:
```yaml
buildCommand: npm install && npm run build
startCommand: node dist/index.js
```
## Database Configuration

### PostgreSQL Setup
Create Database:
- Dashboard → New → PostgreSQL
- Select region (same as web service)
- Choose plan (Starter is free)
- Database created with auto-generated credentials
Connect to Web Service:
```yaml
# render.yaml
databases:
  - name: my-postgres-db
    plan: starter
    region: oregon
    databaseName: myapp_db
    user: myapp_user

services:
  - type: web
    name: my-app
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres-db
          property: connectionString
```
Manual Connection String:
```
postgresql://user:password@host:port/database
```
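When a connection fails, it helps to confirm each part of the string is what you expect. Node's built-in `URL` class can split a PostgreSQL connection string into its components (the hostname below is a made-up example of Render's format):

```javascript
// Debugging a misconfigured DATABASE_URL: the WHATWG URL class parses
// postgres-style URLs into their parts. Host below is illustrative.
const dbUrl = new URL('postgresql://myapp_user:s3cret@dpg-example.oregon-postgres.render.com:5432/myapp_db');

console.log(dbUrl.username);          // myapp_user
console.log(dbUrl.hostname);          // dpg-example.oregon-postgres.render.com
console.log(dbUrl.port);              // 5432
console.log(dbUrl.pathname.slice(1)); // myapp_db (strip the leading "/")
```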
### Redis Setup (for caching/sessions)
```yaml
# render.yaml
services:
  - type: redis
    name: my-redis
    plan: starter
    region: oregon
    maxmemoryPolicy: allkeys-lru
```
Environment Variable:
```bash
REDIS_URL=${REDIS_URL}  # Auto-populated
```
## Database Migrations

### Prisma Migration Strategy
Option 1: Run migrations in build command (Recommended)
```yaml
buildCommand: npm install && npx prisma generate && npx prisma migrate deploy && npm run build
```
Option 2: Separate migration job
```yaml
# render.yaml
services:
  - type: worker
    name: migration-runner
    env: node
    buildCommand: npm install && npx prisma generate
    startCommand: npx prisma migrate deploy && exit 0
    plan: starter
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres-db
          property: connectionString
```
### Drizzle Migration
```yaml
buildCommand: npm install && npm run db:migrate && npm run build
```
Migration script in package.json:
```json
{
  "scripts": {
    "db:migrate": "drizzle-kit push:pg",
    "db:generate": "drizzle-kit generate:pg"
  }
}
```
### Manual Migrations
For complex migrations, use a separate job:
```javascript
// In your repo, create migrate.js
const { Pool } = require('pg');
const fs = require('fs');

async function runMigrations() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const sql = fs.readFileSync('./migrations/001_initial.sql', 'utf8');
  await pool.query(sql);
  await pool.end();
  console.log('Migrations complete');
}

runMigrations().catch((err) => {
  console.error('Migration failed:', err);
  process.exit(1); // Non-zero exit so the job is marked failed
});
```
```yaml
# render.yaml
jobs:
  - type: cron
    name: run-migrations
    schedule: "@manual" # Run manually
    command: node migrate.js
```
## Cron Jobs & Background Tasks

### Cron Job Configuration
```yaml
# render.yaml
services:
  - type: cron
    name: daily-cleanup
    schedule: "0 2 * * *" # Every day at 2 AM UTC
    env: node
    buildCommand: npm install
    startCommand: node scripts/cleanup.js
    region: oregon
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres-db
          property: connectionString
```
Common Cron Schedules:
| Schedule | Expression | Description |
|---|---|---|
| Every hour | `0 * * * *` | At minute 0 |
| Every 6 hours | `0 */6 * * *` | At 00:00, 06:00, 12:00, 18:00 |
| Daily at 2 AM | `0 2 * * *` | Every day at 2:00 AM UTC |
| Weekly (Monday) | `0 0 * * 1` | Monday at midnight |
| Monthly (1st) | `0 0 1 * *` | 1st of month at midnight |
Cron Expression Format:
```
* * * * *
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, 0 and 7 = Sunday)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)
```
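A cleanup job like `daily-cleanup` is easier to test if the retention rule is kept as a pure function, separate from the database call. A sketch (the 30-day cutoff and record shape are illustrative assumptions):

```javascript
// scripts/cleanup-logic.js - illustrative retention rule for a daily
// cleanup cron. The cutoff and record shape are assumptions.
function staleRecords(records, now = new Date(), maxAgeDays = 30) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  // Return everything older than the cutoff, for deletion.
  return records.filter((r) => new Date(r.createdAt).getTime() < cutoff);
}

module.exports = { staleRecords };

// cleanup.js would then delete the returned ids, e.g.:
// pool.query('DELETE FROM sessions WHERE id = ANY($1)', [ids]);
```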
### Background Workers
```yaml
# render.yaml
services:
  - type: worker
    name: email-worker
    env: node
    buildCommand: npm install
    startCommand: node workers/email-processor.js
    plan: starter
    envVars:
      - key: REDIS_URL
        fromService:
          type: redis
          name: my-redis
          property: connectionString
```
Example Worker (Bull Queue):
```javascript
// workers/email-processor.js
import Queue from 'bull';

const emailQueue = new Queue('email', process.env.REDIS_URL);

emailQueue.process(async (job) => {
  const { to, subject, body } = job.data;
  // Send email logic
  console.log(`Sending email to ${to}`);
});

console.log('Email worker running...');
```
## Health Checks

### Implementation
Express.js:
```typescript
// routes/health.ts
app.get('/api/health', async (req, res) => {
  try {
    // Check database connection
    await db.query('SELECT 1');
    // Check Redis (if applicable)
    await redis.ping();

    res.status(200).json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
      uptime: process.uptime(),
      database: 'connected',
      redis: 'connected'
    });
  } catch (error) {
    res.status(503).json({ status: 'unhealthy', error: error.message });
  }
});
```
Next.js API Route:
```typescript
// app/api/health/route.ts
import { NextResponse } from 'next/server';

export async function GET() {
  try {
    // Basic health check
    return NextResponse.json({
      status: 'healthy',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    return NextResponse.json(
      { status: 'unhealthy', error: error.message },
      { status: 503 }
    );
  }
}
```
Configuration in Render:
```yaml
healthCheckPath: /api/health
```
Or via Dashboard:
- Service Settings → Health Check
- Path: `/api/health`
- Render will ping every 30 seconds
- 3 failed checks = service marked unhealthy
## Logging & Monitoring

### Structured Logging
```typescript
// utils/logger.ts
const logger = {
  info: (message: string, meta?: any) => {
    console.log(JSON.stringify({
      level: 'info',
      message,
      timestamp: new Date().toISOString(),
      ...meta
    }));
  },
  error: (message: string, error?: Error, meta?: any) => {
    console.error(JSON.stringify({
      level: 'error',
      message,
      error: error?.message,
      stack: error?.stack,
      timestamp: new Date().toISOString(),
      ...meta
    }));
  },
  warn: (message: string, meta?: any) => {
    console.warn(JSON.stringify({
      level: 'warn',
      message,
      timestamp: new Date().toISOString(),
      ...meta
    }));
  }
};

export default logger;
```
Usage:
```typescript
logger.info('User logged in', { userId: user.id });
logger.error('Database connection failed', error);
```
### Viewing Logs
Via Dashboard:
- Go to your service
- Click "Logs" tab
- View real-time logs
- Filter by date/time
Via CLI:
```bash
# Install Render CLI
npm install -g @render/cli

# Login
render login

# View logs
render logs my-app --tail
render logs my-app --since 1h
```
### Log Aggregation (Advanced)
Integration with LogDNA/Datadog:
```javascript
// Log forwarding
const winston = require('winston');
const logForwarder = require('logdna-winston');

const logger = winston.createLogger({
  transports: [
    new logForwarder({
      key: process.env.LOGDNA_KEY,
      app: 'my-app',
      env: process.env.NODE_ENV
    })
  ]
});
```
## Custom Domains

### Setup Steps
1. **Add domain in Render:**
   - Service Settings → Custom Domain
   - Enter your domain (e.g., `app.yourdomain.com`)
2. **Configure DNS:**
   - Add a CNAME record pointing to Render:
   ```
   Type: CNAME
   Name: app (or www)
   Value: your-app.onrender.com
   TTL: 3600
   ```
3. **SSL Certificate:**
   - Automatically provisioned by Render (Let's Encrypt)
   - Takes 5-10 minutes after DNS propagation
### Apex Domain (yourdomain.com)

```
Type: ALIAS or ANAME (if provider supports)
Name: @
Value: your-app.onrender.com
```

Or use A records (provided by Render in the dashboard).
### Force HTTPS

```typescript
// middleware.ts (Next.js)
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const proto = request.headers.get('x-forwarded-proto');
  if (proto !== 'https') {
    return NextResponse.redirect(
      `https://${request.headers.get('host')}${request.nextUrl.pathname}`,
      301
    );
  }
  return NextResponse.next();
}
```
## Scaling Configuration

### Horizontal Scaling
```yaml
# render.yaml
services:
  - type: web
    name: my-app
    scaling:
      minInstances: 1
      maxInstances: 10
      targetMemoryPercent: 80
      targetCPUPercent: 70
```
Manual Scaling (via Dashboard):
- Service Settings → Scaling
- Adjust instance count
- Immediate effect
### Vertical Scaling
Upgrade Plan:
- Starter (512 MB RAM, 0.5 CPU)
- Standard (2 GB RAM, 1 CPU)
- Pro (4 GB RAM, 2 CPU)
- Pro Plus (8 GB RAM, 4 CPU)
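Before paying for a bigger plan, check what the service actually uses. Node's built-in `process.memoryUsage()` reports current consumption, and a periodic log line makes the trend visible in Render's logs (the 60-second interval is an arbitrary choice):

```javascript
// Log heap/RSS so the Render logs show whether the current plan's
// RAM is actually being exhausted.
function memorySnapshot() {
  const { rss, heapUsed } = process.memoryUsage();
  const toMB = (bytes) => Math.round(bytes / 1024 / 1024);
  return { rssMB: toMB(rss), heapUsedMB: toMB(heapUsed) };
}

// Emit once a minute alongside your structured logs:
// setInterval(() => console.log(JSON.stringify(memorySnapshot())), 60_000);
console.log(JSON.stringify(memorySnapshot()));
```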
## Zero-Downtime Deployments

### Strategy
Render performs zero-downtime deployments automatically:
- New version built
- Health check passes on new instance
- Traffic gradually shifted to new instance
- Old instance terminated after drain period
Ensure zero-downtime:
- Health check endpoint returns 200
- Database migrations are backwards-compatible
- No breaking API changes
### Rollback
Via Dashboard:
- Service → Deploys
- Find previous successful deploy
- Click "Rollback to this version"
Via Git:

```bash
# Revert the commit and push
git revert HEAD
git push origin main
```
## Deployment Triggers

### Auto-Deploy
Enable:
```yaml
services:
  - type: web
    name: my-app
    autoDeploy: true # Deploy on every push to main
    branch: main
```
Manual Deploy:
```yaml
autoDeploy: false
```
Deploy manually via:
- Dashboard → "Manual Deploy"
- CLI: `render deploy my-app`
### Deploy Hooks
Webhook URL:
- Service Settings → Deploy Hook
- Copy webhook URL
- Trigger deploys via HTTP POST
```bash
curl -X POST https://api.render.com/deploy/srv-xxx?key=xxx
```
Use cases:
- CI/CD pipelines
- Automated workflows
- External monitoring systems
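For the CI/CD case, a pipeline step can POST to the hook after tests pass. A hypothetical GitHub Actions workflow (the workflow layout and secret name `RENDER_DEPLOY_HOOK_URL` are assumptions, not Render-provided names):

```yaml
# .github/workflows/deploy.yml (illustrative)
name: Deploy to Render
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Render deploy hook
        run: curl -X POST "${{ secrets.RENDER_DEPLOY_HOOK_URL }}"
```

Store the hook URL as a secret; anyone who has it can trigger deploys.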
## Production Best Practices

### Security
- Use environment secrets for sensitive data
- Enable HTTPS (automatic with Render)
- Set secure headers (helmet.js for Node)
- Implement rate limiting
- Use CORS properly
- Keep dependencies updated
- Run security audits (`npm audit`)
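Rate limiting is usually handled by middleware such as `express-rate-limit`, but at its core it is a fixed-window counter. A minimal sketch of the idea, with no framework (the window size and limit are illustrative, and this is not production-grade):

```javascript
// Minimal fixed-window rate limiter - a sketch of the idea behind
// middleware like express-rate-limit, not a production implementation.
function createRateLimiter({ windowMs = 60_000, max = 100 } = {}) {
  const hits = new Map(); // key (e.g. client IP) -> { count, windowStart }

  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // false -> caller should respond 429
  };
}
```

In production prefer a battle-tested library, and back the counters with Redis if you scale past one instance, since an in-memory map is per-instance.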
### Performance
- Enable compression (gzip/brotli)
- Implement caching (Redis, CDN)
- Optimize database queries (indexes, connection pooling)
- Use CDN for static assets
- Monitor response times
- Set appropriate timeouts
### Reliability
- Implement health checks
- Set up error tracking (Sentry, Rollbar)
- Configure alerting
- Test deployments in staging first
- Document rollback procedures
- Monitor resource usage (CPU, memory)
### Cost Optimization
- Use starter plan for low-traffic services
- Scale down non-production environments
- Use suspending services for dev/staging
- Optimize build times (caching)
- Monitor bandwidth usage
## Troubleshooting

### Common Issues
Build Fails:
Check the logs for the error. Common causes:
- Missing dependencies in `package.json`
- Build command incorrect
- Out of memory (upgrade plan)
Service Won't Start:
Verify:
- Start command is correct
- Port binding: `app.listen(process.env.PORT || 10000)`
- Environment variables set correctly
Database Connection Fails:
Check:
- `DATABASE_URL` is set
- Database is in the same region
- IP allowlist (not needed on Render)
- Connection pool limits
Health Check Fails:
Verify:
- `/api/health` endpoint exists
- Returns 200 status
- Responds within 10 seconds
- No dependencies fail (DB, Redis)
### Monitoring Checklist
- Health check endpoint responding
- Logs streaming properly
- Deployment notifications configured
- Error tracking integrated (Sentry, etc.)
- Database performance monitored
- Uptime monitoring (UptimeRobot, Pingdom)
- SSL certificate valid
- Custom domain resolving correctly
- Backup strategy in place (database)
- Disaster recovery plan documented
## Resources

**Critical Reminder:** Always test deployments in a staging environment before pushing to production. Keep deployment scripts and documentation updated as your application evolves.