# agent-almanac · configure-nginx

## Install

Clone the upstream repo:

```sh
git clone https://github.com/pjt222/agent-almanac
```

Claude Code · install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/i18n/caveman/skills/configure-nginx" ~/.claude/skills/pjt222-agent-almanac-configure-nginx-602ed1 \
  && rm -rf "$T"
```

Manifest: `i18n/caveman/skills/configure-nginx/SKILL.md`
# Configure Nginx

Set up Nginx as a web server and reverse proxy with SSL termination and security hardening.

## When Use
- Serving static files (HTML, CSS, JS) in production
- Reverse proxying to backend services (Node.js, Python, Go, R/Shiny)
- Terminating SSL/TLS with Let's Encrypt certificates
- Load balancing across multiple backend instances
- Adding rate limiting, security headers
## Inputs
- Required: Deployment target (Docker container or bare metal)
- Required: Backend service(s) to proxy (host:port)
- Optional: Domain name for SSL
- Optional: Static file directory
## Steps

### Step 1: Basic Reverse Proxy

`nginx.conf`:

```nginx
events { worker_connections 1024; }

http {
    upstream app {
        server app:3000;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```

Docker Compose service:

```yaml
services:
  nginx:
    image: nginx:1.27-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
```
Got: Requests to port 80 forwarded to app service.
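Slow or streaming backends often need timeout and buffering adjustments beyond Nginx's defaults. A minimal sketch of directives that could be added inside the `location /` block above; the values are illustrative, not from the original config:

```nginx
location / {
    proxy_pass http://app;
    proxy_set_header Host $host;

    # Fail fast when the backend is down, but tolerate slow responses
    proxy_connect_timeout 5s;
    proxy_read_timeout    60s;
    proxy_send_timeout    60s;

    # Disable response buffering for streaming / SSE endpoints
    proxy_buffering off;
}
```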
### Step 2: Static File Serving

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /assets/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 6M;
        add_header Cache-Control "public";
    }
}
```
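Static sites usually benefit from compression as well. A hedged sketch of a gzip block that could sit alongside the server block above; thresholds and types are common choices, not part of the original:

```nginx
# Compress text assets before sending
gzip on;
gzip_comp_level 5;        # balance CPU vs. ratio
gzip_min_length 1024;     # skip tiny responses
gzip_types text/css application/javascript application/json image/svg+xml;
# text/html is always compressed when gzip is on
```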
### Step 3: SSL/TLS with Let's Encrypt

Using certbot with the webroot method:

```nginx
server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
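TLS handshakes can be made cheaper by reusing sessions. A sketch of extra directives for the `listen 443` server block; the values are common defaults, not from the source:

```nginx
# Reuse TLS sessions across connections (~4000 sessions per MB of cache)
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;   # avoid long-lived ticket-key reuse
```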
Docker Compose with certbot:

```yaml
services:
  nginx:
    image: nginx:1.27-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - certbot-webroot:/var/www/certbot:ro
      - certbot-certs:/etc/letsencrypt:ro
  certbot:
    image: certbot/certbot
    volumes:
      - certbot-webroot:/var/www/certbot
      - certbot-certs:/etc/letsencrypt

volumes:
  certbot-webroot:
  certbot-certs:
```

Initial certificate:

```sh
docker compose run --rm certbot certonly \
  --webroot -w /var/www/certbot \
  -d example.com --email admin@example.com --agree-tos
```
Got: HTTPS works with valid Let's Encrypt certificate.
If fail: Check DNS points to server. Verify port 80 open for ACME challenges.
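Let's Encrypt certificates expire after 90 days, so renewal needs automating (the Pitfalls section notes this). A sketch of a host crontab entry, assuming the compose project lives at a path like `/opt/app` (hypothetical):

```cron
# Attempt renewal twice a day; certbot only renews when close to expiry.
# Reload Nginx afterwards so it picks up the new certificate.
0 3,15 * * * cd /opt/app && docker compose run --rm certbot renew && docker compose exec nginx nginx -s reload
```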
### Step 4: Security Headers

```nginx
server {
    # ... SSL config above ...

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline';" always;

    # Hide Nginx version
    server_tokens off;
}
```
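One gotcha worth knowing: `add_header` directives are inherited from the enclosing level only when a block defines none of its own, so a single `add_header` in a `location` silently drops every header set in the `server` block. A sketch of the usual workaround using a snippet file; the filename and `/api/` path are our invention:

```nginx
# /etc/nginx/snippets/security-headers.conf (hypothetical path)
# holds the add_header lines from the server block above

server {
    include snippets/security-headers.conf;

    location /api/ {
        # This add_header alone would discard all inherited headers,
        # so re-include the shared set here too.
        include snippets/security-headers.conf;
        add_header Cache-Control "no-store" always;
    }
}
```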
### Step 5: Rate Limiting

```nginx
http {
    # Define rate limit zones
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    server {
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://app;
        }

        location /login {
            limit_req zone=login burst=5;
            proxy_pass http://app;
        }
    }
}
```
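By default a tripped limit returns 503; 429 Too Many Requests is the more accurate status for clients. A sketch of the extra directives, applied to the `/api/` location from the config above:

```nginx
location /api/ {
    limit_req zone=api burst=20 nodelay;
    limit_req_status 429;        # default is 503
    limit_req_log_level warn;    # log rejections at warn instead of error
    proxy_pass http://app;
}
```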
### Step 6: Load Balancing

```nginx
upstream app {
    least_conn;
    server app1:3000;
    server app2:3000;
    server app3:3000 backup;
}
```
| Method | Directive | Behavior |
|---|---|---|
| Round robin | (default) | Equal distribution |
| Least connections | `least_conn;` | Routes to least busy |
| IP hash | `ip_hash;` | Sticky sessions |
| Weighted | `server … weight=n;` | Proportional |
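These directives compose. A sketch combining the table's methods with passive health-check parameters; the weights and timings are illustrative:

```nginx
upstream app {
    least_conn;                                     # route new requests to the least-busy backend
    server app1:3000 weight=3;                      # receives roughly 3x the share of app2
    server app2:3000 max_fails=3 fail_timeout=30s;  # mark down for 30s after 3 failures
    server app3:3000 backup;                        # used only when the others are unavailable
}
```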
### Step 7: Test Configuration

```sh
# Test config syntax
docker compose exec nginx nginx -t

# Reload without downtime
docker compose exec nginx nginx -s reload

# Check response headers
curl -I https://example.com
```
Got: `nginx -t` reports syntax OK. Response includes security headers.
## Checks

- `nginx -t` reports configuration valid
- HTTP redirects to HTTPS (if SSL enabled)
- Backend service reachable through proxy
- Security headers present in response
- Rate limiting triggers on excessive requests
- SSL Labs test gives A+ rating (if public)
## Pitfalls

- Missing `proxy_set_header Host`: backend receives wrong host header, breaking virtual hosts and redirects.
- `location` order matters: Nginx uses the most specific match. Exact (`=`) > prefix (`^~`) > regex (`~`) > general prefix.
- SSL certificate renewal: set up a cron job or timer to run `certbot renew`, then reload Nginx.
- Large request bodies: default `client_max_body_size` is 1 MB. Increase for file uploads: `client_max_body_size 50m;`.
- WebSocket proxying: requires additional headers. See `configure-reverse-proxy` for the pattern.
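The WebSocket pitfall can be sketched with the standard `map` trick, which sets the right `Connection` header depending on whether the client asked to upgrade. The `/ws/` path is an assumption for illustration:

```nginx
# In the http context: map the client's Upgrade header to a Connection value
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location /ws/ {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # WebSocket connections are long-lived; raise the idle timeout
        proxy_read_timeout 1h;
    }
}
```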
## See Also

- `configure-reverse-proxy` · multi-tool proxy patterns including WebSocket and Traefik
- `setup-compose-stack` · compose stack that includes Nginx
- `deploy-searxng` · uses Nginx as frontend for SearXNG
- `configure-ingress-networking` · Kubernetes ingress (NGINX Ingress Controller)