# Marketplace: deploying-postgres-k8s

## Install

Clone the upstream repo:

```bash
git clone https://github.com/aiskillstore/marketplace
```

Claude Code: install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && \
  git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && \
  mkdir -p ~/.claude/skills && \
  cp -r "$T/skills/asmayaseen/deploying-postgres-k8s" ~/.claude/skills/aiskillstore-marketplace-deploying-postgres-k8s && \
  rm -rf "$T"
```

Manifest: `skills/asmayaseen/deploying-postgres-k8s/SKILL.md`
# Deploying PostgreSQL on Kubernetes
Deploy production-ready PostgreSQL clusters using CloudNativePG operator (v1.28+) with automated failover.
## Quick Start

```bash
# 1. Install CloudNativePG operator
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.0.yaml

# 2. Wait for operator
kubectl rollout status deployment -n cnpg-system cnpg-controller-manager

# 3. Deploy PostgreSQL cluster
kubectl apply -f - <<EOF
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-cluster
spec:
  instances: 3
  storage:
    size: 10Gi
EOF

# 4. Wait for cluster
kubectl wait cluster/pg-cluster --for=condition=Ready --timeout=300s
```
## Operator Installation

### Direct Manifest (Recommended)

```bash
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.0.yaml

# Verify
kubectl rollout status deployment -n cnpg-system cnpg-controller-manager
kubectl get pods -n cnpg-system
```
### Helm Installation

```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm upgrade --install cnpg \
  --namespace cnpg-system \
  --create-namespace \
  cnpg/cloudnative-pg
```

### Namespace-Scoped (Enhanced Security)

```bash
helm upgrade --install cnpg \
  --namespace cnpg-system \
  --create-namespace \
  --set config.clusterWide=false \
  cnpg/cloudnative-pg
```
## Cluster Configurations

### Development (Single Instance)

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-dev
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql:17.2
  primaryUpdateStrategy: unsupervised
  storage:
    size: 5Gi
  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: "256MB"
```
### Production (HA with 3 Replicas)

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-production
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:17.2
  primaryUpdateStrategy: unsupervised
  storage:
    storageClass: standard
    size: 100Gi
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: "1GB"
      effective_cache_size: "3GB"
      maintenance_work_mem: "256MB"
      checkpoint_completion_target: "0.9"
      wal_buffers: "16MB"
      default_statistics_target: "100"
      random_page_cost: "1.1"
      effective_io_concurrency: "200"
  affinity:
    podAntiAffinityType: required  # Spread across nodes
  monitoring:
    enablePodMonitor: true
```
### With Bootstrap Database

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-cluster
spec:
  instances: 3
  storage:
    size: 10Gi
  bootstrap:
    initdb:
      database: learnflow
      owner: app_user
      secret:
        name: app-user-secret
```
Create the secret first:
```bash
kubectl create secret generic app-user-secret \
  --from-literal=username=app_user \
  --from-literal=password=$(openssl rand -hex 16)
```
## Connection Secrets
CloudNativePG automatically creates connection secrets:
| Secret | Contents |
|---|---|
| `pg-cluster-app` | App credentials (recommended) |
| `pg-cluster-superuser` | Superuser credentials |
### Get Connection String

```bash
# Get app credentials
kubectl get secret pg-cluster-app -o jsonpath='{.data.uri}' | base64 -d

# Get superuser credentials (admin tasks only)
kubectl get secret pg-cluster-superuser -o jsonpath='{.data.uri}' | base64 -d
```
### Use in Deployment

```yaml
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: pg-cluster-app
        key: uri
```
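For context, that `env` fragment plugs into an ordinary Deployment. A minimal sketch follows; the app name and image are hypothetical placeholders, and only the `secretKeyRef` wiring comes from this skill:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest  # hypothetical image
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: pg-cluster-app  # secret created by CloudNativePG
                  key: uri
```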
## Service Endpoints

| Service | Port | Use |
|---|---|---|
| `pg-cluster-rw` | 5432 | Read-Write (primary) |
| `pg-cluster-ro` | 5432 | Read-Only (replicas) |
| `pg-cluster-r` | 5432 | Any instance |
### Connect from Another Namespace

```yaml
env:
  - name: DATABASE_URL
    value: "postgresql://app_user:password@pg-cluster-rw.default.svc.cluster.local:5432/learnflow"
```
## Database Operations

### Connect with psql

```bash
# Using the kubectl cnpg plugin (recommended)
kubectl cnpg psql pg-cluster -- -c "SELECT version();"

# Or directly
kubectl exec -it pg-cluster-1 -- psql -U postgres
```
### Create Database and User

```bash
# Use -i (not -it): a TTY cannot be allocated when stdin is a heredoc
kubectl exec -i pg-cluster-1 -- psql -U postgres <<EOF
CREATE DATABASE myapp;
CREATE USER myapp_user WITH ENCRYPTED PASSWORD 'secure_password';
GRANT ALL PRIVILEGES ON DATABASE myapp TO myapp_user;
\c myapp
GRANT ALL ON SCHEMA public TO myapp_user;
EOF
```
### Run Migrations

```bash
# From local machine
kubectl port-forward svc/pg-cluster-rw 5432:5432 &
DATABASE_URL="postgresql://postgres:password@localhost:5432/learnflow" alembic upgrade head
```
## Backup Configuration

### Backup to S3

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-cluster
spec:
  instances: 3
  storage:
    size: 10Gi
  backup:
    barmanObjectStore:
      destinationPath: "s3://my-bucket/pg-backups"
      s3Credentials:
        accessKeyId:
          name: s3-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: s3-creds
          key: SECRET_ACCESS_KEY
      wal:
        compression: gzip
      data:
        compression: gzip
    retentionPolicy: "30d"
```
### Schedule Backups

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: pg-backup-daily
spec:
  schedule: "0 0 0 * * *"  # Daily at midnight (six-field cron: seconds first)
  backupOwnerReference: cluster
  cluster:
    name: pg-cluster
```
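Alongside scheduled backups, an on-demand backup can be requested through the operator's `Backup` resource; a minimal sketch (the backup name is arbitrary):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: pg-backup-manual   # arbitrary name
spec:
  cluster:
    name: pg-cluster
```

Check its progress with `kubectl get backup pg-backup-manual`.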
## Monitoring

### Check Cluster Status

```bash
kubectl get cluster pg-cluster
kubectl describe cluster pg-cluster
kubectl get pods -l cnpg.io/cluster=pg-cluster
```
### View Logs

```bash
kubectl logs pg-cluster-1 -f
kubectl logs -l cnpg.io/cluster=pg-cluster --all-containers
```
### Prometheus Metrics

With `enablePodMonitor: true`, metrics are available, including:

- `cnpg_backends_total`: active connections
- `cnpg_pg_replication_lag_seconds`: replica lag
- `cnpg_pg_database_size_bytes`: database size
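If the Prometheus Operator is also running, the replication-lag metric above can drive an alert. A sketch, assuming the `PrometheusRule` CRD is installed; the alert name and 10-second threshold are illustrative, not from this skill:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pg-cluster-alerts
spec:
  groups:
    - name: cnpg
      rules:
        - alert: PGReplicationLagHigh                   # illustrative alert name
          expr: cnpg_pg_replication_lag_seconds > 10    # illustrative threshold
          for: 5m
          labels:
            severity: warning
```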
## Troubleshooting

### Cluster Not Ready

```bash
kubectl describe cluster pg-cluster
kubectl get pods -l cnpg.io/cluster=pg-cluster
kubectl logs pg-cluster-1
```
### Connection Issues

```bash
# Test connectivity
kubectl run pg-client --rm -it --restart=Never \
  --image=postgres:17 -- \
  psql "postgresql://app_user:password@pg-cluster-rw:5432/learnflow" -c "SELECT 1;"
```
### Common Issues

| Error | Cause | Fix |
|---|---|---|
| PVC pending | No storage class | Add `storageClass` to spec |
| Connection refused | Wrong service name | Use `pg-cluster-rw` for writes |
| Auth failed | Wrong credentials | Check the `pg-cluster-app` secret |
| Replica lag high | Heavy writes | Scale up, increase resources |
## Cleanup

```bash
# Delete cluster (keeps PVCs by default)
kubectl delete cluster pg-cluster

# Delete PVCs (data loss!)
kubectl delete pvc -l cnpg.io/cluster=pg-cluster

# Remove operator
kubectl delete -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.0.yaml
```
## Verification

Run:

```bash
python scripts/verify.py
```
## Related Skills

- `operating-k8s-local`: Local Minikube cluster setup
- `scaffolding-fastapi-dapr`: FastAPI services with SQLModel
- `deploying-kafka-k8s`: Kafka for event-driven architecture