Claude-skill-registry kubernetes-manifest-generator
Generate Kubernetes YAML manifests for deployments, services, ingress, configmaps, and other resources with best practices. Triggers on "create Kubernetes manifest", "generate k8s yaml", "kubernetes deployment for", "k8s config".
```shell
# Clone the full registry
git clone https://github.com/majiayu000/claude-skill-registry

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/kubernetes-manifest-generator" ~/.claude/skills/majiayu000-claude-skill-registry-kubernetes-manifest-generator && rm -rf "$T"
```
skills/data/kubernetes-manifest-generator/SKILL.md

Kubernetes Manifest Generator
Generate production-ready Kubernetes YAML manifests with best practices for security, scalability, and reliability.
Output Requirements
File Output:
.yaml files
Format: Valid Kubernetes YAML manifests
Standards: Kubernetes 1.28+
When Invoked
Immediately generate complete Kubernetes manifests. Include resource requests/limits, health checks, and security contexts by default.
Manifest Templates
Complete Application Stack
```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    name: myapp
    environment: production
---
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp
  labels:
    app: myapp
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  API_TIMEOUT: "30000"
---
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: myapp
  labels:
    app: myapp
type: Opaque
stringData:
  # In production, use external secrets or sealed secrets.
  # These values are for example only.
  DATABASE_URL: "postgresql://user:pass@host:5432/db"
  JWT_SECRET: "your-secret-key"
---
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
    version: v1
spec:
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: myapp
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: myapp
          image: myregistry/myapp:1.0.0
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          envFrom:
            - configMapRef:
                name: myapp-config
            - secretRef:
                name: myapp-secrets
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
          livenessProbe:
            httpGet:
              path: /health/live
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health/ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /health/live
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 30
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: cache
              mountPath: /app/.cache
      volumes:
        - name: tmp
          emptyDir: {}
        - name: cache
          emptyDir: {}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - myapp
                topologyKey: kubernetes.io/hostname
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: myapp
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
---
# serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
automountServiceAccountToken: false
---
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max
---
# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  # ingressClassName replaces the deprecated kubernetes.io/ingress.class annotation
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  name: http
---
# networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
  namespace: myapp
  labels:
    app: myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
        - podSelector:
            matchLabels:
              app: myapp
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 5432
        - protocol: TCP
          port: 6379
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```
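If the stack is split into the per-resource files named in the comments above, a kustomization.yaml can apply it as one unit. A minimal sketch, assuming those file names sit alongside it (adjust paths to your layout):

```yaml
# kustomization.yaml — apply with: kubectl apply -k .
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp
resources:
  - namespace.yaml
  - configmap.yaml
  - secret.yaml
  - deployment.yaml
  - service.yaml
  - serviceaccount.yaml
  - hpa.yaml
  - pdb.yaml
  - ingress.yaml
  - networkpolicy.yaml
```

This also gives a single place to layer environment overlays later without duplicating the base manifests.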
StatefulSet with Persistent Storage
```yaml
# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: database
  labels:
    app: postgres
spec:
  serviceName: postgres-headless
  replicas: 3
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: postgres
          image: postgres:15-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - name: postgres
              containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "mydb"
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          resources:
            requests:
              cpu: "250m"
              memory: "512Mi"
            limits:
              cpu: "1000m"
              memory: "2Gi"
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - $(POSTGRES_USER)
                - -d
                - $(POSTGRES_DB)
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - $(POSTGRES_USER)
                - -d
                - $(POSTGRES_DB)
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: postgres
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 100Gi
---
# headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
  namespace: database
  labels:
    app: postgres
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: postgres
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: database
  labels:
    app: postgres
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
```
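The headless service gives each replica a stable DNS identity, which is the point of pairing it with a StatefulSet. For illustration only, a client container could target the first replica directly (the hostname follows from `serviceName` and the namespace above; the env fragment is a hypothetical snippet, not a complete manifest):

```yaml
# Each pod is reachable as <pod>.<serviceName>.<namespace>.svc.cluster.local,
# e.g. postgres-0.postgres-headless.database.svc.cluster.local
env:
  - name: PGHOST
    value: "postgres-0.postgres-headless.database.svc.cluster.local"
  - name: PGPORT
    value: "5432"
```

The regular `postgres` Service remains the right target for clients that do not care which replica they reach.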
CronJob
```yaml
# cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
  namespace: myapp
  labels:
    app: backup
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  timeZone: "UTC"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  startingDeadlineSeconds: 300
  jobTemplate:
    spec:
      backoffLimit: 3
      activeDeadlineSeconds: 3600
      ttlSecondsAfterFinished: 86400
      template:
        metadata:
          labels:
            app: backup
        spec:
          restartPolicy: OnFailure
          serviceAccountName: backup-sa
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
          containers:
            - name: backup
              image: myregistry/backup:1.0.0
              imagePullPolicy: Always
              env:
                - name: BACKUP_BUCKET
                  value: "my-backup-bucket"
              envFrom:
                - secretRef:
                    name: backup-credentials
              resources:
                requests:
                  cpu: "100m"
                  memory: "256Mi"
                limits:
                  cpu: "500m"
                  memory: "512Mi"
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
```
Common Patterns
External Secrets
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
  namespace: myapp
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: myapp-secrets
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: myapp/production
        property: database_url
```
Resource Quotas
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myapp-quota
  namespace: myapp
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
    persistentvolumeclaims: "10"
```
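Once a quota constrains `requests.cpu` and `requests.memory`, pods that omit resource requests are rejected in that namespace. A LimitRange can supply defaults so such pods still schedule; a sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: myapp-defaults
  namespace: myapp
spec:
  limits:
    - type: Container
      default:           # limits applied when a container sets none
        cpu: "500m"
        memory: 512Mi
      defaultRequest:    # requests applied when a container sets none
        cpu: "100m"
        memory: 128Mi
```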
Validation Checklist
Before outputting, verify:
- API versions are current
- Labels applied consistently
- Resource requests AND limits set
- Health probes configured
- Security context is restrictive
- Service selectors match pod labels
- Namespace specified
- Secrets not hardcoded
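The checklist can also be applied mechanically. A minimal sketch of such a linter over an already-parsed Deployment (the function name and exact checks are illustrative, not a real tool; a full version would also cover labels, selectors, and the remaining items):

```python
def lint_deployment(manifest: dict) -> list[str]:
    """Return checklist violations found in a parsed Deployment manifest."""
    problems = []
    if "namespace" not in manifest.get("metadata", {}):
        problems.append("namespace not specified")
    pod = manifest.get("spec", {}).get("template", {}).get("spec", {})
    for container in pod.get("containers", []):
        name = container.get("name", "<unnamed>")
        resources = container.get("resources", {})
        if not resources.get("requests") or not resources.get("limits"):
            problems.append(f"{name}: resource requests AND limits must be set")
        if "livenessProbe" not in container or "readinessProbe" not in container:
            problems.append(f"{name}: health probes not configured")
        sec = container.get("securityContext", {})
        if sec.get("allowPrivilegeEscalation", True):
            problems.append(f"{name}: allowPrivilegeEscalation should be false")
    return problems


deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "myapp", "namespace": "myapp"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "myapp",
                        "resources": {
                            "requests": {"cpu": "100m"},
                            "limits": {"cpu": "500m"},
                        },
                        "livenessProbe": {},
                        "readinessProbe": {},
                        "securityContext": {"allowPrivilegeEscalation": False},
                    }
                ]
            }
        }
    },
}
print(lint_deployment(deployment))  # []
```

Running the same function over a bare-bones manifest surfaces every missing item at once, which is more reliable than eyeballing the list.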
Example Invocations
Prompt: "Create Kubernetes manifests for a Node.js API with Redis" Output: Complete manifests with Deployment, Service, ConfigMap, HPA.
Prompt: "Generate k8s deployment with blue-green strategy" Output: Complete manifests with two deployments, service switching.
Prompt: "Kubernetes StatefulSet for Elasticsearch cluster" Output: Complete manifests with StatefulSet, Services, PVCs, init containers.