KubeKey Skill

Manage Kubernetes clusters with KubeKey: check installation, install KubeKey, create clusters, scale nodes, upgrade clusters, and view configurations. Use when working with Kubernetes cluster deployment, management, or when the user mentions KubeKey, `kk`, or Kubernetes cluster operations.

To install this skill (located at `skills/data/kubekey` in https://github.com/majiayu000/claude-skill-registry):

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/kubekey" ~/.claude/skills/majiayu000-claude-skill-registry-kubekey && rm -rf "$T"
```
Description
KubeKey is a tool for deploying Kubernetes clusters. This skill provides capabilities to check, install KubeKey, create and scale Kubernetes clusters, and view cluster configurations.
Capabilities
This skill enables the agent to:
- Check KubeKey installation: Verify if KubeKey is installed and check its version
- Install KubeKey: Download and install KubeKey tool with specified version
- Generate cluster configurations: Create KubeKey cluster configuration files based on user requirements, including:
  - Kubernetes version selection
  - CNI (Container Network Interface) plugin selection
  - CRI (Container Runtime Interface) selection
  - Network CIDR configuration
  - Node role assignment
  - Registry configuration
- Create Kubernetes clusters: Deploy new Kubernetes clusters using KubeKey configuration files
- Scale clusters: Add nodes using `kk add nodes` or remove nodes using `kk delete node` on existing Kubernetes clusters
- Upgrade clusters: Upgrade Kubernetes and optionally KubeSphere to newer versions
- View cluster configurations: Display and analyze cluster installation configurations
Instructions
When the user needs to work with KubeKey or Kubernetes cluster management:
Checking KubeKey Installation
- Run the check script: `scripts/check_kubekey.sh`
- The script will check if the `kk` command is available in PATH
- If installed, it will display the version
- If not installed, provide guidance on installation
Installing KubeKey
- Determine the desired version (default: latest)
- Run the install script: `scripts/install_kubekey.sh [VERSION]`
- The script will:
  - Download KubeKey from GitHub releases
  - Extract and install to `/usr/local/bin/kk`
  - Verify the installation
- Ensure the user has appropriate permissions (may require sudo)
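Install scripts of this kind typically normalize the version string and assemble a GitHub release URL before downloading. A minimal sketch of that logic, assuming the `kubekey-<version>-linux-<arch>.tar.gz` asset naming used on the KubeKey releases page (verify before relying on it; the version shown is illustrative):

```shell
# Normalize a user-supplied version to the v-prefixed form used by
# KubeKey release tags, then build the download URL.
# The asset-name pattern is an assumption -- check the releases page.
normalize_version() {
  case "$1" in
    v*) echo "$1" ;;
    *)  echo "v$1" ;;
  esac
}

VERSION=$(normalize_version "3.1.7")   # illustrative version
ARCH="amd64"
URL="https://github.com/kubesphere/kubekey/releases/download/${VERSION}/kubekey-${VERSION}-linux-${ARCH}.tar.gz"
echo "$URL"
```

The actual download, extraction, and copy to `/usr/local/bin/kk` would follow, which is what `scripts/install_kubekey.sh` handles for you.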
Creating a Kubernetes Cluster Configuration
When the user wants to create a Kubernetes cluster, you need to gather requirements and generate a configuration file. Follow this process:
Step 1: Gather Cluster Requirements
Ask the user or infer from context the following information:
Essential Information:
- Node Information:
  - IP addresses of all nodes
  - SSH credentials (username, password, or SSH key path)
  - Node roles (master/worker/etcd)
  - Internal IP addresses (if different from external)
Kubernetes Configuration:
- Kubernetes Version:
  - Common versions: v1.28.x, v1.27.x, v1.26.x, v1.25.x
  - If not specified, recommend the latest stable (v1.28.x)
  - Format: `v1.28.0` (with the 'v' prefix)
- Container Runtime Interface (CRI):
  - Options: `docker`, `containerd`, `cri-o`, `isula`
  - Default: `containerd` (recommended for newer K8s versions)
  - Docker: Traditional, widely used
  - Containerd: Lightweight, CNCF standard
- Network Plugin (CNI):
  - Options: `calico`, `flannel`, `cilium`, `kube-ovn`, `weave`
  - Calico: Best for production; supports network policies and BGP routing
  - Flannel: Simple, good for small clusters
  - Cilium: eBPF-based, high performance
  - Kube-OVN: Advanced networking features
  - Weave: Simple, automatic mesh networking
  - Default: `calico` (if not specified)
- Pod CIDR:
  - Default: `10.233.64.0/18` (if not specified)
  - Should not overlap with the service CIDR
  - Calculate based on expected pod count: `/18` = ~16k pods, `/16` = ~65k pods
- Service CIDR:
  - Default: `10.233.0.0/18` (if not specified)
  - Should not overlap with the pod CIDR or node networks
- Control Plane Endpoint:
  - Load balancer domain or IP (for HA)
  - Or leave empty for a single master
  - Port: typically `6443`
Optional Advanced Settings:
- Image Registry:
  - Private registry URL (if using a private registry)
  - Registry mirrors (for faster downloads in China)
  - Insecure registries list
- Proxy Mode:
  - `iptables` (default, simpler)
  - `ipvs` (better performance for large clusters)
- Max Pods per Node:
  - Default: `110`
  - Calculate: (available IPs in the node CIDR) - 1
- Node CIDR Mask Size:
  - Default: `24`
  - Determines the subnet size allocated to each node
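The CIDR sizing guidance above follows directly from prefix arithmetic: an IPv4 block with prefix length P contains 2^(32-P) addresses. A quick shell sketch (the function name is ours, not a KubeKey helper):

```shell
# Number of IPv4 addresses in a CIDR block with prefix length $1.
cidr_size() {
  echo $(( 1 << (32 - $1) ))
}

cidr_size 18   # 16384 -> roughly 16k pod IPs
cidr_size 16   # 65536 -> roughly 65k pod IPs
cidr_size 24   # 256 addresses in each per-node subnet
```

With the default node CIDR mask size of 24, each node gets a /24 subnet, which also explains the default of 110 max pods per node (well under the 256-address budget).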
Step 2: Generate Configuration File
- Use the configuration template in `examples/cluster-config.yaml` as a base
- Use the script `scripts/generate_config.sh` to interactively generate a config, OR
- Manually create the YAML file with the following structure:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: <cluster-name>
spec:
  hosts:
  - {name: <node-name>, address: <external-ip>, internalAddress: <internal-ip>, user: <ssh-user>, password: "<password>"}
  # Or use privateKeyPath for SSH key authentication:
  # - {name: <node-name>, address: <external-ip>, internalAddress: <internal-ip>, user: <ssh-user>, privateKeyPath: "<key-path>"}
  roleGroups:
    etcd:
    - <master-node-names>
    master:
    - <master-node-names>
    worker:
    - <worker-node-names>
  controlPlaneEndpoint:
    domain: <lb-domain-or-empty>
    address: ""
    port: 6443
  kubernetes:
    version: <k8s-version>  # e.g., v1.28.0
    imageRepo: kubesphere
    clusterName: cluster.local
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
    proxyMode: ipvs  # or iptables
  network:
    plugin: <cni-plugin>  # calico, flannel, cilium, etc.
    kubePodsCIDR: <pod-cidr>
    kubeServiceCIDR: <service-cidr>
    multusCNI:
      enabled: false
  registry:
    privateRegistry: "<registry-url>"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
```
Step 3: Validate Configuration
- Check YAML syntax is valid
- Verify IP addresses are reachable
- Ensure CIDR ranges don't overlap
- Confirm SSH credentials are correct
- Validate node roles are assigned correctly
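The CIDR-overlap check in the list above can be automated: two IPv4 blocks overlap exactly when their network addresses agree under the shorter of the two prefixes. A bash sketch (hypothetical helpers, not part of KubeKey; IPv4 only):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed (exit 0) when the two CIDR arguments overlap.
cidrs_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local min=$(( len1 < len2 ? len1 : len2 ))
  local mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$net1") & mask )) -eq $(( $(ip_to_int "$net2") & mask )) ]
}

# The default pod and service CIDRs do not overlap:
cidrs_overlap 10.233.64.0/18 10.233.0.0/18 && echo "overlap" || echo "ok"
```

Run the check for every pair of pod CIDR, service CIDR, and node network before deploying.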
Step 4: Create the Cluster
- Run: `kk create cluster -f <config-file>`
- Monitor the installation process
- Handle any errors that occur
- Verify the cluster is running: `kubectl get nodes`
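The `kubectl get nodes` verification can be scripted so it fails unless every node reports Ready. A sketch that parses `kubectl get nodes --no-headers` style output, shown here with canned input (the helper name is ours):

```shell
# Exit non-zero if any node's STATUS column is not "Ready".
# Reads `kubectl get nodes --no-headers` style lines on stdin.
all_ready() {
  awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}

# Canned sample of `kubectl get nodes --no-headers` output:
printf '%s\n' \
  'master1  Ready  control-plane  10m  v1.28.0' \
  'worker1  Ready  <none>         8m   v1.28.0' \
  | all_ready && echo "all nodes Ready"
```

Against a live cluster, pipe the real command instead: `kubectl get nodes --no-headers | all_ready`.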
Configuration Examples by Use Case
Small Development Cluster (3 nodes, 1 master + 2 workers):
- Version: v1.28.0
- CNI: flannel (simpler)
- CRI: containerd
- Pod CIDR: 10.233.64.0/18
- Service CIDR: 10.233.0.0/18
Production Cluster (5+ nodes, HA):
- Version: v1.28.0 (or latest stable)
- CNI: calico (network policies, production-ready)
- CRI: containerd
- Proxy Mode: ipvs
- Control Plane Endpoint: Load balancer required
- Pod CIDR: 10.233.64.0/18 or larger
- Service CIDR: 10.233.0.0/18
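For the HA production case, the load balancer is expressed through `controlPlaneEndpoint` in the cluster config. A sketch of the relevant fragment (the domain and VIP are placeholders to adapt to your environment):

```yaml
spec:
  controlPlaneEndpoint:
    domain: lb.kubesphere.local   # resolves to the load balancer
    address: "192.168.0.100"      # or set the LB address/VIP here
    port: 6443
  kubernetes:
    proxyMode: ipvs               # ipvs for large/HA clusters
```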
High Performance Cluster:
- CNI: cilium (eBPF-based)
- CRI: containerd
- Proxy Mode: ipvs
- Larger CIDR ranges if needed
Scaling a Cluster
KubeKey uses separate commands for adding and deleting nodes:
Adding Nodes to a Cluster
1. Get the current cluster configuration:
   `kk create config --from-cluster -f current-cluster.yaml`
   This generates a configuration file with all existing nodes from the current cluster.
   Note: The `--from-cluster` flag generates a config file from an existing cluster. For new clusters, create the config file manually or use `scripts/generate_config.sh`.
2. Edit the configuration file:
   - Add new node information to the `spec.hosts` section
   - Add new node names to the appropriate `roleGroups` (worker, master, etcd)
   - Example:

   ```yaml
   spec:
     hosts:
     - {name: master1, address: 192.168.0.2, ...}  # existing
     - {name: worker1, address: 192.168.0.3, ...}  # existing
     - {name: worker2, address: 192.168.0.4, ...}  # existing
     - {name: worker3, address: 192.168.0.5, ...}  # NEW node
     roleGroups:
       worker:
       - worker1
       - worker2
       - worker3  # NEW node added
   ```
3. Add the nodes:
   `kk add nodes -f current-cluster.yaml`
   Or use the script: `scripts/add_nodes.sh current-cluster.yaml`
4. Verify: `kubectl get nodes`
Deleting Nodes from a Cluster
1. Drain the node (recommended before deletion):
   `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
   This safely evicts all pods from the node.
2. Delete the node:
   `kk delete node <node-name>`
   Or use the script: `scripts/delete_node.sh <node-name>`
3. Verify: `kubectl get nodes`
Important Notes:
- When adding nodes, the config file must include ALL existing nodes plus new ones
- Before deleting a node, ensure workloads are migrated or stopped
- Master nodes should not be deleted unless you have HA setup (3+ masters)
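Before deleting, it can help to confirm that the node name actually appears in the cluster config. A grep-based sketch (hypothetical helper; assumes the inline `{name: ..., address: ...}` host syntax shown earlier):

```shell
# Succeed when the named host appears in spec.hosts of the config file.
# $1 = node name, $2 = path to the cluster config YAML.
node_in_config() {
  grep -q "name: *$1," "$2"
}

# Usage sketch:
#   node_in_config worker3 current-cluster.yaml && kk delete node worker3
```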
Upgrading a Cluster
KubeKey supports upgrading Kubernetes clusters and optionally KubeSphere. Follow these steps:
Prerequisites for Upgrade
- Backup: Ensure all important data is backed up
- Node Time Sync: All nodes must have synchronized time (use NTP)
- Resource Check: Verify nodes have sufficient resources
- Version Compatibility: Check compatibility between current and target versions
- Low Traffic Period: Perform upgrade during low-traffic periods if possible
Upgrade Process
1. Check current versions:
   `kubectl version` and `kk version`
2. Determine the target version:
   - Kubernetes: Typically upgrade one minor version at a time (e.g., v1.27.x → v1.28.x)
   - KubeSphere: Check compatibility with the target Kubernetes version
   - Common upgrade paths:
     - v1.26.x → v1.27.x → v1.28.x
     - v1.27.x → v1.28.x
3. Choose an upgrade option:

   Option A: Interactive Upgrade (Recommended)

   `./scripts/upgrade_cluster.sh`

   The script will prompt for:
   - Target Kubernetes version
   - Whether to upgrade KubeSphere
   - Target KubeSphere version (if applicable)

   Option B: Command Line with Options

   ```shell
   # Upgrade Kubernetes only
   ./scripts/upgrade_cluster.sh --k8s-version v1.28.0

   # Upgrade Kubernetes and KubeSphere
   ./scripts/upgrade_cluster.sh --k8s-version v1.28.0 --ks-version v3.4.1 --with-kubesphere

   # Using a configuration file
   ./scripts/upgrade_cluster.sh --config cluster-config.yaml --k8s-version v1.28.0
   ```

   Option C: Direct KubeKey Command

   ```shell
   # Kubernetes only
   kk upgrade --with-kubernetes --kubernetes-version v1.28.0

   # Kubernetes + KubeSphere
   kk upgrade --with-kubernetes --with-kubesphere \
     --kubernetes-version v1.28.0 \
     --kubesphere-version v3.4.1
   ```
4. Monitor upgrade progress:
   - The upgrade typically takes 30-60 minutes
   - Monitor node status: `kubectl get nodes -w`
   - Check pod status: `kubectl get pods -A`
   - Review the KubeKey logs if issues occur
5. Verify the upgrade:
   `kubectl version`, `kubectl get nodes`, `kubectl get pods -A`
Upgrade Considerations
Kubernetes Version Selection:
- Upgrade one minor version at a time (e.g., 1.27 → 1.28, not 1.26 → 1.28)
- Check Kubernetes release notes for breaking changes
- Verify CNI plugin compatibility with new version
- Ensure CRI version is compatible
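The one-minor-version rule can be checked mechanically before invoking the upgrade. A simplified sketch (a hypothetical pre-check, not part of KubeKey; it ignores downgrades and patch-level differences):

```shell
# Extract the minor component of a vX.Y.Z version string.
minor() {
  echo "$1" | cut -d. -f2
}

# Succeed when target is at most one minor version ahead of current.
# $1 = current version, $2 = target version.
one_minor_step() {
  [ $(( $(minor "$2") - $(minor "$1") )) -le 1 ]
}

one_minor_step v1.27.6 v1.28.0 && echo "ok to upgrade" || echo "upgrade in steps"
```

For a jump such as v1.26.x → v1.28.x, the check fails, signaling that the upgrade should go through v1.27.x first.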
KubeSphere Upgrade:
- Only upgrade if KubeSphere is installed
- Check KubeSphere compatibility matrix with Kubernetes versions
- Review KubeSphere upgrade documentation for specific versions
- Some KubeSphere features may require specific Kubernetes versions
Rollback:
- KubeKey does not provide automatic rollback
- Manual rollback requires restoring from backups
- Always backup before upgrading
Common Issues:
- Node time synchronization errors → Ensure NTP is configured
- Insufficient resources → Check node CPU/memory/disk
- Network connectivity issues → Verify all nodes are reachable
- Pod eviction failures → Drain nodes manually if needed
Viewing Cluster Configuration
- If a configuration file exists, display its contents
- Explain the key configuration parameters:
  - Kubernetes version
  - Node specifications (master/worker nodes)
  - Network plugins
  - Storage configurations
  - Authentication settings
- Use `kk version` to check the KubeKey version and cluster info
Prerequisites
- Linux or macOS system
- SSH access to target nodes (for cluster deployment)
- Root or sudo privileges (for installation)
- Network connectivity to download KubeKey and container images
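A quick local sanity check for these prerequisites can be scripted. A minimal sketch (the command list is illustrative, not exhaustive):

```shell
# Report any of the given commands missing from PATH.
check_cmds() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  [ -z "$missing" ] && echo "prerequisites ok" || echo "missing:$missing"
}

check_cmds ssh scp curl tar
```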
Common Workflows
Initial Setup
- Check if KubeKey is installed
- If not, install KubeKey
- Verify installation with `kk version`
Creating a New Cluster
- Review example configuration
- Create cluster configuration file
- Validate configuration
- Deploy cluster
- Verify cluster status
Adding Nodes to Existing Cluster
- Get the current cluster config: `kk create config --from-cluster -f config.yaml`
- Edit the config file to add new nodes to `spec.hosts` and `roleGroups`
- Run: `kk add nodes -f config.yaml` (or use `scripts/add_nodes.sh`)
- Verify new nodes joined: `kubectl get nodes`
Removing Nodes from Cluster
- Drain the node: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
- Delete the node: `kk delete node <node-name>` (or use `scripts/delete_node.sh`)
- Verify the node is removed: `kubectl get nodes`
Upgrading Cluster
- Check current versions: `kubectl version`
- Determine the target Kubernetes version (upgrade one minor version at a time)
- Run the upgrade: `./scripts/upgrade_cluster.sh` (interactive) or with options
- Monitor progress and verify: `kubectl get nodes` and `kubectl version`
Error Handling
- If KubeKey is not found, suggest installation
- If the configuration file is invalid, help the user fix the YAML syntax
- If cluster creation fails, check:
  - Network connectivity
  - SSH access to nodes
  - Node prerequisites (Docker, etc.)
  - Disk space and resources
- If adding nodes fails, verify:
  - The config file includes ALL existing nodes plus the new ones
  - New nodes meet requirements (CPU, memory, disk)
  - SSH access to the new nodes
  - Network connectivity
  - New nodes are not already in the cluster
- If deleting nodes fails, verify:
  - The node name is correct
  - The node is not a critical master (unless in an HA setup)
  - Workloads have been drained/migrated
- If an upgrade fails, check:
  - Node time synchronization (NTP)
  - Sufficient resources on all nodes
  - Version compatibility (upgrade one minor version at a time)
  - Network connectivity to all nodes
  - CNI/CRI compatibility with the target Kubernetes version
Configuration Reference
For detailed information about all configuration options, see:
- `references/config-options.md` - Complete reference of all configuration options
- `examples/cluster-config.yaml` - Example configuration file
- `scripts/generate_config.sh` - Interactive configuration generator
Key configuration decisions:
- Kubernetes Version: Latest stable (v1.28.x) unless compatibility required
- CNI Plugin: Calico for production, Flannel for simple deployments
- CRI: Containerd (recommended) or Docker
- Proxy Mode: iptables for small/medium, ipvs for large clusters
- CIDR Ranges: Ensure no overlaps; `/18` is a typical size for both the pod and service CIDRs
References
For detailed technical references, see:
- KubeKey Reference Guide - Comprehensive links to KubeKey, Kubernetes, CNI plugins, and related documentation
- KubeKey Commands Reference - Quick reference for all kk commands
- Configuration Options - Detailed configuration options reference
- Example Configuration - Sample cluster configuration file
Key resources:
- KubeKey GitHub: https://github.com/kubesphere/kubekey
- KubeKey Documentation: https://kubesphere.io/docs/installing-on-linux/introduction/kubekey/
- Kubernetes Documentation: https://kubernetes.io/docs/