Claude-skill-registry add-node
Add a new node to the Kubernetes cluster. Use when connecting new hardware, expanding cluster capacity, or setting up worker nodes.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/add-node" ~/.claude/skills/majiayu000-claude-skill-registry-add-node && rm -rf "$T"
manifest:
skills/data/add-node/SKILL.md
safety · automated scan (high risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- curl piped into shell
- uses sudo
- makes HTTP requests (curl)
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Add New Node to Cluster
Complete workflow to add a new node to the Kubernetes cluster via Tailscale.
Steps Overview
- Discover the node on Tailscale
- Add to Ansible inventory
- Run bootstrap playbook
- Run preflight checks
- Add to cluster
Instructions
Step 1: Discover the Node
Check if the node is visible on Tailscale:
tailscale status | grep -i <node_name>
If not found, the user needs to install Tailscale on the target:
# On target node:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
Step 2: Get Node Details
Required information:
- Node name: Hostname (should match Tailscale name)
- Role: worker (default) or control-plane
- Reserved CPU: CPU cores for local use (default: "2")
- Reserved Memory: Memory to reserve (default: "4Gi")
- GPU: Has GPU? (default: false)
- Labels: Custom labels (optional)
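The details above map naturally onto shell variables. A minimal sketch (variable names are illustrative, not part of the skill) that applies the documented defaults when a field is left unset:

```shell
# Sketch: collect node details, falling back to the defaults above.
# All variable names here are illustrative placeholders.
NODE_NAME=${NODE_NAME:-node01}            # hostname; should match the Tailscale name
NODE_ROLE=${NODE_ROLE:-worker}            # worker (default) or control-plane
RESERVED_CPU=${RESERVED_CPU:-2}           # CPU cores reserved for local use
RESERVED_MEMORY=${RESERVED_MEMORY:-4Gi}   # memory reserved for local use
HAS_GPU=${HAS_GPU:-false}                 # whether the node has a GPU
echo "node=$NODE_NAME role=$NODE_ROLE cpu=$RESERVED_CPU mem=$RESERVED_MEMORY gpu=$HAS_GPU"
```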
Step 3: Update Inventory
Add to ansible/inventory/hosts.yml:
workers:
  hosts:
    <node_name>:
      ansible_host: <tailscale_ip>
      tailscale_ip: <tailscale_ip>
      reserved_cpu: "2"
      reserved_memory: "4Gi"
      node_labels:
        node-role: worker
        workstation: "true"
bootstrap:
  hosts:
    <node_name>: {}
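After editing, a quick grep can confirm the entry landed under both the workers and bootstrap groups. This is a sketch: it writes a sample inventory (with a made-up Tailscale address) to a temp file for illustration; point it at the real ansible/inventory/hosts.yml instead.

```shell
# Sketch: verify a node appears in both inventory groups.
# The sample inventory below is fabricated for illustration;
# replace $INV with the real ansible/inventory/hosts.yml.
NODE=node01   # placeholder node name
INV=$(mktemp)
cat > "$INV" <<'EOF'
workers:
  hosts:
    node01:
      ansible_host: 100.64.0.10
bootstrap:
  hosts:
    node01: {}
EOF
# Expect one hit under workers and one under bootstrap.
COUNT=$(grep -c "    ${NODE}:" "$INV")
echo "inventory entries for $NODE: $COUNT"
rm -f "$INV"
```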
Step 4: Setup SSH Access
Test connectivity:
ssh -o BatchMode=yes -o ConnectTimeout=5 <tailscale_ip> echo "SSH OK" 2>/dev/null
If it fails, install your key:
ssh-copy-id <user>@<tailscale_ip>
Step 5: Run Bootstrap Playbook
cd /home/al/git/kubani
ansible-playbook -i ansible/inventory/hosts.yml ansible/playbooks/bootstrap_node.yml --limit <node_name>
Step 6: Run Preflight Checks
ansible-playbook -i ansible/inventory/hosts.yml ansible/playbooks/preflight_checks.yml --limit <node_name>
Step 7: Add Node to Cluster
Include a control-plane node in the limit so the playbook can fetch the join token:
ansible-playbook -i ansible/inventory/hosts.yml ansible/playbooks/add_node.yml --limit "<node_name>,sparky"
Step 8: Verify
KUBECONFIG=/home/al/.kube/config kubectl get nodes
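To check the node programmatically rather than by eye, the STATUS column can be pulled out with awk. The sample output below is fabricated for illustration; in practice, pipe the real `kubectl get nodes` output in instead:

```shell
# Sketch: extract a node's STATUS from `kubectl get nodes` output.
# SAMPLE is made-up illustration data; in practice use:
#   KUBECONFIG=... kubectl get nodes | awk ...
NODE=node01   # placeholder node name
SAMPLE='NAME     STATUS   ROLES           AGE   VERSION
sparky   Ready    control-plane   90d   v1.29.4
node01   Ready    worker          2m    v1.29.4'
STATUS=$(printf '%s\n' "$SAMPLE" | awk -v n="$NODE" '$1 == n { print $2 }')
echo "$NODE status: $STATUS"
```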
Troubleshooting
- SSH failed: Run ssh-copy-id <user>@<ip> manually
- Bootstrap failed: Usually a package or network issue
- Preflight failed: Missing Tailscale auth or network issues
- Add node failed: Check control plane health