Skillshub coreweave-install-auth
install
source · Clone the upstream repo
git clone https://github.com/ComeOnOliver/skillshub
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/jeremylongshore/claude-code-plugins-plus-skills/coreweave-install-auth" ~/.claude/skills/comeonoliver-skillshub-coreweave-install-auth && rm -rf "$T"
manifest:
skills/jeremylongshore/claude-code-plugins-plus-skills/coreweave-install-auth/SKILL.md
CoreWeave Install & Auth
Overview
Set up access to CoreWeave Kubernetes Service (CKS). CKS runs bare-metal Kubernetes with NVIDIA GPUs, with no hypervisor overhead. Access uses a standard kubeconfig with CoreWeave-issued credentials.
Prerequisites
- CoreWeave account at https://cloud.coreweave.com
- kubectl v1.28+ installed
- Kubernetes namespace provisioned by CoreWeave
Instructions
Step 1: Download Kubeconfig
- Log in to https://cloud.coreweave.com
- Navigate to API Access > Kubeconfig
- Download the kubeconfig file
```bash
# Save kubeconfig
mkdir -p ~/.kube
cp ~/Downloads/coreweave-kubeconfig.yaml ~/.kube/coreweave

# Set as active context
export KUBECONFIG=~/.kube/coreweave

# Verify connection
kubectl get nodes
kubectl get namespaces
```
Step 2: Configure API Token
```bash
# CoreWeave API token for programmatic access
export COREWEAVE_API_TOKEN="your-api-token"

# Store securely
echo "COREWEAVE_API_TOKEN=${COREWEAVE_API_TOKEN}" >> .env
echo "KUBECONFIG=~/.kube/coreweave" >> .env
```
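To reuse these values in a later shell session, the `.env` file can be sourced with `set -a` so that every assignment is exported. This is a minimal sketch; `load_env` is a hypothetical helper name, not part of any CoreWeave tooling:

```bash
# load_env: source an env file and export every variable it assigns.
# Usage: load_env .env
load_env() {
  set -a          # auto-export all assignments made while sourcing
  . "$1"
  set +a
}
```

For example, `load_env .env && kubectl get nodes` restores both `COREWEAVE_API_TOKEN` and `KUBECONFIG` before running kubectl.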
Step 3: Verify GPU Access
```bash
# List available GPU nodes
kubectl get nodes -l gpu.nvidia.com/class -o custom-columns=\
NAME:.metadata.name,GPU:.metadata.labels.gpu\.nvidia\.com/class,\
STATUS:.status.conditions[-1].type

# Check GPU allocatable resources
kubectl describe nodes | grep -A5 "Allocatable:" | grep nvidia
```
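When a cluster mixes several GPU classes, the custom-columns output above can be tallied with awk. The node names and classes below are made-up sample data standing in for real cluster output, not actual CKS node names:

```bash
# Count nodes per GPU class from `kubectl get nodes ... custom-columns` output.
# The sample lines below stand in for real cluster output.
sample_output='NAME        GPU              STATUS
gpu-node-1  A100_PCIE_80GB   Ready
gpu-node-2  A100_PCIE_80GB   Ready
gpu-node-3  H100_NVLINK_80GB Ready'

# Skip the header row, then tally column 2 (the GPU class label)
printf '%s\n' "$sample_output" \
  | awk 'NR > 1 { n[$2]++ } END { for (c in n) print c, n[c] }' \
  | sort
# A100_PCIE_80GB 2
# H100_NVLINK_80GB 1
```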
Step 4: Test with a Simple GPU Pod
```yaml
# test-gpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-test
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.nvidia.com/class
                operator: In
                values: ["A100_PCIE_80GB"]
```
```bash
kubectl apply -f test-gpu.yaml
kubectl logs gpu-test   # Should show nvidia-smi output
kubectl delete pod gpu-test
```
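Because the pod runs to completion, `kubectl logs` can race the container start. A small polling helper avoids that; `wait_for_pod_done` is a hypothetical name, and the loop simply re-reads the pod's `.status.phase` until it settles:

```bash
# Poll a pod's .status.phase until it reaches Succeeded/Failed or times out.
# Usage: wait_for_pod_done gpu-test 60 && kubectl logs gpu-test
wait_for_pod_done() {
  pod=$1; timeout=${2:-60}; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$(kubectl get pod "$pod" -o jsonpath='{.status.phase}')
    case "$phase" in
      Succeeded) return 0 ;;
      Failed)    return 1 ;;
    esac
    sleep 2
    elapsed=$((elapsed + 2))
  done
  return 1   # timed out
}
```

Newer kubectl versions can express the same wait directly, e.g. `kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/gpu-test`.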
Error Handling
| Error | Cause | Solution |
|---|---|---|
| `Unauthorized` | Wrong kubeconfig | Verify the KUBECONFIG path |
| `Forbidden` | Missing namespace permissions | Contact CoreWeave support |
| No GPU nodes found | Wrong node labels | Check labels with `kubectl get nodes --show-labels` |
| Pod stuck `Pending` | GPU capacity exhausted | Try a different GPU type or region |
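The kubeconfig-related rows can be pre-checked locally before contacting support. This is a sketch: `check_kubeconfig` is a hypothetical helper that only verifies the file named by `KUBECONFIG` exists and is readable:

```bash
# Quick local sanity check for kubeconfig problems.
check_kubeconfig() {
  cfg=${KUBECONFIG:-$HOME/.kube/config}
  if [ ! -r "$cfg" ]; then
    echo "kubeconfig not readable: $cfg" >&2
    return 1
  fi
  echo "kubeconfig found: $cfg"
}
```

If this passes but kubectl still reports `Unauthorized`, the credentials inside the file are likely stale; re-download the kubeconfig from the console.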
Next Steps
Proceed to coreweave-hello-world to deploy your first inference service.