Claude-code-plugins-plus-skills coreweave-install-auth

install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/coreweave-pack/skills/coreweave-install-auth" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-coreweave-install-auth && rm -rf "$T"
manifest: plugins/saas-packs/coreweave-pack/skills/coreweave-install-auth/SKILL.md
source content

CoreWeave Install & Auth

Overview

Set up access to CoreWeave Kubernetes Service (CKS). CKS runs bare-metal Kubernetes with NVIDIA GPUs -- no hypervisor overhead. Access is via standard kubeconfig with CoreWeave-issued credentials.

Prerequisites

Instructions

Step 1: Download Kubeconfig

  1. Log in to https://cloud.coreweave.com
  2. Navigate to API Access > Kubeconfig
  3. Download the kubeconfig file
# Save kubeconfig
mkdir -p ~/.kube
cp ~/Downloads/coreweave-kubeconfig.yaml ~/.kube/coreweave

# Set as active context
export KUBECONFIG=~/.kube/coreweave

# Verify connection
kubectl get nodes
kubectl get namespaces
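Exporting `KUBECONFIG` as above replaces your default config for the current shell. If you already manage other clusters, `KUBECONFIG` also accepts a colon-separated list, which lets kubectl see both configs at once. A minimal sketch, assuming your default config lives at `~/.kube/config`:

```shell
# Make both configs visible to kubectl at once; entries from later
# files in the list are merged after earlier ones.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/coreweave"

# Then list contexts and switch to the CoreWeave one by name:
#   kubectl config get-contexts
#   kubectl config use-context <context-name>
```

The context name comes from the downloaded kubeconfig, so check `get-contexts` output rather than guessing it.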

Step 2: Configure API Token

# CoreWeave API token for programmatic access
export COREWEAVE_API_TOKEN="your-api-token"

# Store for later sessions (keep .env out of version control, e.g. via .gitignore)
echo "COREWEAVE_API_TOKEN=${COREWEAVE_API_TOKEN}" >> .env
echo "KUBECONFIG=$HOME/.kube/coreweave" >> .env
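To bring those saved variables back into a new shell session, `set -a` auto-exports every assignment made while sourcing the file. A small sketch (the `.env` contents here are stand-in values for illustration):

```shell
# Recreate a sample .env with stand-in values for this demo.
printf 'COREWEAVE_API_TOKEN=example-token\nKUBECONFIG=%s/.kube/coreweave\n' "$HOME" > .env

# Re-load the saved variables into the current shell:
# `set -a` marks every assignment for export; `set +a` turns that off.
set -a
. ./.env
set +a
```

After sourcing, `kubectl` and any scripts reading `COREWEAVE_API_TOKEN` pick the values up from the environment.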

Step 3: Verify GPU Access

# List available GPU nodes
kubectl get nodes -l gpu.nvidia.com/class -o custom-columns=\
NAME:.metadata.name,GPU:.metadata.labels.gpu\.nvidia\.com/class,\
STATUS:.status.conditions[-1].type

# Check GPU allocatable resources
kubectl describe nodes | grep -A5 "Allocatable:" | grep nvidia
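The `grep` pipeline above is approximate; an awk filter can pull the GPU allocatable lines more reliably. `gpu_allocatable` below is a hypothetical helper (not part of kubectl) that reads `kubectl describe nodes` text on stdin; the demo feeds it a canned snippet so the logic is visible without a cluster:

```shell
# Print each nvidia.com/gpu entry found under an "Allocatable:" section.
gpu_allocatable() {
  awk '/^Allocatable:/{in_alloc=1; next}
       in_alloc && /^[^ ]/{in_alloc=0}
       in_alloc && /nvidia.com\/gpu/{print $1, $2}'
}

# In practice: kubectl describe nodes | gpu_allocatable
# Demo with a canned snippet of describe output:
gpu_allocatable <<'EOF'
Allocatable:
  cpu:             126
  memory:          990Gi
  nvidia.com/gpu:  8
Events:
EOF
# → nvidia.com/gpu: 8
```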

Step 4: Test with a Simple GPU Pod

# test-gpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-test
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.nvidia.com/class
                operator: In
                values: ["A100_PCIE_80GB"]

kubectl apply -f test-gpu.yaml

# Wait for the pod to finish before reading its logs
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/gpu-test --timeout=120s
kubectl logs gpu-test  # Should show nvidia-smi output
kubectl delete pod gpu-test
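The required node affinity above pins the pod to one GPU class and leaves it Pending if that class has no free capacity. A softer variant prefers the class but still schedules elsewhere; this is a sketch, with the weight and class value as illustrative choices:

```yaml
# Soft preference: schedule on A100_PCIE_80GB when available,
# otherwise fall back to any node that satisfies the GPU limit.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: gpu.nvidia.com/class
              operator: In
              values: ["A100_PCIE_80GB"]
```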

Error Handling

| Error | Cause | Solution |
|---|---|---|
| Unable to connect to the server | Wrong kubeconfig | Verify KUBECONFIG path |
| Forbidden | Missing namespace permissions | Contact CoreWeave support |
| No GPU nodes found | Wrong node labels | Check gpu.nvidia.com/class labels |
| Pod stuck Pending | GPU capacity exhausted | Try a different GPU type or region |

Resources

Next Steps

Proceed to coreweave-hello-world to deploy your first inference service.