Skillshub coreweave-migration-deep-dive
install
source · Clone the upstream repo
git clone https://github.com/ComeOnOliver/skillshub
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/jeremylongshore/claude-code-plugins-plus-skills/coreweave-migration-deep-dive" ~/.claude/skills/comeonoliver-skillshub-coreweave-migration-deep-dive && rm -rf "$T"
manifest: skills/jeremylongshore/claude-code-plugins-plus-skills/coreweave-migration-deep-dive/SKILL.md
Source content
CoreWeave Migration Deep Dive
Cost Comparison
| Instance | AWS | CoreWeave | Savings |
|---|---|---|---|
| 1x A100 80GB | ~$3.60/hr (p4d) | ~$2.21/hr | ~39% |
| 8x A100 80GB | ~$32/hr (p4d.24xl) | ~$17.70/hr | ~45% |
| 1x H100 80GB | ~$6.50/hr (p5) | ~$4.76/hr | ~27% |
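The hourly deltas above compound quickly at steady utilization. A back-of-envelope check for the 8x A100 row, using the table's approximate on-demand rates (integer cents to stay in POSIX shell arithmetic):

```shell
# Rough monthly savings for one 8x A100 node at 100% utilization,
# using the ~$32/hr (p4d.24xlarge) and ~$17.70/hr rates from the table.
aws_cents_hr=3200
cw_cents_hr=1770
hours_month=720      # ~30 days
savings_usd=$(( (aws_cents_hr - cw_cents_hr) * hours_month / 100 ))
echo "Approx. monthly savings: \$${savings_usd}"
```

Roughly $10k/month per 8-GPU node, before any reserved/committed-use discounts on either side, which can shift the comparison.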
Migration Steps
Phase 1: Containerize
```shell
# If running on bare EC2/GCE, containerize first
docker build -t inference-server:v1 .
docker push ghcr.io/myorg/inference-server:v1
```
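For a GPU inference workload, the image typically starts from a CUDA runtime base. A minimal sketch, assuming a Python server (`server.py`, `requirements.txt`, and the port are hypothetical); pin a CUDA version your destination node drivers support:

```dockerfile
# Hypothetical inference-server image; the base tag is an example —
# match the CUDA version to the target cluster's node drivers.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
      python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python3", "server.py"]
```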
Phase 2: Adapt YAML for CoreWeave
Key changes from AWS EKS / GKE:
- Node affinity: use `gpu.nvidia.com/class` instead of `nvidia.com/gpu.product`
- Storage: use CoreWeave storage classes (e.g. `shared-ssd-ord1`)
- Networking: CoreWeave provides flat networking within the VPC
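The node-affinity and storage changes can be sketched in one manifest. Names, the image, and the A100 class value below are illustrative assumptions — check the actual label values on your nodes with `kubectl get nodes --show-labels`:

```yaml
# Illustrative CoreWeave manifest; verify storage class and GPU class
# names against your own region and node pool.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-cache
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: shared-ssd-ord1      # CoreWeave class, not gp3/pd-ssd
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: gpu.nvidia.com/class   # not nvidia.com/gpu.product
                    operator: In
                    values: ["A100_PCIE_80GB"]  # example class value
      containers:
        - name: server
          image: ghcr.io/myorg/inference-server:v1
          resources:
            limits:
              nvidia.com/gpu: 1
          volumeMounts:
            - name: models
              mountPath: /models
      volumes:
        - name: models
          persistentVolumeClaim:
            claimName: model-cache
```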
Phase 3: Parallel Deploy
Run both old and new infrastructure simultaneously, gradually shift traffic.
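Across two clouds the gradual shift is usually done with weighted DNS at the load-balancer level; within a cluster, weighted canary routing achieves the same effect. One common mechanism is ingress-nginx's canary annotations — a sketch with a hypothetical hostname and service:

```yaml
# Assumes ingress-nginx; routes ~10% of requests for the host to the
# new backend while the primary (non-canary) Ingress serves the rest.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: inference-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # raise gradually
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: inference-server
                port:
                  number: 80
```

Raise the weight in steps (10 → 50 → 100) while watching latency and error rates before cutting over.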
Phase 4: Cut Over
Decommission old GPU instances after validation period.
Common Gotchas
| Issue | Solution |
|---|---|
| Different CUDA drivers | Match container CUDA to CoreWeave node drivers |
| Storage migration | Use rclone or rsync to move data to CoreWeave PVC |
| DNS changes | Update ingress/load balancer DNS |
| IAM differences | CoreWeave uses kubeconfig, not IAM roles |
Next Steps
This completes the CoreWeave skill pack. Start with `coreweave-install-auth` for new deployments.