Feynman runpod-compute
Provision and manage GPU pods on RunPod for long-running experiments. Use when the user needs persistent GPU compute with SSH access, large datasets, or multi-step experiments.
install
source · Clone the upstream repo
git clone https://github.com/getcompanion-ai/feynman
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/getcompanion-ai/feynman "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/runpod-compute" ~/.claude/skills/getcompanion-ai-feynman-runpod-compute && rm -rf "$T"
manifest · skills/runpod-compute/SKILL.md · source content
RunPod Compute
Use the runpodctl CLI for persistent GPU pods with SSH access.
Setup
brew install runpod/runpodctl/runpodctl   # macOS
runpodctl config --apiKey=YOUR_KEY
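Before storing the key, it is worth confirming the CLI actually landed on PATH. A minimal sketch, assuming the key is kept in an environment variable (the name `RUNPOD_API_KEY` here is a convention of this sketch, not mandated by runpodctl):

```shell
#!/bin/sh
# Check that runpodctl is on PATH before trying to store the API key.
HAVE_RUNPODCTL=0
command -v runpodctl >/dev/null 2>&1 && HAVE_RUNPODCTL=1

if [ "$HAVE_RUNPODCTL" -eq 1 ]; then
  # Key comes from the environment rather than being hard-coded.
  runpodctl config --apiKey="$RUNPOD_API_KEY"
else
  echo "runpodctl not found; run the brew install step first" >&2
fi
```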
Commands
| Command | Description |
|---|---|
| `runpodctl create pod` | Create a pod |
| `runpodctl get pod` | List all pods |
| `runpodctl stop pod <id>` | Stop (preserves volume) |
| `runpodctl start pod <id>` | Resume a stopped pod |
| `runpodctl remove pod <id>` | Terminate and delete |
| `runpodctl get cloud` | List available GPU types and prices |
| `runpodctl send <file>` | Transfer files to/from pods |
| `runpodctl receive <code>` | Receive transferred files |
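A sketch of creating a pod and shipping a dataset to it. The flag names here (`--name`, `--gpuType`, `--imageName`, `--ports`) follow `runpodctl create pod`'s syntax as I understand it; confirm against `runpodctl create pod --help` on your version. The commands are printed for review rather than executed:

```shell
#!/bin/sh
# Pod creation and file transfer, shown dry-run. The pod name, image,
# and transfer code below are placeholder values.
POD_NAME="exp-01"
GPU_TYPE="NVIDIA GeForce RTX 4090"

CREATE_CMD="runpodctl create pod --name $POD_NAME --gpuType '$GPU_TYPE' --imageName runpod/pytorch --ports 22/tcp"
LIST_CMD="runpodctl get pod"
SEND_CMD="runpodctl send dataset.tar"   # prints a one-time transfer code
RECV_CMD="runpodctl receive CODE"       # run on the pod; CODE is a placeholder

# Echo the commands instead of running them, so nothing is billed by accident.
printf '%s\n' "$CREATE_CMD" "$LIST_CMD" "$SEND_CMD" "$RECV_CMD"
```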
SSH access
ssh root@<IP> -p <PORT> -i ~/.ssh/id_ed25519
Get connection details from `runpodctl get pod <id>`. Pods must expose port 22/tcp.
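The connection line above can be assembled from the IP and port that `runpodctl get pod <id>` reports. A small sketch with placeholder values:

```shell
#!/bin/sh
# Build the SSH command from the pod's public IP and mapped port.
# 203.0.113.10 and 22022 are placeholder values; substitute the ones
# shown by `runpodctl get pod <id>`.
POD_IP="203.0.113.10"
POD_PORT="22022"
SSH_CMD="ssh root@$POD_IP -p $POD_PORT -i $HOME/.ssh/id_ed25519"

echo "$SSH_CMD"
```

For a one-off remote command rather than an interactive shell, append it to the line, e.g. `$SSH_CMD nvidia-smi`.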
GPU types
NVIDIA GeForce RTX 4090, NVIDIA RTX A6000, NVIDIA A40, NVIDIA A100 80GB PCIe, NVIDIA H100 80GB HBM3
When to use
- Long-running experiments needing persistent state
- Large dataset processing
- Multi-step work with SSH access between iterations
- Always stop or remove pods after experiments
- Check availability: `command -v runpodctl`
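The last two points can be combined into a tear-down step at the end of a scripted experiment. A sketch, assuming `POD_ID` is filled in from `runpodctl get pod` (the id below is a placeholder): `stop` keeps the volume (storage billing continues), `remove` deletes the pod entirely.

```shell
#!/bin/sh
# Tear down a pod after an experiment. POD_ID is a placeholder; take the
# real id from `runpodctl get pod`.
POD_ID="abc123"
STOP_CMD="runpodctl stop pod $POD_ID"
REMOVE_CMD="runpodctl remove pod $POD_ID"

if command -v runpodctl >/dev/null 2>&1; then
  $STOP_CMD   # or: $REMOVE_CMD to delete the volume as well
else
  printf '%s\n' "$STOP_CMD" "$REMOVE_CMD"   # dry run when the CLI is absent
fi
```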