Feynman runpod-compute

Provision and manage GPU pods on RunPod for long-running experiments. Use when the user needs persistent GPU compute with SSH access, large datasets, or multi-step experiments.

install
source · Clone the upstream repo
git clone https://github.com/getcompanion-ai/feynman
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/getcompanion-ai/feynman "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/runpod-compute" ~/.claude/skills/getcompanion-ai-feynman-runpod-compute && rm -rf "$T"
manifest: skills/runpod-compute/SKILL.md
source content

RunPod Compute

Use the runpodctl CLI for persistent GPU pods with SSH access.

Setup

brew install runpod/runpodctl/runpodctl   # macOS
runpodctl config --apiKey=YOUR_KEY

Commands

Command · Description
runpodctl create pod --gpuType "NVIDIA A100 80GB PCIe" --imageName "runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04" --name experiment · Create a pod
runpodctl get pod · List all pods
runpodctl stop pod <id> · Stop a pod (preserves its volume)
runpodctl start pod <id> · Resume a stopped pod
runpodctl remove pod <id> · Terminate and delete a pod
runpodctl gpu list · List available GPU types and prices
runpodctl send <file> · Transfer a file to/from a pod
runpodctl receive <code> · Receive a transferred file

SSH access

ssh root@<IP> -p <PORT> -i ~/.ssh/id_ed25519

Get connection details from runpodctl get pod <id>. Pods must expose port 22/tcp.
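
A minimal sketch of assembling that SSH command from the reported details; the IP and port below are placeholder values (a TEST-NET address), not output from a real pod:

```shell
# Placeholder connection details copied from `runpodctl get pod <id>` output.
POD_IP="203.0.113.7"   # example IP (TEST-NET address, not a real pod)
POD_PORT="22022"       # example host port mapped to the pod's 22/tcp
SSH_CMD="ssh root@${POD_IP} -p ${POD_PORT} -i ~/.ssh/id_ed25519"
echo "$SSH_CMD"
```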

GPU types

NVIDIA GeForce RTX 4090, NVIDIA RTX A6000, NVIDIA A40, NVIDIA A100 80GB PCIe, NVIDIA H100 80GB HBM3

When to use

  • Long-running experiments needing persistent state
  • Large dataset processing
  • Multi-step work with SSH access between iterations
  • Always stop or remove pods after experiments
  • Check availability: command -v runpodctl
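
One way to honor the "always stop or remove pods" rule is an exit trap around the experiment script; POD_ID here is a hypothetical id, and the trap only calls runpodctl when it is installed:

```shell
#!/bin/sh
POD_ID="abc123"   # hypothetical pod id taken from `runpodctl get pod`

# Stop the pod on any script exit so it never keeps billing.
cleanup() {
  if command -v runpodctl >/dev/null 2>&1; then
    runpodctl stop pod "$POD_ID"
  fi
}
trap cleanup EXIT

# ... long-running experiment steps go here ...
```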