Babysitter Reinforcement Learning Skill
RL training for robot control in simulation, with sim-to-real transfer
Install
Source: clone the upstream repo
git clone https://github.com/a5c-ai/babysitter
Claude Code: install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" && mkdir -p ~/.claude/skills && cp -r "$T/library/specializations/robotics-simulation/skills/rl-robotics" ~/.claude/skills/a5c-ai-babysitter-reinforcement-learning-skill && rm -rf "$T"
Manifest: library/specializations/robotics-simulation/skills/rl-robotics/SKILL.md
Reinforcement Learning Skill
Overview
Expert skill for training reinforcement learning agents for robot control tasks, including environment design, training pipelines, and sim-to-real transfer.
Capabilities
- Configure Gym/Gymnasium environments for robots
- Set up Stable Baselines3 training (PPO, SAC, TD3)
- Implement custom observation and action spaces
- Design reward shaping strategies
- Configure parallel environment training
- Implement domain randomization for sim-to-real
- Set up curriculum learning
- Configure vision-based RL with CNNs
- Implement policy distillation
- Export policies for deployment (ONNX, TorchScript)
Target Processes
- rl-robot-control.js
- imitation-learning.js
- sim-to-real-validation.js
- nn-model-optimization.js
Dependencies
- Stable Baselines3
- Gymnasium
- Isaac Gym
- rsl_rl
Usage Context
This skill is invoked when processes require RL-based robot control, learning from simulation, or transferring learned policies to real robots.
Output Artifacts
- Gymnasium environment implementations
- Training configurations
- Reward function designs
- Domain randomization configs
- Trained policy checkpoints
- Deployment-ready models (ONNX)