Claude-scientific-skills stable-baselines3
Production-ready reinforcement learning algorithms (PPO, SAC, DQN, TD3, DDPG, A2C) with scikit-learn-like API. Use for standard RL experiments, quick prototyping, and well-documented algorithm implementations. Best for single-agent RL with Gymnasium environments. For high-performance parallel training, multi-agent systems, or custom vectorized environments, use pufferlib instead.
git clone https://github.com/K-Dense-AI/scientific-agent-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/K-Dense-AI/scientific-agent-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/scientific-skills/stable-baselines3" ~/.claude/skills/k-dense-ai-claude-scientific-skills-stable-baselines3 && rm -rf "$T"
scientific-skills/stable-baselines3/SKILL.md
Stable Baselines3
Overview
Stable Baselines3 (SB3) is a PyTorch-based library providing reliable implementations of reinforcement learning algorithms. This skill provides comprehensive guidance for training RL agents, creating custom environments, implementing callbacks, and optimizing training workflows using SB3's unified API.
Core Capabilities
1. Training RL Agents
Basic Training Pattern:
import gymnasium as gym

from stable_baselines3 import PPO

# Create environment
env = gym.make("CartPole-v1")

# Initialize agent
model = PPO("MlpPolicy", env, verbose=1)

# Train the agent
model.learn(total_timesteps=10000)

# Save the model
model.save("ppo_cartpole")

# Load the model (without prior instantiation)
model = PPO.load("ppo_cartpole", env=env)
Important Notes:
- total_timesteps is a lower bound; actual training may exceed it due to batch collection
- Use load() as a static method (e.g. PPO.load(...)), not on an existing model instance
- The replay buffer is NOT saved with the model to save space
Algorithm Selection: Use references/algorithms.md for detailed algorithm characteristics and selection guidance. Quick reference (a short example follows the list):
- PPO/A2C: General-purpose, supports all action space types, good for multiprocessing
- SAC/TD3: Continuous control, off-policy, sample-efficient
- DQN: Discrete actions, off-policy
- HER: Goal-conditioned tasks
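For example, a minimal sketch for a continuous-control task; the environment choice and timestep budget are purely illustrative:

import gymnasium as gym

from stable_baselines3 import SAC

# Pendulum-v1 has a continuous (Box) action space, so SAC or TD3 fit; DQN would not
env = gym.make("Pendulum-v1")
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=20_000)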
See scripts/train_rl_agent.py for a complete training template with best practices.
2. Custom Environments
Requirements: Custom environments must inherit from gymnasium.Env and implement the following methods (a minimal example follows the list):
- __init__(): Define action_space and observation_space
- reset(seed, options): Return initial observation and info dict
- step(action): Return observation, reward, terminated, truncated, info
- render(): Visualization (optional)
- close(): Cleanup resources
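A minimal sketch of such an environment; the GridTargetEnv task below is invented purely for illustration:

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class GridTargetEnv(gym.Env):
    """Toy 1-D grid: move left/right until the agent reaches the last cell."""

    def __init__(self, size=10):
        super().__init__()
        self.size = size
        self.action_space = spaces.Discrete(2)  # 0 = left, 1 = right
        self.observation_space = spaces.Box(0, size - 1, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = 0
        return np.array([self.pos], dtype=np.float32), {}

    def step(self, action):
        self.pos = int(np.clip(self.pos + (1 if action == 1 else -1), 0, self.size - 1))
        terminated = self.pos == self.size - 1   # reached the goal cell
        reward = 1.0 if terminated else -0.01    # small step penalty
        truncated = False                        # no time limit in this sketch
        return np.array([self.pos], dtype=np.float32), reward, terminated, truncated, {}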
Key Constraints:
- Image observations must be np.uint8 in range [0, 255]
- Use channel-first format when possible (channels, height, width)
- SB3 normalizes images automatically by dividing by 255
- Set normalize_images=False in policy_kwargs if pre-normalized
- SB3 does NOT support Discrete or MultiDiscrete spaces with start != 0
Validation:
from stable_baselines3.common.env_checker import check_env

check_env(env, warn=True)
See scripts/custom_env_template.py for a complete custom environment template and references/custom_environments.md for comprehensive guidance.
3. Vectorized Environments
Purpose: Vectorized environments run multiple environment instances in parallel, accelerating training and enabling certain wrappers (frame-stacking, normalization).
Types:
- DummyVecEnv: Sequential execution on current process (for lightweight environments)
- SubprocVecEnv: Parallel execution across processes (for compute-heavy environments)
Quick Setup:
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

# Create 4 parallel environments
env = make_vec_env("CartPole-v1", n_envs=4, vec_env_cls=SubprocVecEnv)

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=25000)
Off-Policy Optimization: When using multiple environments with off-policy algorithms (SAC, TD3, DQN), set gradient_steps=-1 to perform one gradient update per environment step, balancing wall-clock time and sample efficiency.
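A sketch of this setting with SAC; the environment and hyperparameter values are illustrative:

from stable_baselines3 import SAC
from stable_baselines3.common.env_util import make_vec_env

# Four parallel copies of a continuous-control task
vec_env = make_vec_env("Pendulum-v1", n_envs=4)

# gradient_steps=-1: run as many gradient updates as environment steps collected
model = SAC("MlpPolicy", vec_env, train_freq=1, gradient_steps=-1, verbose=1)
model.learn(total_timesteps=20_000)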
API Differences:
- reset() returns only observations (info available in vec_env.reset_infos)
- step() returns a 4-tuple (obs, rewards, dones, infos), not a 5-tuple
- Environments auto-reset after episodes
- Terminal observations available via infos[env_idx]["terminal_observation"] (see the loop sketch below)
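A sketch of a prediction loop against a vectorized environment, assuming model and vec_env already exist (e.g. from the setup above):

obs = vec_env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = vec_env.step(action)
    for idx, done in enumerate(dones):
        if done:
            # obs[idx] already belongs to the new episode (auto-reset);
            # the final observation of the finished episode is stored in the info dict
            terminal_obs = infos[idx]["terminal_observation"]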
See references/vectorized_envs.md for detailed information on wrappers and advanced usage.
4. Callbacks for Monitoring and Control
Purpose: Callbacks enable monitoring metrics, saving checkpoints, implementing early stopping, and custom training logic without modifying core algorithms.
Common Callbacks:
- EvalCallback: Evaluate periodically and save best model
- CheckpointCallback: Save model checkpoints at intervals
- StopTrainingOnRewardThreshold: Stop when target reward reached
- ProgressBarCallback: Display training progress with timing
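For instance, a minimal sketch combining EvalCallback with StopTrainingOnRewardThreshold, assuming a model has already been created as in the basic training pattern above; the reward threshold, frequencies, and paths are illustrative:

import gymnasium as gym

from stable_baselines3.common.callbacks import EvalCallback, StopTrainingOnRewardThreshold

# Use a separate environment for evaluation
eval_env = gym.make("CartPole-v1")
stop_callback = StopTrainingOnRewardThreshold(reward_threshold=475, verbose=1)
eval_callback = EvalCallback(
    eval_env,
    callback_on_new_best=stop_callback,  # fires whenever a new best mean reward is found
    eval_freq=5_000,
    n_eval_episodes=5,
    best_model_save_path="./best_model/",
)
model.learn(total_timesteps=100_000, callback=eval_callback)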
Custom Callback Structure:
from stable_baselines3.common.callbacks import BaseCallback

class CustomCallback(BaseCallback):
    def _on_training_start(self):
        # Called before first rollout
        pass

    def _on_step(self):
        # Called after each environment step
        # Return False to stop training
        return True

    def _on_rollout_end(self):
        # Called at end of rollout
        pass
Available Attributes:
- self.model: The RL algorithm instance
- self.num_timesteps: Total environment steps
- self.training_env: The training environment
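As a sketch, a callback can combine these attributes with the logger that SB3 also exposes on callbacks; the metric name and interval below are arbitrary:

from stable_baselines3.common.callbacks import BaseCallback

class StepLoggerCallback(BaseCallback):
    def _on_step(self):
        if self.num_timesteps % 1_000 == 0:
            # self.logger forwards to the model's logger (stdout, TensorBoard, ...)
            self.logger.record("custom/num_timesteps", self.num_timesteps)
        return True  # returning False would stop training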
Chaining Callbacks:
from stable_baselines3.common.callbacks import CallbackList

callback = CallbackList([eval_callback, checkpoint_callback, custom_callback])
model.learn(total_timesteps=10000, callback=callback)
See references/callbacks.md for comprehensive callback documentation.
5. Model Persistence and Inspection
Saving and Loading:
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import VecNormalize

# Save model
model.save("model_name")

# Save normalization statistics (if using VecNormalize)
vec_env.save("vec_normalize.pkl")

# Load model
model = PPO.load("model_name", env=env)

# Load normalization statistics
vec_env = VecNormalize.load("vec_normalize.pkl", vec_env)
Parameter Access:
# Get parameters
params = model.get_parameters()

# Set parameters
model.set_parameters(params)

# Access PyTorch state dict
state_dict = model.policy.state_dict()
6. Evaluation and Recording
Evaluation:
from stable_baselines3.common.evaluation import evaluate_policy

mean_reward, std_reward = evaluate_policy(
    model, env, n_eval_episodes=10, deterministic=True
)
Video Recording:
from stable_baselines3.common.vec_env import VecVideoRecorder

# Wrap environment with video recorder
env = VecVideoRecorder(
    env,
    "videos/",
    record_video_trigger=lambda x: x % 2000 == 0,
    video_length=200,
)
See scripts/evaluate_agent.py for a complete evaluation and recording template.
7. Advanced Features
Learning Rate Schedules:
def linear_schedule(initial_value):
    def func(progress_remaining):
        # progress_remaining goes from 1 to 0
        return progress_remaining * initial_value
    return func

model = PPO("MlpPolicy", env, learning_rate=linear_schedule(0.001))
Multi-Input Policies (Dict Observations):
model = PPO("MultiInputPolicy", env, verbose=1)
Use when observations are dictionaries (e.g., combining images with sensor data).
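A sketch of such a dictionary observation space, with shapes chosen purely for illustration:

import numpy as np
from gymnasium import spaces

# Combine a channel-first image with a low-dimensional sensor vector
observation_space = spaces.Dict({
    "image": spaces.Box(low=0, high=255, shape=(3, 64, 64), dtype=np.uint8),
    "sensors": spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32),
})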
Hindsight Experience Replay:
from stable_baselines3 import SAC, HerReplayBuffer

model = SAC(
    "MultiInputPolicy",
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,
        goal_selection_strategy="future",
    ),
)
TensorBoard Integration:
model = PPO("MlpPolicy", env, tensorboard_log="./tensorboard/") model.learn(total_timesteps=10000)
Workflow Guidance
Starting a New RL Project:
- Define the problem: Identify observation space, action space, and reward structure
- Choose algorithm: Use references/algorithms.md for selection guidance
- Create/adapt environment: Use scripts/custom_env_template.py if needed
- Validate environment: Always run check_env() before training
- Set up training: Use scripts/train_rl_agent.py as a starting template
- Add monitoring: Implement callbacks for evaluation and checkpointing
- Optimize performance: Consider vectorized environments for speed
- Evaluate and iterate: Use scripts/evaluate_agent.py for assessment
Common Issues:
- Memory errors: Reduce buffer_size for off-policy algorithms or use fewer parallel environments (see the sketch below)
- Slow training: Consider SubprocVecEnv for parallel environments
- Unstable training: Try different algorithms, tune hyperparameters, or check reward scaling
- Import errors: Ensure stable_baselines3 is installed: uv pip install stable-baselines3[extra]
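For example, the replay buffer can be shrunk when the algorithm is constructed (a sketch; the off-policy default is 1,000,000 transitions and the value below is illustrative):

from stable_baselines3 import SAC

# A smaller replay buffer trades sample reuse for lower memory consumption
model = SAC("MlpPolicy", env, buffer_size=100_000)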
Resources
scripts/
- train_rl_agent.py: Complete training script template with best practices
- evaluate_agent.py: Agent evaluation and video recording template
- custom_env_template.py: Custom Gym environment template
references/
- algorithms.md: Detailed algorithm comparison and selection guide
- custom_environments.md: Comprehensive custom environment creation guide
- callbacks.md: Complete callback system reference
- vectorized_envs.md: Vectorized environment usage and wrappers
Installation
# Basic installation
uv pip install stable-baselines3

# With extra dependencies (Tensorboard, etc.)
uv pip install stable-baselines3[extra]