Agent-almanac setup-automl-pipeline
```shell
# Clone the full repository
git clone https://github.com/pjt222/agent-almanac

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" && mkdir -p ~/.claude/skills && cp -r "$T/i18n/wenyan-lite/skills/setup-automl-pipeline" ~/.claude/skills/pjt222-agent-almanac-setup-automl-pipeline-95f239 && rm -rf "$T"
```
Setup AutoML Pipeline
See Extended Examples for complete configuration files and templates.
Automate hyperparameter tuning and model selection using Optuna or Ray Tune with efficient search strategies.
When to Use
- Starting new ML project and need to quickly find good model configurations
- Retraining existing model with new data and want to re-optimize hyperparameters
- Comparing multiple algorithms and their optimal configurations
- Limited time for manual tuning but need near-optimal performance
- Team lacks deep expertise in specific algorithm hyperparameters
- Need reproducible and documented optimization process
Inputs
- Required: Training dataset with features and labels
- Required: Validation dataset for objective evaluation
- Required: Model type(s) to optimize (e.g., XGBoost, LightGBM, neural network)
- Required: Optimization objective (metric to maximize/minimize)
- Required: Compute budget (time or number of trials)
- Optional: Search space constraints (min/max values for hyperparameters)
- Optional: Prior knowledge of good hyperparameter ranges
Procedure
Step 1: Install Dependencies and Set Up Environment
Install Optuna or Ray Tune with appropriate backends.
```shell
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Option 1: Optuna (simpler, good for single-machine)
pip install optuna optuna-dashboard
pip install scikit-learn xgboost lightgbm

# Option 2: Ray Tune (distributed, good for multi-machine/GPU)
pip install "ray[tune]" optuna hyperopt bayesian-optimization
pip install torch torchvision  # if optimizing neural networks

# Visualization and tracking
pip install mlflow tensorboard plotly
```
Create project structure:
```shell
mkdir -p automl/{configs,experiments,models,results}
```
Expected: Clean environment with required packages installed, no dependency conflicts.
On failure: Use Python 3.8-3.11 (some packages have compatibility issues with 3.12+); if CUDA errors occur, install CPU-only versions first; on Apple Silicon (M1/M2) Macs, use conda instead of pip for scikit-learn.
Step 2: Define Search Space and Objective (Optuna)
Create configuration for hyperparameter search.
```python
# automl/optuna_config.py
import optuna
from optuna.pruners import HyperbandPruner
from optuna.samplers import TPESampler
import xgboost as xgb
from sklearn.metrics import roc_auc_score, mean_squared_error
import numpy as np

# ... (see EXAMPLES.md for complete implementation)
```
Expected: Search space covers reasonable hyperparameter ranges, objective function runs without errors, pruning stops unpromising trials early.
On failure: If trials crash, narrow the search space (e.g., lower the maximum n_estimators); verify the data has no NaN/inf values; check memory usage (reduce batch size if out of memory); ensure eval_metric matches the task type.
Step 3: Run Optimization with Advanced Samplers
Execute hyperparameter search with efficient sampling strategies.
```python
# automl/run_optimization.py
import optuna
from optuna.samplers import TPESampler, CmaEsSampler, NSGAIISampler
from optuna.pruners import HyperbandPruner, MedianPruner, SuccessiveHalvingPruner
import joblib
import pandas as pd
from pathlib import Path

# ... (see EXAMPLES.md for complete implementation)
```
Expected: Optimization completes with 50-70% of trials pruned early, best parameters found, visualization plots generated showing convergence.
On failure: If no trials are pruned, verify the objective reports intermediate values correctly; if optimization does not improve, try a different sampler (TPE → CMA-ES); if it crashes with n_jobs>1, use n_jobs=1 while debugging.
Step 4: Set Up Ray Tune for Distributed Optimization (Alternative)
Use Ray Tune for multi-GPU or multi-node optimization.
```python
# automl/ray_tune_config.py
from ray import tune
from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining
from ray.tune.search.optuna import OptunaSearch
from ray.tune.search import ConcurrencyLimiter
import xgboost as xgb
from sklearn.metrics import roc_auc_score
import os

# ... (see EXAMPLES.md for complete implementation)
```
Expected: Ray Tune runs trials in parallel across CPUs/GPUs, ASHA scheduler stops bad trials early, best configuration found and logged.
On failure: If Ray crashes, start with `ray.init(num_cpus=2, num_gpus=0)` for debugging; reduce the number of concurrent trials if out of memory; check that the train function does not mutate shared data; report metrics with `tune.report()` rather than return values.
Step 5: Track Experiments with MLflow
Integrate with MLflow for experiment tracking and model registry.
```python
# automl/mlflow_tracking.py
import mlflow
import mlflow.xgboost
from mlflow.tracking import MlflowClient
import optuna
from pathlib import Path

# ... (see EXAMPLES.md for complete implementation)
```
Expected: All trials logged to MLflow with parameters and metrics, best model registered in MLflow registry, experiments viewable in MLflow UI.
On failure: Start the MLflow UI with `mlflow ui --backend-store-uri file:./automl/mlruns`; check write permissions on the mlruns directory; if registration fails, verify the model registry is configured; keep model artifact size under 2 GB.
Step 6: Deploy Best Model and Monitor Performance
Save optimized model and set up monitoring.
```python
# automl/deploy_model.py
import joblib
import json
from pathlib import Path
import optuna
import xgboost as xgb

# ... (see EXAMPLES.md for complete implementation)
```
Expected: Model saved in production-ready format, configuration documented, inference script created for deployment.
On failure: If the model file is too large (>100 MB), consider model compression or feature selection; verify the model loads correctly in a fresh Python session; test the inference script with sample data before deployment.
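The save/reload round trip can be sketched like this, with LogisticRegression and a hard-coded params dict standing in for the tuned XGBoost model and `study.best_params`:

```python
import json
import joblib
from pathlib import Path
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
best_params = {"C": 0.5}  # stand-in for study.best_params
model = LogisticRegression(**best_params).fit(X, y)

# Persist both the model and the configuration that produced it
out = Path("automl/models")
out.mkdir(parents=True, exist_ok=True)
joblib.dump(model, out / "best_model.joblib")
(out / "best_params.json").write_text(json.dumps(best_params, indent=2))

# Verify the artifact loads and reproduces predictions
reloaded = joblib.load(out / "best_model.joblib")
```

Saving the parameters as JSON alongside the binary model keeps the winning configuration human-readable and diffable.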
Validation
- Optuna/Ray Tune installs without dependency conflicts
- Search space includes reasonable hyperparameter ranges
- Objective function runs successfully for single trial
- Optimization completes 50+ trials within time budget
- Pruning stops 40-70% of unpromising trials early
- Best parameters improve over default configuration by >5%
- Visualizations show convergence (optimization history flattens)
- MLflow logs all trials with parameters and metrics
- Final model saved and loads correctly
- Deployment package includes all necessary files
Common Pitfalls
- Overfitting to validation set: Running 1000s of trials implicitly optimizes for validation set; use holdout test set or time-based split for final evaluation
- Ignoring feature engineering: AutoML finds best hyperparameters but doesn't create features; invest in feature engineering first
- Search space too wide: Unbounded or very wide ranges waste trials on unrealistic values; use domain knowledge to constrain
- Not using early stopping: Training full epochs for every trial is wasteful; enable early stopping in objective function
- Ignoring compute costs: 100 trials × 10 minutes/trial ≈ 17 hours of compute; factor the budget in when setting n_trials
- Categorical features not encoded: Most algorithms need numeric features; encode categoricals before optimization
- Imbalanced data: Default metrics may mislead with class imbalance; use F1, AUC, or custom metrics
- Not saving intermediate results: Crashes lose all progress; use persistent storage (Optuna SQLite, MLflow) to resume
Related Skills
- track-ml-experiments: MLflow experiment tracking and versioning
- orchestrate-ml-pipeline: Airflow/Kubeflow for production AutoML pipelines