Track ML Experiments

Source: i18n/es/skills/track-ml-experiments/SKILL.md in https://github.com/pjt222/agent-almanac. Install the skill with:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" && mkdir -p ~/.claude/skills && cp -r "$T/i18n/es/skills/track-ml-experiments" ~/.claude/skills/pjt222-agent-almanac-track-ml-experiments-b28c4b && rm -rf "$T"
```
Set up an MLflow tracking server and implement comprehensive experiment tracking with metrics, parameters, and artifacts.
See Extended Examples for complete configuration files and templates.
When to Use
- Starting a new machine learning project requiring experiment tracking
- Migrating from manual experiment logs to automated tracking
- Comparing multiple model training runs systematically
- Sharing experiment results with team members
- Building reproducible ML workflows with full lineage tracking
- Integrating experiment tracking into CI/CD pipelines
Inputs
- Required: Python environment with an ML framework (sklearn, pytorch, tensorflow, xgboost)
- Required: MLflow installation (pip install mlflow)
- Optional: Remote storage backend (S3, Azure Blob, GCS) for artifacts
- Optional: Database backend (PostgreSQL, MySQL) for metadata storage
- Optional: Authentication credentials for remote backends
Procedure
Step 1: Initialize the MLflow Tracking Server
Set up the MLflow tracking server with appropriate backend stores.
```bash
# Option 1: Local file-based tracking (development)
mkdir -p mlruns
export MLFLOW_TRACKING_URI="file:./mlruns"

# Option 2: SQLite backend with local artifacts
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./mlartifacts \
  # ... (see EXAMPLES.md for complete implementation)
```
Create a configuration file for team sharing:
```python
# mlflow_config.py
import os

MLFLOW_TRACKING_URI = os.getenv(
    "MLFLOW_TRACKING_URI",
    "http://mlflow-server.company.com:5000"
)
# ... (see EXAMPLES.md for complete implementation)
```
Expected: MLflow UI accessible at the specified host:port, showing an empty experiments list. Server logs confirm successful startup without errors.
On failure: Check port availability with netstat -tulpn | grep 5000, verify database connection strings, ensure S3 credentials are configured (aws configure), and check firewall rules for remote access.
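Once the server is running, training scripts only need the tracking URI. The sketch below shows one way to resolve it and point a script at the server; the helper names (`resolve_tracking_uri`, `connect`) and the fallback URI are illustrative, not part of MLflow's API.

```python
import os

def resolve_tracking_uri(default="file:./mlruns"):
    """Prefer MLFLOW_TRACKING_URI from the environment, else fall back to a local file store."""
    return os.getenv("MLFLOW_TRACKING_URI", default)

def connect(experiment_name):
    """Point the client at the resolved server and create (or reuse) the experiment."""
    import mlflow  # imported lazily so the URI helper works even without MLflow installed
    mlflow.set_tracking_uri(resolve_tracking_uri())
    mlflow.set_experiment(experiment_name)
```

Reading the URI from the environment keeps the same script usable against local file storage in development and the shared server in production.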
Step 2: Configure Autologging for ML Frameworks
Enable framework-specific autologging to capture metrics, parameters, and models automatically.
```python
# training_script.py
import mlflow
from mlflow_config import MLFLOW_TRACKING_URI, MLFLOW_EXPERIMENT_NAME

# Set tracking URI
mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)
mlflow.set_experiment(MLFLOW_EXPERIMENT_NAME)
# ... (see EXAMPLES.md for complete implementation)
```
For PyTorch:
```python
import mlflow.pytorch

mlflow.pytorch.autolog(
    log_every_n_epoch=1,
    log_every_n_step=None,
    log_models=True,
    disable=False,
    exclusive=False,
    # ... (see EXAMPLES.md for complete implementation)
)
```
Expected: The run appears in the MLflow UI with all hyperparameters, metrics (training/validation loss, accuracy), model artifacts, and input examples logged automatically.
On failure: Verify MLflow version compatibility with the ML framework (mlflow.sklearn.autolog() requires MLflow ≥1.20), check whether autologging supports your model type, disable autologging and fall back to manual logging, and inspect logs around mlflow.set_tracking_uri() for connection errors.
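The version-compatibility fallback above can be made explicit in code. A minimal sketch, assuming the ≥1.20 requirement stated in the failure note; the helper names are illustrative, not MLflow API.

```python
def autolog_supported(mlflow_version, minimum=(1, 20)):
    """sklearn autologging needs MLflow >= 1.20; below that, fall back to manual logging."""
    major_minor = tuple(int(part) for part in mlflow_version.split(".")[:2])
    return major_minor >= minimum

def enable_autologging():
    """Turn on sklearn autologging when the installed MLflow is new enough."""
    import mlflow
    import mlflow.sklearn
    if autolog_supported(mlflow.__version__):
        mlflow.sklearn.autolog(log_models=True)
        return True
    return False  # caller should log params/metrics manually instead
```

Returning a flag rather than raising lets the training script choose manual logging without special-casing old environments.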
Step 3: Implement Comprehensive Manual Logging
Add custom metrics, parameters, artifacts, and tags for complete experiment documentation.
```python
# comprehensive_tracking.py
import mlflow
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path

def train_and_log_model(params, X_train, y_train, X_test, y_test):
    """
    # ... (see EXAMPLES.md for complete implementation)
```
Expected: The MLflow UI displays rich experiment information, including step-by-step metrics, visualization artifacts, the model signature, input examples, and comprehensive tags for filtering and searching.
On failure: Check artifact storage permissions (aws s3 ls s3://bucket/path), verify the matplotlib backend for figure logging (plt.switch_backend('Agg')), ensure JSON-serializable data types for log_dict, and check disk space for local artifact storage.
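The core manual-logging pattern can be sketched compactly: parameters once, metrics per step, tags for filtering. The `to_loggable` and `train_and_log` helpers below are illustrative assumptions, not MLflow API; the metric/tag names are placeholders.

```python
def to_loggable(params):
    """MLflow stores params as strings; stringify values up front so runs stay comparable."""
    return {key: str(value) for key, value in params.items()}

def train_and_log(params, metrics_by_step, run_name="manual-logging-demo"):
    """Log params once, metrics per training step, and tags for later filtering."""
    import mlflow
    with mlflow.start_run(run_name=run_name):
        mlflow.log_params(to_loggable(params))
        for step, metrics in enumerate(metrics_by_step):
            for name, value in metrics.items():
                mlflow.log_metric(name, value, step=step)
        mlflow.set_tags({"stage": "development", "logged_by": "train_and_log"})
```

Passing `step=` on each metric is what gives the UI its per-epoch curves instead of a single final value.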
Step 4: Compare Runs and Generate Reports
Use MLflow's comparison tools to analyze multiple experiments.
```python
# compare_runs.py
import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()

def compare_experiments(experiment_name, metric_name="test_accuracy", top_n=5):
    """
    # ... (see EXAMPLES.md for complete implementation)
```
Command-line comparison:
```bash
# Compare runs using MLflow CLI
mlflow runs compare --experiment-name customer-churn \
  --order-by "metrics.test_accuracy DESC" \
  --max-results 10

# Export run data to CSV
mlflow experiments csv --experiment-name customer-churn \
  --output experiments.csv
```
Expected: Console output shows sorted runs with key metrics, an HTML report is generated with a formatted comparison table, and a CSV file contains all run data for further analysis.
On failure: Verify the experiment exists with mlflow experiments list, check that metric names match exactly (case-sensitive), ensure runs completed successfully (check run status), and verify file write permissions for output files.
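The ranking logic behind run comparison is simple enough to sketch. `top_runs` is a plain-Python illustration of the sort; `best_runs` shows the server-side equivalent via the fluent search API (the `experiment_names` parameter of `mlflow.search_runs` assumes a recent MLflow version).

```python
def top_runs(runs, metric, n=5):
    """Rank run records by a metric, descending; runs missing the metric sink to the bottom."""
    def key(run):
        return run.get("metrics", {}).get(metric, float("-inf"))
    return sorted(runs, key=key, reverse=True)[:n]

def best_runs(experiment_name, metric="test_accuracy", n=5):
    """Server-side equivalent; returns a pandas DataFrame of the top runs."""
    import mlflow
    return mlflow.search_runs(
        experiment_names=[experiment_name],
        order_by=[f"metrics.{metric} DESC"],
        max_results=n,
    )
```

Sending `order_by` and `max_results` to the server avoids downloading every run just to keep the top few.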
Step 5: Configure Remote Artifact Storage
Set up S3/Azure/GCS backends for scalable artifact management.
```python
# artifact_storage_config.py
import mlflow
import os

def configure_s3_backend():
    """
    Configure S3 for artifact storage.
    """
    # ... (see EXAMPLES.md for complete implementation)
```
Docker Compose for MLflow with PostgreSQL and S3:
```yaml
# docker-compose.yml
version: '3.8'
services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_DB: mlflow
      # ... (see EXAMPLES.md for complete implementation)
```
Expected: Artifacts upload successfully to remote storage, the MLflow UI shows artifact links pointing to S3/Azure/GCS URIs, and downloading artifacts from the UI works correctly.
On failure: Verify cloud credentials with aws s3 ls or az storage blob list, check bucket/container permissions (write access is required), ensure MLflow is installed with cloud extras (pip install mlflow[extras]), test network connectivity to storage endpoints, and check CORS settings for browser access.
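For S3-backed artifacts, MLflow's client reads standard boto3 environment variables, plus MLFLOW_S3_ENDPOINT_URL for S3-compatible stores such as MinIO. A minimal sketch; the helper names and MinIO endpoint are illustrative, and actual credentials (AWS_ACCESS_KEY_ID etc.) should come from aws configure or a secrets manager, never from code.

```python
import os

def s3_artifact_env(region="us-east-1", endpoint_url=None):
    """Environment the S3 artifact client reads; endpoint_url targets S3-compatible stores."""
    env = {"AWS_DEFAULT_REGION": region}
    if endpoint_url:
        env["MLFLOW_S3_ENDPOINT_URL"] = endpoint_url
    return env

def apply_env(env):
    """Export before MLflow uploads any artifacts."""
    os.environ.update(env)
```

Keeping the settings in one function makes it easy to swap the MinIO endpoint in CI for real S3 in production.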
Step 6: Implement Experiment Lifecycle Management
Set up automated cleanup, archival, and organization policies.
```python
# lifecycle_management.py
import mlflow
from mlflow.tracking import MlflowClient
from datetime import datetime, timedelta

client = MlflowClient()

def archive_old_experiments(days_old=90):
    # ... (see EXAMPLES.md for complete implementation)
```
Expected: Old experiments are moved to the deleted state, failed runs are removed from the active list, the best runs are tagged for easy filtering in the UI, and storage space is reclaimed.
On failure: Check experiment permissions (you must be the owner to delete), verify runs are actually in FAILED status, ensure the ranking metric exists for all runs, check database connectivity for bulk operations, and verify sufficient permissions for artifact deletion in remote storage.
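The staleness check at the heart of archival can be sketched as a pure function, with the client calls layered on top. `is_stale` is an illustrative helper; `search_experiments`, `last_update_time`, and `delete_experiment` assume the MLflow 2.x client API, and delete_experiment only soft-deletes (experiments stay restorable until garbage collection).

```python
from datetime import datetime, timedelta

def is_stale(last_update_ms, days_old=90, now=None):
    """True when the last update (epoch milliseconds) is older than the retention window."""
    now = now or datetime.now()
    last_update = datetime.fromtimestamp(last_update_ms / 1000)
    return now - last_update > timedelta(days=days_old)

def archive_stale_experiments(days_old=90):
    """Soft-delete experiments untouched for the retention window."""
    from mlflow.tracking import MlflowClient
    client = MlflowClient()
    for experiment in client.search_experiments():
        if experiment.last_update_time and is_stale(experiment.last_update_time, days_old):
            client.delete_experiment(experiment.experiment_id)
```

Separating the date arithmetic from the client calls keeps the retention policy testable without a tracking server.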
Validation
- MLflow tracking server accessible via web UI
- Experiments created and runs logged successfully
- Autologging captures framework-specific metrics automatically
- Custom metrics, parameters, and artifacts logged correctly
- Comparison queries return expected top runs
- Remote artifact storage configured and functional
- Artifacts downloadable from UI and programmatically
- Run filtering and searching works with tags
- HTML comparison reports generated without errors
- Lifecycle management scripts execute successfully
Common Errors
- Connection timeouts: MLflow server not accessible from training scripts - verify the MLFLOW_TRACKING_URI environment variable, check firewall rules, ensure the server is running
- Artifact upload failures: S3/Azure credentials not configured or bucket doesn't exist - test cloud CLI access first, verify bucket permissions
- Missing metrics: Autologging disabled or unsupported framework version - check MLflow version compatibility, fall back to manual logging
- Run clutter: Too many experimental runs polluting UI - implement tagging strategy early, use lifecycle management scripts regularly
- Large artifacts: Logging entire datasets causes storage bloat - log only samples or references, use external data versioning (DVC)
- Inconsistent naming: Parameters logged with different names across runs - standardize naming conventions in config file
- Database locks: SQLite doesn't support concurrent writes - use PostgreSQL/MySQL for multi-user environments
- Autolog conflicts: Multiple autolog configurations interfere - use exclusive=True or disable conflicting autologs
Related Skills
- register-ml-model - Register tracked models in MLflow Model Registry
- version-ml-data - Version datasets using DVC for reproducible experiments
- setup-automl-pipeline - Integrate experiment tracking into automated ML pipelines
- deploy-ml-model-serving - Deploy best-performing tracked models to production
- orchestrate-ml-pipeline - Combine experiment tracking with workflow orchestration