Agent-almanac track-ml-experiments

install
source · Clone the upstream repo
git clone https://github.com/pjt222/agent-almanac
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" && mkdir -p ~/.claude/skills && cp -r "$T/i18n/de/skills/track-ml-experiments" ~/.claude/skills/pjt222-agent-almanac-track-ml-experiments-e1fb3c && rm -rf "$T"
manifest: i18n/de/skills/track-ml-experiments/SKILL.md
source content

Track ML Experiments

See Extended Examples for complete configuration files and templates.

Set up an MLflow tracking server and implement comprehensive experiment tracking with metrics, parameters, and artifacts.

When to use

  • Starting a new machine learning project requiring experiment tracking
  • Migrating from manual experiment logs to automated tracking
  • Comparing multiple model training runs systematically
  • Sharing experiment results with team members
  • Building reproducible ML workflows with full lineage tracking
  • Integrating experiment tracking into CI/CD pipelines

Inputs

  • Required: Python environment with an ML framework (sklearn, pytorch, tensorflow, xgboost)
  • Required: MLflow installation (pip install mlflow)
  • Optional: Remote storage backend (S3, Azure Blob, GCS) for artifacts
  • Optional: Database backend (PostgreSQL, MySQL) for metadata storage
  • Optional: Authentication credentials for remote backends

Procedure

Step 1: Initialize the MLflow Tracking Server

Set up the MLflow tracking server with appropriate backend stores.

# Option 1: Local file-based tracking (development)
mkdir -p mlruns
export MLFLOW_TRACKING_URI="file:./mlruns"

# Option 2: SQLite backend with local artifacts
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./mlartifacts \
  --host 0.0.0.0 --port 5000
# ... (see EXAMPLES.md for complete implementation)

Create a configuration file for team sharing:

# mlflow_config.py
import os

MLFLOW_TRACKING_URI = os.getenv(
    "MLFLOW_TRACKING_URI",
    "http://mlflow-server.company.com:5000"
)

# ... (see EXAMPLES.md for complete implementation)
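
A minimal sketch of how a training script might consume this configuration; the local fallback URI and the experiment name are illustrative assumptions, not part of the original config:

```python
# tracking_setup.py -- illustrative sketch; fallback URI and experiment name are assumptions
import os


def resolve_tracking_uri(env=None, default="file:./mlruns"):
    """Return MLFLOW_TRACKING_URI from the environment, else a local file fallback."""
    env = os.environ if env is None else env
    return env.get("MLFLOW_TRACKING_URI", default)


def init_tracking(experiment_name="customer-churn"):
    """Point MLflow at the resolved server and select an experiment.

    The experiment name here is a hypothetical example.
    """
    import mlflow  # deferred import: the helper above stays usable without mlflow installed

    mlflow.set_tracking_uri(resolve_tracking_uri())
    mlflow.set_experiment(experiment_name)
```

Keeping the fallback in one helper means the same script works against the shared server in CI and against a local file store on a laptop.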

Expected: MLflow UI accessible at the specified host:port, showing an empty experiments list. Server logs confirm successful startup without errors.

On failure: Check port availability with netstat -tulpn | grep 5000, verify database connection strings, ensure S3 credentials are configured (aws configure), and check firewall rules for remote access.

Step 2: Configure Autologging for ML Frameworks

Enable framework-specific autologging to capture metrics, parameters, and models automatically.

# training_script.py
import mlflow
from mlflow_config import MLFLOW_TRACKING_URI, MLFLOW_EXPERIMENT_NAME

# Set tracking URI
mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)
mlflow.set_experiment(MLFLOW_EXPERIMENT_NAME)

# ... (see EXAMPLES.md for complete implementation)
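
As a hedged sketch of what the elided training loop might look like with scikit-learn autologging, assuming a RandomForest model and a small hand-rolled parameter grid (both illustrative choices, not prescribed by the skill):

```python
from itertools import product


def make_param_grid(**options):
    """Expand keyword lists into a list of parameter dicts, one per run."""
    keys = list(options)
    return [dict(zip(keys, combo)) for combo in product(*options.values())]


def run_sweep(X, y):
    """Train one autologged run per parameter combination (illustrative sketch)."""
    import mlflow
    import mlflow.sklearn
    from sklearn.ensemble import RandomForestClassifier

    mlflow.sklearn.autolog(log_models=True)
    for params in make_param_grid(n_estimators=[100, 200], max_depth=[5, 10]):
        # each start_run() creates a separate run; autolog captures params,
        # fit metrics, and the fitted model without explicit log calls
        with mlflow.start_run():
            RandomForestClassifier(**params).fit(X, y)
```

With autologging enabled, the only tracking-specific code in the loop is the start_run() context manager.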

For PyTorch:

import mlflow.pytorch

mlflow.pytorch.autolog(
    log_every_n_epoch=1,
    log_every_n_step=None,
    log_models=True,
    disable=False,
    exclusive=False,
# ... (see EXAMPLES.md for complete implementation)

Expected: The run appears in the MLflow UI with all hyperparameters, metrics (training/validation loss, accuracy), model artifacts, and input examples logged automatically.

On failure: Verify MLflow version compatibility with your ML framework (mlflow.sklearn.autolog() requires MLflow ≥1.20), check whether autologging supports your model type, disable autologging and fall back to manual logging, and confirm the URI passed to mlflow.set_tracking_uri() when you see connection errors.

Step 3: Implement Comprehensive Manual Logging

Add custom metrics, parameters, artifacts, and tags for complete experiment documentation.

# comprehensive_tracking.py
import mlflow
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path

def train_and_log_model(params, X_train, y_train, X_test, y_test):
    """
# ... (see EXAMPLES.md for complete implementation)
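
One recurring detail in manual logging: mlflow.log_params expects a flat mapping, so nested configuration dicts need flattening first. A small helper (the dotted-key convention is my own choice, not an MLflow requirement):

```python
def flatten_params(params, parent_key="", sep="."):
    """Flatten nested dicts into dotted keys suitable for mlflow.log_params."""
    flat = {}
    for key, value in params.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_params(value, full_key, sep))
        else:
            flat[full_key] = value
    return flat


def log_nested_params(params):
    """Log a nested config in one call (sketch; assumes an active run)."""
    import mlflow  # deferred import so the pure helper above has no dependency

    mlflow.log_params(flatten_params(params))
```

A consistent flattening scheme also keeps parameter names identical across runs, which the comparison step below depends on.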

Expected: The MLflow UI displays rich experiment information including step-by-step metrics, visualization artifacts, the model signature, input examples, and comprehensive tags for filtering and searching.

On failure: Check artifact storage permissions (aws s3 ls s3://bucket/path), verify the matplotlib backend for figure logging (plt.switch_backend('Agg')), ensure JSON-serializable data types for log_dict, and check disk space for local artifact storage.

Step 4: Compare Runs and Generate Reports

Use MLflow's comparison tools to analyze multiple experiments.

# compare_runs.py
import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()

def compare_experiments(experiment_name, metric_name="test_accuracy", top_n=5):
    """
# ... (see EXAMPLES.md for complete implementation)
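
The ranking at the heart of such a comparison can be sketched without a live server; the input below mimics the run_id/metrics shape of client.search_runs results, and the metric field names are assumptions:

```python
def top_runs(runs, metric_name="test_accuracy", top_n=5):
    """Return the best runs by a metric, skipping runs that never logged it."""
    scored = [r for r in runs if metric_name in r.get("metrics", {})]
    scored.sort(key=lambda r: r["metrics"][metric_name], reverse=True)
    return scored[:top_n]
```

In practice you would build the input list from client.search_runs(experiment_id) and read run.data.metrics off each returned run; filtering out runs that never logged the metric avoids KeyErrors from crashed or early-stopped runs.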

Command-line export (sorted comparison itself is done in the UI or via the Python API):

# List runs for an experiment using the MLflow CLI
# (substitute your experiment ID; mlflow experiments search shows IDs)
mlflow runs list --experiment-id 1

# Export run data to CSV
mlflow experiments csv --experiment-id 1 -o experiments.csv

Expected: Console output shows the sorted runs with key metrics, an HTML report is generated with a formatted comparison table, and the CSV file contains all run data for further analysis.

On failure: Verify the experiment exists with mlflow experiments search, check that metric names match exactly (case-sensitive), ensure runs have completed successfully (check run status), and verify file write permissions for output files.

Step 5: Configure Remote Artifact Storage

Set up S3/Azure/GCS backends for scalable artifact management.

# artifact_storage_config.py
import mlflow
import os

def configure_s3_backend():
    """
    Configure S3 for artifact storage.
    """
# ... (see EXAMPLES.md for complete implementation)
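
A sketch of what the S3 part of such a function might do: fail fast when the standard AWS credential variables are unset, and point MLflow at a custom endpoint when needed. MLFLOW_S3_ENDPOINT_URL is MLflow's variable for non-AWS S3 endpoints; everything else here is an assumption about the elided implementation:

```python
import os


def missing_s3_credentials(env=None):
    """Return which of the standard AWS credential variables are unset."""
    env = os.environ if env is None else env
    required = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
    return [key for key in required if key not in env]


def configure_s3_backend(endpoint_url=None):
    """Check credentials, then configure the endpoint MLflow's S3 client reads."""
    missing = missing_s3_credentials()
    if missing:
        raise RuntimeError(f"missing AWS credentials: {missing}")
    if endpoint_url:  # only needed for MinIO or other non-AWS S3 endpoints
        os.environ["MLFLOW_S3_ENDPOINT_URL"] = endpoint_url
```

Checking credentials up front turns a confusing mid-run artifact upload failure into an immediate, explicit error.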

Docker Compose for MLflow with PostgreSQL and S3:

# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_DB: mlflow
# ... (see EXAMPLES.md for complete implementation)

Expected: Artifacts upload successfully to remote storage, the MLflow UI shows artifact links pointing to S3/Azure/GCS URIs, and downloading artifacts from the UI works correctly.

On failure: Verify cloud credentials with aws s3 ls or az storage blob list, check bucket/container permissions (write access is required), ensure MLflow is installed with cloud extras (pip install mlflow[extras]), test network connectivity to the storage endpoints, and check CORS settings for browser access.

Step 6: Implement Experiment Lifecycle Management

Set up automated cleanup, archival, and organization policies.

# lifecycle_management.py
import mlflow
from mlflow.tracking import MlflowClient
from datetime import datetime, timedelta

client = MlflowClient()

def archive_old_experiments(days_old=90):
# ... (see EXAMPLES.md for complete implementation)
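
The age cutoff at the core of archive_old_experiments can be sketched as a pure function over (name, last_update) pairs, mirroring the millisecond timestamps MLflow stores on experiments; the function and pair shape are my own framing of the elided code:

```python
from datetime import datetime, timedelta


def stale_experiments(experiments, days_old=90, now=None):
    """Select experiment names whose last update is older than the cutoff.

    `experiments` is an iterable of (name, last_update_ms) pairs, mirroring
    the millisecond timestamps MLflow keeps on Experiment objects.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days_old)
    return [
        name
        for name, last_update_ms in experiments
        if datetime.fromtimestamp(last_update_ms / 1000) < cutoff
    ]
```

The selected names would then be passed to client.delete_experiment (MLflow's soft delete), which keeps the destructive call separate from the easily testable selection logic.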

Expected: Old experiments are moved to the deleted state, failed runs are removed from the active list, the best runs are tagged for easy filtering in the UI, and storage space is reclaimed.

On failure: Check experiment permissions (you must be the owner to delete), verify runs are actually in FAILED status, ensure the metric exists for all runs being ranked, check database connectivity for bulk operations, and verify sufficient permissions for artifact deletion in remote storage.

Validation

  • MLflow tracking server accessible via the web UI
  • Experiments created and runs logged successfully
  • Autologging captures framework-specific metrics automatically
  • Custom metrics, parameters, and artifacts logged correctly
  • Comparison queries return the expected top runs
  • Remote artifact storage configured and functional
  • Artifacts downloadable from the UI and programmatically
  • Run filtering and searching works with tags
  • HTML comparison reports generated without errors
  • Lifecycle management scripts execute successfully

Common Pitfalls

  • Connection timeouts: MLflow server not accessible from training scripts - verify the MLFLOW_TRACKING_URI environment variable, check firewall rules, ensure the server is running
  • Artifact upload failures: S3/Azure credentials not configured or the bucket doesn't exist - test cloud CLI access first, verify bucket permissions
  • Missing metrics: Autologging disabled or unsupported framework version - check MLflow version compatibility, fall back to manual logging
  • Run clutter: Too many experimental runs polluting the UI - implement a tagging strategy early, run lifecycle management scripts regularly
  • Large artifacts: Logging entire datasets causes storage bloat - log only samples or references, use external data versioning (DVC)
  • Inconsistent naming: Parameters logged with different names across runs - standardize naming conventions in a config file
  • Database locks: SQLite doesn't support concurrent writes - use PostgreSQL/MySQL for multi-user environments
  • Autolog conflicts: Multiple autolog configurations interfere - use exclusive=True or disable conflicting autologs

Related Skills

  • register-ml-model - Register tracked models in the MLflow Model Registry
  • version-ml-data - Version datasets using DVC for reproducible experiments
  • setup-automl-pipeline - Integrate experiment tracking into automated ML pipelines
  • deploy-ml-model-serving - Deploy best-performing tracked models to production
  • orchestrate-ml-pipeline - Combine experiment tracking with workflow orchestration