Skills comet-ml
Expert guidance for Comet ML, the platform for tracking machine learning experiments, managing models, and monitoring production ML systems. Helps developers log experiments, compare model versions, and build reproducible ML pipelines with automatic code/data versioning.
git clone https://github.com/TerminalSkills/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/comet-ml" ~/.claude/skills/terminalskills-skills-comet-ml && rm -rf "$T"
skills/comet-ml/SKILL.md
- pip install
- references API keys
Comet ML — ML Experiment Tracking & Model Management
Overview
Comet ML is a platform for tracking machine learning experiments, managing models, and monitoring production ML systems. This skill helps developers log experiments, compare model versions, and build reproducible ML pipelines with automatic code and data versioning.
Instructions
Experiment Tracking
```python
# train.py — Track a training run with Comet ML
import os

import comet_ml
from comet_ml import Experiment
import torch
from torch import nn, optim

# Initialize experiment — auto-logs git info, environment, and dependencies
experiment = Experiment(
    api_key=os.environ["COMET_API_KEY"],
    project_name="text-classifier",
    workspace="my-team",
    auto_metric_logging=True,
    auto_param_logging=True,
    auto_histogram_weight_logging=True,
    log_code=True,  # Logs the full source code
)

# Log hyperparameters
experiment.log_parameters({
    "model": "distilbert-base-uncased",
    "learning_rate": 2e-5,
    "batch_size": 32,
    "epochs": 5,
    "optimizer": "AdamW",
    "warmup_steps": 100,
    "weight_decay": 0.01,
    "max_seq_length": 256,
})

# Log dataset information
experiment.log_dataset_hash(train_df)  # Hash for reproducibility
experiment.log_parameter("train_samples", len(train_df))
experiment.log_parameter("val_samples", len(val_df))

# Training loop with metric logging
for epoch in range(5):
    model.train()
    running_loss = 0
    for batch_idx, (inputs, labels) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

        # Log batch-level metrics
        experiment.log_metric(
            "batch_loss", loss.item(), step=epoch * len(train_loader) + batch_idx
        )

    # Log epoch-level metrics
    avg_loss = running_loss / len(train_loader)
    experiment.log_metric("train_loss", avg_loss, epoch=epoch)

    # Validation
    val_loss, val_acc, val_f1 = evaluate(model, val_loader)
    experiment.log_metrics({
        "val_loss": val_loss,
        "val_accuracy": val_acc,
        "val_f1": val_f1,
    }, epoch=epoch)
    print(f"Epoch {epoch}: loss={avg_loss:.4f} val_acc={val_acc:.4f}")

# Log the model
experiment.log_model("text-classifier", "./model_weights.pt")

# Log confusion matrix
experiment.log_confusion_matrix(
    y_true=val_labels,
    y_predicted=val_predictions,
    labels=["negative", "neutral", "positive"],
)

# Log sample predictions
for i in range(10):
    experiment.log_text(
        f"Input: {val_texts[i]}\nPredicted: {val_predictions[i]}\nActual: {val_labels[i]}"
    )

# Add tags for filtering in the dashboard
experiment.add_tags(["v2", "distilbert", "sentiment"])

experiment.end()
```
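The loop above calls an `evaluate()` helper that the snippet never defines. A minimal sketch of what it could look like is below, assuming a PyTorch classifier whose forward pass returns class logits and using scikit-learn for the F1 score; the function name and signature simply mirror how the training loop uses it and are not part of the Comet API.

```python
import torch
from torch import nn
from sklearn.metrics import f1_score

def evaluate(model, val_loader):
    """Hypothetical helper mirroring the call in the training loop:
    returns (val_loss, val_accuracy, val_macro_f1)."""
    criterion = nn.CrossEntropyLoss()
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    all_preds, all_labels = [], []
    with torch.no_grad():
        for inputs, labels in val_loader:
            logits = model(inputs)
            total_loss += criterion(logits, labels).item()
            preds = logits.argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
            all_preds.extend(preds.tolist())
            all_labels.extend(labels.tolist())
    val_loss = total_loss / len(val_loader)
    val_acc = correct / total
    val_f1 = f1_score(all_labels, all_preds, average="macro")
    return val_loss, val_acc, val_f1
```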
Model Registry
```python
# Register and version models for deployment
import os

from comet_ml import API
from comet_ml.query import Metric

api = API(api_key=os.environ["COMET_API_KEY"])

# Look up the experiment that produced the model
experiment = api.get_experiment("my-team", "text-classifier", "abc123")

# Register it to the model registry
experiment.register_model(
    model_name="text-classifier-prod",
    version="1.2.0",
    description="DistilBERT fine-tuned on customer reviews, F1=0.92",
    tags=["production-ready"],
    status="Production",  # Staging | Production | Archived
)

# Retrieve a registered model
model = api.get_model(workspace="my-team", model_name="text-classifier-prod")
latest = model.get_details("1.2.0")
print(f"Model: {latest['modelName']} v{latest['version']}")
print(f"Status: {latest['status']}")

# Download model assets
model.download("1.2.0", output_folder="./model_artifacts/")

# Find candidate experiments whose validation F1 clears a threshold
experiments = api.query("my-team", "text-classifier", Metric("val_f1") > 0.85)
for exp in experiments:
    summary = exp.get_metrics_summary("val_f1")
    print(f"{exp.id}: F1={summary['valueCurrent']}")
```
LLM Tracking
```python
# Track LLM/prompt experiments
import openai
from comet_ml import Experiment
from comet_ml.integration.openai import comet_openai_trace

# Auto-log OpenAI API calls
experiment = Experiment(project_name="llm-eval")
comet_openai_trace(experiment)  # Patches the OpenAI client

# All OpenAI calls are automatically logged with:
# - Prompt text and system message
# - Model used and parameters
# - Response text
# - Token usage and cost
# - Latency
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this text..."}],
)
# ↑ Automatically logged to the Comet dashboard

# Manual prompt logging
experiment.log_other("prompt_template", "Summarize the following: {text}")
experiment.log_metric("prompt_tokens", response.usage.prompt_tokens)
experiment.log_metric("completion_tokens", response.usage.completion_tokens)
experiment.log_metric("total_cost", calculate_cost(response.usage))
```
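The last line relies on a `calculate_cost()` helper that is not defined above. A minimal sketch is shown here; the per-1K-token prices are placeholder assumptions, not official pricing, so substitute the current rates for whatever model you call.

```python
# Hypothetical cost helper: the prices below are placeholder assumptions,
# not official OpenAI pricing; substitute your model's actual rates.
PRICE_PER_1K_PROMPT_TOKENS = 0.0025
PRICE_PER_1K_COMPLETION_TOKENS = 0.01

def calculate_cost(usage):
    """Estimate the dollar cost of one completion from its token usage."""
    prompt_cost = usage.prompt_tokens / 1000 * PRICE_PER_1K_PROMPT_TOKENS
    completion_cost = usage.completion_tokens / 1000 * PRICE_PER_1K_COMPLETION_TOKENS
    return prompt_cost + completion_cost
```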
Comparing Experiments
```python
# Query and compare experiments programmatically
from comet_ml import API

api = API()

# Get all experiments in a project
experiments = api.get_experiments("my-team", "text-classifier")

def metric_value(exp, name):
    """Read a metric's latest value from the experiment's metrics summary."""
    for entry in exp.get_metrics_summary():
        if entry["name"] == name:
            return float(entry["valueCurrent"])
    return None

# Rank by validation F1 and keep the top five runs
best = sorted(experiments, key=lambda e: metric_value(e, "val_f1") or 0, reverse=True)[:5]

for exp in best:
    params = {p["name"]: p["valueCurrent"] for p in exp.get_parameters_summary()}
    print(f"Experiment: {exp.name}")
    print(f"  Model: {params.get('model')}")
    print(f"  F1: {metric_value(exp, 'val_f1')}")
    print(f"  Accuracy: {metric_value(exp, 'val_accuracy')}")
```
Installation
```bash
pip install comet-ml

# With framework integrations
pip install comet-ml[pytorch]
pip install comet-ml[tensorflow]
pip install comet-ml[sklearn]
```
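The snippets in this skill read the API key from the `COMET_API_KEY` environment variable. Below is a small sketch of a credentials smoke test under that assumption; Comet can also read the key from a `~/.comet.config` file, so the environment variable is one option rather than the only one, and the workspace and project names here are illustrative.

```python
import os
from comet_ml import Experiment

# Assumes COMET_API_KEY was exported in your shell or CI secret store;
# setdefault only fills in a placeholder for this illustrative run.
os.environ.setdefault("COMET_API_KEY", "<your-api-key>")

experiment = Experiment(
    project_name="text-classifier",
    workspace="my-team",  # replace with your workspace
)
experiment.log_other("smoke_test", True)
experiment.end()
```

Keeping the key in CI secrets or a local config file, rather than hard-coding it, keeps credentials out of version control.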
Examples
Example 1: Setting up an evaluation pipeline for a RAG application
User request:
I have a RAG chatbot that answers questions from our docs. Set up Comet ML to evaluate answer quality.
The agent creates an evaluation suite with appropriate metrics (faithfulness, relevance, answer correctness), configures test datasets from real user questions, runs baseline evaluations, and sets up CI integration so evaluations run on every prompt or retrieval change.
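One way such a pipeline could be wired up is sketched below using only the core comet_ml SDK (Comet also offers a dedicated LLM-evaluation product, Opik, which is not shown here). The `rag_chatbot` function, the `score_*` judge functions, and the dataset rows are hypothetical placeholders for your application, scoring logic, and real user questions.

```python
import os
from statistics import mean
from comet_ml import Experiment

# --- Hypothetical placeholders: swap for your RAG app and judge functions. ---
def rag_chatbot(question: str) -> str:
    return "dummy answer"

def score_faithfulness(answer, item) -> float:
    return 1.0

def score_relevance(answer, item) -> float:
    return 1.0

def score_correctness(answer, item) -> float:
    return 1.0

# Hypothetical evaluation set built from real user questions
eval_set = [
    {"question": "How do I rotate my API key?", "reference": "Settings > API keys > Rotate"},
    {"question": "What is the free-tier rate limit?", "reference": "60 requests per minute"},
]

experiment = Experiment(
    api_key=os.environ["COMET_API_KEY"],
    project_name="rag-eval",
    workspace="my-team",
)
experiment.add_tags(["rag", "baseline"])

per_metric = {"faithfulness": [], "relevance": [], "correctness": []}
table_rows = []

for item in eval_set:
    answer = rag_chatbot(item["question"])
    scores = {
        "faithfulness": score_faithfulness(answer, item),
        "relevance": score_relevance(answer, item),
        "correctness": score_correctness(answer, item),
    }
    for name, value in scores.items():
        per_metric[name].append(value)
    table_rows.append([item["question"], answer, *scores.values()])

# Aggregate scores are what CI gates on; the per-question table aids debugging.
for name, values in per_metric.items():
    experiment.log_metric(f"mean_{name}", mean(values))
experiment.log_table(
    "rag_eval_results.csv",
    tabular_data=table_rows,
    headers=["question", "answer", "faithfulness", "relevance", "correctness"],
)
experiment.end()
```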
Example 2: Comparing model performance across prompts
User request:
We're testing GPT-4o vs Claude on our customer support prompts. Set up a comparison with Comet ML.
The agent creates a structured experiment with the existing prompt set, configures both model providers, defines scoring criteria specific to customer support (accuracy, tone, completeness), runs the comparison, and generates a summary report with statistical significance indicators.
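A sketch of how that comparison could be laid out is below: one Comet experiment per model, tagged and parameterized so the runs line up side by side in Comet's comparison view. The `run_model` and `score_response` helpers, the prompt strings, and the model identifiers are hypothetical placeholders.

```python
import os
from statistics import mean
from comet_ml import Experiment

support_prompts = [
    "A customer asks how to reset their password...",
    "A customer reports being double-charged...",
]

# --- Hypothetical placeholders: replace with real provider calls and rubric scoring. ---
def run_model(model_name: str, prompt: str) -> str:
    return "dummy response"

def score_response(response: str) -> dict:
    return {"accuracy": 1.0, "tone": 1.0, "completeness": 1.0}

for model_name in ("gpt-4o", "claude-sonnet"):  # placeholder model identifiers
    experiment = Experiment(
        api_key=os.environ["COMET_API_KEY"],
        project_name="support-prompt-eval",
        workspace="my-team",
    )
    experiment.log_parameter("model", model_name)
    experiment.add_tags(["support-prompts", model_name])

    scores = {"accuracy": [], "tone": [], "completeness": []}
    for i, prompt in enumerate(support_prompts):
        response = run_model(model_name, prompt)
        experiment.log_text(f"Prompt: {prompt}\nResponse: {response}", step=i)
        for criterion, value in score_response(response).items():
            scores[criterion].append(value)

    # One mean score per criterion makes the runs directly comparable in the UI
    for criterion, values in scores.items():
        experiment.log_metric(f"mean_{criterion}", mean(values))
    experiment.end()
```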
Guidelines
- Log everything from the start — It's cheaper to log and not need it than to re-run experiments because you forgot a metric
- Use auto-logging — Enable `auto_metric_logging` and `auto_param_logging` to capture framework metrics automatically
- Tag experiments — Use tags (`experiment.add_tags()`) for filtering; tag by architecture, dataset version, or purpose
- Model registry for deployment — Don't deploy from experiment artifacts; register models with versions and status tracking
- Log dataset hashes — Use `log_dataset_hash()` for reproducibility; know exactly which data produced which results
- Confusion matrices for classification — Always log confusion matrices; accuracy alone misses class imbalances
- Compare before deploying — Use Comet's comparison view to verify new models outperform current production across all metrics
- LLM cost tracking — Log token usage and calculated costs for every LLM call; costs add up fast in production