Claude-code-plugins-plus model-evaluation-metrics

Install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/planned-skills/generated/07-ml-training/model-evaluation-metrics" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-model-evaluation-metrics && rm -rf "$T"
manifest: planned-skills/generated/07-ml-training/model-evaluation-metrics/SKILL.md
Source Content

Model Evaluation Metrics

Purpose

This skill provides automated assistance for model evaluation metrics tasks within the ML Training domain.
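To make the domain concrete, here is a minimal, hedged sketch of the kind of task this skill assists with: computing standard classification metrics with scikit-learn (one of the tagged frameworks). The synthetic dataset and logistic-regression model are illustrative placeholders, not part of the skill itself.

# Hypothetical illustration: standard classification metrics with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

# Synthetic data stands in for a real training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # probability of the positive class

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1       :", f1_score(y_test, y_pred))
print("roc auc  :", roc_auc_score(y_test, y_prob))

Accuracy alone can be misleading on imbalanced data, which is why precision, recall, F1, and ROC AUC are usually reported together.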

When to Use

This skill activates automatically when you:

  • Mention "model evaluation metrics" in your request
  • Ask about model evaluation metrics patterns or best practices
  • Need help with ML training workflows such as data preparation, model training, hyperparameter tuning, and experiment tracking

Capabilities

  • Provides step-by-step guidance for model evaluation metrics
  • Follows industry best practices and patterns
  • Generates production-ready code and configurations
  • Validates outputs against common standards (see the sketch after this list)
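As one hedged example of the "generates code" and "validates outputs" capabilities, the sketch below computes common regression metrics and checks them against thresholds. The function name, toy data, and threshold values are illustrative assumptions, not output actually produced by the skill.

# Hypothetical sketch: validating regression metrics against project thresholds.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate_regression(y_true, y_pred, max_rmse=1.0, min_r2=0.8):
    """Compute MAE, RMSE, and R^2, and flag whether they meet the given thresholds.

    The threshold defaults are placeholders; real projects would set their own.
    """
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    r2 = r2_score(y_true, y_pred)
    passed = rmse <= max_rmse and r2 >= min_r2
    return {"mae": mae, "rmse": rmse, "r2": r2, "passed": passed}

# Toy example with made-up predictions.
y_true = np.array([3.0, 2.5, 4.1, 5.0])
y_pred = np.array([2.8, 2.7, 3.9, 5.2])
print(evaluate_regression(y_true, y_pred))

Returning a pass/fail flag alongside the raw numbers makes the check easy to wire into a training script or CI gate.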

Example Triggers

  • "Help me with model evaluation metrics"
  • "Set up model evaluation metrics"
  • "How do I implement model evaluation metrics?"

Related Skills

Part of the ML Training skill category. Tags: ml, training, pytorch, tensorflow, sklearn