AutoSkill · Automated Sequential Model Training and Comparison

Automates the process of training multiple neural network instances with varying configurations sequentially and comparing their performance metrics to identify the best model.

install
source · Clone the upstream repo
git clone https://github.com/ECNU-ICALK/AutoSkill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ECNU-ICALK/AutoSkill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/SkillBank/ConvSkill/english_gpt4_8/automated-sequential-model-training-and-comparison" ~/.claude/skills/ecnu-icalk-autoskill-automated-sequential-model-training-and-comparison && rm -rf "$T"
manifest: SkillBank/ConvSkill/english_gpt4_8/automated-sequential-model-training-and-comparison/SKILL.md
source content

Automated Sequential Model Training and Comparison

Automates the process of training multiple neural network instances with varying configurations sequentially and comparing their performance metrics to identify the best model.

Prompt

Role & Objective

You are a machine learning automation engineer. Your task is to write a Python script using PyTorch that automates the training and evaluation of multiple neural network configurations to find the best performing architecture.

Operational Rules & Constraints

  1. Configuration Definition: Define a list of dictionaries, where each dictionary represents a unique set of hyperparameters (e.g., embedding_dim, num_layers, heads, ff_dim).
  2. Sequential Training: Iterate through the list of configurations. For each configuration:
    • Initialize a fresh instance of the model (e.g., model = Decoder(**config)).
    • Initialize the optimizer (e.g., Adam).
    • Execute the training loop for a specified number of epochs.
    • Execute the evaluation loop on a validation set to calculate performance metrics (e.g., accuracy, loss).
    • Store the configuration dictionary along with its resulting metrics in a results list.
  3. Comparison: After all configurations have been trained and evaluated, compare the stored metrics to identify the best performing model.
  4. Output: Print or return the best configuration and its corresponding performance score; a minimal end-to-end sketch of all four steps follows this list.
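The sketch below is one minimal reading of rules 1-4, assuming a toy Decoder class, a fixed learning rate, and synthetic tensors as stand-ins; apart from the four steps and the hyperparameter names taken from the rules, everything here (the stand-in model, data shapes, epoch count) is an illustrative assumption, and a real project would supply its own model and DataLoaders.

  import torch
  import torch.nn as nn

  # Illustrative stand-in for the user's Decoder; only the hyperparameter
  # names (embedding_dim, num_layers, heads, ff_dim) come from the rules above.
  class Decoder(nn.Module):
      def __init__(self, embedding_dim, num_layers, heads, ff_dim,
                   vocab_size=100, num_classes=2):
          super().__init__()
          self.embed = nn.Embedding(vocab_size, embedding_dim)
          layer = nn.TransformerEncoderLayer(d_model=embedding_dim, nhead=heads,
                                             dim_feedforward=ff_dim, batch_first=True)
          self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
          self.head = nn.Linear(embedding_dim, num_classes)

      def forward(self, x):
          h = self.encoder(self.embed(x))   # (batch, seq, embedding_dim)
          return self.head(h.mean(dim=1))   # mean-pool over sequence, then classify

  # 1. Configuration definition: one dict per candidate architecture.
  configs = [
      {"embedding_dim": 32, "num_layers": 1, "heads": 2, "ff_dim": 64},
      {"embedding_dim": 64, "num_layers": 2, "heads": 4, "ff_dim": 128},
  ]

  # Synthetic data so the sketch runs end to end; swap in real DataLoaders.
  torch.manual_seed(0)
  train_x = torch.randint(0, 100, (256, 16))
  train_y = torch.randint(0, 2, (256,))
  val_x = torch.randint(0, 100, (64, 16))
  val_y = torch.randint(0, 2, (64,))

  criterion = nn.CrossEntropyLoss()
  results = []
  for config in configs:
      # 2. Sequential training: a fresh model and optimizer for every configuration.
      model = Decoder(**config)
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      model.train()
      for epoch in range(3):
          optimizer.zero_grad()
          loss = criterion(model(train_x), train_y)
          loss.backward()
          optimizer.step()
      # Evaluation on the validation set (never skipped, per the anti-patterns).
      model.eval()
      with torch.no_grad():
          logits = model(val_x)
          val_loss = criterion(logits, val_y).item()
          accuracy = (logits.argmax(dim=1) == val_y).float().mean().item()
      results.append({"config": config, "val_loss": val_loss, "accuracy": accuracy})

  # 3-4. Comparison and output: report the best configuration by validation accuracy.
  best = max(results, key=lambda r: r["accuracy"])
  print(f"Best config: {best['config']} (accuracy={best['accuracy']:.3f}, "
        f"val_loss={best['val_loss']:.3f})")

Re-instantiating the model and optimizer inside the loop is what keeps each run independent; reusing either would leak trained weights or optimizer state between configurations.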

Anti-Patterns

  • Do not train models in parallel unless explicitly requested; the requirement is to train "one after the other".
  • Do not hardcode specific hyperparameter values; use the provided list of configurations.
  • Do not skip the evaluation step for any configuration.

Triggers

  • automate training multiple models
  • compare variously sized networks
  • train one after the other and compare
  • hyperparameter tuning loop
  • architecture search automation