Vibe-Skills explaining-machine-learning-models
install
source · Clone the upstream repo
git clone https://github.com/foryourhealth111-pixel/Vibe-Skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/foryourhealth111-pixel/Vibe-Skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/bundled/skills/explaining-machine-learning-models" ~/.claude/skills/foryourhealth111-pixel-vibe-skills-explaining-machine-learning-models && rm -rf "$T"
manifest:
bundled/skills/explaining-machine-learning-models/SKILL.md · source content
Model Explainability Tool
Positioning
Treat this skill as an explicitly invoked, manual helper for interpretability work.
When to Use
Use this skill when you need to:
- Understand why a machine learning model made a specific prediction.
- Identify the most important features influencing a model's output.
- Debug model performance issues by identifying unexpected feature interactions.
- Communicate model insights to non-technical stakeholders.
- Ensure fairness and transparency in model predictions.
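As a concrete illustration of the "identify the most important features" use case, here is a minimal sketch using permutation importance from scikit-learn. The dataset, model choice, and hyperparameters are illustrative assumptions, not part of this skill's definition:

```python
# Sketch: rank features by permutation importance (assumes scikit-learn).
# The synthetic dataset and RandomForest model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

Permutation importance is model-agnostic, which makes it a reasonable first pass before reaching for model-specific attributions.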
Not For / Boundaries
- Model training and hyperparameter search: use training-machine-learning-models
- Benchmark comparison and threshold selection: use evaluating-machine-learning-models
- Leakage or prediction-time audits: use ml-data-leakage-guard
Typical Outputs
- Feature importance or attribution summaries
- Local explanation workflow for a concrete prediction
- Notes on caveats, instability, or misleading explanations
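To make the "local explanation workflow for a concrete prediction" output tangible, here is a hedged sketch for the simplest case, a linear model, where each feature's contribution to a single prediction can be decomposed exactly as `coef_j * (x_j - mean_j)`. The data and model are illustrative assumptions:

```python
# Sketch: local attribution for one prediction of a linear model.
# For a linear model, coef_j * (x_j - mean_j) decomposes the prediction's
# deviation from the average prediction exactly. Data is illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
x = X[0]  # the concrete prediction we want to explain

baseline = model.predict(X).mean()
contrib = model.coef_ * (x - X.mean(axis=0))

# The baseline plus per-feature contributions reconstructs the prediction.
print("prediction:", model.predict(x.reshape(1, -1))[0])
print("baseline + sum(contrib):", baseline + contrib.sum())
```

For non-linear models this exact decomposition does not hold, which is where approximation methods such as SHAP come in; note that unstable or correlated features can still make such attributions misleading, as the caveats output above warns.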
Related Skills
- shap · for SHAP-specific workflows
- evaluating-machine-learning-models · when the question is whether the model is good enough