claude-skill-registry · Confidence Scoring

See the main Model Explainability skill for comprehensive coverage of confidence scoring and calibration.

Install

Source · Clone the upstream repo:

    git clone https://github.com/majiayu000/claude-skill-registry

Claude Code · Install into ~/.claude/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/confidence-scoring" ~/.claude/skills/majiayu000-claude-skill-registry-confidence-scoring && rm -rf "$T"

Manifest: skills/data/confidence-scoring/SKILL.md

Source content

Confidence Scoring

This skill is covered in detail in the main Model Explainability skill.

Please refer to:

44-ai-governance/model-explainability/SKILL.md

That skill covers:

  • SHAP and LIME for feature importance
  • Confidence scoring and interpretation
  • Calibration techniques (a minimal sketch follows this list)
  • Explainability for different model types
  • LLM-specific explainability
  • Presenting explanations to users
  • Tools (SHAP, LIME, InterpretML, Captum)
  • Real-world explainability examples
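
The calibration details live in the referenced skill; as a rough, self-contained illustration of what confidence calibration involves, the sketch below fits a temperature-scaling parameter on held-out logits. Everything here (names, data, the grid-search range) is a hypothetical example, not code from the model-explainability skill.

    # Hedged sketch: temperature scaling for confidence calibration.
    # All identifiers and data are illustrative assumptions.
    import numpy as np

    def softmax(logits, temperature=1.0):
        z = logits / temperature
        z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(temperature, logits, labels):
        # Average negative log-likelihood of the true labels at this temperature.
        probs = softmax(logits, temperature)
        return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

    def fit_temperature(logits, labels):
        # Grid search for the temperature that minimizes held-out NLL.
        grid = np.linspace(0.5, 5.0, 91)
        return min(grid, key=lambda t: nll(t, logits, labels))

    # Usage with synthetic, deliberately overconfident logits:
    rng = np.random.default_rng(0)
    logits = 3.0 * rng.normal(size=(1000, 3))
    labels = rng.integers(0, 3, size=1000)
    T = fit_temperature(logits, labels)
    calibrated = softmax(logits, temperature=T)

A useful property of temperature scaling is that it changes only the confidence values, never the predicted labels: dividing logits by a positive T leaves the softmax argmax unchanged.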

For confidence-specific topics, also see:

  • Confidence thresholds in
    44-ai-governance/human-approval-flows (see the sketch after this list)
  • Model risk management in
    44-ai-governance/model-risk-management
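
As a hedged sketch of the confidence-threshold idea from human-approval-flows: act automatically on high-confidence predictions and escalate the rest to a human reviewer. The Decision type, gate function, and 0.85 cutoff below are illustrative assumptions, not part of that skill's API.

    # Hedged sketch: gate predictions on a confidence threshold.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        label: str
        confidence: float
        route: str  # "auto" or "human_review"

    def gate(label: str, confidence: float, threshold: float = 0.85) -> Decision:
        # Escalate low-confidence predictions instead of auto-applying them.
        route = "auto" if confidence >= threshold else "human_review"
        return Decision(label, confidence, route)

    print(gate("approve_claim", 0.92).route)  # auto
    print(gate("approve_claim", 0.61).route)  # human_review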

Related Skills

  • 44-ai-governance/model-explainability
    (Main skill)
  • 44-ai-governance/human-approval-flows
  • 44-ai-governance/model-risk-management