Antigravity-awesome-skills hugging-face-community-evals
Run local evaluations for Hugging Face Hub models with inspect-ai or lighteval.
```
git clone https://github.com/sickn33/antigravity-awesome-skills
```

```
T=$(mktemp -d) && git clone --depth=1 https://github.com/sickn33/antigravity-awesome-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/antigravity-awesome-skills/skills/hugging-face-community-evals" ~/.claude/skills/sickn33-antigravity-awesome-skills-hugging-face-community-evals-e49847 && rm -rf "$T"
```
Skill path: plugins/antigravity-awesome-skills/skills/hugging-face-community-evals/SKILL.md
Overview
When to Use
Use this skill for local model evaluation, backend selection, and GPU smoke tests outside the Hugging Face Jobs workflow.
This skill is for running evaluations against models on the Hugging Face Hub on local hardware.
It covers:
- `inspect-ai` with local inference
- `lighteval` with local inference
- choosing between `vllm`, Hugging Face Transformers, and `accelerate`
- smoke tests, task selection, and backend fallback strategy
It does not cover:
- Hugging Face Jobs orchestration
- model-card or `model-index` edits
- README table extraction
- Artificial Analysis imports
- `eval_results` generation or publishing
- PR creation or community-evals automation
If the user wants to run the same eval remotely on Hugging Face Jobs, hand off to the `hugging-face-jobs` skill and pass it one of the local scripts in this skill.
If the user wants to publish results into the community evals workflow, stop after generating the evaluation run and hand off that publishing step to
~/code/community-evals.
All paths below are relative to the directory containing this SKILL.md.
When To Use Which Script
| Use case | Script |
|---|---|
| Local eval on a Hub model via Inference Providers | `scripts/inspect_eval_uv.py` |
| Local GPU eval with `inspect-ai` using `vllm` or Transformers | `scripts/inspect_vllm_uv.py` |
| Local GPU eval with `lighteval` using `vllm` or `accelerate` | `scripts/lighteval_vllm_uv.py` |
| Extra command patterns | `examples/USAGE_EXAMPLES.md` |
Prerequisites
- Prefer `uv run` for local execution.
- Set `HF_TOKEN` for gated/private models.
- For local GPU runs, verify GPU access before starting:

```
uv --version
printenv HF_TOKEN >/dev/null
nvidia-smi
```
If `nvidia-smi` is unavailable, either:
- use `scripts/inspect_eval_uv.py` for lighter provider-backed evaluation, or
- hand off to the `hugging-face-jobs` skill if the user wants remote compute.
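The GPU preflight above can be sketched in Python as well. This is a heuristic illustration, not logic taken from this skill's scripts: prefer `vllm` when `nvidia-smi` is present and runs cleanly, otherwise fall back to the Transformers backend.

```python
import shutil
import subprocess

def pick_backend() -> str:
    """Heuristic sketch: choose a local backend based on GPU visibility."""
    if shutil.which("nvidia-smi"):
        try:
            # A zero exit code means the driver can see at least one GPU.
            subprocess.run(["nvidia-smi"], check=True, capture_output=True)
            return "vllm"  # GPU visible: take the throughput path
        except subprocess.CalledProcessError:
            pass
    return "hf"  # no usable GPU: compatibility fallback

print("backend:", pick_backend())
```

On a machine without a GPU this prints `backend: hf`, matching the fallback guidance above.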
Core Workflow
- Choose the evaluation framework.
  - Use `inspect-ai` when you want explicit task control and inspect-native flows.
  - Use `lighteval` when the benchmark is naturally expressed as a lighteval task string, especially leaderboard-style tasks.
- Choose the inference backend.
  - Prefer `vllm` for throughput on supported architectures.
  - Use Hugging Face Transformers (`--backend hf`) or `accelerate` as compatibility fallbacks.
- Start with a smoke test.
  - `inspect-ai`: add `--limit 10` or similar.
  - `lighteval`: add `--max-samples 10`.
- Scale up only after the smoke test passes.
- If the user wants remote execution, hand off to `hugging-face-jobs` with the same script and arguments.
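The smoke-test step can be sketched as a small command builder. The script names are the ones shipped in this skill and the sample-cap flags are the ones listed above; everything else (the helper itself, the exact argument order) is illustrative, not the scripts' full interface:

```python
def smoke_cmd(framework: str, model: str, task: str, n: int = 10) -> list:
    """Illustrative helper: assemble a smoke-test command for either framework."""
    if framework == "inspect-ai":
        return ["uv", "run", "scripts/inspect_eval_uv.py",
                "--model", model, "--task", task, "--limit", str(n)]
    if framework == "lighteval":
        return ["uv", "run", "scripts/lighteval_vllm_uv.py",
                "--model", model, "--tasks", task, "--max-samples", str(n)]
    raise ValueError(f"unknown framework: {framework}")

print(" ".join(smoke_cmd("inspect-ai", "meta-llama/Llama-3.2-1B", "mmlu")))
```

Scaling up is then just re-running the same command without the sample cap.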
Quick Start
Option A: inspect-ai via Inference Providers
Best when the model is already supported by Hugging Face Inference Providers and you want the lowest local setup overhead.
```
uv run scripts/inspect_eval_uv.py \
  --model meta-llama/Llama-3.2-1B \
  --task mmlu \
  --limit 20
```
Use this path when:
- you want a quick local smoke test
- you do not need direct GPU control
- the task already exists in `inspect-evals`
Option B: inspect-ai on Local GPU
Best when you need to load the Hub model directly, use `vllm`, or fall back to Transformers for unsupported architectures.
Local GPU:
```
uv run scripts/inspect_vllm_uv.py \
  --model meta-llama/Llama-3.2-1B \
  --task gsm8k \
  --limit 20
```
Transformers fallback:
```
uv run scripts/inspect_vllm_uv.py \
  --model microsoft/phi-2 \
  --task mmlu \
  --backend hf \
  --trust-remote-code \
  --limit 20
```
Option C: lighteval on Local GPU
Best when the task is naturally expressed as a `lighteval` task string, especially Open LLM Leaderboard-style benchmarks.
Local GPU:
```
uv run scripts/lighteval_vllm_uv.py \
  --model meta-llama/Llama-3.2-3B-Instruct \
  --tasks "leaderboard|mmlu|5,leaderboard|gsm8k|5" \
  --max-samples 20 \
  --use-chat-template
```
`accelerate` fallback:

```
uv run scripts/lighteval_vllm_uv.py \
  --model microsoft/phi-2 \
  --tasks "leaderboard|mmlu|5" \
  --backend accelerate \
  --trust-remote-code \
  --max-samples 20
```
Remote Execution Boundary
This skill intentionally stops at local execution and backend selection.
If the user wants to:
- run these scripts on Hugging Face Jobs
- pick remote hardware
- pass secrets to remote jobs
- schedule recurring runs
- inspect / cancel / monitor jobs
then switch to the `hugging-face-jobs` skill and pass it one of these scripts plus the chosen arguments.
Task Selection
`inspect-ai` examples:
- `mmlu`
- `gsm8k`
- `hellaswag`
- `arc_challenge`
- `truthfulqa`
- `winogrande`
- `humaneval`
`lighteval` task strings use `suite|task|num_fewshot`:
- `leaderboard|mmlu|5`
- `leaderboard|gsm8k|5`
- `leaderboard|arc_challenge|25`
- `lighteval|hellaswag|0`
Multiple `lighteval` tasks can be comma-separated in `--tasks`.
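As a sketch of that grammar, a tiny parser for comma-separated `suite|task|num_fewshot` strings (illustrative only; not lighteval's own parsing code):

```python
def parse_tasks(spec: str) -> list:
    """Split a comma-separated --tasks value into structured entries."""
    tasks = []
    for item in spec.split(","):
        suite, task, fewshot = item.strip().split("|")
        tasks.append({"suite": suite, "task": task, "num_fewshot": int(fewshot)})
    return tasks

print(parse_tasks("leaderboard|mmlu|5,leaderboard|gsm8k|5"))
```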
Backend Selection
- Prefer `inspect_vllm_uv.py --backend vllm` for fast GPU inference on supported architectures.
- Use `inspect_vllm_uv.py --backend hf` when `vllm` does not support the model.
- Prefer `lighteval_vllm_uv.py --backend vllm` for throughput on supported models.
- Use `lighteval_vllm_uv.py --backend accelerate` as the compatibility fallback.
- Use `inspect_eval_uv.py` when Inference Providers already cover the model and you do not need direct GPU control.
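The decision list above can be collapsed into one lookup function. The script and backend names are this skill's; the function itself and its boolean inputs are a hypothetical illustration:

```python
def choose_invocation(framework: str, vllm_ok: bool, providers_ok: bool = False) -> str:
    """Map the backend-selection bullets onto a script + backend choice."""
    if framework == "inspect-ai":
        if providers_ok:
            # Inference Providers cover the model and no direct GPU control is needed.
            return "inspect_eval_uv.py"
        return "inspect_vllm_uv.py --backend " + ("vllm" if vllm_ok else "hf")
    if framework == "lighteval":
        return "lighteval_vllm_uv.py --backend " + ("vllm" if vllm_ok else "accelerate")
    raise ValueError(f"unknown framework: {framework}")

print(choose_invocation("lighteval", vllm_ok=False))
```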
Hardware Guidance
| Model size | Suggested local hardware |
|---|---|
| | consumer GPU / Apple Silicon / small dev GPU |
| | stronger local GPU |
| | high-memory local GPU or hand off to `hugging-face-jobs` |
For smoke tests, prefer cheaper local runs plus `--limit` or `--max-samples`.
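A rough rule of thumb (not from this skill) for sizing the hardware choice: fp16/bf16 weights take about 2 bytes per parameter, plus headroom for KV cache and activations, assumed here to be around 20%:

```python
def min_vram_gb(n_params_billion: float,
                bytes_per_param: int = 2,   # fp16/bf16 weights
                overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate for loading a model for inference."""
    return n_params_billion * bytes_per_param * overhead

print(round(min_vram_gb(7), 1))   # a 7B model in fp16 wants roughly 17 GB
```

Quantized weights (8-bit, 4-bit) shrink the `bytes_per_param` term accordingly.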
Troubleshooting
- CUDA or vLLM OOM:
  - reduce `--batch-size`
  - reduce `--gpu-memory-utilization`
  - switch to a smaller model for the smoke test
  - if necessary, hand off to `hugging-face-jobs`
- Model unsupported by `vllm`:
  - switch to `--backend hf` for `inspect-ai`
  - switch to `--backend accelerate` for `lighteval`
- Gated/private repo access fails:
  - verify `HF_TOKEN`
- Custom model code required:
  - add `--trust-remote-code`
Examples
See:
- `examples/USAGE_EXAMPLES.md` for local command patterns
- `scripts/inspect_eval_uv.py`
- `scripts/inspect_vllm_uv.py`
- `scripts/lighteval_vllm_uv.py`
Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.