Skills provider-probe
install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/andyrenxu7255/provider-probe" ~/.claude/skills/openclaw-skills-provider-probe && rm -rf "$T"
OpenClaw · Install into ~/.openclaw/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/andyrenxu7255/provider-probe" ~/.openclaw/skills/openclaw-skills-provider-probe && rm -rf "$T"
manifest:
skills/andyrenxu7255/provider-probe/SKILL.md
source content
Provider Probe
Use this skill to investigate model providers behind OpenAI-compatible base URLs.
When to use
Trigger this skill when the user asks to:
- verify whether a provider's claimed model is real
- inspect a baseURL for hidden/mixed model pools
- compare multiple providers for the same claimed model
- determine whether a provider is better suited as primary or fallback
- create a trust/stability report for model routing
Core method
Always use a layered evidence approach:
- Read provider config or ask for baseURL + apiKey + claimed model id.
- Call `/models` and inspect whether the returned pool contains mixed vendors or suspicious aliases.
- Check metadata like `owned_by`, model naming conventions, and whether one baseURL exposes many unrelated model families.
- Probe both `/responses` and `/chat/completions` with minimal prompts.
- Run short capability tests and repeated stability tests.
- Summarize with a confidence rating rather than absolute certainty.
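The first two layers of this method can be sketched in Python. This is a minimal illustration, not the bundled script: it assumes the standard OpenAI-compatible `/models` response shape (`{"data": [{"id": ..., "owned_by": ...}]}`), and the helper names `fetch_models` and `vendor_prefixes` are hypothetical.

```python
import json
import urllib.request

def fetch_models(base_url, api_key, timeout=10):
    """Call {base_url}/models and return the listed model ids.

    Assumes the standard OpenAI-compatible response shape:
    {"data": [{"id": "...", "owned_by": "..."}, ...]}
    """
    req = urllib.request.Request(
        base_url.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = json.load(resp)
    return [m.get("id", "") for m in data.get("data", [])]

def vendor_prefixes(model_ids):
    """Group model ids by their leading vendor-style prefix.

    Many unrelated prefixes behind one baseURL is a hint that the
    provider fronts a mixed pool rather than a single upstream.
    """
    return {mid.split("-", 1)[0].lower() for mid in model_ids if mid}
```

A single baseURL whose `/models` list yields prefixes like `{"gpt", "claude", "gemini"}` deserves closer inspection before the claimed model id is trusted.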
Confidence labels
- High confidence real / most likely genuine: stable, coherent endpoint behavior, believable output structure, low ambiguity.
- Medium confidence / likely routed or wrapped: works, but signs suggest aggregation, aliasing, or proxy adaptation.
- Low confidence / unusable now: 404, repeated timeout, incompatible shape, or too little evidence.
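One way to make the three labels reproducible is to map concrete probe evidence onto them. The thresholds below are an illustrative mapping, not part of the skill:

```python
def confidence_label(reachable, stable, aggregation_signs):
    """Map probe evidence to a confidence label (illustrative thresholds).

    reachable:         endpoints answered with a compatible shape
    stable:            repeated probes behaved consistently
    aggregation_signs: count of mixed-pool heuristics observed
    """
    if not reachable:
        return "low confidence / unusable now"
    if aggregation_signs >= 2 or not stable:
        return "medium confidence / likely routed or wrapped"
    return "high confidence / most likely genuine"
```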
Output contract
Always report:
- Progress so far / what was tested
- Current blockers / what remains uncertain
- Next action / recommended next step
For final results, include:
- Config facts
- `/models` findings
- Endpoint compatibility findings
- Repeated stability findings
- Capability/format findings
- Final trust judgment
- Recommendation: primary / fallback / avoid
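The final-report contract can be rendered mechanically. A minimal formatter (the function name is a hypothetical helper; the field names come from the list above):

```python
def render_report(sections):
    """Render the final-report sections in the order the contract requires.

    `sections` is a dict keyed by the contract's field names; missing
    fields are rendered as "(not tested)" so gaps stay visible.
    """
    order = [
        "Config facts",
        "/models findings",
        "Endpoint compatibility findings",
        "Repeated stability findings",
        "Capability/format findings",
        "Final trust judgment",
        "Recommendation",
    ]
    return "\n".join(
        f"- {name}: {sections.get(name, '(not tested)')}" for name in order
    )
```

Rendering missing fields explicitly keeps the "what remains uncertain" part of the contract honest.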
Tooling
Prefer the bundled script for deterministic testing:
scripts/provider_probe.py
Usage:
python3 scripts/provider_probe.py --config /root/.openclaw/openclaw.json --providers ypemc omgteam vpsai --model gpt-5.4
Or probe a custom URL directly:
python3 scripts/provider_probe.py --base-url https://example.com/v1 --api-key sk-xxx --model gpt-5.4
Interpretation heuristics
Treat a provider as a likely aggregation pool when several of these appear together:
- `/models` returns many unrelated model families
- `owned_by` values are mixed or inconsistent
- the claimed model id looks like a routing alias rather than a canonical model id
- `/responses` and `/chat/completions` compatibility is uneven or surprising
- behavior is stable enough to work but not coherent enough to look like a single official upstream
Files
- Reference checklist: `references/provider-probe-checklist.md`
- Probe script: `scripts/provider_probe.py`