Marketplace model-discovery

Fetch current model names from AI providers (Anthropic, OpenAI, Gemini, Ollama), classify them into tiers (fast/default/heavy), and detect new models. Use when needing up-to-date model IDs for API calls or when other skills reference model names.

install

source · Clone the upstream repo

```bash
git clone https://github.com/aiskillstore/marketplace
```

Claude Code · Install into ~/.claude/skills/

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/consiliency/model-discovery" ~/.claude/skills/aiskillstore-marketplace-model-discovery && rm -rf "$T"
```
manifest: skills/consiliency/model-discovery/SKILL.md
source content

Model Discovery Skill

Fetch the most recent model names from AI providers using their APIs. Includes tier classification (fast/default/heavy) for routing decisions and automatic detection of new models.

Variables

| Variable | Default | Description |
| --- | --- | --- |
| `CACHE_TTL_HOURS` | `24` | How long to cache model lists before refreshing |
| `ENABLED_ANTHROPIC` | `true` | Fetch Claude models from the Anthropic API |
| `ENABLED_OPENAI` | `true` | Fetch GPT models from the OpenAI API |
| `ENABLED_GEMINI` | `true` | Fetch Gemini models from the Google API |
| `ENABLED_OLLAMA` | `true` | Fetch local models from Ollama |
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama API endpoint |
| `AUTO_CLASSIFY` | `true` | Auto-classify new models using pattern matching |
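
If the skill's scripts read these variables from the environment (an assumption; check the script sources for how they are actually consumed), parsing them might look like this sketch:

```python
import os

def env_bool(name: str, default: bool = True) -> bool:
    """Parse a boolean-ish environment variable, falling back to a default."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# Hypothetical parsing; the names match the table above, but how the real
# scripts consume these settings is not specified here.
CACHE_TTL_HOURS = int(os.environ.get("CACHE_TTL_HOURS", "24"))
ENABLED_OLLAMA = env_bool("ENABLED_OLLAMA")
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
```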

Instructions

MANDATORY - Follow the Workflow steps below in order. Do not skip steps.

  • Before referencing model names in any skill, check if fresh data exists
  • Use tier mappings to select appropriate models (fast for speed, heavy for capability)
  • Check for new models periodically and classify them

Red Flags - STOP and Reconsider

If you're about to:

  • Hardcode a model version like `gpt-5.2` or `claude-sonnet-4-5`
  • Use model names from memory without checking current availability
  • Call APIs without checking if API keys are configured
  • Skip new model classification when prompted

STOP -> Read the appropriate cookbook file -> Use the fetch script

Workflow

Fetching Models

  1. Determine which provider(s) you need models from
  2. Check if a cached model list exists: `cache/models.json`
  3. If the cache is fresh (< CACHE_TTL_HOURS old), use the cached data
  4. If stale/missing, run: `uv run python scripts/fetch_models.py --force`
  5. CHECKPOINT: Verify no API errors in output
  6. Use the model IDs as needed
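
The cache-freshness check in steps 2-3 can be sketched as follows. This assumes the cache file carries the `fetched_at` ISO-8601 timestamp shown in the Output Examples section; it is an illustration, not the skill's actual implementation:

```python
import json
import os
from datetime import datetime, timedelta, timezone

def cache_is_fresh(path: str = "cache/models.json", ttl_hours: int = 24) -> bool:
    """Return True if the cached model list exists and is younger than the TTL."""
    if not os.path.exists(path):
        return False
    with open(path) as f:
        cached = json.load(f)
    # "fetched_at" is assumed to be UTC ISO-8601, e.g. "2025-12-17T05:53:25Z"
    fetched_at = datetime.fromisoformat(cached["fetched_at"].replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - fetched_at < timedelta(hours=ttl_hours)
```

If this returns `False`, fall through to step 4 and force a refresh.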

Checking for New Models

  1. Run: `uv run python scripts/check_new_models.py --json`
  2. If new models are found, review the output
  3. For auto-classification: `uv run python scripts/check_new_models.py --auto`
  4. For interactive classification: `uv run python scripts/check_new_models.py`
  5. CHECKPOINT: All models assigned to tiers (fast/default/heavy)

Getting Tier Recommendations

  1. Read `config/model_tiers.json` for current tier mappings
  2. Use the appropriate model for task complexity:
    • fast: Simple tasks, high throughput, cost-sensitive
    • default: General purpose, balanced
    • heavy: Complex reasoning, research, difficult tasks
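Tier selection then reduces to a small lookup. The nested `provider -> tier -> model` layout assumed here for `config/model_tiers.json` is hypothetical; adapt it to the real file:

```python
import json

def model_for_tier(provider: str, tier: str,
                   config_path: str = "config/model_tiers.json") -> str:
    """Look up the model ID for a provider/tier pair.

    Assumed file layout (hypothetical):
    {"anthropic": {"fast": "...", "default": "...", "heavy": "..."}, ...}
    """
    with open(config_path) as f:
        tiers = json.load(f)
    return tiers[provider][tier]
```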

Model Tier Reference

Anthropic Claude

| Tier | Model | CLI Name |
| --- | --- | --- |
| fast | claude-haiku-4-5 | haiku |
| default | claude-sonnet-4-5 | sonnet |
| heavy | claude-opus-4-5 | opus |

OpenAI

| Tier | Model | Notes |
| --- | --- | --- |
| fast | gpt-5.2-mini | Speed optimized |
| default | gpt-5.2 | Balanced flagship |
| heavy | gpt-5.2-pro | Maximum capability |

Codex (for coding):

| Tier | Model |
| --- | --- |
| fast | gpt-5.2-codex-mini |
| default | gpt-5.2-codex |
| heavy | gpt-5.2-codex-max |

Google Gemini

| Tier | Model | Context |
| --- | --- | --- |
| fast | gemini-3-flash-lite | See API output |
| default | gemini-3-pro | See API output |
| heavy | gemini-3-deep-think | See API output |

Ollama (Local)

| Tier | Suggested Model | Notes |
| --- | --- | --- |
| fast | phi3.5:latest | Small; fast |
| default | llama3.2:latest | Balanced |
| heavy | llama3.3:70b | Large; requires GPU |

CLI Mappings (for spawn:agent skill)

| CLI Tool | Fast | Default | Heavy |
| --- | --- | --- | --- |
| claude-code | haiku | sonnet | opus |
| codex-cli | gpt-5.2-codex-mini | gpt-5.2-codex | gpt-5.2-codex-max |
| gemini-cli | gemini-3-flash-lite | gemini-3-pro | gemini-3-deep-think |
| cursor-cli | gpt-5.2 | sonnet-4.5 | sonnet-4.5-thinking |
| opencode-cli | anthropic/claude-haiku-4-5 | anthropic/claude-sonnet-4-5 | anthropic/claude-opus-4-5 |
| copilot-cli | claude-sonnet-4.5 | claude-sonnet-4.5 | claude-sonnet-4.5 |

Quick Reference

Scripts

```bash
# Fetch all models (uses cache if fresh)
uv run python scripts/fetch_models.py

# Force refresh from APIs
uv run python scripts/fetch_models.py --force

# Fetch and check for new models
uv run python scripts/fetch_models.py --force --check-new

# Check for new unclassified models (JSON output for agents)
uv run python scripts/check_new_models.py --json

# Auto-classify new models using patterns
uv run python scripts/check_new_models.py --auto

# Interactive classification
uv run python scripts/check_new_models.py
```

Config Files

| File | Purpose |
| --- | --- |
| `config/model_tiers.json` | Static tier mappings and CLI model names |
| `config/known_models.json` | Registry of all classified models with timestamps |
| `cache/models.json` | Cached API responses |

API Endpoints

| Provider | Endpoint | Auth |
| --- | --- | --- |
| Anthropic | `GET /v1/models` | `x-api-key` header |
| OpenAI | `GET /v1/models` | Bearer token |
| Gemini | `GET /v1beta/models` | `?key=` param |
| Ollama | `GET /api/tags` | None |

Output Examples

Fetch Models Output

```json
{
  "fetched_at": "2025-12-17T05:53:25Z",
  "providers": {
    "anthropic": [{"id": "claude-opus-4-5", "name": "Claude Opus 4.5"}],
    "openai": [{"id": "gpt-5.2", "name": "gpt-5.2"}],
    "gemini": [{"id": "models/gemini-3-pro", "name": "Gemini 3 Pro"}],
    "ollama": [{"id": "phi3.5:latest", "name": "phi3.5:latest"}]
  }
}
```

Check New Models Output (--json)

```json
{
  "timestamp": "2025-12-17T06:00:00Z",
  "has_new_models": true,
  "total_new": 2,
  "by_provider": {
    "openai": {
      "count": 2,
      "models": [
        {"id": "gpt-5.2-mini", "inferred_tier": "fast", "needs_classification": false},
        {"id": "gpt-5.2-pro", "inferred_tier": "heavy", "needs_classification": false}
      ]
    }
  }
}
```
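
An agent consuming the `--json` report can flatten it into human-readable lines. This sketch uses only the field names shown in the example output above:

```python
def summarize_new_models(report: dict) -> list[str]:
    """Flatten a check_new_models --json report into 'provider: id (tier)' strings."""
    lines = []
    for provider, info in report.get("by_provider", {}).items():
        for model in info["models"]:
            lines.append(f"{provider}: {model['id']} ({model['inferred_tier']})")
    return lines
```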

Integration

Other skills should reference this skill for model names:

```markdown
## Model Names

For current model names and tiers, use the `model-discovery` skill:
- Tiers: Read `config/model_tiers.json`
- Fresh data: Run `uv run python scripts/fetch_models.py`
- New models: Run `uv run python scripts/check_new_models.py --json`

**Do not hardcode model version numbers** - they become stale quickly.
```

New Model Detection

When new models are detected:

  1. The script will report them with suggested tiers based on naming patterns
  2. Models matching these patterns are auto-classified:
    • heavy: `-pro`, `-opus`, `-max`, `thinking`, `deep-research`
    • fast: `-mini`, `-nano`, `-flash`, `-lite`, `-haiku`
    • default: Base model names without modifiers
  3. Models not matching patterns require manual classification
  4. Specialty models (TTS, audio, transcribe) are auto-excluded
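The pattern rules above translate directly into a small classifier. This is a sketch of the documented patterns, not the script's actual logic; in particular, routing unmatched names to manual classification is simplified here to a `"default"` fallback:

```python
HEAVY_PATTERNS = ("-pro", "-opus", "-max", "thinking", "deep-research")
FAST_PATTERNS = ("-mini", "-nano", "-flash", "-lite", "-haiku")

def infer_tier(model_id: str) -> str:
    """Infer a tier from naming patterns; heavy markers win over fast ones."""
    name = model_id.lower()
    if any(p in name for p in HEAVY_PATTERNS):
        return "heavy"
    if any(p in name for p in FAST_PATTERNS):
        return "fast"
    return "default"
```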

Agent Query for New Models

When checking for new models programmatically:

```bash
# Returns exit code 1 if new models need attention
uv run python scripts/check_new_models.py --json

# Example agent workflow
if ! uv run python scripts/check_new_models.py --json > /tmp/new_models.json 2>&1; then
    echo "New models detected - review /tmp/new_models.json"
fi
```
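
Internally, new-model detection amounts to diffing the cached fetch against the known-model registry. A sketch, assuming the cache layout from the Output Examples and a `provider -> {model_id: metadata}` layout for `config/known_models.json` (the real registry format may differ):

```python
import json

def find_new_models(cache_path: str = "cache/models.json",
                    known_path: str = "config/known_models.json") -> dict:
    """Return model IDs present in the cache but absent from the registry."""
    with open(cache_path) as f:
        cache = json.load(f)
    with open(known_path) as f:
        known = json.load(f)
    new = {}
    for provider, models in cache["providers"].items():
        known_ids = set(known.get(provider, {}))
        fresh = [m["id"] for m in models if m["id"] not in known_ids]
        if fresh:
            new[provider] = fresh
    return new
```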