Awesome-omni-skill add-driver
Scaffold a new LLM provider driver for Prompture. Creates sync + async driver classes, registers them in the driver registry, adds settings, env template, setup.py extras, package exports, discovery integration, and models.dev pricing. Use when adding support for a new LLM provider.
```bash
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/add-driver" ~/.claude/skills/diegosouzapw-awesome-omni-skill-add-driver && rm -rf "$T"
```
skills/data-ai/add-driver/SKILL.md

Add a New LLM Driver
Scaffolds all files needed to integrate a new LLM provider into Prompture.
Before Starting
Ask the user for:
- Provider name (lowercase, used as registry key and `provider/model` prefix)
- SDK package name on PyPI and minimum version (or `httpx`/`requests` for raw HTTP)
- Default model ID
- Authentication — API key env var name, endpoint URL, or both
- API compatibility — OpenAI-compatible (`/v1/chat/completions`), custom SDK, or proprietary HTTP
- Lazy or eager import — lazy if SDK is optional, eager if it's in `install_requires`
Also look up the provider on models.dev to determine:
- models.dev provider name (e.g., `"anthropic"` for Claude, `"xai"` for Grok, `"moonshotai"` for Moonshot)
- Whether models.dev has entries — if yes, pricing comes from models.dev live data (set `MODEL_PRICING = {}`). If no, add hardcoded pricing.
Files to Create or Modify (11 total)
1. NEW: `prompture/drivers/{provider}_driver.py` (sync driver)
See references/driver-template.md for the full skeleton.
Key rules:
- Subclass `CostMixin, Driver` (NOT just `Driver`)
- Set class-level capability flags: `supports_json_mode`, `supports_json_schema`, `supports_tool_use`, `supports_streaming`, `supports_vision`, `supports_messages`
- Use `self._get_model_config(provider, model)` to get per-model `tokens_param` and `supports_temperature` from models.dev
- Use `self._calculate_cost(provider, model, prompt_tokens, completion_tokens)` — do NOT manually compute costs
- Use `self._validate_model_capabilities(provider, model, ...)` before API calls to warn about unsupported features
- If models.dev has this provider's data, set `MODEL_PRICING = {}` (empty — pricing comes live from models.dev)
- `generate()` returns `{"text": str, "meta": dict}` (see the sketch after this list)
- `meta` MUST contain: `prompt_tokens`, `completion_tokens`, `total_tokens`, `cost`, `raw_response`, `model_name`
- Implement `generate_messages()`, `generate_messages_with_tools()`, and `generate_messages_stream()` for full feature support
- Optional SDK: wrap the import in try/except and raise `ImportError` pointing to `pip install prompture[{provider}]`
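A minimal sketch of how these rules fit together, for a hypothetical provider named `acme`. The import path and the `generate()` signature are assumptions; `references/driver-template.md` remains the authoritative skeleton.

```python
# Illustrative sync driver for a hypothetical "acme" provider (not a real Prompture driver).
from typing import Any

from prompture.drivers.base import CostMixin, Driver  # assumed import path

class AcmeDriver(CostMixin, Driver):
    # Class-level capability flags
    supports_json_mode = True
    supports_json_schema = False
    supports_tool_use = False
    supports_streaming = True
    supports_vision = False
    supports_messages = True

    # Empty when models.dev has this provider: pricing is fetched live.
    MODEL_PRICING: dict[str, dict[str, float]] = {}

    def __init__(self, api_key: str | None = None, model: str = "acme-default"):
        self.api_key = api_key
        self.model = model

    def generate(self, prompt: str, options: dict[str, Any] | None = None) -> dict[str, Any]:
        options = options or {}
        model = options.get("model", self.model)
        # self._validate_model_capabilities("acme", model, ...)  # warn about unsupported features first
        # ... call the provider API here and read token counts from its response ...
        prompt_tokens, completion_tokens, raw = 0, 0, {}  # placeholders
        cost = self._calculate_cost("acme", model, prompt_tokens, completion_tokens)
        return {
            "text": "...",
            "meta": {
                "prompt_tokens": prompt_tokens,
                "completion_tokens": completion_tokens,
                "total_tokens": prompt_tokens + completion_tokens,
                "cost": cost,
                "raw_response": raw,
                "model_name": model,
            },
        }
```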
2. NEW: `prompture/drivers/async_{provider}_driver.py` (async driver)
Mirror of the sync driver, using the `AsyncDriver` base class (a sketch follows this list):
- Subclass `CostMixin, AsyncDriver`
- Same capability flags as the sync driver
- Share `MODEL_PRICING` from the sync driver: `MODEL_PRICING = {Provider}Driver.MODEL_PRICING`
- Use `httpx.AsyncClient` for HTTP calls (or async SDK methods)
- All generate methods are `async def`
- Streaming returns `AsyncIterator[dict[str, Any]]`
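The async mirror for the same hypothetical `acme` provider, under the same assumptions about import paths and method signatures:

```python
# Illustrative async driver mirroring the sync sketch above.
from typing import Any, AsyncIterator

import httpx

from prompture.drivers.base import AsyncDriver, CostMixin  # assumed import path
from prompture.drivers.acme_driver import AcmeDriver       # the sync sketch above

class AsyncAcmeDriver(CostMixin, AsyncDriver):
    # Same capability flags as the sync driver (abbreviated here)
    supports_json_mode = True
    supports_streaming = True
    supports_messages = True

    # Share pricing with the sync driver instead of duplicating it.
    MODEL_PRICING = AcmeDriver.MODEL_PRICING

    def __init__(self, api_key: str | None = None, model: str = "acme-default"):
        self.api_key = api_key
        self.model = model

    async def generate(self, prompt: str, options: dict[str, Any] | None = None) -> dict[str, Any]:
        async with httpx.AsyncClient() as client:
            ...  # await the provider API call with `client`, then build the same {"text", "meta"} dict
        return {"text": "...", "meta": {}}

    async def generate_messages_stream(
        self, messages: list[dict[str, Any]], options: dict[str, Any] | None = None
    ) -> AsyncIterator[dict[str, Any]]:
        yield {"text": "...", "meta": {}}  # placeholder chunk
```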
3. `prompture/drivers/__init__.py`
- Add sync import: `from .{provider}_driver import {Provider}Driver`
- Add async import: `from .async_{provider}_driver import Async{Provider}Driver`
- Register the sync driver with `register_driver()`:

```python
register_driver(
    "{provider}",
    lambda model=None: {Provider}Driver(
        api_key=settings.{provider}_api_key,
        model=model or settings.{provider}_model,
    ),
    overwrite=True,
)
```

- Add `"{Provider}Driver"` and `"Async{Provider}Driver"` to `__all__`
4. `prompture/__init__.py`
- Add `{Provider}Driver` to the `.drivers` import line
- Add `"{Provider}Driver"` to `__all__` under `# Drivers`
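And in the top-level `prompture/__init__.py`, for the same hypothetical provider (the existing driver names shown are placeholders):

```python
# prompture/__init__.py: sketch of the two additions
from .drivers import GrokDriver, GroqDriver, AcmeDriver  # extend the existing import line

__all__ = [
    # Drivers
    "GrokDriver",
    "GroqDriver",
    "AcmeDriver",
]
```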
5. `prompture/settings.py`
Add inside the `Settings` class:

```python
# {Provider}
{provider}_api_key: Optional[str] = None
{provider}_model: str = "default-model"
# Add endpoint if the provider supports custom endpoints:
# {provider}_endpoint: str = "https://api.example.com/v1"
```
6. `prompture/discovery.py`
Two changes are required:
a) Add the provider to the `provider_classes` dict and the configuration check:
- Import the driver class at the top of the file
- Add to `provider_classes`: `"{provider}": {Provider}Driver`
- Add a configuration check in the `is_configured` block:

```python
elif provider == "{provider}":
    if settings.{provider}_api_key or os.getenv("{PROVIDER}_API_KEY"):
        is_configured = True
```

For local/endpoint-only providers (like ollama), use endpoint presence instead.
b) This ensures `get_available_models()` returns the provider's models from both:
- Static detection: `MODEL_PRICING` keys (or empty if pricing comes from models.dev)
- models.dev enrichment: via `PROVIDER_MAP` in `model_rates.py` (see step 7)
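For orientation, a sketch of how the part (a) additions sit in `discovery.py` for the hypothetical `acme` provider; the surrounding structure and import paths are illustrative, not the file's actual contents:

```python
# prompture/discovery.py: sketch of the part (a) additions
import os

from .drivers.acme_driver import AcmeDriver  # assumed import path
from .settings import settings               # assumed import path

provider_classes = {
    # ...existing entries...
    "acme": AcmeDriver,
}

for provider, driver_cls in provider_classes.items():
    is_configured = False
    # ...existing checks for other providers...
    if provider == "acme":
        if settings.acme_api_key or os.getenv("ACME_API_KEY"):
            is_configured = True
```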
7. `prompture/model_rates.py` — `PROVIDER_MAP`
If models.dev has this provider's data, add the mapping:

```python
PROVIDER_MAP: dict[str, str] = {
    ...
    "{provider}": "{models_dev_name}",  # e.g., "moonshot": "moonshotai"
}
```

This enables:
- Live pricing via `get_model_rates()` — used by `CostMixin._calculate_cost()`
- Capability metadata via `get_model_capabilities()` — used by `_get_model_config()` and `_validate_model_capabilities()`
- Model discovery via `get_all_provider_models()` — called by `discovery.py` to list all available models
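A quick way to sanity-check the mapping once it is added is to call those three functions directly. Using the moonshot example from this document (the model ID is just the one used in the diagram below):

```python
# Sanity check of the models.dev wiring, using the moonshot mapping as the example
from prompture.model_rates import (
    get_all_provider_models,
    get_model_capabilities,
    get_model_rates,
)

print(get_model_rates("moonshot", "kimi-k2.5"))         # live pricing per 1M tokens
print(get_model_capabilities("moonshot", "kimi-k2.5"))  # supports_temperature, is_reasoning, ...
print(get_all_provider_models("moonshotai"))            # all model IDs models.dev lists for the provider
```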
To find the correct models.dev name, check:
https://models.dev/{models_dev_name}
If models.dev does NOT have this provider, skip this step. The driver will use hardcoded `MODEL_PRICING` for costs and return None for capabilities.
8. `setup.py` / `pyproject.toml`
If the SDK is optional: add `"{provider}": ["{sdk}>={version}"]` to `extras_require`.
If it is required: add it to `install_requires`.
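For the optional-SDK case in `setup.py`, the extra looks roughly like this (the `acme-sdk` name and version are placeholders):

```python
# setup.py: sketch of an optional-SDK extra for a hypothetical "acme" provider
from setuptools import setup

setup(
    name="prompture",
    # ...existing arguments...
    extras_require={
        # ...existing extras...
        "acme": ["acme-sdk>=1.2.0"],  # install with: pip install prompture[acme]
    },
)
```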
9. `.env.copy`
Add a section:

```
# {Provider} Configuration
{PROVIDER}_API_KEY=your-api-key-here
{PROVIDER}_MODEL=default-model
```
10. `CLAUDE.md`
Add `{provider}` to the driver list in the Module Layout bullet.
11. OPTIONAL: `examples/{provider}_example.py`
Follow the existing example pattern (see `grok_example.py` or `groq_example.py`); a rough outline follows this list:
- Two extraction examples: default instruction + custom instruction
- Show different models if available
- Print JSON output and token usage statistics
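A rough outline of such an example for the hypothetical `acme` provider. The `extract_and_jsonify` argument order and names below are assumptions based on the call shape in the integration diagram further down; copy the real call from `grok_example.py` or `groq_example.py`.

```python
# examples/acme_example.py: outline only; argument names are assumptions
import json

from prompture import extract_and_jsonify

TEXT = "Jane Doe is a 34-year-old engineer living in Lisbon."
SCHEMA = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
}

# Example 1: default instruction
result = extract_and_jsonify("acme/acme-default", TEXT, SCHEMA)
print(json.dumps(result, indent=2, default=str))  # JSON output plus token usage metadata

# Example 2: custom instruction (the kwarg name is an assumption)
result = extract_and_jsonify(
    "acme/acme-default", TEXT, SCHEMA, instruction="Return only the person's name and age."
)
print(json.dumps(result, indent=2, default=str))
```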
Important: Reasoning Model Handling
If the provider has reasoning models (models with `reasoning: true` on models.dev):
- Check `caps.is_reasoning` before sending `response_format` — reasoning models often don't support it
- Handle the `reasoning_content` field in responses, both regular and streaming (a sketch follows the example below)
- Some reasoning models don't support `temperature` — respect `supports_temperature` from `_get_model_config()`
Example pattern (see `moonshot_driver.py`):
```python
if options.get("json_mode"):
    from ..model_rates import get_model_capabilities

    caps = get_model_capabilities("{provider}", model)
    is_reasoning = caps is not None and caps.is_reasoning is True
    model_supports_structured = (
        caps is None or caps.supports_structured_output is not False
    ) and not is_reasoning
    if model_supports_structured:
        # Send response_format
        ...
```
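For the `reasoning_content` bullet above, a sketch of what the handling can look like, assuming an OpenAI-compatible response shape; where the field is surfaced (here, in `meta`) is up to the driver:

```python
# Sketch: reading reasoning_content from an OpenAI-compatible response (field names assumed)
message = raw_response["choices"][0]["message"]
text = message.get("content") or ""
reasoning = message.get("reasoning_content")  # only present on reasoning models
meta["reasoning_content"] = reasoning         # surface it rather than dropping it

# Streaming: deltas may carry reasoning_content instead of content.
# delta = chunk["choices"][0]["delta"]
# piece = delta.get("content") or delta.get("reasoning_content") or ""
```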
How models.dev Integration Works
```
User calls extract_and_jsonify("moonshot/kimi-k2.5", ...)
│
├─► core.py checks driver.supports_json_mode → decides json_mode
│
├─► driver._get_model_config("moonshot", "kimi-k2.5")
│   └─► model_rates.get_model_capabilities("moonshot", "kimi-k2.5")
│       └─► PROVIDER_MAP["moonshot"] → "moonshotai"
│           └─► models.dev data["moonshotai"]["models"]["kimi-k2.5"]
│               └─► Returns: supports_temperature, is_reasoning, context_window, etc.
│
├─► driver._calculate_cost("moonshot", "kimi-k2.5", tokens...)
│   └─► model_rates.get_model_rates("moonshot", "kimi-k2.5")
│       └─► Same lookup → returns {input: 0.6, output: 3.0} per 1M tokens
│
└─► discovery.get_available_models()
    └─► Iterates PROVIDER_MAP → get_all_provider_models("moonshotai")
        └─► Returns all model IDs under the provider
```
Model Name Resolution
Model names are always provider-scoped. The format is `"provider/model_id"`.
- `get_driver_for_model("openrouter/qwen-2.5")` → looks up `"openrouter"` in the driver registry
- `get_model_capabilities("openrouter", "qwen-2.5")` → looks in models.dev under `data["openrouter"]["models"]["qwen-2.5"]`
- `get_model_capabilities("modelscope", "qwen-2.5")` → looks in models.dev under `data["modelscope"]["models"]["qwen-2.5"]`
The same model ID under different providers is not ambiguous — each provider has its own namespace in both the driver registry and models.dev data.
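The same point in code, using two calls that appear above (whether each provider resolves successfully depends on which drivers are configured in your environment):

```python
# Same model ID under two providers, resolved independently
from prompture.drivers import get_driver_for_model
from prompture.model_rates import get_model_capabilities

driver = get_driver_for_model("openrouter/qwen-2.5")       # picks the "openrouter" driver
caps_a = get_model_capabilities("openrouter", "qwen-2.5")  # openrouter's entry on models.dev
caps_b = get_model_capabilities("modelscope", "qwen-2.5")  # modelscope's entry; may differ
```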
Verification
```bash
# Import check
python -c "from prompture import {Provider}Driver; print('OK')"
python -c "from prompture.drivers import Async{Provider}Driver; print('OK')"

# Registry check
python -c "from prompture.drivers import get_driver_for_model; d = get_driver_for_model('{provider}/test'); print(type(d).__name__, d.model)"

# Discovery check
python -c "from prompture import get_available_models; ms = [m for m in get_available_models() if m.startswith('{provider}/')]; print(f'Found {{len(ms)}} models'); print(ms[:5])"

# Run tests
pytest tests/ -x -q
```