# Deep Infra

Configure DeepInfra model routing with provider auth, model selection, fallback chains, and cost-aware defaults for stable open-source and frontier model workflows.
```sh
# Clone the full skills repo:
git clone https://github.com/openclaw/skills

# Or install just this skill into your local skills directory:
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ats3v/deep-infra" ~/.claude/skills/clawdbot-skills-deep-infra && rm -rf "$T"
```
`skills/ats3v/deep-infra/SKILL.md`

## Setup

On first use, read `setup.md` to align activation boundaries, reliability goals, and routing preferences before making configuration changes.
## When to Use
Use this skill when the user wants to connect an OpenAI-compatible workflow to DeepInfra, choose open-source and frontier models by task type, set safe fallbacks, and control cost drift over time.
## Architecture

Memory lives in `~/deep-infra/`. See `memory-template.md` for structure.

```
~/deep-infra/
├── memory.md         # Active routing profile and constraints
├── providers.md      # Confirmed provider and auth choices
├── routing-rules.md  # Task -> model and fallback policy
├── incidents.md      # Outages, rate limits, and recovery notes
└── budgets.md        # Spend guardrails and optimization actions
```
## Quick Reference
Use the smallest relevant file for the current task.
| Topic | File |
|---|---|
| Setup and activation preferences | `setup.md` |
| Memory template | `memory-template.md` |
| Authentication and provider wiring | |
| Routing patterns by workload | |
| Reliability and fallback handling | |
| Cost controls and spend reviews | |
## Core Rules
1. Start from Workload Classes, Not Model Hype
- Classify requests first: coding, analysis, extraction, summarization, or long-context synthesis.
- Map each class to a primary model and a fallback before changing any defaults.
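As a sketch, the class-to-model mapping above can live in a small table. The model IDs below are assumptions for illustration only and should be checked against the live DeepInfra catalog before use:

```python
# Illustrative task-class routing table. Model IDs are placeholder
# assumptions for this sketch; verify them against the current
# DeepInfra model catalog before applying any defaults.
ROUTES = {
    "coding":        {"primary": "deepseek-ai/DeepSeek-V3",
                      "fallback": "Qwen/Qwen2.5-Coder-32B-Instruct"},
    "summarization": {"primary": "meta-llama/Meta-Llama-3.1-8B-Instruct",
                      "fallback": "mistralai/Mistral-7B-Instruct-v0.3"},
}

def route(task_class: str) -> dict:
    """Return the primary/fallback pair for a classified task class."""
    return ROUTES[task_class]
```

Note that each fallback above comes from a different model family than its primary, per rule 3 below.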
2. Keep Authentication Explicit and Verifiable
- Use `DEEPINFRA_API_KEY` from the local environment, never pasted into logs or chat memory.
- Validate auth with a minimal request before applying routing changes.
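A minimal auth check, assuming the key is exported as `DEEPINFRA_API_KEY`, can hit the model-listing endpoint (listed under External Endpoints below) with only the standard library:

```python
import json
import os
import urllib.request

def auth_headers() -> dict:
    """Read the key from the environment at call time; never hard-code
    or log the raw value."""
    key = os.environ["DEEPINFRA_API_KEY"]
    return {"Authorization": f"Bearer {key}"}

def check_auth() -> bool:
    """Minimal verification request: list models. An HTTP 200 with a
    'data' field means the key is accepted."""
    req = urllib.request.Request(
        "https://api.deepinfra.com/v1/openai/models",
        headers=auth_headers(),
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status == 200 and "data" in json.load(resp)

# Usage: check_auth() returns True when the key is valid.
```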
3. Design Fallbacks for Failure Modes, Not Convenience
- Separate fallback reasons: rate limit, provider outage, latency spike, or output quality failure.
- Keep at least one fallback from a different model family for resilience.
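A sketch of reason-aware fallback selection, assuming model IDs follow DeepInfra's `family/model` naming and treating the prefix before `/` as the family:

```python
def pick_fallback(chain: list, failed: str, reason: str):
    """Pick the next model in an ordered fallback chain. For outages and
    latency spikes, skip models from the same family as the failed one,
    since those incidents tend to affect a whole family at once."""
    family = failed.split("/")[0]
    candidates = chain[chain.index(failed) + 1:]
    if reason in ("outage", "latency"):
        candidates = [m for m in candidates
                      if not m.startswith(family + "/")]
    return candidates[0] if candidates else None
```

Rate-limit and quality failures simply advance to the next entry, while infrastructure failures jump families.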
4. Leverage Open-Source Model Diversity
- DeepInfra hosts models from many providers (DeepSeek, Moonshot, MiniMax, StepFun, NVIDIA, and more).
- Use model diversity to build resilient fallback chains across independent model families.
5. Enforce Cost Boundaries Before Throughput Tuning
- Set cost ceilings by task class and check expected token burn before broad rollout.
- Route low-stakes tasks to cheaper models and reserve premium models for high-impact tasks.
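The expected token burn check can be a back-of-envelope estimate like the sketch below; prices are per million tokens and must be taken from the current DeepInfra pricing page, not from this example:

```python
def monthly_cost_usd(req_per_day: int, in_tokens: int, out_tokens: int,
                     in_price_per_m: float, out_price_per_m: float,
                     days: int = 30) -> float:
    """Rough monthly spend for one task class before broad rollout.
    Prices are USD per million tokens (placeholder values in tests)."""
    per_req = (in_tokens * in_price_per_m / 1e6
               + out_tokens * out_price_per_m / 1e6)
    return req_per_day * days * per_req
```

Compare the result against the ceiling set for that task class before changing defaults.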
6. Change One Layer at a Time
- Modify either model selection, fallback policy, or budget limits in a single iteration.
- After each change, run a quick verification prompt set and record outcome.
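The verification pass can be sketched as below, where `send` is a hypothetical stand-in for whatever routed inference call is configured (not a real API):

```python
def run_verification(prompts: list, send, expect: dict) -> dict:
    """Run a fixed prompt set after a single-layer change and record
    pass/fail per prompt. `send` is any callable prompt -> response
    text; `expect` maps each prompt to a predicate over the response."""
    return {p: bool(expect[p](send(p))) for p in prompts}
```

Recording the returned dict in `incidents.md` or `memory.md` gives a before/after trail for each change.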
7. Record Decisions for Repeatability
- Save the final routing policy, rationale, and known tradeoffs in memory.
- Reuse proven policies instead of repeatedly rebuilding from scratch.
## Common Traps
- Choosing one model for every task -> higher cost and unstable quality under varied workloads.
- Using same-family fallback chain only -> cascading failures during model-specific incidents.
- Ignoring token limits for long inputs -> truncated responses and hidden quality loss.
- Changing routing and budgets simultaneously -> unclear root cause when quality drops.
- Running without verification prompts -> broken routing detected only after user-facing failures.
## External Endpoints
These endpoints are used only to discover model metadata and execute routed inference requests under explicit user task intent.
| Endpoint | Data Sent | Purpose |
|---|---|---|
| https://api.deepinfra.com/v1/openai/models | none or auth header | Discover current model catalog and metadata |
| https://api.deepinfra.com/v1/openai/chat/completions | user prompt content and selected model id | Execute routed inference requests |
No other data is sent externally.
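As a sketch of the second endpoint, an OpenAI-compatible chat completion can be issued with only the standard library; the model ID passed to `chat` is an example, not a recommendation:

```python
import json
import os
import urllib.request

CHAT_URL = "https://api.deepinfra.com/v1/openai/chat/completions"

def build_body(model: str, prompt: str) -> dict:
    """OpenAI-compatible request payload: model ID plus a message list."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat(model: str, prompt: str) -> str:
    """Send one routed inference request and return the first choice's
    text from the standard chat-completions response shape."""
    req = urllib.request.Request(
        CHAT_URL,
        data=json.dumps(build_body(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['DEEPINFRA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage: chat("deepseek-ai/DeepSeek-V3", "Say hello")  # example model ID
```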
## Security & Privacy

Data that leaves your machine:

- Prompt text and selected model metadata, sent to DeepInfra when inference is requested.

Data that stays local:

- Routing notes and preferences under `~/deep-infra/`.
- Local environment variable references and verification logs.
This skill does NOT:

- Request raw API keys in chat.
- Store plaintext secrets in skill memory files.
- Modify files outside `~/deep-infra/` for its own state.
## Trust

When you use this skill, prompt content is sent to DeepInfra for model execution. Install it only if you trust this service with your data.
## Related Skills

Install with `clawhub install <slug>` if the user confirms:

- `api` — API request design, payload shaping, and response validation patterns
- `auth` — credential handling and auth troubleshooting workflows
- `models` — model comparison and selection guidance
- `monitoring` — runtime health checks and incident tracking practices
## Feedback

- If useful: `clawhub star deep-infra`
- Stay updated: `clawhub sync`