Skills portkey
install
source · Clone the upstream repo
git clone https://github.com/TerminalSkills/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/portkey" ~/.claude/skills/terminalskills-skills-portkey && rm -rf "$T"
manifest:
skills/portkey/SKILL.md
safety · automated scan (medium risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- pip install
- references .env files
- references API keys
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Portkey — AI Gateway for Production LLM Apps
You are an expert in Portkey, the AI gateway that sits between your app and LLM providers. You help developers add caching (including semantic caching), fallbacks, load balancing, request retries, guardrails, budget limits, and observability to LLM calls, using a single unified API that works with 200+ models from OpenAI, Anthropic, Google, and open-source providers.
Core Capabilities
```typescript
import Portkey from "portkey-ai";

const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  config: {
    strategy: { mode: "fallback" }, // Auto-fallback on errors
    targets: [
      {
        provider: "openai",
        api_key: process.env.OPENAI_KEY,
        override_params: { model: "gpt-4o" },
        weight: 0.7,
      },
      {
        provider: "anthropic",
        api_key: process.env.ANTHROPIC_KEY,
        override_params: { model: "claude-sonnet-4-20250514" },
        weight: 0.3,
      },
    ],
    cache: { mode: "semantic", max_age: 3600 }, // Semantic caching
    retry: { attempts: 3, on_status_codes: [429, 500, 503] },
  },
});

// Use like the OpenAI SDK: Portkey handles routing, caching, fallbacks
const response = await portkey.chat.completions.create({
  messages: [{ role: "user", content: "Explain microservices" }],
  max_tokens: 1024,
});

// Guardrails
const guarded = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  config: {
    before_request_hooks: [{ type: "guardrail", id: "no-pii" }],
    after_request_hooks: [{ type: "guardrail", id: "no-hallucination" }],
  },
});

// Budget limits
// Set in Portkey dashboard: max $100/day per API key
```
Installation
npm install portkey-ai # or pip install portkey-ai
Best Practices
- OpenAI SDK compatible — Drop-in replacement; change import and add config; existing code works
- Fallbacks — Route to a backup provider when the primary fails; effective uptime can approach 99.99% even when a single provider degrades
- Semantic caching — Cache similar (not just identical) queries; 40-60% cache hit rate typical
- Load balancing — Split traffic across providers by weight; optimize cost vs quality
- Retry with backoff — Auto-retry on 429/500/503; configurable attempts and status codes
- Guardrails — PII detection, content moderation, hallucination checks; pre and post request
- Budget limits — Set per-key spending caps; prevent runaway costs from bugs or abuse
- Observability — Dashboard shows latency, cost, tokens, and errors per provider; no additional SDK required
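To make the load-balancing and retry bullets concrete, here is a minimal sketch of a gateway config in the same shape as the example above. It is illustrative only: the `GatewayConfig` and `Target` types are local assumptions mirroring the fields used in this skill's example, not Portkey's full schema, and the snippet just builds and sanity-checks the object without making any network calls.

```typescript
// Hypothetical local types mirroring the config fields used above.
type Target = {
  provider: string;
  override_params: { model: string };
  weight: number;
};

type GatewayConfig = {
  strategy: { mode: "loadbalance" | "fallback" };
  targets: Target[];
  retry: { attempts: number; on_status_codes: number[] };
};

const config: GatewayConfig = {
  strategy: { mode: "loadbalance" }, // split traffic across targets by weight
  targets: [
    { provider: "openai", override_params: { model: "gpt-4o" }, weight: 0.7 },
    { provider: "anthropic", override_params: { model: "claude-sonnet-4-20250514" }, weight: 0.3 },
  ],
  retry: { attempts: 3, on_status_codes: [429, 500, 503] }, // retry transient errors
};

// Sanity check: weights should sum to 1 so the traffic split is well defined.
const totalWeight = config.targets.reduce((sum, t) => sum + t.weight, 0);
console.log(totalWeight.toFixed(2)); // "1.00"
```

A config like this would be passed as the `config` option when constructing the client, as in the example above; keeping weights normalized makes the cost/quality split explicit.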