# claude-code-plugins · openrouter-openai-compat

## Install

Clone the upstream repo:

```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Claude Code · install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/plugins/saas-packs/openrouter-pack/skills/openrouter-openai-compat" \
       ~/.claude/skills/jeremylongshore-claude-code-plugins-openrouter-openai-compat \
  && rm -rf "$T"
```

Manifest: `plugins/saas-packs/openrouter-pack/skills/openrouter-openai-compat/SKILL.md`
# OpenRouter OpenAI Compatibility

## Overview

OpenRouter implements the OpenAI Chat Completions API specification (`/v1/chat/completions`). Existing OpenAI SDK code works with OpenRouter after changing two values: `base_url` and `api_key`. This gives you access to 400+ models from many providers through the same SDK interface.
## The Two-Line Migration

### Python (Before)
```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # OpenAI direct

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```
### Python (After)
```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # Changed
    api_key=os.environ["OPENROUTER_API_KEY"],  # Changed
    default_headers={
        "HTTP-Referer": "https://your-app.com",  # Added (optional)
        "X-Title": "Your App",                   # Added (optional)
    },
)

response = client.chat.completions.create(
    model="openai/gpt-4o",  # Prefix with provider namespace
    messages=[{"role": "user", "content": "Hello"}],
)
```
### TypeScript (After)
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: {
    "HTTP-Referer": "https://your-app.com",
    "X-Title": "Your App",
  },
});

const res = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```
## Model ID Mapping

OpenRouter IDs are the OpenAI model names prefixed with the `openai/` provider namespace:

| OpenAI Direct | OpenRouter ID |
|---|---|
| `gpt-4o` | `openai/gpt-4o` |
| `gpt-4o-mini` | `openai/gpt-4o-mini` |
| `gpt-4-turbo` | `openai/gpt-4-turbo` |
| `gpt-3.5-turbo` | `openai/gpt-3.5-turbo` |
| `o1-mini` | `openai/o1-mini` |
You also gain access to non-OpenAI models through the same SDK:
```python
# Same client, any provider
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # Anthropic
    messages=[{"role": "user", "content": "Hello"}],
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash",  # Google
    messages=[{"role": "user", "content": "Hello"}],
)
```
## What Works Identically

| Feature | Status | Notes |
|---|---|---|
| `/v1/chat/completions` | Fully supported | Main endpoint, all parameters |
| Streaming (`stream: true`) | Fully supported | SSE format identical to OpenAI |
| `tools` / `tool_choice` | Supported | OpenRouter transforms for non-OpenAI providers |
| `response_format: json_object` | Supported | Basic JSON mode |
| `response_format: json_schema` | Supported | Strict schema mode |
| `temperature`, `top_p`, `max_tokens` | Supported | Standard parameters |
| `stop` sequences | Supported | Array of stop strings |
| `n` (multiple completions) | Supported | Multiple choices |
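Streaming in particular needs no new client code: OpenRouter emits the same `data: {...}` / `data: [DONE]` SSE frames as OpenAI, so the SDK's `stream=True` handling works unchanged. As a minimal sketch of what the wire format looks like (the frames below are hand-written samples, not captured output), a parser that accumulates the `choices[0].delta` fragments:

```python
import json


def parse_sse_text(sse_lines):
    """Accumulate assistant text from Chat Completions SSE lines."""
    text = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments / keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        text.append(delta.get("content") or "")
    return "".join(text)


# Sample frames shaped like an OpenAI/OpenRouter stream
frames = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(parse_sse_text(frames))  # → Hello
```

With the OpenAI SDK you would instead pass `stream=True` to `client.chat.completions.create(...)` and read `chunk.choices[0].delta.content` from each yielded chunk; the logic above is what the SDK does for you.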
## What Differs

| Feature | Difference | Workaround |
|---|---|---|
| Model IDs | Prefixed with `provider/` namespace | Update model strings |
| `organization` param | Not used | Remove from client init |
| Embeddings | Limited support | Use direct provider or dedicated embedding service |
| Fine-tuned models | Not directly accessible | Use provider's fine-tuned model ID if hosted |
| Optional params (e.g. `logprobs`) | Model-dependent | Check model capabilities via `/api/v1/models` |
| Responses API | Beta support | Prefer the stable `/v1/chat/completions` endpoint |
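For the model-dependent parameters, one approach is to filter your request against what the model advertises. A sketch, assuming each entry returned by `GET /api/v1/models` carries a `supported_parameters` list (verify the live response shape before relying on it):

```python
def safe_params(model_info: dict, **params) -> dict:
    """Keep only request params the target model advertises.

    `model_info` is assumed to be one entry from GET /api/v1/models,
    with a `supported_parameters` list of accepted parameter names.
    """
    supported = set(model_info.get("supported_parameters") or [])
    return {k: v for k, v in params.items() if k in supported}


# Hand-written sample entry, not a captured API response
info = {"id": "openai/gpt-4o", "supported_parameters": ["temperature", "max_tokens"]}
print(safe_params(info, temperature=0.2, logprobs=True))  # → {'temperature': 0.2}
```

Pass the filtered dict into `client.chat.completions.create(**safe_params(info, ...))` so a model that rejects `logprobs` never sees it.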
## OpenRouter-Only Features
These are available through the same SDK but are unique to OpenRouter:
```python
# Model fallbacks (try models in order)
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "models": [
            "anthropic/claude-3.5-sonnet",
            "openai/gpt-4o",
            "google/gemini-2.0-flash",
        ],
        "route": "fallback",
    },
)

# Provider preferences
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "provider": {
            "order": ["anthropic"],   # Prefer Anthropic direct
            "allow_fallbacks": True,
            "sort": "price",          # Cheapest first
        },
    },
)

# Plugins (web search, response healing)
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "What happened today?"}],
    extra_body={
        "plugins": [{"id": "web"}],  # Enable real-time web search
    },
)
```
## Dual-Provider Pattern
```python
import os

from openai import OpenAI


def create_client(provider: str = "openrouter") -> OpenAI:
    if provider == "openai":
        return OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    return OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
        default_headers={"HTTP-Referer": "https://your-app.com"},
    )


# Switch providers without changing application code
client = create_client(os.environ.get("LLM_PROVIDER", "openrouter"))
```
## Error Handling

| Issue | Cause | Fix |
|---|---|---|
| 400 unsupported parameter | Model doesn't support a parameter | Conditionally set params based on model capabilities |
| Different response quality | Non-OpenAI model handles prompt differently | Adjust prompts per model family; test before switching |
| Missing `organization` support | OpenRouter ignores org-level auth | Remove `organization` from client init |
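A simple recovery for the 400 case is to retry once without the optional parameters. A sketch using a stand-in for the SDK's `BadRequestError` (in real code, catch `openai.BadRequestError` around `client.chat.completions.create`; `create_fn` and `fake_create` below are hypothetical stubs for illustration):

```python
class BadRequestError(Exception):
    """Stand-in for openai.BadRequestError (HTTP 400)."""


def create_with_retry(create_fn, *, model, messages, optional_params=None):
    """Call a Chat Completions function; on 400, retry without optional params."""
    optional_params = optional_params or {}
    try:
        return create_fn(model=model, messages=messages, **optional_params)
    except BadRequestError:
        # Model rejected an optional param; retry with the bare request
        return create_fn(model=model, messages=messages)


# Stub that rejects `logprobs`, mimicking a model without support
def fake_create(model, messages, **kw):
    if "logprobs" in kw:
        raise BadRequestError("unsupported parameter: logprobs")
    return {"model": model, "ok": True}


print(create_with_retry(
    fake_create,
    model="google/gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hi"}],
    optional_params={"logprobs": True},
))  # → {'model': 'google/gemini-2.0-flash', 'ok': True}
```

Log the dropped parameters so you notice which models silently run without them.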
## Enterprise Considerations
- Use environment variables to switch between direct OpenAI and OpenRouter without code changes
- Test your full prompt suite across providers before migrating production traffic
- Monitor response quality and latency after migration; some prompts may need tuning
- OpenRouter normalizes the API across providers, but subtle behavioral differences exist between model families
- Use `extra_body` for OpenRouter-specific features (provider preferences, plugins, fallbacks)