# Claude-code-plugins · openrouter-openai-compat

## Install

**Source** · Clone the upstream repo:

```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

**Claude Code** · Install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/openrouter-pack/skills/openrouter-openai-compat" ~/.claude/skills/jeremylongshore-claude-code-plugins-openrouter-openai-compat && rm -rf "$T"
```

Manifest: `plugins/saas-packs/openrouter-pack/skills/openrouter-openai-compat/SKILL.md`

## Source content

## OpenRouter OpenAI Compatibility

### Overview

OpenRouter implements the OpenAI Chat Completions API specification (`/v1/chat/completions`). Existing OpenAI SDK code works with OpenRouter by changing two values: `base_url` and `api_key`. This gives you access to 400+ models from all providers through the same SDK interface.

### The Two-Line Migration

#### Python (Before)

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # OpenAI direct
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

#### Python (After)

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",              # Changed
    api_key=os.environ["OPENROUTER_API_KEY"],             # Changed
    default_headers={
        "HTTP-Referer": "https://your-app.com",           # Added (optional)
        "X-Title": "Your App",                            # Added (optional)
    },
)
response = client.chat.completions.create(
    model="openai/gpt-4o",  # Prefix with provider namespace
    messages=[{"role": "user", "content": "Hello"}],
)
```

#### TypeScript (After)

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: { "HTTP-Referer": "https://your-app.com", "X-Title": "Your App" },
});

const res = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```

### Model ID Mapping

| OpenAI Direct | OpenRouter ID |
|---------------|---------------|
| `gpt-4o` | `openai/gpt-4o` |
| `gpt-4o-mini` | `openai/gpt-4o-mini` |
| `gpt-4-turbo` | `openai/gpt-4-turbo` |
| `o1` | `openai/o1` |
| `o1-mini` | `openai/o1-mini` |

You also gain access to non-OpenAI models through the same SDK:

```python
# Same client, any provider
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # Anthropic
    messages=[{"role": "user", "content": "Hello"}],
)

response = client.chat.completions.create(
    model="google/gemini-2.0-flash",  # Google
    messages=[{"role": "user", "content": "Hello"}],
)
```

### What Works Identically

| Feature | Status | Notes |
|---------|--------|-------|
| `chat.completions.create` | Fully supported | Main endpoint, all parameters |
| `stream: true` | Fully supported | SSE format identical to OpenAI (see sketch below) |
| `tools` / `tool_choice` | Supported | OpenRouter transforms for non-OpenAI providers |
| `response_format: { type: "json_object" }` | Supported | Basic JSON mode |
| `response_format: { type: "json_schema" }` | Supported | Strict schema mode |
| `temperature`, `top_p`, `max_tokens` | Supported | Standard parameters |
| `stop` sequences | Supported | Array of stop strings |
| `n` (multiple completions) | Supported | Multiple choices |
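
Streaming behaves the same as with OpenAI direct; only the model ID changes. A minimal sketch reusing the `client` configured above (model and prompt are placeholders):

```python
# Stream tokens as they arrive; the SSE delta format matches OpenAI's.
stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
    stream=True,
)
for chunk in stream:
    # Guard against keep-alive chunks with no choices or empty deltas.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```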

What Differs

FeatureDifferenceWorkaround
Model IDsPrefixed with
provider/
Update model strings
organization
param
Not usedRemove from client init
EmbeddingsLimited supportUse direct provider or dedicated embedding service
Fine-tuned modelsNot directly accessibleUse provider's fine-tuned model ID if hosted
logprobs
Model-dependentCheck model capabilities via
/api/v1/models
Responses APIBeta supportUse
/api/v1/responses
endpoint
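
The `/api/v1/models` endpoint is public and returns a JSON object with a `data` array of model entries, which you can use to gate parameters like `logprobs` per model. A minimal sketch using `requests`; field names beyond `id` (such as `context_length` and `supported_parameters`) are assumptions here, so verify them against the live payload:

```python
import requests

# Public endpoint: no API key required to list models.
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
models = {m["id"]: m for m in resp.json()["data"]}

entry = models.get("openai/gpt-4o")
if entry:
    print(entry.get("context_length"))        # assumed field: context window size
    print(entry.get("supported_parameters"))  # assumed field: check the docs
```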

### OpenRouter-Only Features

These are available through the same SDK but are unique to OpenRouter:

```python
# Model fallbacks (try models in order)
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "models": [
            "anthropic/claude-3.5-sonnet",
            "openai/gpt-4o",
            "google/gemini-2.0-flash",
        ],
        "route": "fallback",
    },
)

# Provider preferences
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "provider": {
            "order": ["anthropic"],             # Prefer Anthropic direct
            "allow_fallbacks": True,
            "sort": "price",                    # Cheapest first
        },
    },
)

# Plugins (web search, response healing)
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "What happened today?"}],
    extra_body={
        "plugins": [{"id": "web"}],  # Enable real-time web search
    },
)
```

### Dual-Provider Pattern

```python
import os

from openai import OpenAI

def create_client(provider: str = "openrouter") -> OpenAI:
    if provider == "openai":
        return OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    return OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
        default_headers={"HTTP-Referer": "https://your-app.com"},
    )

# Switch providers without changing application code
client = create_client(os.environ.get("LLM_PROVIDER", "openrouter"))
```
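
The pattern above leaves one detail open: model IDs differ between the two providers (`gpt-4o` direct vs. `openai/gpt-4o` on OpenRouter). A hypothetical helper (`resolve_model` is not part of either SDK) keeps call sites provider-agnostic:

```python
def resolve_model(model: str, provider: str = "openrouter") -> str:
    """Map a bare OpenAI model name to the ID the active provider expects.

    Hypothetical helper for illustration; adjust the mapping to the
    providers you actually route between.
    """
    if provider == "openai":
        return model
    # OpenRouter namespaces OpenAI models under "openai/"; IDs that already
    # contain a slash (anthropic/..., google/...) are left untouched.
    return model if "/" in model else f"openai/{model}"

provider = os.environ.get("LLM_PROVIDER", "openrouter")
response = client.chat.completions.create(
    model=resolve_model("gpt-4o", provider),
    messages=[{"role": "user", "content": "Hello"}],
)
```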

### Error Handling

| Issue | Cause | Fix |
|-------|-------|-----|
| 400 unsupported parameter | Model doesn't support a parameter | Conditionally set params based on model capabilities (see sketch below) |
| Different response quality | Non-OpenAI model handles prompt differently | Adjust prompts per model family; test before switching |
| Missing `organization` | OpenRouter ignores org-level auth | Remove `organization` from client init |
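
For the 400 case, one option is to catch the OpenAI SDK's `BadRequestError` and retry with a reduced parameter set. A minimal sketch; dropping all optional parameters on retry is a simplification for illustration, not OpenRouter guidance (in practice, remove only the parameter the error message names):

```python
import openai

def complete_with_fallback(client, model, messages, **params):
    try:
        return client.chat.completions.create(
            model=model, messages=messages, **params
        )
    except openai.BadRequestError:
        # The model rejected a parameter (e.g. logprobs on a model that
        # doesn't support it); retry with only the required arguments.
        return client.chat.completions.create(model=model, messages=messages)

response = complete_with_fallback(
    client,
    "openai/gpt-4o",
    [{"role": "user", "content": "Hello"}],
    temperature=0.2,
)
```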

### Enterprise Considerations

- Use environment variables to switch between direct OpenAI and OpenRouter without code changes
- Test your full prompt suite across providers before migrating production traffic
- Monitor response quality and latency after migration; some prompts may need tuning
- OpenRouter normalizes the API across providers, but subtle behavioral differences exist between model families
- Use `extra_body` for OpenRouter-specific features (provider preferences, plugins, fallbacks)
