# Skillshub · anth-migration-deep-dive

## Install

Clone the upstream repo:

```shell
git clone https://github.com/ComeOnOliver/skillshub
```

Claude Code · install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/jeremylongshore/claude-code-plugins-plus-skills/anth-migration-deep-dive" ~/.claude/skills/comeonoliver-skillshub-anth-migration-deep-dive && rm -rf "$T"
```

Manifest: `skills/jeremylongshore/claude-code-plugins-plus-skills/anth-migration-deep-dive/SKILL.md`

## Source content
# Anthropic Migration Deep Dive

## Overview

Migration strategies for switching to Claude from OpenAI, Google, or other LLM providers, including API mapping, prompt translation, and multi-provider abstraction.

## OpenAI to Anthropic API Mapping
| OpenAI | Anthropic | Notes |
|---|---|---|
| `client.chat.completions.create()` | `client.messages.create()` | Different response shape |
| `gpt-4` / `gpt-4o` | `claude-sonnet-4-20250514` | Different model IDs |
| `messages` | `messages` | Same format |
| `tools` / `functions` | `tools` | Similar but different schema key names |
| `stop` | `stop_sequences` | Different naming |
| `response.choices[0].message.content` | `response.content[0].text` | Different access path |
| `stream=True` → yields chunks | `stream=True` → SSE events | Different event format |
| System message in `messages` | `system` parameter (separate) | Claude separates system prompt |
| `n` (multiple completions) | Not supported | Use multiple requests |
| `logprobs` | Not supported | N/A |
## Side-by-Side Code Comparison

```python
# === OpenAI ===
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello"}
    ],
    max_tokens=1024,
    temperature=0.7
)
text = response.choices[0].message.content

# === Anthropic ===
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system="You are helpful.",  # System prompt is separate
    messages=[
        {"role": "user", "content": "Hello"}
    ],
    max_tokens=1024,  # Required (not optional)
    temperature=0.7
)
text = response.content[0].text
```
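The biggest structural change in the comparison is the system prompt: OpenAI carries it inside `messages`, while Anthropic takes a separate `system` argument. The split can be mechanized during migration; a minimal sketch (the helper name `split_system` is illustrative, not from either SDK):

```python
def split_system(messages: list[dict]) -> tuple[str, list[dict]]:
    """Split OpenAI-style messages into (system, messages) for Anthropic's API.

    Multiple system messages are joined with blank lines; all other
    messages pass through in their original order.
    """
    system = "\n\n".join(m["content"] for m in messages if m["role"] == "system")
    rest = [m for m in messages if m["role"] != "system"]
    return system, rest


system, messages = split_system([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello"},
])
# system == "You are helpful."
# messages == [{"role": "user", "content": "Hello"}]
```

The pair then maps directly onto `client.messages.create(system=system, messages=messages, ...)`.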
## Tool Use Migration

```python
# OpenAI tools format
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}
    }
}]

# Anthropic tools format — flatter structure
anthropic_tools = [{
    "name": "get_weather",
    "description": "Get weather for a city",  # Required in Anthropic
    "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}}
}]
```
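Because both schemas carry the same information, existing tool definitions can be converted mechanically rather than rewritten by hand. A sketch of a one-way converter (the helper name `to_anthropic_tool` is hypothetical):

```python
def to_anthropic_tool(openai_tool: dict) -> dict:
    """Convert one OpenAI function-style tool definition to Anthropic's flatter shape."""
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        # Anthropic expects a description; fall back to an empty string if absent.
        "description": fn.get("description", ""),
        # OpenAI's "parameters" JSON Schema maps directly onto "input_schema".
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }


anthropic_tools = [to_anthropic_tool(t) for t in [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]]
```

Run the converter once over your tool list at startup; the resulting schemas may still benefit from hand-written descriptions, which Claude uses heavily for tool selection.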
## Multi-Provider Abstraction

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str, system: str = "", **kwargs) -> str: ...

class AnthropicProvider(LLMProvider):
    def __init__(self):
        import anthropic
        self.client = anthropic.Anthropic()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        msg = self.client.messages.create(
            model=kwargs.get("model", "claude-sonnet-4-20250514"),
            max_tokens=kwargs.get("max_tokens", 1024),
            system=system,
            messages=[{"role": "user", "content": prompt}]
        )
        return msg.content[0].text

class OpenAIProvider(LLMProvider):
    def __init__(self):
        from openai import OpenAI
        self.client = OpenAI()

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
        resp = self.client.chat.completions.create(
            model=kwargs.get("model", "gpt-4"),
            messages=messages,
            max_tokens=kwargs.get("max_tokens", 1024)
        )
        return resp.choices[0].message.content
```
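One payoff of this abstraction is testability: application code that depends only on the `complete()` interface can be unit-tested against a canned provider, with no SDK install, API key, or network. A minimal sketch (`FakeProvider` is an illustrative test double, not part of either SDK; it satisfies the interface by duck typing rather than inheritance, so it is self-contained):

```python
class FakeProvider:
    """Test double for an LLMProvider: records every call, returns a canned reply."""

    def __init__(self, reply: str = "ok"):
        self.reply = reply
        self.calls: list[dict] = []

    def complete(self, prompt: str, system: str = "", **kwargs) -> str:
        self.calls.append({"prompt": prompt, "system": system, **kwargs})
        return self.reply


# Application code stays provider-agnostic:
def summarize(provider, text: str) -> str:
    return provider.complete(f"Summarize:\n{text}", system="Reply in one sentence.")


fake = FakeProvider(reply="A short summary.")
assert summarize(fake, "long document...") == "A short summary."
assert fake.calls[0]["system"] == "Reply in one sentence."
```

Swapping `FakeProvider` for `AnthropicProvider` or `OpenAIProvider` at the call site is the whole migration surface once the rest of the code talks only to the interface.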
## Migration Checklist

- Map model names (GPT-4 → Claude Sonnet, GPT-3.5 → Claude Haiku)
- Move system prompts from `messages[]` to the `system` parameter
- Update response access path (`.choices[0].message.content` → `.content[0].text`)
- Make `max_tokens` explicit (required in Anthropic, optional in OpenAI)
- Update tool definitions to Anthropic format
- Test prompt behavior (Claude may respond differently to same prompts)
- Update error handling for Anthropic error types
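The first checklist item is easiest to enforce with a small lookup table, so that an unmapped model fails loudly instead of silently hitting the wrong tier. The tier pairings and Claude model IDs below are this sketch's assumptions; check the current model list before relying on them:

```python
# Assumed tier mapping -- adjust the Claude IDs to the models available to you.
MODEL_MAP = {
    "gpt-4": "claude-sonnet-4-20250514",
    "gpt-4o": "claude-sonnet-4-20250514",
    "gpt-3.5-turbo": "claude-3-5-haiku-20241022",
}

def map_model(openai_model: str) -> str:
    """Translate an OpenAI model ID to its mapped Claude ID; fail on unknown names."""
    try:
        return MODEL_MAP[openai_model]
    except KeyError:
        raise ValueError(
            f"No Claude mapping for {openai_model!r}; add it to MODEL_MAP"
        ) from None
```

Calling `map_model` at the single point where requests are built keeps the mapping auditable and makes new model names a one-line change.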
## Next Steps

For advanced debugging, see `anth-advanced-troubleshooting`.