Awesome-omni-skill openai-api
OpenAI API integration for building AI-powered applications. Use when working with OpenAI's Chat Completions API, Python SDK (openai), TypeScript SDK (openai), tool use/function calling, vision/image inputs, streaming responses, DALL-E image generation, Whisper audio transcription, text-to-speech, embeddings, Assistants API, fine-tuning, or any OpenAI API integration task. Triggers on mentions of OpenAI, GPT-4, GPT-4o, GPT-5, o1, o3, o4, DALL-E, Whisper, Sora, or OpenAI SDK usage.
install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-agents/openai-api" ~/.claude/skills/diegosouzapw-awesome-omni-skill-openai-api && rm -rf "$T"
manifest:
skills/ai-agents/openai-api/SKILL.md
safety · automated scan (medium risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- pip install
- references API keys
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
OpenAI API
Build AI applications using OpenAI's APIs with Python or TypeScript SDKs.
Quick Start
Installation
```bash
# Python
pip install openai

# TypeScript/Node.js
npm install openai
```
Client Setup
Python:
```python
from openai import OpenAI

client = OpenAI()  # Uses OPENAI_API_KEY env var
# Or: client = OpenAI(api_key="sk-...")
```
TypeScript:
```typescript
import OpenAI from 'openai';

const client = new OpenAI();  // Uses OPENAI_API_KEY env var
// Or: new OpenAI({ apiKey: 'sk-...' })
```
Chat Completions
Basic chat completion:
Python:
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```
TypeScript:
```typescript
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
});
console.log(response.choices[0].message.content);
```
Streaming
Python:
```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
TypeScript:
```typescript
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
Tool Use / Function Calling
Python:
```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools
)

# Check if a tool call was requested
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    # Execute the function, then send the result back
    messages.append(response.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": '{"temp": 22, "condition": "sunny"}'
    })
```
TypeScript:
```typescript
const tools: OpenAI.ChatCompletionTool[] = [{
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location']
    }
  }
}];

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: "What's the weather in Paris?" }],
  tools
});

if (response.choices[0].message.tool_calls) {
  const toolCall = response.choices[0].message.tool_calls[0];
  // Execute function, then continue conversation
}
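To complete the loop, send the updated `messages` (with the tool result appended) back to the model; a minimal sketch continuing the Python example above:

```python
# Second request: the model now sees the tool result and answers in prose
followup = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools
)
print(followup.choices[0].message.content)
```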
Vision (Image Input)
Python:
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]
    }]
)
```
For base64 images:
"url": "data:image/jpeg;base64,{base64_string}"
Structured Outputs (JSON Mode)
Python:
```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

response = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Create a meeting for tomorrow"}],
    response_format=CalendarEvent
)
event = response.choices[0].message.parsed
```
TypeScript (with Zod):
```typescript
import { zodResponseFormat } from 'openai/helpers/zod';
import { z } from 'zod';

const CalendarEvent = z.object({
  name: z.string(),
  date: z.string(),
  participants: z.array(z.string())
});

const response = await client.beta.chat.completions.parse({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Create a meeting for tomorrow' }],
  response_format: zodResponseFormat(CalendarEvent, 'event')
});
const event = response.choices[0].message.parsed;
```
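The parse helper also surfaces safety refusals; a quick check in Python, continuing the example above (`message.refusal` is part of the parsed message type):

```python
message = response.choices[0].message
if message.refusal:
    # The model declined to produce the structured output
    print("Refused:", message.refusal)
else:
    print(message.parsed)
```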
Models
Chat/Completion Models
| Model | Best For |
|---|---|
| `gpt-5` | Latest flagship, best quality |
| `gpt-5-pro` | Premium tier for complex tasks |
| | Previous flagship, excellent quality |
| `gpt-5-mini` | Cost-effective GPT-5 |
| `gpt-5-nano` | Lightweight GPT-5 |
| `gpt-4.1` | Strong general purpose |
| `gpt-4.1-mini` | Cost-effective GPT-4.1 |
| `gpt-4.1-nano` | Lightweight GPT-4.1 |
| `gpt-4o` | Fast, vision support |
| `gpt-4o-mini` | Cost-effective, simpler tasks |
Reasoning Models
| Model | Best For |
|---|---|
| `o4-mini` | Latest reasoning, efficient |
| `o3` | Strong reasoning |
| `o3-mini` | Reasoning with lower cost |
| `o1` | Complex reasoning, math, code |
| `o1-pro` | Premium reasoning tier |
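Reasoning models take slightly different parameters from the chat models above: they expect `max_completion_tokens` rather than `max_tokens` and reject sampling controls like `temperature`. A minimal sketch, assuming the `o3-mini` model id:

```python
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",       # low | medium | high
    max_completion_tokens=2000,      # use this instead of max_tokens
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
)
print(response.choices[0].message.content)
```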
Specialized Models
| Model | Purpose |
|---|---|
| `gpt-4o-realtime-preview` | Real-time voice conversations |
| `gpt-4o-audio-preview` | Audio input/output |
| `gpt-4o-search-preview` | Web search integration |
| | Image understanding |
| `sora-2` / `sora-2-pro` | Video generation |
| `dall-e-3` | Image generation |
| `whisper-1` | Audio transcription |
| `tts-1` / `tts-1-hd` | Text-to-speech |
| `text-embedding-3-small` | Text embeddings |
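Model availability varies by account; `client.models.list()` confirms which ids your key can actually use:

```python
# Print every model id visible to this API key
for model in client.models.list():
    print(model.id)
```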
Feature References
- Advanced chat patterns: See references/chat-completions.md
- Image generation (DALL-E): See references/images.md
- Audio (Whisper/TTS): See references/audio.md
- Embeddings: See references/embeddings.md
- Assistants API: See references/assistants.md
- Fine-tuning: See references/fine-tuning.md
Error Handling
Python:
```python
from openai import APIError, APIConnectionError, RateLimitError

try:
    response = client.chat.completions.create(...)
except RateLimitError:
    # Implement backoff/retry
    pass
except APIConnectionError:
    # Network issue
    pass
except APIError as e:
    print(f"API error: {e.status_code} - {e.message}")
```
TypeScript:
```typescript
import OpenAI from 'openai';

try {
  const response = await client.chat.completions.create({ /* ... */ });
} catch (error) {
  if (error instanceof OpenAI.RateLimitError) {
    // Implement backoff/retry
  } else if (error instanceof OpenAI.APIConnectionError) {
    // Network issue
  } else if (error instanceof OpenAI.APIError) {
    console.error(`API error: ${error.status} - ${error.message}`);
  }
}
```
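The Python SDK retries transient errors automatically (tune via the `max_retries` client option); for explicit control, a minimal backoff sketch:

```python
import time

from openai import OpenAI, RateLimitError

client = OpenAI(max_retries=0)  # handle retries manually instead of in the SDK

def create_with_backoff(max_attempts=5, **kwargs):
    # Exponential backoff on rate limits: 1s, 2s, 4s, ...
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(**kwargs)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)

response = create_with_backoff(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```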
Common Parameters
| Parameter | Description |
|---|---|
| `temperature` | 0-2, lower = deterministic, higher = creative |
| `max_tokens` | Maximum response length |
| `top_p` | Nucleus sampling alternative to temperature |
| `stop` | Stop sequences to end generation |
| `n` | Number of completions to generate |
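A single call exercising the parameters above (values are illustrative; reasoning models expect `max_completion_tokens` instead of `max_tokens`, as noted earlier):

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
    temperature=0.7,  # moderate creativity
    max_tokens=50,    # cap response length
    top_p=1.0,        # leave nucleus sampling wide open
    stop=["\n\n"],    # end generation at the first blank line
    n=3               # request three candidate completions
)
for choice in response.choices:
    print(choice.message.content)
```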