# Awesome-omni-skill claude-api

Anthropic Claude API integration for building AI-powered applications. Use when working with Anthropic's Messages API, Claude SDKs (Python or TypeScript), tool use/function calling, vision/image inputs, streaming responses, prompt caching, message batches, token counting, extended thinking, PDF processing, or any Claude API integration task.

## Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/claude-api" ~/.claude/skills/diegosouzapw-awesome-omni-skill-claude-api-c61d91 && rm -rf "$T"
```

Manifest: `skills/development/claude-api/SKILL.md`

## Source content

# Claude API

Build applications with Anthropic's Claude API using official Python and TypeScript SDKs.

Base URL: `https://api.anthropic.com`

## Authentication

All requests require these headers (the SDKs set them automatically):

| Header | Value |
| --- | --- |
| `x-api-key` | API key from Console |
| `anthropic-version` | `2023-06-01` |
| `content-type` | `application/json` |
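Outside the SDKs, a raw request only needs those three headers on an ordinary HTTP POST. A minimal sketch using Python's standard library (the `build_request` helper and the prompt are illustrative):

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a raw Messages API request carrying the three required headers."""
    body = {
        "model": "claude-sonnet-4-5-20250929",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

# Sending is then one line (needs a real key):
# response = urllib.request.urlopen(build_request("Hello, Claude", api_key))
```
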

## Quick Start

### Python

```python
from anthropic import Anthropic

client = Anthropic()  # Uses ANTHROPIC_API_KEY env var

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(message.content[0].text)
```

### TypeScript

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // Uses ANTHROPIC_API_KEY env var

const message = await client.messages.create({
  model: "claude-sonnet-4-5-20250929",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello, Claude" }],
});
console.log(message.content[0].text);
```

## Available Models

| Model | Model ID | Best for | Pricing (input / output) |
| --- | --- | --- | --- |
| Claude Sonnet 4.5 | `claude-sonnet-4-5-20250929` | Complex agents, coding (recommended) | $3 / $15 per MTok |
| Claude Haiku 4.5 | `claude-haiku-4-5-20251001` | Fast, lightweight tasks | $1 / $5 per MTok |
| Claude Opus 4.5 | `claude-opus-4-5-20251101` | Maximum intelligence | $5 / $25 per MTok |

For complete model details, platform IDs (AWS Bedrock, GCP Vertex AI), and legacy models, see `references/models.md`.

## Available APIs

| API | Endpoint | Purpose |
| --- | --- | --- |
| Messages | `POST /v1/messages` | Conversational interactions |
| Message Batches | `POST /v1/messages/batches` | Async bulk processing (50% cost reduction) |
| Token Counting | `POST /v1/messages/count_tokens` | Count tokens before sending |
| Models | `GET /v1/models` | List available models |
| Files (beta) | `/v1/files` | Upload/manage files across calls |

For full API details, rate limits, and third-party platforms, see `references/api-overview.md`.
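The Batches endpoint wraps ordinary Messages requests, each tagged with a unique `custom_id` so results can be matched back after async processing. A sketch of building that request list (`batch_requests` is an illustrative helper; the commented SDK call assumes the Python SDK's `client.messages.batches.create` shape):

```python
def batch_requests(prompts: list[str]) -> list[dict]:
    """Build the request list for POST /v1/messages/batches; each entry
    pairs a unique custom_id with ordinary Messages API params."""
    return [
        {
            "custom_id": f"req-{i}",
            "params": {
                "model": "claude-sonnet-4-5-20250929",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        for i, prompt in enumerate(prompts)
    ]

# With the Python SDK, submission would look roughly like:
# batch = client.messages.batches.create(requests=batch_requests(["Hi", "Hello"]))
```
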

## Core Capabilities

### Messages API

Basic chat completions, multi-turn conversations, system prompts, streaming responses.

### Tool Use

Define tools with JSON schemas, handle tool calls, return results to Claude.
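As a sketch, a hypothetical `get_weather` tool defined with a JSON schema (the name, description, and fields are illustrative, not part of any real API):

```python
# Each tool passed to the Messages API needs a name, a description,
# and an input_schema describing its arguments as JSON Schema.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```

Pass it as `tools=[get_weather_tool]` in `messages.create()`; when Claude decides to call it, the response contains a `tool_use` content block whose `input` matches this schema.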

### Vision

Send images (base64 or URL), analyze multiple images, extract information.
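A minimal sketch of building a user message that pairs a base64-encoded image with a question (the `image_message` helper is illustrative):

```python
import base64

def image_message(image_bytes: bytes, media_type: str, question: str) -> dict:
    """Build a user message containing an image block followed by a text block."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,  # e.g. "image/png"
                    "data": base64.standard_b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": question},
        ],
    }
```

The resulting dict goes straight into the `messages` list of `messages.create()`.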

### Streaming

Real-time token streaming with event handlers or async iterators.
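With the Python SDK this looks roughly like the sketch below, using `client.messages.stream` as a context manager; `stream_reply` is an illustrative wrapper and needs a valid `ANTHROPIC_API_KEY` to run:

```python
from anthropic import Anthropic

def stream_reply(prompt: str) -> str:
    """Print a reply as it streams in, returning the full text at the end."""
    client = Anthropic()
    chunks = []
    # text_stream yields text deltas as they arrive from the API.
    with client.messages.stream(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)
            chunks.append(text)
    return "".join(chunks)
```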

### Token Counting

Count tokens before sending requests to manage costs and rate limits.
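A sketch with the Python SDK's `count_tokens` endpoint (requires a valid API key; `prompt_tokens` is an illustrative wrapper):

```python
from anthropic import Anthropic

def prompt_tokens(prompt: str) -> int:
    """Count input tokens for a prompt without generating a response."""
    client = Anthropic()
    count = client.messages.count_tokens(
        model="claude-sonnet-4-5-20250929",
        messages=[{"role": "user", "content": prompt}],
    )
    return count.input_tokens
```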

## API & SDK References

## Advanced Features

## Common Patterns

### Conversation Loop with Tools

```python
messages = [{"role": "user", "content": user_input}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )

    # Stop on anything other than a tool call (end_turn, max_tokens, ...);
    # otherwise the loop would re-send the same messages forever.
    if response.stop_reason != "tool_use":
        break

    messages.append({"role": "assistant", "content": response.content})
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            result = execute_tool(block.name, block.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": result,
            })
    messages.append({"role": "user", "content": tool_results})
```
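The loop above assumes an `execute_tool` helper that maps tool names to your own functions. A minimal dispatcher sketch (the tool name and its implementation are hypothetical stand-ins):

```python
def get_weather(city: str) -> str:
    """Placeholder implementation; a real tool would call a weather service."""
    return f"Sunny in {city}"

TOOL_HANDLERS = {"get_weather": get_weather}

def execute_tool(name: str, tool_input: dict) -> str:
    """Dispatch a tool_use block to the matching handler."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return f"Unknown tool: {name}"
    return handler(**tool_input)
```
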

### Error Handling with Retries

```python
from anthropic import RateLimitError
import time

def call_with_retry(fn, max_retries=3):
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError as e:
            if attempt == max_retries - 1:
                raise
            # Honor the server's retry-after hint; fall back to 60 seconds.
            wait = int(e.response.headers.get("retry-after", 60))
            time.sleep(wait)
```

## Required Parameters

Every `messages.create()` call requires:

| Parameter | Description |
| --- | --- |
| `model` | Model ID (e.g., `claude-sonnet-4-5-20250929`) |
| `max_tokens` | Maximum number of tokens in the response |
| `messages` | Array of message objects with `role` and `content` |