Marketplace claude-api

Build, debug, and optimize Claude API / Anthropic SDK apps. Apps built with this skill should include prompt caching. TRIGGER when: code imports anthropic/@anthropic-ai/sdk; user asks to use the Claude API, Anthropic SDKs, or Managed Agents (/v1/agents, /v1/sessions, /v1/environments). DO NOT TRIGGER when: code imports `openai`/other AI SDK, general programming, or ML/data-science tasks.

install

source · Clone the upstream repo:
`git clone https://github.com/aiskillstore/marketplace`

Claude Code · Install into `~/.claude/skills/`:
`T=$(mktemp -d) && git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/claude-api/claude-api" ~/.claude/skills/aiskillstore-marketplace-claude-api && rm -rf "$T"`

manifest: `skills/claude-api/claude-api/SKILL.md`
source content

Building LLM-Powered Applications with Claude

This skill helps you build LLM-powered applications with Claude. Choose the right surface based on your needs, detect the project language, then read the relevant language-specific documentation.

Before You Start

Scan the target file (or, if no target file, the prompt and project) for non-Anthropic provider markers — `import openai`, `from openai`, `langchain_openai`, `OpenAI(`, `gpt-4`, `gpt-5`, file names like `agent-openai.py` or `*-generic.py`, or any explicit instruction to keep the code provider-neutral. If you find any, stop and tell the user that this skill produces Claude/Anthropic SDK code; ask whether they want to switch the file to Claude or want a non-Claude implementation. Do not edit a non-Anthropic file with Anthropic SDK calls.

Output Requirement

When the user asks you to add, modify, or implement a Claude feature, your code must call Claude through one of:

  1. The official Anthropic SDK for the project's language (`anthropic`, `@anthropic-ai/sdk`, `com.anthropic.*`, etc.). This is the default whenever a supported SDK exists for the project.
  2. Raw HTTP (`curl`, `requests`, `fetch`, `httpx`, etc.) — only when the user explicitly asks for cURL/REST/raw HTTP, the project is a shell/cURL project, or the language has no official SDK.

Never mix the two — don't reach for `requests`/`fetch` in a Python or TypeScript project just because it feels lighter. Never fall back to OpenAI-compatible shims.

Never guess SDK usage. Function names, class names, namespaces, method signatures, and import paths must come from explicit documentation — either the `{lang}/` files in this skill or the official SDK repositories or documentation links listed in `shared/live-sources.md`. If the binding you need is not explicitly documented in the skill files, WebFetch the relevant SDK repo from `shared/live-sources.md` before writing code. Do not infer Ruby/Java/Go/PHP/C# APIs from cURL shapes or from another language's SDK.

Defaults

Unless the user requests otherwise:

Use Claude Opus 4.6 via the exact model string `claude-opus-4-6`. Default to adaptive thinking (`thinking: {type: "adaptive"}`) for anything remotely complicated. Default to streaming for any request that may involve long input, long output, or high `max_tokens` — it prevents hitting request timeouts. Use the SDK's `.get_final_message()` / `.finalMessage()` helper to get the complete response if you don't need to handle individual stream events.
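A minimal Python sketch of these defaults. The request shape mirrors the Messages API parameters named in this document; the prompt text and the commented client usage are illustrative:

```python
def default_request(prompt: str) -> dict:
    """Build request kwargs using this skill's defaults: Opus 4.6,
    adaptive thinking, and streaming-sized max_tokens headroom."""
    return {
        "model": "claude-opus-4-6",        # exact string, no date suffix
        "max_tokens": 64000,               # streaming default; see Common Pitfalls
        "thinking": {"type": "adaptive"},  # no budget_tokens on Opus 4.6
        "messages": [{"role": "user", "content": prompt}],
    }

# Stream and collect the final message when per-event handling isn't needed:
# client = anthropic.Anthropic()
# with client.messages.stream(**default_request("Summarize this RFC.")) as s:
#     message = s.get_final_message()
```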


Subcommands

If the User Request at the bottom of this prompt is a bare subcommand string (no prose), search every Subcommands table in this document — including any in sections appended below — and follow the matching Action column directly. This lets users invoke specific flows via `/claude-api <subcommand>`. If no table in the document matches, treat the request as normal prose.

<!-- Subcommand tables are defined per-section below; this header block contains only the dispatch rule so that feature-gated sections can add their own tables without leaking strings into ungated builds. -->

Language Detection

Before reading code examples, determine which language the user is working in:

  1. Look at project files to infer the language:

    • `*.py`, `requirements.txt`, `pyproject.toml`, `setup.py`, `Pipfile` → Python — read from `python/`
    • `*.ts`, `*.tsx`, `package.json`, `tsconfig.json` → TypeScript — read from `typescript/`
    • `*.js`, `*.jsx` (no `.ts` files present) → TypeScript — JS uses the same SDK, read from `typescript/`
    • `*.java`, `pom.xml`, `build.gradle` → Java — read from `java/`
    • `*.kt`, `*.kts`, `build.gradle.kts` → Java — Kotlin uses the Java SDK, read from `java/`
    • `*.scala`, `build.sbt` → Java — Scala uses the Java SDK, read from `java/`
    • `*.go`, `go.mod` → Go — read from `go/`
    • `*.rb`, `Gemfile` → Ruby — read from `ruby/`
    • `*.cs`, `*.csproj` → C# — read from `csharp/`
    • `*.php`, `composer.json` → PHP — read from `php/`
  2. If multiple languages are detected (e.g., both Python and TypeScript files):

    • Check which language the user's current file or question relates to
    • If still ambiguous, ask: "I detected both Python and TypeScript files. Which language are you using for the Claude API integration?"
  3. If the language can't be inferred (empty project, no source files, or unsupported language):

    • Use AskUserQuestion with options: Python, TypeScript, Java, Go, Ruby, cURL/raw HTTP, C#, PHP
    • If AskUserQuestion is unavailable, default to Python examples and note: "Showing Python examples. Let me know if you need a different language."
  4. If an unsupported language is detected (Rust, Swift, C++, Elixir, etc.):

    • Suggest cURL/raw HTTP examples from `curl/` and note that community SDKs may exist
    • Offer to show Python or TypeScript examples as reference implementations
  5. If the user needs cURL/raw HTTP examples, read from `curl/`.

Language-Specific Feature Support

| Language   | Tool Runner | Managed Agents | Notes                                    |
|------------|-------------|----------------|------------------------------------------|
| Python     | Yes (beta)  | Yes (beta)     | Full support — `@beta_tool` decorator    |
| TypeScript | Yes (beta)  | Yes (beta)     | Full support — `betaZodTool` + Zod       |
| Java       | Yes (beta)  | Yes (beta)     | Beta tool use with annotated classes     |
| Go         | Yes (beta)  | Yes (beta)     | `BetaToolRunner` in `toolrunner` pkg     |
| Ruby       | Yes (beta)  | Yes (beta)     | `BaseTool` + `tool_runner` in beta       |
| C#         | No          | No             | Official SDK                             |
| PHP        | Yes (beta)  | Yes (beta)     | `BetaRunnableTool` + `toolRunner()`      |
| cURL       | N/A         | Yes (beta)     | Raw HTTP, no SDK features                |

Managed Agents code examples: dedicated language-specific READMEs are provided for Python, TypeScript, Go, Ruby, PHP, Java, and cURL (`{lang}/managed-agents/README.md`, `curl/managed-agents.md`). Read your language's README plus the language-agnostic `shared/managed-agents-*.md` concept files. Agents are persistent — create once, reference by ID. Store the agent ID returned by `agents.create` and pass it to every subsequent `sessions.create`; do not call `agents.create` in the request path. The Anthropic CLI is one convenient way to create agents and environments from version-controlled YAML — its URL is in `shared/live-sources.md`. If a binding you need isn't shown in the README, WebFetch the relevant entry from `shared/live-sources.md` rather than guess. C# does not currently have Managed Agents support; use cURL-style raw HTTP requests against the API.


Which Surface Should I Use?

Start simple. Default to the simplest tier that meets your needs. Single API calls and workflows handle most use cases — only reach for agents when the task genuinely requires open-ended, model-driven exploration.

| Use Case                                        | Tier            | Recommended Surface    | Why                                                          |
|-------------------------------------------------|-----------------|------------------------|--------------------------------------------------------------|
| Classification, summarization, extraction, Q&A  | Single LLM call | Claude API             | One request, one response                                    |
| Batch processing or embeddings                  | Single LLM call | Claude API             | Specialized endpoints                                        |
| Multi-step pipelines with code-controlled logic | Workflow        | Claude API + tool use  | You orchestrate the loop                                     |
| Custom agent with your own tools                | Agent           | Claude API + tool use  | Maximum flexibility                                          |
| Server-managed stateful agent with workspace    | Agent           | Managed Agents         | Anthropic runs the loop and hosts the tool-execution sandbox |
| Persisted, versioned agent configs              | Agent           | Managed Agents         | Agents are stored objects; sessions pin to a version         |
| Long-running multi-turn agent with file mounts  | Agent           | Managed Agents         | Per-session containers, SSE event stream, Skills + MCP       |

Note: Managed Agents is the right choice when you want Anthropic to run the agent loop and host the container where tools execute — file ops, bash, code execution all run in the per-session workspace. If you want to host the compute yourself or run your own custom tool runtime, Claude API + tool use is the right choice — use the tool runner for automatic loop handling, or the manual loop for fine-grained control (approval gates, custom logging, conditional execution).

Third-party providers (Amazon Bedrock, Google Vertex AI, Microsoft Foundry): Managed Agents is not available on Bedrock, Vertex, or Foundry. If you are deploying through any third-party provider, use Claude API + tool use for all use cases — including ones where Managed Agents would otherwise be the recommended surface.

Decision Tree

What does your application need?

0. Are you deploying through Amazon Bedrock, Google Vertex AI, or Microsoft Foundry?
   └── Yes → Claude API (+ tool use for agents) — Managed Agents is 1P only.
   No → continue.

1. Single LLM call (classification, summarization, extraction, Q&A)
   └── Claude API — one request, one response

2. Do you want Anthropic to run the agent loop and host a per-session
   container where Claude executes tools (bash, file ops, code)?
   └── Yes → Managed Agents — server-managed sessions, persisted agent configs,
       SSE event stream, Skills + MCP, file mounts.
       Examples: "stateful coding agent with a workspace per task",
                 "long-running research agent that streams events to a UI",
                 "agent with persisted, versioned config used across many sessions"

3. Workflow (multi-step, code-orchestrated, with your own tools)
   └── Claude API with tool use — you control the loop

4. Open-ended agent (model decides its own trajectory, your own tools, you host the compute)
   └── Claude API agentic loop (maximum flexibility)

Should I Build an Agent?

Before choosing the agent tier, check all four criteria:

  • Complexity — Is the task multi-step and hard to fully specify in advance? (e.g., "turn this design doc into a PR" vs. "extract the title from this PDF")
  • Value — Does the outcome justify higher cost and latency?
  • Viability — Is Claude capable at this task type?
  • Cost of error — Can errors be caught and recovered from? (tests, review, rollback)

If the answer is "no" to any of these, stay at a simpler tier (single call or workflow).


Architecture

Everything goes through `POST /v1/messages`. Tools and output constraints are features of this single endpoint — not separate APIs.

User-defined tools — You define tools (via decorators, Zod schemas, or raw JSON), and the SDK's tool runner handles calling the API, executing your functions, and looping until Claude is done. For full control, you can write the loop manually.

Server-side tools — Anthropic-hosted tools that run on Anthropic's infrastructure. Code execution is fully server-side (declare it in `tools`, Claude runs code automatically). Computer use can be server-hosted or self-hosted.

Structured outputs — Constrains the Messages API response format (`output_config.format`) and/or tool parameter validation (`strict: true`). The recommended approach is `client.messages.parse()`, which validates responses against your schema automatically. Note: the old `output_format` parameter is deprecated; use `output_config: {format: {...}}` on `messages.create()`.

Supporting endpoints — Batches (`POST /v1/messages/batches`), Files (`POST /v1/files`), Token Counting, and Models (`GET /v1/models`, `GET /v1/models/{id}` — live capability/context-window discovery) feed into or support Messages API requests.
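A hedged sketch of the non-deprecated structured-output shape: `output_config.format` rather than top-level `output_format`. The inner keys of the format object (`type`, `schema`) are assumptions here; confirm them in `{lang}/claude-api/tool-use.md`, or prefer `client.messages.parse()`, before relying on them:

```python
def structured_request(prompt: str, json_schema: dict) -> dict:
    """Request kwargs constraining the response via output_config.format.
    The format payload keys below are assumed; verify against the skill docs."""
    return {
        "model": "claude-opus-4-6",
        "max_tokens": 16000,
        "output_config": {
            # Assumed keys for the format payload:
            "format": {"type": "json_schema", "schema": json_schema},
        },
        "messages": [{"role": "user", "content": prompt}],
    }

req = structured_request(
    "Extract the invoice total.",
    {"type": "object", "properties": {"total": {"type": "number"}}, "required": ["total"]},
)
assert "output_format" not in req  # the deprecated top-level parameter is not used
```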


Current Models (cached: 2026-02-17)

| Model             | Model ID          | Context        | Input $/1M | Output $/1M |
|-------------------|-------------------|----------------|------------|-------------|
| Claude Opus 4.6   | `claude-opus-4-6` | 200K (1M beta) | $5.00      | $25.00      |
| Claude Sonnet 4.6 | `claude-sonnet-4-6` | 200K (1M beta) | $3.00    | $15.00      |
| Claude Haiku 4.5  | `claude-haiku-4-5` | 200K          | $1.00      | $5.00       |

ALWAYS use `claude-opus-4-6` unless the user explicitly names a different model. This is non-negotiable. Do not use `claude-sonnet-4-6`, `claude-sonnet-4-5`, or any other model unless the user literally says "use sonnet" or "use haiku". Never downgrade for cost — that's the user's decision, not yours.

CRITICAL: Use only the exact model ID strings from the table above — they are complete as-is. Do not append date suffixes. For example, use `claude-sonnet-4-5`, never `claude-sonnet-4-5-20250514` or any other date-suffixed variant you might recall from training data. If the user requests an older model not in the table (e.g., "opus 4.5", "sonnet 3.7"), read `shared/models.md` for the exact ID — do not construct one yourself.

A note: if any of the model strings above look unfamiliar to you, that's to be expected — it just means they were released after your training data cutoff. Rest assured they are real models; we wouldn't mess with you like that.

Live capability lookup: The table above is cached. When the user asks "what's the context window for X", "does X support vision/thinking/effort", or "which models support Y", query the Models API (`client.models.retrieve(id)` / `client.models.list()`) — see `shared/models.md` for the field reference and capability-filter examples.


Thinking & Effort (Quick Reference)

Opus 4.6 — Adaptive thinking (recommended): Use `thinking: {type: "adaptive"}`. Claude dynamically decides when and how much to think. No `budget_tokens` needed — `budget_tokens` is deprecated on Opus 4.6 and Sonnet 4.6 and must not be used. Adaptive thinking also automatically enables interleaved thinking (no beta header needed). When the user asks for "extended thinking", a "thinking budget", or `budget_tokens`: always use Opus 4.6 with `thinking: {type: "adaptive"}`. The concept of a fixed token budget for thinking is deprecated — adaptive thinking replaces it. Do NOT use `budget_tokens` and do NOT switch to an older model.

Effort parameter (GA, no beta header): Controls thinking depth and overall token spend via `output_config: {effort: "low"|"medium"|"high"|"max"}` (inside `output_config`, not top-level). Default is `high` (equivalent to omitting it). `max` is Opus 4.6 only. Works on Opus 4.5, Opus 4.6, and Sonnet 4.6; will error on Sonnet 4.5 / Haiku 4.5. Combine with adaptive thinking for the best cost-quality tradeoffs. Lower effort means fewer and more-consolidated tool calls, less preamble, and terser confirmations — `medium` is often a favorable balance; use `max` when correctness matters more than cost; use `low` for subagents or simple tasks.

Sonnet 4.6: Supports adaptive thinking (`thinking: {type: "adaptive"}`). `budget_tokens` is deprecated on Sonnet 4.6 — use adaptive thinking instead.

Older models (only if explicitly requested): If the user specifically asks for Sonnet 4.5 or another older model, use `thinking: {type: "enabled", budget_tokens: N}`. `budget_tokens` must be less than `max_tokens` (minimum 1024). Never choose an older model just because the user mentions `budget_tokens` — use Opus 4.6 with adaptive thinking instead.
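Putting the two knobs together: adaptive thinking plus an `effort` level inside `output_config`. A request-dict sketch whose values follow the rules above (the prompt is illustrative):

```python
def thinking_request(prompt: str, effort: str = "high") -> dict:
    """Opus 4.6 request with adaptive thinking and an effort level.
    Note: effort lives inside output_config, never top-level, and
    budget_tokens is absent (deprecated on Opus/Sonnet 4.6)."""
    assert effort in {"low", "medium", "high", "max"}  # "max" is Opus 4.6 only
    return {
        "model": "claude-opus-4-6",
        "max_tokens": 16000,
        "thinking": {"type": "adaptive"},
        "output_config": {"effort": effort},
        "messages": [{"role": "user", "content": prompt}],
    }
```

`medium` is often the favorable cost-quality balance; reserve `max` for correctness-critical work.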


Compaction (Quick Reference)

Beta, Opus 4.6 and Sonnet 4.6. For long-running conversations that may exceed the 200K context window, enable server-side compaction. The API automatically summarizes earlier context when it approaches the trigger threshold (default: 150K tokens). Requires beta header `compact-2026-01-12`.

Critical: Append `response.content` (not just the text) back to your messages on every turn. Compaction blocks in the response must be preserved — the API uses them to replace the compacted history on the next request. Extracting only the text string and appending that will silently lose the compaction state.

See `{lang}/claude-api/README.md` (Compaction section) for code examples. Full docs via WebFetch in `shared/live-sources.md`.


Prompt Caching (Quick Reference)

Prefix match. Any byte change anywhere in the prefix invalidates everything after it. Render order is `tools` → `system` → `messages`. Keep stable content first (frozen system prompt, deterministic tool list); put volatile content (timestamps, per-request IDs, varying questions) after the last `cache_control` breakpoint.

Top-level auto-caching (`cache_control: {type: "ephemeral"}` on `messages.create()`) is the simplest option when you don't need fine-grained placement. Max 4 breakpoints per request. Minimum cacheable prefix is ~1024 tokens — shorter prefixes silently won't cache.

Verify with `usage.cache_read_input_tokens` — if it's zero across repeated requests, a silent invalidator is at work (`datetime.now()` in the system prompt, unsorted JSON, a varying tool set).

For placement patterns, architectural guidance, and the silent-invalidator audit checklist, read `shared/prompt-caching.md`. Language-specific syntax: `{lang}/claude-api/README.md` (Prompt Caching section).
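A placement sketch following the rules above: frozen system prompt with a breakpoint first, the varying question after it. The system prompt text is a stand-in and must actually reach the ~1024-token minimum to cache:

```python
FROZEN_SYSTEM_PROMPT = "You are a contract-review assistant. ..."  # stable; >= ~1024 tokens

def cached_request(question: str) -> dict:
    """Stable prefix (system + breakpoint) first; volatile content after."""
    return {
        "model": "claude-opus-4-6",
        "max_tokens": 16000,
        "system": [
            {
                "type": "text",
                "text": FROZEN_SYSTEM_PROMPT,            # never embed timestamps here
                "cache_control": {"type": "ephemeral"},  # breakpoint ends the prefix
            }
        ],
        "messages": [{"role": "user", "content": question}],  # volatile, after the breakpoint
    }

# Verify on a repeated request:
# usage = client.messages.create(**cached_request("Second question")).usage
# usage.cache_read_input_tokens == 0 across repeats => a silent invalidator is at work
```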


Managed Agents (Beta)

Managed Agents is a third surface: server-managed stateful agents with Anthropic-hosted tool execution. You create a persisted, versioned Agent config (`POST /v1/agents`), then start Sessions that reference it. Each session provisions a container as the agent's workspace — bash, file ops, and code execution run there; the agent loop itself runs on Anthropic's orchestration layer and acts on the container via tools. The session streams events; you send messages and tool results back.

Managed Agents is first-party only. It is not available on Amazon Bedrock, Google Vertex AI, or Microsoft Foundry. For agents on third-party providers, use Claude API + tool use.

Mandatory flow: Agent (once) → Session (every run). `model` / `system` / `tools` live on the agent, never the session. See `shared/managed-agents-overview.md` for the full reading guide, beta headers, and pitfalls.

Beta headers: `managed-agents-2026-04-01` — the SDK sets this automatically for all `client.beta.{agents,environments,sessions,vaults}.*` calls. The Skills API uses `skills-2025-10-02` and the Files API uses `files-api-2025-04-14`, but you don't need to pass those explicitly for endpoints other than `/v1/skills` and `/v1/files`.

Subcommands — invoke directly with `/claude-api <subcommand>`:

| Subcommand | Action |
|---|---|
| `managed-agents-onboard` | Walk the user through setting up a Managed Agent from scratch. Read `shared/managed-agents-onboarding.md` immediately and follow its interview script: mental model → know-or-explore branch → template config → session setup → emit code. Do not summarize — run the interview. |

Reading guide: Start with `shared/managed-agents-overview.md`, then the topical `shared/managed-agents-*.md` files (core, environments, tools, events, client-patterns, onboarding, api-reference). For Python, TypeScript, Go, Ruby, PHP, and Java, read `{lang}/managed-agents/README.md` for code examples. For cURL, read `curl/managed-agents.md`. Agents are persistent — create once, reference by ID. Store the agent ID returned by `agents.create` and pass it to every subsequent `sessions.create`; do not call `agents.create` in the request path. The Anthropic CLI is one convenient way to create agents and environments from version-controlled YAML (URL in `shared/live-sources.md`). If a binding you need isn't shown in the language README, WebFetch the relevant entry from `shared/live-sources.md` rather than guess. C# does not currently have Managed Agents support; use raw HTTP from `curl/managed-agents.md` as a reference.

When the user wants to set up a Managed Agent from scratch (e.g. "how do I get started", "walk me through creating one", "set up a new agent"): read `shared/managed-agents-onboarding.md` and run its interview — same flow as the `managed-agents-onboard` subcommand.

When the user asks "how do I write the client code for X": reach for `shared/managed-agents-client-patterns.md` — it covers lossless stream reconnect, the `processed_at` queued/processed gate, interrupt, the `tool_confirmation` round-trip, the correct idle/terminated break gate, the post-idle status race, stream-first ordering, file-mount gotchas, keeping credentials host-side via custom tools, etc.


Reading Guide

After detecting the language, read the relevant files based on what the user needs:

Quick Task Reference

Single text classification/summarization/extraction/Q&A → Read only `{lang}/claude-api/README.md`

Chat UI or real-time response display → Read `{lang}/claude-api/README.md` + `{lang}/claude-api/streaming.md`

Long-running conversations (may exceed the context window) → Read `{lang}/claude-api/README.md` — see the Compaction section

Prompt caching / optimize caching / "why is my cache hit rate low" → Read `shared/prompt-caching.md` + `{lang}/claude-api/README.md` (Prompt Caching section)

Function calling / tool use / agents → Read `{lang}/claude-api/README.md` + `shared/tool-use-concepts.md` + `{lang}/claude-api/tool-use.md`

Agent design (tool surface, context management, caching strategy) → Read `shared/agent-design.md`

Batch processing (non-latency-sensitive) → Read `{lang}/claude-api/README.md` + `{lang}/claude-api/batches.md`

File uploads across multiple requests → Read `{lang}/claude-api/README.md` + `{lang}/claude-api/files-api.md`

Managed Agents (server-managed stateful agents with workspace) → Read `shared/managed-agents-overview.md` + the rest of the `shared/managed-agents-*.md` files. For Python, TypeScript, Go, Ruby, PHP, and Java, read `{lang}/managed-agents/README.md` for code examples. For cURL, read `curl/managed-agents.md`. Agents are persistent — create once, reference by ID. Store the agent ID returned by `agents.create` and pass it to every subsequent `sessions.create`; do not call `agents.create` in the request path. The Anthropic CLI is one convenient way to create agents and environments from version-controlled YAML (URL in `shared/live-sources.md`). If a binding you need isn't shown in the language README, WebFetch the relevant entry from `shared/live-sources.md` rather than guess. C# does not currently support Managed Agents — use raw HTTP from `curl/managed-agents.md` as a reference.

Claude API (Full File Reference)

Read the language-specific Claude API folder (`{language}/claude-api/`):

  1. `{language}/claude-api/README.md` — Read this first. Installation, quick start, common patterns, error handling.
  2. `shared/tool-use-concepts.md` — Read when the user needs function calling, code execution, memory, or structured outputs. Covers conceptual foundations.
  3. `shared/agent-design.md` — Read when designing an agent: bash vs. dedicated tools, programmatic tool calling, tool search/skills, context editing vs. compaction vs. memory, caching principles.
  4. `{language}/claude-api/tool-use.md` — Read for language-specific tool use code examples (tool runner, manual loop, code execution, memory, structured outputs).
  5. `{language}/claude-api/streaming.md` — Read when building chat UIs or interfaces that display responses incrementally.
  6. `{language}/claude-api/batches.md` — Read when processing many requests offline (not latency-sensitive). Runs asynchronously at 50% cost.
  7. `{language}/claude-api/files-api.md` — Read when sending the same file across multiple requests without re-uploading.
  8. `shared/prompt-caching.md` — Read when adding or optimizing prompt caching. Covers prefix-stability design, breakpoint placement, and anti-patterns that silently invalidate cache.
  9. `shared/error-codes.md` — Read when debugging HTTP errors or implementing error handling.
  10. `shared/live-sources.md` — WebFetch URLs for fetching the latest official documentation.

Note: Java, Go, Ruby, C#, PHP, and cURL each have a single file covering all basics. Read that file plus `shared/tool-use-concepts.md` and `shared/error-codes.md` as needed.

Note: For the Managed Agents file reference, see the "Managed Agents (Beta)" section above — it lists every `shared/managed-agents-*.md` file and the language-specific READMEs.


When to Use WebFetch

Use WebFetch to get the latest documentation when:

  • User asks for "latest" or "current" information
  • Cached data seems incorrect
  • User asks about features not covered here

Live documentation URLs are in `shared/live-sources.md`.

Common Pitfalls

  • Don't truncate inputs when passing files or content to the API. If the content is too long to fit in the context window, notify the user and discuss options (chunking, summarization, etc.) rather than silently truncating.
  • Opus 4.6 / Sonnet 4.6 thinking: Use `thinking: {type: "adaptive"}` — do NOT use `budget_tokens` (deprecated on both Opus 4.6 and Sonnet 4.6). For older models, `budget_tokens` must be less than `max_tokens` (minimum 1024). This will throw an error if you get it wrong.
  • Opus 4.6 prefill removed: Assistant message prefills (last-assistant-turn prefills) return a 400 error on Opus 4.6. Use structured outputs (`output_config.format`) or system prompt instructions to control response format instead.
  • `max_tokens` defaults: Don't lowball `max_tokens` — hitting the cap truncates output mid-thought and requires a retry. For non-streaming requests, default to ~16000 (keeps responses under SDK HTTP timeouts). For streaming requests, default to ~64000 (timeouts aren't a concern, so give the model room). Only go lower when you have a hard reason: classification (~256), cost caps, or deliberately short outputs.
  • 128K output tokens: Opus 4.6 supports up to 128K `max_tokens`, but the SDKs require streaming for values that large to avoid HTTP timeouts. Use `.stream()` with `.get_final_message()` / `.finalMessage()`.
  • Tool call JSON parsing (Opus 4.6): Opus 4.6 may produce different JSON string escaping in tool call `input` fields (e.g., Unicode or forward-slash escaping). Always parse tool inputs with `json.loads()` / `JSON.parse()` — never do raw string matching on the serialized input.
  • Structured outputs (all models): Use `output_config: {format: {...}}` instead of the deprecated `output_format` parameter on `messages.create()`. This is a general API change, not 4.6-specific.
  • Don't reimplement SDK functionality: The SDK provides high-level helpers — use them instead of building from scratch. Specifically: use `stream.finalMessage()` instead of wrapping `.on()` events in `new Promise()`; use typed exception classes (`Anthropic.RateLimitError`, etc.) instead of string-matching error messages; use SDK types (`Anthropic.MessageParam`, `Anthropic.Tool`, `Anthropic.Message`, etc.) instead of redefining equivalent interfaces.
  • Don't define custom types for SDK data structures: The SDK exports types for all API objects. Use `Anthropic.MessageParam` for messages, `Anthropic.Tool` for tool definitions, `Anthropic.ToolUseBlock` / `Anthropic.ToolResultBlockParam` for tool results, `Anthropic.Message` for responses. Defining your own `interface ChatMessage { role: string; content: unknown }` duplicates what the SDK already provides and loses type safety.
  • Report and document output: For tasks that produce reports, documents, or visualizations, the code execution sandbox has `python-docx`, `python-pptx`, `matplotlib`, `pillow`, and `pypdf` pre-installed. Claude can generate formatted files (DOCX, PDF, charts) and return them via the Files API — consider this for "report" or "document" type requests instead of plain stdout text.
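The tool-input parsing pitfall above can be sketched as a small helper. In the Python SDK the `input` field typically arrives as an already-parsed dict; the string branch covers raw-HTTP responses or logged payloads:

```python
import json

def parse_tool_input(raw):
    """JSON-parse tool inputs instead of string-matching the serialized
    form; escaping (unicode, forward slashes) may vary between responses."""
    if isinstance(raw, str):
        return json.loads(raw)
    return raw  # already a parsed dict

# Two escapings of the same value parse to the same dict:
a = parse_tool_input('{"path": "/tmp/report.pdf"}')
b = parse_tool_input('{"path": "\\/tmp\\/report.pdf"}')
assert a == b == {"path": "/tmp/report.pdf"}
```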