Awesome-omni-skill vercel-ai-sdk
Comprehensive guide to Vercel AI SDK for building AI-powered applications with text generation, streaming, tool calling, autonomous agents, and React UI integration. Use when working with AI SDK, Vercel AI, useChat, streamText, generateText, generateObject, tool calling, LLM tools, AI agents, streaming chat, chatbots, AI Elements, PromptInput, Message components, streamUI, or building conversational interfaces with language models. Focuses on Anthropic Claude with extended thinking, prompt caching, and code execution.
Install the skill:

```shell
# Clone the full collection
git clone https://github.com/diegosouzapw/awesome-omni-skill

# Or copy only this skill into ~/.claude/skills
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/ai-agents/vercel-ai-sdk" ~/.claude/skills/diegosouzapw-awesome-omni-skill-vercel-ai-sdk \
  && rm -rf "$T"
```
Vercel AI SDK Expert
Build production-ready AI applications with streaming, tool calling, autonomous agents, and polished UI components. This skill covers the AI SDK Core (text generation, structured output, tools), AI SDK UI (React hooks), AI SDK RSC (experimental server components), and AI Elements (pre-built UI components).
Installation
```shell
# Core SDK + Anthropic provider (recommended)
npm install ai @ai-sdk/anthropic zod

# React hooks for UI
npm install @ai-sdk/react

# AI Elements components (requires shadcn/ui)
npx ai-elements@latest add message prompt-input conversation
```
Prerequisites:
- Node.js 18+
- Next.js (App Router recommended)
- Anthropic API key: `ANTHROPIC_API_KEY` in `.env.local`
- For AI Elements: shadcn/ui, Tailwind CSS 4, React 19
Quick Start: Chat with Tools
```typescript
// app/api/chat/route.ts
import { anthropic } from "@ai-sdk/anthropic";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages,
    tools: {
      getWeather: tool({
        description: "Get weather for a location",
        parameters: z.object({
          location: z.string(),
        }),
        execute: async ({ location }) => ({
          temperature: 72,
          condition: "sunny",
        }),
      }),
    },
  });

  return result.toDataStreamResponse();
}
```
```tsx
// app/page.tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { Message, PromptInput } from '@/components/ai-elements';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();

  return (
    <div>
      {messages.map(msg => (
        <Message key={msg.id} from={msg.role}>
          {msg.content}
        </Message>
      ))}
      <PromptInput
        onSubmit={({ text }) => sendMessage({ content: text })}
        disabled={status === 'streaming'}
      />
    </div>
  );
}
```
Core Concepts
1. Text Generation
- `streamText()` - Stream responses for interactive UIs
  - Real-time token streaming to client
  - Tool calling with automatic execution
  - Callbacks: `onChunk`, `onFinish`, `onStepFinish`
  - Returns: `toDataStreamResponse()`, `textStream`, `fullStream`
- `generateText()` - Synchronous generation for non-interactive tasks
  - Await full completion (drafts, summaries, agents)
  - Tool calling with multiple rounds
  - Returns: `text`, `toolCalls`, `toolResults`, `usage`, `steps`
- `generateObject()` - Structured output with Zod schema validation
  - Extract typed data from unstructured input
  - Returns: `object`, `usage`, `warnings`
See references/core-api.md for full API reference.
2. Tool Calling
Define tools with the `tool()` helper:

```typescript
tool({
  description: "Present options to user as buttons",
  parameters: z.object({
    question: z.string(),
    options: z.array(z.string()),
  }),
  execute: async ({ question, options }) => {
    return { selected: null }; // UI-only tool
  },
});
```
Key Patterns:
- Multi-step: Use `maxSteps` or `stopWhen` for tool loops
- UI-only tools: Return minimal data, render in client
- Programmatic tools (Anthropic): Tools callable from code execution
See references/agents.md for workflow patterns.
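The stop condition behind these tool-loop patterns can be sketched as a pure predicate over the accumulated steps. The `Step` and `ToolCall` shapes below are simplified assumptions for illustration, not the SDK's exact types:

```typescript
// Simplified shapes; real SDK step objects carry more fields.
interface ToolCall {
  toolName: string;
}
interface Step {
  toolCalls?: ToolCall[];
}

// Returns true once any step has invoked the named tool -
// the same check a stopWhen callback would use to end a tool loop.
function calledTool(steps: Step[], toolName: string): boolean {
  return steps.some(
    (step) => step.toolCalls?.some((tc) => tc.toolName === toolName) ?? false,
  );
}
```

Keeping the predicate pure makes it easy to unit-test the loop-termination logic without invoking a model.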
3. Autonomous Agents
- `ToolLoopAgent` - Multi-step autonomous agent

```typescript
import { ToolLoopAgent } from "ai/agents";

const agent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-5"),
  instructions: systemPrompt,
  tools: { searchDocs, updateSpec, scheduleCall },
  stopWhen: (result) => {
    // Stop when UI-only tool called
    return result.steps.some(
      (step) =>
        "toolCalls" in step &&
        step.toolCalls.some((tc) => tc.toolName === "scheduleCall"),
    );
  },
});

const result = await agent.execute({ messages });
return createAgentUIStreamResponse({ result }).toUIMessageStreamResponse();
```
See references/agents.md for loop control and oyster patterns.
4. React Hooks (AI SDK UI)
- `useChat()` - Complete chat interface state management
  - Messages: Array of `UIMessage` with parts (text, tool-call, tool-result)
  - `sendMessage()`: Send with optional files, metadata
  - Status: `'ready' | 'submitted' | 'streaming' | 'error'`
  - Transports: DefaultChatTransport (HTTP), DirectChatTransport (in-process)

```typescript
const { messages, sendMessage, status, stop, reload } = useChat({
  api: "/api/chat",
  onToolCall: ({ toolCall }) => {
    if (toolCall.toolName === "updateSpec") {
      setSpec(toolCall.input);
    }
  },
});
```
See references/ui-hooks.md for full hook APIs.
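Since each `UIMessage` carries an array of typed parts, rendering usually starts by separating text parts from tool parts. A minimal sketch, assuming a simplified part union (the real SDK includes more variants such as files and reasoning):

```typescript
// Assumed, simplified part shapes for illustration only.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "tool-call"; toolName: string }
  | { type: "tool-result"; toolName: string; result: unknown };

// Concatenate only the text parts of a message for plain rendering,
// leaving tool parts to dedicated components.
function messageText(parts: MessagePart[]): string {
  return parts
    .filter((p): p is Extract<MessagePart, { type: "text" }> => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```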
5. AI Elements Components
Pre-built, customizable components built on shadcn/ui:
- `<Message>` - Container with role-based styling (`from="user" | "assistant"`)
- `<MessageResponse>` - Markdown rendering with syntax highlighting
- `<PromptInput>` - Input with file upload, voice, submit button
- `<Conversation>` - Auto-scrolling container with empty states
- `<Reasoning>` - Collapsible thinking/reasoning display
- `<Tool>` - Tool execution visualization
Install:

```shell
npx ai-elements@latest add <component-name>
```
See references/elements-components.md for full component APIs.
6. React Server Components (Experimental)
⚠️ Not production-ready - Use AI SDK UI for production apps.
- `streamUI()` - Stream React components from server

```tsx
const result = streamUI({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Show me a chart',
  tools: {
    showChart: tool({
      parameters: z.object({ data: z.array(z.number()) }),
      generate: function* ({ data }) {
        yield <Spinner />;
        return <Chart data={data} />;
      },
    }),
  },
});

return result.value; // ReactNode
```
See references/rsc-api.md for RSC APIs.
Anthropic-Specific Features
While AI SDK is provider-agnostic, Anthropic Claude offers unique capabilities:
Extended Thinking
Models reason through complex problems before responding:
```typescript
streamText({
  model: anthropic("claude-sonnet-4"),
  providerOptions: {
    anthropic: {
      thinking: { type: "enabled", budgetTokens: 12000 },
    },
  },
});
```
Render in the UI with the `<Reasoning>` component, or access `reasoningText` in results.
Prompt Caching
Cache system prompts, long contexts, or tool definitions:
```typescript
const systemMessage: CoreSystemMessage = {
  role: "system",
  content: largeContext,
  experimental_providerMetadata: {
    anthropic: { cacheControl: { type: "ephemeral", ttl: "1h" } },
  },
};
```
Reduces cost and latency for repeated contexts (>1024 tokens).
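Given that minimum, it can be useful to attach cache metadata only when the context is plausibly large enough. A hedged sketch using a rough characters-per-token heuristic (the ~4 chars/token ratio and the helper names are assumptions, not SDK APIs; the metadata shape mirrors the snippet above):

```typescript
// Rough heuristic: ~4 characters per token (an approximation, not a tokenizer).
const MIN_CACHEABLE_TOKENS = 1024;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Attach Anthropic cache-control metadata only when the content is likely
// above the minimum cacheable size; small contexts gain nothing from caching.
function systemMessage(content: string) {
  const base = { role: "system" as const, content };
  if (estimateTokens(content) < MIN_CACHEABLE_TOKENS) return base;
  return {
    ...base,
    experimental_providerMetadata: {
      anthropic: { cacheControl: { type: "ephemeral" as const, ttl: "1h" } },
    },
  };
}
```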
Code Execution
Run Python in sandboxed containers:
```typescript
import { anthropic } from "@ai-sdk/anthropic";

streamText({
  model: anthropic("claude-sonnet-4-5"),
  tools: {
    codeExecution: anthropic.tools.codeExecution_20250825(),
    // Programmatic tools - callable from code execution
    searchDocs: tool({
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => searchResults,
      experimental_allowedCallers: ["code_execution_20250825"],
    }),
  },
  providerOptions: {
    anthropic: {
      forwardAnthropicContainerIdFromLastStep: true, // Persist container
    },
  },
});
```
See references/anthropic-features.md for provider-defined tools, MCP connectors, context management, and more.
Error Handling
AI SDK provides robust error handling patterns:
```typescript
try {
  // try/catch applies to awaited calls like generateText; with streamText,
  // errors surface through the stream (e.g. the onError callback) instead.
  const result = await generateText({ model, messages });
} catch (error) {
  if (error instanceof APICallError) {
    console.error("API Error:", error.statusCode, error.message);
  } else if (error instanceof InvalidToolArgumentsError) {
    console.error("Tool Error:", error.toolName, error.cause);
  }
}
```
Patterns:
- Retries: Wrap calls with exponential backoff
- Partial results: Handle `toolResults` even on errors
- Tool fallbacks: Catch in `execute()`, return error state
- Stream interruption: Use `stop()` from useChat
See references/error-handling.md for comprehensive patterns.
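The retry pattern above can be sketched as a small generic wrapper. This is an illustrative helper, not an SDK API; `maxAttempts` and `baseDelayMs` are assumed defaults:

```typescript
// Retry an async operation with exponential backoff: 250ms, 500ms, 1s, ...
// Rethrows the last error once attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Usage would wrap a model call, e.g. `await withRetry(() => generateText({ model, messages }))`. In production you would likely retry only transient failures (rate limits, 5xx) rather than every error.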
Common Workflows
- Simple Chat: `useChat()` + `streamText()` with message API
- Chat with Tools: Add `tools` object, render with custom UI
- Autonomous Agent: `ToolLoopAgent` with `stopWhen` control
- Structured Extraction: `generateObject()` with Zod schema
- UI Streaming: `createAgentUIStreamResponse()` for tool streaming
- File Upload: `PromptInput` with file handling, multi-modal messages
Architecture Patterns
Server-Side (API Route)
```typescript
// app/api/chat/route.ts
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages,
    system: buildSystemPrompt(),
    tools: createTools(),
  });

  return result.toDataStreamResponse();
}
```
Client-Side (React)
```tsx
// components/chat.tsx
'use client';
import { useChat } from '@ai-sdk/react';
import {
  Message,
  MessageResponse,
  PromptInput,
  Conversation,
} from '@/components/ai-elements';

export function Chat() {
  const { messages, sendMessage, status } = useChat({ api: '/api/chat' });

  return (
    <Conversation>
      {messages.map(msg => (
        <Message key={msg.id} from={msg.role}>
          <MessageResponse>{msg.content}</MessageResponse>
        </Message>
      ))}
      <PromptInput onSubmit={({ text }) => sendMessage({ content: text })} />
    </Conversation>
  );
}
```
Progressive Disclosure
- Start here: Quick start example, core concepts
- Deep dive: Reference files for specific APIs
- Real examples: `examples/` folder with production patterns
- Anthropic features: When you need advanced capabilities
Reference Files
Load as needed for detailed documentation:
- core-api.md - generateText, streamText, generateObject, tool definitions
- ui-hooks.md - useChat, useObject, transports, message handling
- agents.md - ToolLoopAgent, stopWhen patterns, oyster workflows
- rsc-api.md - streamUI, state management (experimental)
- elements-components.md - Full component prop tables
- anthropic-features.md - Thinking, caching, code execution, provider tools
- error-handling.md - Retry patterns, fallbacks, validation
Examples
See examples/ for complete implementations:
- tool-registry.tsx - Custom tool UI rendering
- chat-with-tools.tsx - Full chat with tool calling
- streaming-object.tsx - Structured data extraction
- elements-chat.tsx - Chat with AI Elements
- anthropic-thinking.tsx - Reasoning display
- agent-workflow.tsx - Multi-step autonomous agent
Best Practices
- Use `streamText()` for UIs, `generateText()` for non-interactive tasks
- Define tools clearly - Descriptive names, precise parameter schemas
- Control loops - Use `stopWhen` or `maxSteps` to prevent runaway agents
- Cache aggressively - System prompts, tool definitions, long contexts (Anthropic)
- Handle errors gracefully - Retries, fallbacks, partial results
- Throttle UI updates - `experimental_throttle` for smooth rendering
- Type everything - Use Zod for tool parameters and structured output
- Test tools independently - Mock `execute` functions for reliability
- Monitor usage - Track tokens, costs, errors in callbacks
- Progressive enhancement - Start simple, add tools/agents as needed
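The usage-monitoring practice can be sketched as a small accumulator fed from callbacks such as `onFinish`. The `{ promptTokens, completionTokens }` shape is an assumption based on typical SDK usage objects; adjust field names to what your SDK version actually reports:

```typescript
// Assumed usage shape for illustration.
interface Usage {
  promptTokens: number;
  completionTokens: number;
}

// Accumulates token usage across calls; call record() from a finish callback
// and read totals() when reporting costs.
function createUsageTracker() {
  const total: Usage = { promptTokens: 0, completionTokens: 0 };
  return {
    record(usage: Usage) {
      total.promptTokens += usage.promptTokens;
      total.completionTokens += usage.completionTokens;
    },
    totals: () => ({
      ...total,
      totalTokens: total.promptTokens + total.completionTokens,
    }),
  };
}
```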
Resources
- Docs: https://ai-sdk.dev/docs
- AI Elements: https://ai-sdk.dev/elements
- Examples: https://github.com/vercel/ai
- Anthropic: https://docs.anthropic.com