# vercel-ai-sdk

Vercel AI SDK for building chat interfaces with streaming. Use when implementing useChat hook, handling tool calls, streaming responses, or building chat UI. Triggers on useChat, @ai-sdk/react, UIMessage, ChatStatus, streamText, toUIMessageStreamResponse, addToolOutput, onToolCall, sendMessage.

Install by cloning the repository:

```bash
git clone https://github.com/openclaw/skills
```

Or copy the skill directly into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/anderskev/vercel-ai-sdk" ~/.claude/skills/clawdbot-skills-vercel-ai-sdk && rm -rf "$T"
```
**skills/anderskev/vercel-ai-sdk/SKILL.md**

# Vercel AI SDK

The Vercel AI SDK provides React hooks and server utilities for building streaming chat interfaces with support for tool calls, file attachments, and multi-step reasoning.
## Quick Reference

### Basic useChat Setup

```typescript
import { useChat } from '@ai-sdk/react';

const { messages, status, sendMessage, stop, regenerate } = useChat({
  id: 'chat-id',
  messages: initialMessages,
  onFinish: ({ message, messages, isAbort, isError }) => {
    console.log('Chat finished');
  },
  onError: (error) => {
    console.error('Chat error:', error);
  }
});

// Send a message
sendMessage({
  text: 'Hello',
  metadata: { createdAt: Date.now() }
});

// Send with files
sendMessage({
  text: 'Analyze this',
  files: fileList // FileList or FileUIPart[]
});
```
### ChatStatus States

The `status` field indicates the current state of the chat:

- `ready`: Chat is idle and ready to accept new messages
- `submitted`: Message sent to API, awaiting response stream start
- `streaming`: Response actively streaming from the API
- `error`: An error occurred during the request
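The four states above map naturally onto UI behavior. As a minimal sketch (the `statusToUi` helper is hypothetical, not part of the SDK; only the `ChatStatus` values come from the list above), one might derive spinner and input-gating state like this:

```typescript
// ChatStatus values as documented by the SDK.
type ChatStatus = 'ready' | 'submitted' | 'streaming' | 'error';

// Hypothetical helper: `isBusy` drives a spinner, `canSend` gates the input box.
function statusToUi(status: ChatStatus): { isBusy: boolean; canSend: boolean; label: string } {
  switch (status) {
    case 'ready':
      return { isBusy: false, canSend: true, label: 'Ready' };
    case 'submitted':
      return { isBusy: true, canSend: false, label: 'Waiting for response…' };
    case 'streaming':
      return { isBusy: true, canSend: false, label: 'Streaming…' };
    case 'error':
      return { isBusy: false, canSend: true, label: 'Something went wrong' };
  }
}
```

Deriving UI flags in one place keeps the disable/spinner logic consistent across components that read `status`.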
### Message Structure

Messages use the `UIMessage` type with a parts-based structure:

```typescript
interface UIMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  metadata?: unknown;
  parts: Array<UIMessagePart>; // text, file, tool-*, reasoning, etc.
}
```

Part types include:

- `text`: Text content with optional streaming state
- `file`: File attachments (images, documents)
- `tool-{toolName}`: Tool invocations with state machine
- `reasoning`: AI reasoning traces
- `data-{typeName}`: Custom data parts
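A common consequence of the parts-based structure is that there is no single `content` string; to get a message's plain text you concatenate its `text` parts. A sketch, using simplified part shapes rather than the SDK's full types:

```typescript
// Simplified part shapes for illustration (the real SDK types carry more fields).
type UIMessagePart =
  | { type: 'text'; text: string }
  | { type: 'file'; url: string }
  | { type: string; [key: string]: unknown };

// Concatenate the text parts of a message, skipping files, tools, etc.
function messageText(parts: UIMessagePart[]): string {
  return parts
    .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
    .map((p) => p.text)
    .join('');
}
```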
### Server-Side Streaming

```typescript
import { streamText, convertToModelMessages, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(uiMessages),
  tools: {
    getWeather: tool({
      description: 'Get weather',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        return { temperature: 72, weather: 'sunny' };
      }
    })
  }
});

return result.toUIMessageStreamResponse({
  originalMessages: uiMessages,
  onFinish: ({ messages }) => {
    // Save to database
  }
});
```
### Tool Handling Patterns

**Client-Side Tool Execution:**

```typescript
const { addToolOutput } = useChat({
  onToolCall: async ({ toolCall }) => {
    if (toolCall.toolName === 'getLocation') {
      addToolOutput({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: 'San Francisco'
      });
    }
  }
});
```
**Rendering Tool States:**

```tsx
{message.parts.map(part => {
  if (part.type === 'tool-getWeather') {
    switch (part.state) {
      case 'input-streaming':
        return <pre>{JSON.stringify(part.input, null, 2)}</pre>;
      case 'input-available':
        return <div>Getting weather for {part.input.city}...</div>;
      case 'output-available':
        return <div>Weather: {part.output.weather}</div>;
      case 'output-error':
        return <div>Error: {part.errorText}</div>;
    }
  }
})}
```
## Reference Files
Detailed documentation on specific aspects:
- use-chat.md: Complete useChat API reference
- messages.md: UIMessage structure and part types
- streaming.md: Server-side streaming implementation
- tools.md: Tool definition and execution patterns
## Common Patterns

### Error Handling

```typescript
const { error, clearError } = useChat({
  onError: (error) => {
    toast.error(error.message);
  }
});

// Clear error and reset to ready state
if (error) {
  clearError();
}
```
### Message Regeneration

```typescript
const { regenerate } = useChat();

// Regenerate last assistant message
await regenerate();

// Regenerate specific message
await regenerate({ messageId: 'msg-123' });
```
### Custom Transport

```typescript
import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    prepareSendMessagesRequest: ({ id, messages, trigger, messageId }) => ({
      body: {
        chatId: id,
        lastMessage: messages[messages.length - 1],
        trigger,
        messageId
      }
    })
  })
});
```
### Performance Optimization

```typescript
// Throttle UI updates to reduce re-renders
const chat = useChat({
  experimental_throttle: 100 // Update at most once per 100ms
});
```
### Automatic Message Sending

```typescript
import { lastAssistantMessageIsCompleteWithToolCalls } from 'ai';

const chat = useChat({
  // Automatically resend when all tool calls have outputs
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls
});
```
## Type Safety

The SDK provides full type inference for tools and messages:

```typescript
import { tool, type InferUITools, type UIMessage, type UIDataTypes } from 'ai';
import { useChat } from '@ai-sdk/react';
import { z } from 'zod';

const tools = {
  getWeather: tool({
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => ({ weather: 'sunny' })
  })
};

type MyMessage = UIMessage<
  { createdAt: number },       // Metadata type
  UIDataTypes,
  InferUITools<typeof tools>   // Tool types
>;

const { messages } = useChat<MyMessage>();
```
## Key Concepts

### Parts-Based Architecture

Messages use a parts array instead of a single content field. This allows:
- Streaming text while maintaining other parts
- Tool calls with independent state machines
- File attachments and custom data mixed with text
### Tool State Machine

Tool parts progress through states:

- `input-streaming`: Tool input streaming (optional)
- `input-available`: Tool input complete
- `approval-requested`: Waiting for user approval (optional)
- `approval-responded`: User approved/denied (optional)
- `output-available`: Tool execution complete
- `output-error`: Tool execution failed
- `output-denied`: User denied approval
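When rendering or persisting tool parts it helps to know which of these states are terminal (no further updates will arrive for that tool call). A small sketch, using only the state names from the list above (the `isTerminal` helper itself is hypothetical, not an SDK export):

```typescript
// Tool part states as documented above.
type ToolState =
  | 'input-streaming'
  | 'input-available'
  | 'approval-requested'
  | 'approval-responded'
  | 'output-available'
  | 'output-error'
  | 'output-denied';

// A tool call is settled once it has an output, an error, or was denied.
function isTerminal(state: ToolState): boolean {
  return state === 'output-available'
    || state === 'output-error'
    || state === 'output-denied';
}
```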
### Streaming Protocol

The SDK uses Server-Sent Events (SSE) with UIMessageChunk types:

- `text-start`, `text-delta`, `text-end`
- `tool-input-available`, `tool-output-available`
- `reasoning-start`, `reasoning-delta`, `reasoning-end`
- `start`, `finish`, `abort`
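Conceptually, the client folds these chunks into the growing message: a `text-start` opens a text part, each `text-delta` appends to it, and `text-end` closes it. A sketch with simplified chunk shapes (the SDK's actual wire types carry additional fields such as part IDs):

```typescript
// Simplified chunk shapes for illustration only.
type UIMessageChunk =
  | { type: 'text-start' }
  | { type: 'text-delta'; delta: string }
  | { type: 'text-end' }
  | { type: 'finish' };

// Fold a sequence of chunks into the accumulated text of the message.
function accumulateText(chunks: UIMessageChunk[]): string {
  let text = '';
  for (const chunk of chunks) {
    if (chunk.type === 'text-delta') {
      text += chunk.delta;
    }
  }
  return text;
}
```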
### Client vs Server Tools

Server-side tools have an `execute` function and run on the API route. Client-side tools omit `execute` and are handled via `onToolCall` and `addToolOutput`.
## Best Practices

- Always handle the `error` state and provide user feedback
- Use `experimental_throttle` for high-frequency updates
- Implement proper loading states based on `status`
- Type your messages with custom metadata and tools
- Use `sendAutomaticallyWhen` for multi-turn tool workflows
- Handle all tool states in the UI for better UX
- Use `stop()` to allow users to cancel long-running requests
- Validate messages with `validateUIMessages` on the server