Claude-skills tanstack-ai
TanStack AI (alpha): provider-agnostic, type-safe chat with streaming for OpenAI, Anthropic, Gemini, and Ollama. Use for chat APIs, React/Solid frontends with useChat/ChatClient, isomorphic tools, tool approval flows, agent loops, multimodal inputs, or troubleshooting streaming and tool definitions.
```bash
git clone https://github.com/secondsky/claude-skills

# or: one-off install into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/secondsky/claude-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/tanstack-ai/skills/tanstack-ai" ~/.claude/skills/secondsky-claude-skills-tanstack-ai && rm -rf "$T"
```
plugins/tanstack-ai/skills/tanstack-ai/SKILL.md

TanStack AI (Provider-Agnostic LLM SDK)
Status: Production Ready ✅
Last Updated: 2025-12-09
Dependencies: Node.js 18+, TypeScript 5+; React 18+ for @tanstack/ai-react; Solid 1.8+ for @tanstack/ai-solid
Latest Versions: @tanstack/ai@latest (alpha), @tanstack/ai-react@latest, @tanstack/ai-client@latest; adapters: @tanstack/ai-openai@latest, @tanstack/ai-anthropic@latest, @tanstack/ai-gemini@latest, @tanstack/ai-ollama@latest
Quick Start (7 Minutes)
1) Install core + adapter
```bash
pnpm add @tanstack/ai @tanstack/ai-react @tanstack/ai-openai
# swap adapters as needed: @tanstack/ai-anthropic @tanstack/ai-gemini @tanstack/ai-ollama
pnpm add zod # recommended for tool schemas
```
Why this matters:
- Core is framework-agnostic; React binding just wraps the headless client.
- Adapters abstract provider quirks so you can change models without rewriting code.
2) Ship a streaming chat endpoint (Next.js or TanStack Start)
```ts
// app/api/chat/route.ts (Next.js) or src/routes/api/chat.ts (TanStack Start)
import { chat, toStreamResponse } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'
import { tools } from '@/tools/definitions' // definitions only

export async function POST(request: Request) {
  const { messages, conversationId } = await request.json()
  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
    tools,
  })
  return toStreamResponse(stream)
}
```
CRITICAL:
- Pass tool definitions to the server so the LLM can request them; implementations live in their runtimes.
- Always stream; chunked responses keep UIs responsive and reduce token waste.
3) Wire the client with useChat + SSE
```tsx
// components/Chat.tsx
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
import { clientTools } from '@tanstack/ai-client'
import { updateUIDef } from '@/tools/definitions'

const updateUI = updateUIDef.client(({ message }) => {
  alert(message)
  return { success: true }
})

export function Chat() {
  const tools = clientTools(updateUI)
  const { messages, sendMessage, isLoading, approval } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
    tools,
  })
  return (
    <form
      onSubmit={e => {
        e.preventDefault()
        sendMessage(e.currentTarget.prompt.value)
      }}
    >
      <textarea name="prompt" disabled={isLoading} />
      {approval?.pending && (
        <button type="button" onClick={() => approval.approve()}>
          Approve tool
        </button>
      )}
    </form>
  )
}
```
CRITICAL:
- Use fetchServerSentEvents (or the matching adapter) to mirror the streaming response.
- Keep client tool names identical to definitions to avoid "tool not found" errors.
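For intuition about what the connection adapter consumes, here is a minimal sketch of the SSE wire format: `data:` lines separated by blank lines. `parseSseChunk` is an illustrative helper, not part of the TanStack AI API (fetchServerSentEvents handles this internally):

```typescript
// Illustrative only: SSE events arrive as "data: <payload>" field lines,
// with a blank line terminating each event.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split('\n\n')                              // one event per blank-line break
    .flatMap(event => event.split('\n'))        // individual field lines
    .filter(line => line.startsWith('data: '))  // keep only data fields
    .map(line => line.slice('data: '.length))   // strip the field prefix
}
```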
The 4-Step Setup Process
Step 1: Choose provider + model safely
- Add the correct adapter and set the matching API key (OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, or the Ollama host).
- Prefer per-model option typing from adapters to avoid invalid options (e.g., vision-only fields).
Step 2: Define tools once, implement per runtime
```ts
// tools/definitions.ts
import { z, toolDefinition } from '@tanstack/ai'

export const getWeatherDef = toolDefinition({
  name: 'getWeather',
  description: 'Get current weather for a city',
  inputSchema: z.object({ city: z.string() }),
  needsApproval: true,
})

// Server implementation
export const getWeather = getWeatherDef.server(async ({ city }) => {
  const data = await fetch(`https://api.weather.gov/points?q=${city}`).then(r => r.json())
  return { summary: data.properties?.relativeLocation?.properties?.city ?? city }
})

// Client implementation
export const showToast = getWeatherDef.client(({ city }) => {
  console.log(`Showing toast for ${city}`)
  return { acknowledged: true }
})
```
Key Points:
- needsApproval: true forces explicit user approval for sensitive actions.
- Keep tools single-purpose and idempotent; return structured objects instead of throwing errors.
Step 3: Create connection adapter + chat options
- Server: toStreamResponse(stream) for HTTP streaming; the toServerSentEventsStream helper for Server-Sent Events.
- Client: fetchServerSentEvents('/api/chat') or a custom adapter for WebSockets if needed.
- Configure agentLoopStrategy (e.g., maxIterations(8)) to cap tool recursion.
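The effect of capping tool recursion can be sketched without the library. `runBoundedLoop` below is a hypothetical stand-in for what `maxIterations(8)` guards against, not TanStack AI's actual agent loop:

```typescript
// Sketch: a bounded tool-call cycle. Each step may finish the task or
// request another iteration; the cap guarantees termination either way.
function runBoundedLoop(
  step: (iteration: number) => { done: boolean },
  maxIterations: number
): { iterations: number; completed: boolean } {
  for (let i = 1; i <= maxIterations; i++) {
    if (step(i).done) return { iterations: i, completed: true }
  }
  return { iterations: maxIterations, completed: false } // cap reached
}
```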
Step 4: Add observability + guardrails
- Log tool executions and stream chunks for debugging; alpha exposes hooks while devtools are in progress.
- Validate inputs with Zod; fail fast and return typed error objects.
- Enforce timeouts on external API calls inside tools to prevent stuck agent loops.
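One way to enforce such timeouts is to race the tool's work against a timer and fold both outcomes into a structured result. `withTimeout` and `ToolResult` are illustrative names, not TanStack AI APIs:

```typescript
// Sketch: bound any async tool work so a slow upstream call cannot stall
// the agent loop; errors and timeouts become data instead of thrown exceptions.
type ToolResult<T> = { ok: true; data: T } | { ok: false; error: string }

async function withTimeout<T>(work: Promise<T>, ms: number): Promise<ToolResult<T>> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<ToolResult<T>>(resolve => {
    timer = setTimeout(() => resolve({ ok: false, error: `timed out after ${ms}ms` }), ms)
  })
  const outcome = work
    .then(data => ({ ok: true as const, data }))
    .catch(err => ({ ok: false as const, error: String(err) }))
    .finally(() => clearTimeout(timer))
  return Promise.race([outcome, timeout])
}
```

A server tool body could then wrap its fetch call, e.g. `return withTimeout(fetch(url).then(r => r.json()), 5_000)`.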
Critical Rules
Always Do
✅ Stream responses; avoid waiting for full completions.
✅ Pass definitions to the server and implementations to the correct runtime.
✅ Use Zod schemas for tool inputs/outputs to keep type safety across providers.
✅ Cap agent loops with maxIterations to prevent runaway tool calls.
✅ Require needsApproval for destructive or billing-sensitive tools.
Never Do
❌ Mix provider adapters in a single request—instantiate one adapter per call.
❌ Throw raw errors from tools; return structured error payloads.
❌ Send client tool implementations to the server (definitions only).
❌ Hardcode model capabilities; rely on adapter typings for per-model options.
❌ Skip API key checks; fail fast with helpful messages on the server.
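The "structured error payloads" rule can look like the sketch below: a tagged union the model can inspect, rather than a thrown exception. The `ToolOutcome` type and `lookupUser` tool are hypothetical:

```typescript
// Sketch: tools report failures as data so the agent loop can react
// (retry, apologize, pick another tool) instead of crashing the stream.
type ToolOutcome =
  | { status: 'ok'; summary: string }
  | { status: 'error'; reason: string; retryable: boolean }

function lookupUser(id: string, db: Map<string, string>): ToolOutcome {
  const name = db.get(id)
  if (name === undefined) {
    return { status: 'error', reason: `no user with id ${id}`, retryable: false }
  }
  return { status: 'ok', summary: `found ${name}` }
}
```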
Known Issues Prevention
This skill prevents 3 documented issues:
Issue #1: “tool not found” / silent tool failures
Why it happens: Definitions aren't passed to chat(); only implementations exist locally.
Prevention: Export definitions separately and include them in the server tools array; keep names stable.
Issue #2: Streaming stalls in the UI
Why it happens: Mismatch between server response type and client adapter (HTTP chunked vs SSE).
Prevention: Use toStreamResponse on the server + fetchServerSentEvents (or the matching adapter) on the client.
Issue #3: Model option validation errors
Why it happens: Provider-specific options (e.g., vision params) sent to unsupported models.
Prevention: Use adapter-provided types; rely on per-model option typing to surface invalid fields at compile time.
Configuration Files Reference
.env.local (Full Example)
```bash
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
OLLAMA_HOST=http://localhost:11434
AI_STREAM_STRATEGY=immediate
```
Why these settings:
- Keep non-active providers empty to avoid accidental multi-provider calls.
- AI_STREAM_STRATEGY is read by the sample client to pick chunk strategies (immediate vs. buffered).
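A server-side fail-fast check over these variables might look like the sketch below. `requireEnv` is an illustrative helper, not part of the SDK; call it as `requireEnv('OPENAI_API_KEY', process.env)` at the top of the route:

```typescript
// Sketch: validate the active provider's key before invoking chat(),
// so misconfiguration fails with a clear message instead of an opaque 500.
function requireEnv(
  name: string,
  env: Record<string, string | undefined>
): string {
  const value = env[name]
  if (!value) {
    throw new Error(`${name} is not set; add it to .env.local before starting the server`)
  }
  return value
}
```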
Common Patterns
Pattern 1: Agentic cycle with bounded tools
```ts
import { chat, maxIterations } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

const stream = chat({
  adapter: openai(),
  messages,
  tools,
  agentLoopStrategy: maxIterations(8), // hard cap
})
```
When to use: Any flow where the LLM could recurse across tools (search → summarize → fetch detail).
Pattern 2: Hybrid server + client tools
```ts
// server: data fetch
const fetchUser = fetchUserDef.server(async ({ id }) => db.user.find(id))

// client: UI update
const highlightUser = highlightUserDef.client(({ id }) => {
  document.querySelector(`#user-${id}`)?.classList.add('ring')
  return { highlighted: true }
})

chat({ tools: [fetchUser, highlightUser] })
```
When to use: When the model must both fetch data and mutate UI state in one loop.
Using Bundled Resources
Scripts (scripts/)
scripts/check-ai-env.sh — verifies required provider keys are present before running dev servers.
Example Usage:
./scripts/check-ai-env.sh
References (references/)
references/tanstack-ai-cheatsheet.md — condensed server/client/tool patterns plus troubleshooting cues.
When Claude should load these: When debugging tool routing, streaming issues, or recalling exact API calls.
Assets (assets/)
assets/api-chat-route.ts — copy/paste API route template with streaming + tools.
assets/tool-definitions.ts — ready-to-use toolDefinition examples with approval + Zod schemas.
When to Load References
Load reference files for specific implementation scenarios:
- Adapter Comparison: Load references/adapter-matrix.md when choosing between OpenAI, Anthropic, Gemini, or Ollama adapters, or when debugging provider-specific quirks.
- React Integration Details: Load references/react-integration.md when implementing useChat hooks, handling SSE streams in React components, or managing client-side tool state.
- Routing Setup: Load references/start-vs-next-routing.md when setting up API routes in Next.js vs. TanStack Start, or troubleshooting streaming response setup.
- Streaming Issues: Load references/streaming-troubleshooting.md when debugging SSE connection problems, chunk delivery issues, or HTTP streaming configuration.
- Quick Reference: Load references/tanstack-ai-cheatsheet.md for condensed API patterns, tool definition syntax, or rapid troubleshooting cues.
- Tool Architecture: Load references/tool-patterns.md when implementing complex client/server tool workflows, approval flows, or hybrid tool patterns.
- Type Safety Details: Load references/type-safety.md when working with per-model option typing, multimodal inputs, or debugging type errors across adapters.
Advanced Topics
Per-model type safety
- Use adapter typings to pick valid options per model; avoid generic any options on chat().
- For multimodal models, send parts with correct MIME types; unsupported modalities are caught at compile time.
Tool approval UX
- Surfaced via the approval object in useChat; render approve/reject UI and persist the decision per tool call.
- For auditable actions, log approval decisions alongside tool inputs.
Connection adapters
- Default to fetchServerSentEvents (SSE) for minimal setup; switch to custom adapters for WebSockets or HTTP chunking.
- Use ImmediateStrategy in the client to emit every chunk for typing-indicator UIs.
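The difference between the two chunk strategies can be illustrated with a plain function. The strategy names mirror this document's AI_STREAM_STRATEGY values; `applyChunks` itself is not a library API:

```typescript
// Sketch: 'immediate' surfaces every chunk as it arrives (good for typing
// indicators); 'buffered' renders once, after the stream has completed.
function applyChunks(strategy: 'immediate' | 'buffered', chunks: string[]): string[] {
  return strategy === 'immediate' ? chunks : [chunks.join('')]
}
```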
Dependencies
Required:
- @tanstack/ai@latest — core chat + tool engine
- @tanstack/ai-react@latest — React bindings (skip for headless usage)
- @tanstack/ai-client@latest — headless chat client + adapters
- Adapter: one of @tanstack/ai-openai@latest | @tanstack/ai-anthropic@latest | @tanstack/ai-gemini@latest | @tanstack/ai-ollama@latest
- zod@latest — schema validation for tools
Optional:
- @tanstack/ai-solid@latest — Solid bindings
- @tanstack/react-query@latest — cache data fetched inside tools
- @tanstack/start@latest — co-locate AI tools with server functions
Official Documentation
- TanStack AI Overview: https://tanstack.com/ai/latest/docs/getting-started/overview
- Quick Start: https://tanstack.com/ai/latest/docs/getting-started/quick-start
- Tool Architecture & Approval: https://tanstack.com/ai/latest/docs/guides/tool-architecture
- Client Tools: https://tanstack.com/ai/latest/docs/guides/client-tools
- API Reference: https://tanstack.com/ai/latest/docs/api/ai
Package Versions (Verified 2025-12-09)
```json
{
  "dependencies": {
    "@tanstack/ai": "latest",
    "@tanstack/ai-react": "latest",
    "@tanstack/ai-client": "latest",
    "@tanstack/ai-openai": "latest",
    "zod": "latest"
  }
}
```
Note: zod belongs in dependencies (not devDependencies), since tool schemas import it at runtime.
Troubleshooting
Problem: UI never receives tool output
Solution: Ensure tool implementations return serializable objects; avoid returning undefined. Register client implementations via clientTools(...).
Problem: “Missing API key” responses
Solution: Run ./scripts/check-ai-env.sh and set the relevant provider key in .env.local. Fail fast in the route before invoking chat().
Problem: Streaming stops after first chunk
Solution: Confirm the server returns toStreamResponse(stream) (or the SSE helper) and that any reverse proxy allows chunked transfer.
Complete Setup Checklist
Use this checklist to verify your setup:
- Installed core + one adapter and zod
- API route returns toStreamResponse(stream) with tool definitions included
- Client uses fetchServerSentEvents (or matching adapter) and registers client tool implementations
- needsApproval paths render approve/reject UI
- Agent loop capped (e.g., maxIterations)
- Environment keys validated with check-ai-env.sh
- Multimodal inputs tested if targeting vision/audio models
Questions? Issues?
- Load references/tanstack-ai-cheatsheet.md for deeper examples
- Re-run quick start steps with a single provider adapter
- Review official docs above for API surface updates