# develop-ai-functions-example

Develop examples for AI SDK functions. Use when creating, running, or modifying examples under `examples/ai-functions/src` to validate provider support, demonstrate features, or create test fixtures.

Install from [majiayu000/claude-skill-registry](https://github.com/majiayu000/claude-skill-registry):

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Or install the skill directly:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/develop-ai-functions-example" ~/.claude/skills/majiayu000-claude-skill-registry-develop-ai-functions-example && rm -rf "$T"
```

Source: `skills/data/develop-ai-functions-example/SKILL.md`

# AI Functions Examples
The `examples/ai-functions/` directory contains scripts for validating, testing, and iterating on AI SDK functions across providers.
## Example Categories

Examples are organized by AI SDK function in `examples/ai-functions/src/`:
| Directory | Purpose |
|---|---|
| `generate-text/` | Non-streaming text generation with `generateText` |
| `stream-text/` | Streaming text generation with `streamText` |
| `generate-object/` | Structured output generation with `generateObject` |
| `stream-object/` | Streaming structured output with `streamObject` |
| `agent/` | Agent examples for agentic workflows |
| `embed/` | Single embedding generation with `embed` |
| `embed-many/` | Batch embedding generation with `embedMany` |
| `generate-image/` | Image generation with `generateImage` |
| `generate-speech/` | Text-to-speech with `generateSpeech` |
| `transcribe/` | Audio transcription with `transcribe` |
| `rerank/` | Document reranking with `rerank` |
| `middleware/` | Custom middleware implementations |
| `registry/` | Provider registry setup and usage |
| `telemetry/` | OpenTelemetry integration |
| `complex/` | Multi-component examples (agents, routers) |
| `lib/` | Shared utilities (not examples) |
| `tools/` | Reusable tool definitions |
## File Naming Convention

Examples follow the pattern `{provider}-{feature}.ts`:

| Pattern | Example | Description |
|---|---|---|
| `{provider}.ts` | `openai.ts` | Basic provider usage |
| `{provider}-{feature}.ts` | `openai-tool-call.ts` | Specific feature |
| `{provider}-{subprovider}.ts` | | Provider with sub-provider |
| `{provider}-{subprovider}-{feature}.ts` | | Sub-provider with feature |
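As an illustration only (no such helper exists in the repo), the convention can be sketched as a tiny name builder:

```ts
// Hypothetical helper (not in the repo) showing how example names compose:
// segments are joined with "-" and the ".ts" extension is appended.
function exampleFileName(provider: string, ...parts: string[]): string {
  return [provider, ...parts].join('-') + '.ts';
}

console.log(exampleFileName('openai'));              // -> openai.ts
console.log(exampleFileName('openai', 'tool-call')); // -> openai-tool-call.ts
```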
## Example Structure

All examples use the `run()` wrapper from `lib/run.ts`, which:

- Loads environment variables from `.env`
- Provides error handling with detailed API error logging
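The real `lib/run.ts` is the source of truth; the following is only a rough, hypothetical sketch of the behavior described above (the `loadEnvFile` call and the `responseBody` field are assumptions, not the repo's actual code):

```ts
// Hypothetical sketch of lib/run.ts; the real wrapper may differ in detail.
async function run(fn: () => Promise<void>): Promise<void> {
  try {
    // Load .env if present (process.loadEnvFile ships with Node >= 20.12;
    // the real helper may use dotenv instead).
    try { (process as any).loadEnvFile?.('.env'); } catch { /* no .env file */ }
    await fn();
  } catch (error) {
    // Detailed API error logging: AI SDK API call errors often carry a
    // responseBody with the raw provider response (an assumption here).
    const body = (error as { responseBody?: string } | null)?.responseBody;
    if (body) console.error('API response body:', body);
    console.error(error);
    process.exitCode = 1;
  }
}
```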
### Basic Template

```ts
import { providerName } from '@ai-sdk/provider-name';
import { generateText } from 'ai';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: providerName('model-id'),
    prompt: 'Your prompt here.',
  });

  console.log(result.text);
  console.log('Token usage:', result.usage);
  console.log('Finish reason:', result.finishReason);
});
```
### Streaming Template

```ts
import { providerName } from '@ai-sdk/provider-name';
import { streamText } from 'ai';
import { printFullStream } from '../lib/print-full-stream';
import { run } from '../lib/run';

run(async () => {
  const result = streamText({
    model: providerName('model-id'),
    prompt: 'Your prompt here.',
  });

  await printFullStream({ result });
});
```
### Tool Calling Template

```ts
import { providerName } from '@ai-sdk/provider-name';
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: providerName('model-id'),
    tools: {
      myTool: tool({
        description: 'Tool description',
        inputSchema: z.object({
          param: z.string().describe('Parameter description'),
        }),
        execute: async ({ param }) => {
          return { result: `Processed: ${param}` };
        },
      }),
    },
    prompt: 'Use the tool to...',
  });

  console.log(JSON.stringify(result, null, 2));
});
```
### Structured Output Template

```ts
import { providerName } from '@ai-sdk/provider-name';
import { generateObject } from 'ai';
import { z } from 'zod';
import { run } from '../lib/run';

run(async () => {
  const result = await generateObject({
    model: providerName('model-id'),
    schema: z.object({
      name: z.string(),
      items: z.array(z.string()),
    }),
    prompt: 'Generate a...',
  });

  console.log(JSON.stringify(result.object, null, 2));
  console.log('Token usage:', result.usage);
});
```
## Running Examples

From the `examples/ai-functions` directory:

```shell
pnpm tsx src/generate-text/openai.ts
pnpm tsx src/stream-text/openai-tool-call.ts
pnpm tsx src/agent/openai-generate.ts
```
## When to Write Examples

Write examples when:

- **Adding a new provider**: Create basic examples for each supported API (`generateText`, `streamText`, `generateObject`, etc.)
- **Implementing a new feature**: Demonstrate the feature with at least one provider example
- **Reproducing a bug**: Create an example that shows the issue for debugging
- **Adding provider-specific options**: Show how to use `providerOptions` for provider-specific settings
- **Creating test fixtures**: Use examples to generate API response fixtures (see the `capture-api-response-test-fixture` skill)
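The `providerOptions` object is keyed by provider ID, with each value holding that provider's settings. A minimal shape sketch (the `reasoningEffort` option name is illustrative, not a guaranteed real option):

```ts
// Shape sketch only: providerOptions maps a provider ID to an object of
// provider-specific settings; the inner option name here is illustrative.
const providerOptions: Record<string, Record<string, unknown>> = {
  openai: { reasoningEffort: 'low' },
};

// Passed alongside model/prompt in a generateText/streamText call:
// await generateText({ model, prompt, providerOptions });
console.log(providerOptions.openai.reasoningEffort); // -> low
```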
## Utility Helpers

The `lib/` directory contains shared utilities:

| File | Purpose |
|---|---|
| `run.ts` | Error-handling wrapper with `.env` loading |
| `print.ts` | Clean object printing (removes undefined values) |
| `print-full-stream.ts` | Colored streaming output for tool calls, reasoning, text |
| | Save streaming chunks for test fixtures |
| | Display images in terminal |
| | Save audio files to disk |
### Using print utilities

```ts
import { print } from '../lib/print';

// Pretty print objects without undefined values
print('Result:', result);
print('Usage:', result.usage, { depth: 2 });
```
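The undefined-stripping behavior could plausibly look like this hypothetical sketch (the real `lib/print.ts` may differ):

```ts
// Hypothetical sketch of the undefined-stripping step behind print();
// removes undefined object values recursively (array entries are left as-is).
function stripUndefined(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(stripUndefined);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([, v]) => v !== undefined)
        .map(([k, v]) => [k, stripUndefined(v)]),
    );
  }
  return value;
}

function print(label: string, value: unknown, opts: { depth?: number } = {}): void {
  console.log(label);
  console.dir(stripUndefined(value), { depth: opts.depth ?? null });
}
```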
### Using printFullStream

```ts
import { printFullStream } from '../lib/print-full-stream';

const result = streamText({ ... });
await printFullStream({ result });
// Colored output for text, tool calls, reasoning
```
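Internally, such a helper iterates the stream's parts and colorizes them by type. A self-contained sketch, where the part type names are illustrative (the real helper consumes the AI SDK's `result.fullStream` parts):

```ts
// Illustrative part shapes; the AI SDK's actual fullStream part types differ.
type StreamPart =
  | { type: 'text-delta'; text: string }
  | { type: 'reasoning-delta'; text: string }
  | { type: 'tool-call'; toolName: string };

// Render each part with a color per category (ANSI escape codes).
async function renderParts(parts: AsyncIterable<StreamPart>): Promise<string[]> {
  const out: string[] = [];
  for await (const part of parts) {
    if (part.type === 'text-delta') out.push(part.text);
    else if (part.type === 'reasoning-delta') out.push(`\x1b[90m${part.text}\x1b[0m`); // dim gray
    else out.push(`\x1b[36m[tool-call] ${part.toolName}\x1b[0m`); // cyan
  }
  return out;
}
```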
## Reusable Tools

The `tools/` directory contains reusable tool definitions:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { weatherTool } from '../tools/weather-tool';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { weather: weatherTool },
  prompt: 'What is the weather in San Francisco?',
});
```
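A reusable tool's `execute` function typically returns deterministic fake data so example runs are reproducible. A hypothetical sketch (`executeWeatherTool` is not the repo's actual `weatherTool`):

```ts
// Hypothetical execute function for a reusable weather tool; deterministic
// fake data avoids external API calls and keeps runs reproducible.
async function executeWeatherTool({ city }: { city: string }) {
  return { city, temperatureF: 72, condition: 'sunny' as const };
}
```

Such an execute function is what a `tool({ ... })` definition would wrap.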
## Best Practices

- **Keep examples focused**: Each example should demonstrate one feature or use case
- **Use descriptive prompts**: Make it clear what the example is testing
- **Handle errors gracefully**: The `run()` wrapper handles this automatically
- **Use realistic model IDs**: Use actual model IDs that work with the provider
- **Add comments for complex logic**: Explain non-obvious code patterns
- **Reuse tools when appropriate**: Use `weatherTool` or create new reusable tools in `tools/`