Awesome-omni-skill pydantic-ai
Python framework for building production-grade AI agents with LLMs. Use when creating agents that need structured outputs, tools, dependency injection, or type-safe interactions. Specifically use for: (1) Building AI agents with OpenAI, Anthropic, Google, or other LLM providers, (2) Creating agents that require structured output validation via Pydantic models, (3) Implementing tool-calling agents with function tools, (4) Building multi-agent applications or A2A (Agent2Agent) protocol servers, (5) Adding observability with Pydantic Logfire, (6) Streaming responses or events from agents
Clone the repository:

```bash
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Or install the skill directly into Claude's skills directory:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-agents/pydantic-ai" ~/.claude/skills/diegosouzapw-awesome-omni-skill-pydantic-ai && rm -rf "$T"
```
Note: skills/ai-agents/pydantic-ai/SKILL.md references API keys.
Pydantic AI
Overview
Pydantic AI is a type-safe Python framework for building AI agents. It provides tools, structured outputs, dependency injection, and comprehensive model support for production-grade applications.
When to Use Pydantic AI
Use this skill when you need to:
- Build AI agents with any LLM provider (OpenAI, Anthropic, Google, Groq, etc.)
- Ensure type-safe, validated structured outputs using Pydantic models
- Create agents that can call tools (functions) to gather information
- Implement dependency injection for testable, maintainable agents
- Stream agent responses or events in real-time
- Build multi-agent workflows or A2A servers
- Add observability with Pydantic Logfire
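The dependency-injection point above is worth seeing concretely. Below is a framework-free sketch of the idea: tools receive a typed context carrying their dependencies instead of reaching for globals, so tests can swap in fakes. The class names mirror pydantic-ai's vocabulary but this is an illustration, not the library's API.

```python
from dataclasses import dataclass

# Illustrative stand-ins for the dependencies and run context a tool receives.
@dataclass
class Dependencies:
    user_id: int
    api_base: str

@dataclass
class RunContext:
    deps: Dependencies

def lookup_user(ctx: RunContext) -> str:
    # The tool only sees what was injected, never module-level state.
    return f"{ctx.deps.api_base}/users/{ctx.deps.user_id}"

ctx = RunContext(deps=Dependencies(user_id=42, api_base="https://api.test"))
print(lookup_user(ctx))  # https://api.test/users/42
```

Because the dependency type is declared up front, a test suite can construct a `Dependencies` pointing at a stub service and exercise the tool in isolation.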
Quick Start
Installation
```bash
uv add pydantic-ai
```

Or for a slim install with only specific model dependencies:

```bash
uv add "pydantic-ai-slim[openai,anthropic]"
```
Basic Agent
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', instructions='Be helpful and concise.')
result = agent.run_sync('What is 2+2?')
print(result.output)
```
Agent with Tools and Structured Output
```python
from dataclasses import dataclass

from pydantic import BaseModel
from pydantic_ai import Agent, RunContext

@dataclass
class Dependencies:
    api_key: str

class Output(BaseModel):
    response: str
    confidence: float

agent = Agent(
    'openai:gpt-4o',
    deps_type=Dependencies,
    output_type=Output,
    instructions='Help users with their queries.',
)

@agent.tool
async def get_info(ctx: RunContext[Dependencies], query: str) -> str:
    """Fetch information about a topic."""
    return f"Information about {query}"

# Inside an async function:
result = await agent.run('Tell me about Python', deps=Dependencies(api_key='key'))
print(result.output)  # Output(response='...', confidence=0.95)
```
Running Agents
- `agent.run()`: async execution
- `agent.run_sync()`: synchronous execution
- `agent.run_stream()`: stream text/structured output
- `agent.run_stream_events()`: stream all events (tool calls, text, etc.)
- `agent.iter()`: iterate over graph nodes
Agent Components
| Component | Description |
|---|---|
| Instructions | Static or dynamic instructions for the LLM |
| Tools | Functions the LLM can call (registered with `@agent.tool`) |
| Output Type | Pydantic model for structured output validation |
| Dependencies | Type-safe dependency injection for tools/instructions |
| Model | LLM model (OpenAI, Anthropic, Google, etc.) |
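The Output Type component in the table is ordinary Pydantic validation: the model's reply must parse into the declared schema before your code sees it. A minimal sketch, assuming Pydantic v2's `model_validate` API:

```python
from pydantic import BaseModel, ValidationError

class Output(BaseModel):
    response: str
    confidence: float

# A well-formed payload parses into a typed object...
ok = Output.model_validate({'response': 'Paris', 'confidence': 0.97})
print(ok.confidence)  # 0.97

# ...while a malformed one is rejected with a ValidationError.
try:
    Output.model_validate({'response': 'Paris', 'confidence': 'very sure'})
    rejected = False
except ValidationError:
    rejected = True
print(rejected)  # True
```

This is why `result.output` can be trusted to be a typed `Output` instance rather than raw text.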
Model Selection
Specify models by provider:
`openai:gpt-4o`, `anthropic:claude-3-5-sonnet`, `google:gemini-2.0-flash`, etc. See references/models.md for all supported providers and models.
Common Patterns
Dynamic Instructions
```python
@agent.instructions
async def add_context(ctx: RunContext[Dependencies]) -> str:
    return f"Current user ID: {ctx.deps.user_id}"
```
Tool Parameters
```python
@agent.tool
async def search(
    ctx: RunContext[Dependencies],
    query: str,
    max_results: int = 10,
) -> list[str]:
    """Search a database with the given query."""
    # Implementation
    pass
```
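The tool's parameter names, annotations, and defaults are what the LLM is shown (pydantic-ai builds the tool schema via Pydantic). A simplified stdlib sketch of the idea, not the library's actual mechanism:

```python
import inspect

def search(query: str, max_results: int = 10) -> list[str]:
    """Search a database with the given query."""
    return []

# Derive a crude parameter description from the signature: annotated
# type, and whether the parameter has a default (i.e. is optional).
sig = inspect.signature(search)
params = {
    name: {
        'type': p.annotation.__name__,
        'required': p.default is inspect.Parameter.empty,
    }
    for name, p in sig.parameters.items()
}
print(params)
# {'query': {'type': 'str', 'required': True},
#  'max_results': {'type': 'int', 'required': False}}
```

The practical takeaway: precise annotations and sensible defaults directly improve how reliably the model calls your tool.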
Streaming Responses
```python
async with agent.run_stream('Tell me a story') as response:
    async for chunk in response.stream_text():
        print(chunk, end='')
```
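The consumption pattern above (an async context manager yielding an object whose `stream_text()` is an async iterator) can be exercised without any model. Below is a toy stand-in using only the standard library; `FakeStreamedResponse` and `run_stream` here are hypothetical, not pydantic-ai's classes.

```python
import asyncio
from contextlib import asynccontextmanager

class FakeStreamedResponse:
    def __init__(self, chunks):
        self._chunks = chunks

    async def stream_text(self):
        # Yield chunks one at a time, as a real streamed response would.
        for chunk in self._chunks:
            await asyncio.sleep(0)  # simulate awaiting the network
            yield chunk

@asynccontextmanager
async def run_stream(prompt):
    yield FakeStreamedResponse(['Once ', 'upon ', 'a time.'])

async def main():
    parts = []
    async with run_stream('Tell me a story') as response:
        async for chunk in response.stream_text():
            parts.append(chunk)
    return ''.join(parts)

print(asyncio.run(main()))  # Once upon a time.
```

A fake like this is also a convenient seam for unit-testing code that consumes streamed agent output.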
Advanced Features
- Graphs: Complex workflows using `pydantic_graph`
- Multi-Agent: Agent-to-agent communication with the A2A protocol
- Durable Execution: DBOS, Prefect, or Temporal integration
- MCP Integration: Model Context Protocol support
- UI Streams: AG-UI or Vercel AI SDK integration
Resources
references/
- `models.md`: All supported LLM providers and models
- `api_reference.md`: API documentation for core classes
- `examples.md`: Detailed examples for common use cases
scripts/
No executable scripts included. Pydantic AI is a framework, not a tool collection.
assets/
No assets included. This is a pure Python framework.
Development
- Test agents with `agent.run_sync()` for quick iteration
- Use `uv run pytest` for testing (project must have tests configured)
- Enable Logfire for observability: `logfire.instrument_pydantic_ai()`