Awesome-omni-skills pydantic-ai
PydanticAI — Typed AI Agents in Python workflow skill. Use this skill when the user needs to build production-ready AI agents with PydanticAI — type-safe tool use, structured outputs, dependency injection, and multi-model support — and the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/pydantic-ai" ~/.claude/skills/diegosouzapw-awesome-omni-skills-pydantic-ai && rm -rf "$T"
skills/pydantic-ai/SKILL.md
PydanticAI — Typed AI Agents in Python
Overview
This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/pydantic-ai from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
PydanticAI — Typed AI Agents in Python
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: How It Works, Security & Safety Notes, Common Pitfalls, Limitations.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Use when building Python AI agents that call tools and return structured data
- Use when you need validated, typed LLM outputs (not raw strings)
- Use when you want to write unit tests for agent logic without hitting a real LLM
- Use when switching between LLM providers without rewriting agent code
- Use when the user asks about Agent, @agent.tool, RunContext, ModelRetry, or result_type
- Use when the request clearly matches the imported source intent: Build production-ready AI agents with PydanticAI — type-safe tool use, structured outputs, dependency injection, and multi-model support.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
- Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
- Validate the result against the upstream expectations and the evidence you can point to in the copied files.
- Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
- Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.
Imported Workflow Notes
Imported: Overview
PydanticAI is a Python agent framework from the Pydantic team that brings the same type-safety and validation guarantees as Pydantic to LLM-based applications. It supports structured outputs (validated with Pydantic models), dependency injection for testability, streamed responses, multi-turn conversations, and tool use — across OpenAI, Anthropic, Google Gemini, Groq, Mistral, and Ollama. Use this skill when building production AI agents, chatbots, or LLM pipelines where correctness and testability matter.
Imported: How It Works
Step 1: Installation
```bash
pip install pydantic-ai

# Install extras for specific providers
pip install 'pydantic-ai[openai]'     # OpenAI / Azure OpenAI
pip install 'pydantic-ai[anthropic]'  # Anthropic Claude
pip install 'pydantic-ai[gemini]'     # Google Gemini
pip install 'pydantic-ai[groq]'       # Groq
pip install 'pydantic-ai[vertexai]'   # Google Vertex AI
```
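PydanticAI reads provider credentials from the environment rather than from code. A minimal sketch, assuming the standard variable names each provider SDK expects:

```bash
# Set the key for whichever provider the model string selects
export OPENAI_API_KEY='sk-...'
export ANTHROPIC_API_KEY='sk-ant-...'
```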
Step 2: A Minimal Agent
```python
from pydantic_ai import Agent

# Simple agent — returns a plain string
agent = Agent(
    'anthropic:claude-sonnet-4-6',
    system_prompt='You are a helpful assistant. Be concise.',
)

result = agent.run_sync('What is the capital of Japan?')
print(result.data)     # "Tokyo"
print(result.usage())  # Usage(requests=1, request_tokens=..., response_tokens=...)
```
Step 3: Structured Output with Pydantic Models
```python
from pydantic import BaseModel
from pydantic_ai import Agent

class MovieReview(BaseModel):
    title: str
    year: int
    rating: float  # 0.0 to 10.0
    summary: str
    recommended: bool

agent = Agent(
    'openai:gpt-4o',
    result_type=MovieReview,
    system_prompt='You are a film critic. Return structured reviews.',
)

result = agent.run_sync('Review Inception (2010)')
review = result.data  # Fully typed MovieReview instance
print(f"{review.title} ({review.year}): {review.rating}/10")
print(f"Recommended: {review.recommended}")
```
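The rating range in the comment above is only documentation. A minimal sketch of enforcing it with standard Pydantic Field constraints (the year bound is an illustrative assumption):

```python
from pydantic import BaseModel, Field

class MovieReview(BaseModel):
    title: str
    year: int = Field(ge=1888)              # assumption: no films predate 1888
    rating: float = Field(ge=0.0, le=10.0)  # enforced, not just documented
    summary: str
    recommended: bool
```

Out-of-range values now fail validation, and PydanticAI feeds the error back to the model for a retry instead of returning bad data.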
Step 4: Tool Use
Register tools with @agent.tool — the LLM can call them during a run:
```python
import asyncio

import httpx
from pydantic import BaseModel
from pydantic_ai import Agent, RunContext

class WeatherReport(BaseModel):
    city: str
    temperature_c: float
    condition: str

weather_agent = Agent(
    'anthropic:claude-sonnet-4-6',
    result_type=WeatherReport,
    system_prompt='Get current weather for the requested city.',
)

@weather_agent.tool
async def get_temperature(ctx: RunContext, city: str) -> dict:
    """Fetch the current temperature for a city from the weather API."""
    async with httpx.AsyncClient() as client:
        r = await client.get(f'https://wttr.in/{city}?format=j1')
        data = r.json()
        return {
            'temp_c': float(data['current_condition'][0]['temp_C']),
            'description': data['current_condition'][0]['weatherDesc'][0]['value'],
        }

result = asyncio.run(weather_agent.run('What is the weather in Tokyo?'))
print(result.data)
```
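For tools that need no injected dependencies, PydanticAI also provides @agent.tool_plain, which omits the RunContext parameter. A minimal sketch with a hypothetical lookup helper:

```python
@weather_agent.tool_plain
def utc_offset(city: str) -> str:
    """Return the UTC offset for a city (illustrative stub, not a real lookup)."""
    return {'Tokyo': 'UTC+9', 'London': 'UTC+0'}.get(city, 'unknown')
```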
Step 5: Dependency Injection
Inject services (database, HTTP clients, config) into agents for testability:
```python
from dataclasses import dataclass

from pydantic import BaseModel
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    db: Database  # application-specific database client (defined elsewhere)
    user_id: str

class SupportResponse(BaseModel):
    message: str
    escalate: bool

support_agent = Agent(
    'openai:gpt-4o-mini',
    deps_type=Deps,
    result_type=SupportResponse,
    system_prompt='You are a support agent. Use the tools to help customers.',
)

@support_agent.tool
async def get_order_history(ctx: RunContext[Deps]) -> list[dict]:
    """Fetch recent orders for the current user."""
    return await ctx.deps.db.get_orders(ctx.deps.user_id, limit=5)

@support_agent.tool
async def create_refund(ctx: RunContext[Deps], order_id: str, reason: str) -> dict:
    """Initiate a refund for a specific order."""
    return await ctx.deps.db.create_refund(order_id, reason, ctx.deps.user_id)

# Usage
async def handle_support(user_id: str, message: str):
    deps = Deps(db=get_db(), user_id=user_id)
    result = await support_agent.run(message, deps=deps)
    return result.data
```
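Database and get_db are placeholders in the upstream snippet. One way to pin down the interface the tools rely on is a typing.Protocol plus an in-memory fake; the names and methods below are assumptions inferred from the tool bodies, not part of PydanticAI:

```python
from typing import Protocol

class Database(Protocol):
    async def get_orders(self, user_id: str, limit: int) -> list[dict]: ...
    async def create_refund(self, order_id: str, reason: str, user_id: str) -> dict: ...

class FakeDb:
    """In-memory stand-in satisfying the Database protocol, for unit tests."""

    async def get_orders(self, user_id: str, limit: int) -> list[dict]:
        return [{'order_id': 'ord-1', 'status': 'delivered'}][:limit]

    async def create_refund(self, order_id: str, reason: str, user_id: str) -> dict:
        return {'order_id': order_id, 'status': 'refund_pending'}
```

This FakeDb is the same shape the Step 6 test below assumes.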
Step 6: Testing with TestModel
Write unit tests without real LLM calls:
```python
from pydantic_ai.models.test import TestModel

def test_support_agent_escalates():
    with support_agent.override(model=TestModel()):
        # TestModel returns a minimal valid response matching result_type
        result = support_agent.run_sync(
            'I want to cancel my account',
            deps=Deps(db=FakeDb(), user_id='user-123'),
        )
        # Test the structure, not the LLM's exact words
        assert isinstance(result.data, SupportResponse)
        assert isinstance(result.data.escalate, bool)
```
FunctionModel for deterministic test responses:
```python
from pydantic_ai.messages import ModelMessage, ModelResponse, TextPart
from pydantic_ai.models.function import AgentInfo, FunctionModel

def my_model(messages: list[ModelMessage], info: AgentInfo) -> ModelResponse:
    return ModelResponse(parts=[TextPart('Always this response')])

with agent.override(model=FunctionModel(my_model)):
    result = agent.run_sync('anything')
```
Step 7: Streaming Responses
```python
import asyncio

from pydantic_ai import Agent

agent = Agent('anthropic:claude-sonnet-4-6')

async def stream_response():
    async with agent.run_stream('Write a haiku about Python') as result:
        async for chunk in result.stream_text():
            print(chunk, end='', flush=True)
        print()  # newline
        print(f"Total tokens: {result.usage()}")

asyncio.run(stream_response())
```
Step 8: Multi-Turn Conversations
```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='You are a helpful assistant.')

# First turn
result1 = agent.run_sync('My name is Alice.')
history = result1.all_messages()

# Second turn — passes conversation history
result2 = agent.run_sync('What is my name?', message_history=history)
print(result2.data)  # "Your name is Alice."
```
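The upstream snippet imported ModelMessagesTypeAdapter without using it; its role is serializing history across process boundaries. A minimal sketch, assuming the standard Pydantic TypeAdapter methods:

```python
from pydantic_ai.messages import ModelMessagesTypeAdapter

# Persist the conversation (e.g. in a session store) as JSON ...
raw = ModelMessagesTypeAdapter.dump_json(history)

# ... then restore it later for the next turn
restored = ModelMessagesTypeAdapter.validate_json(raw)
result3 = agent.run_sync('How is my name spelled?', message_history=restored)
```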
Examples
Example 1: Ask for the upstream workflow directly
Use @pydantic-ai to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @pydantic-ai against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @pydantic-ai for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @pydantic-ai using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Imported Usage Notes
Imported: Examples
Example 1: Code Review Agent
```python
from typing import Literal

from pydantic import BaseModel, Field
from pydantic_ai import Agent

class CodeReview(BaseModel):
    quality: Literal['excellent', 'good', 'needs_work', 'poor']
    issues: list[str] = Field(default_factory=list)
    suggestions: list[str] = Field(default_factory=list)
    approved: bool

code_review_agent = Agent(
    'anthropic:claude-sonnet-4-6',
    result_type=CodeReview,
    system_prompt="""
    You are a senior engineer performing code review.
    Evaluate code quality, identify issues, and provide actionable suggestions.
    Set approved=True only for good or excellent quality code with no security issues.
    """,
)

def review_code(diff: str) -> CodeReview:
    result = code_review_agent.run_sync(f"Review this code:\n\n{diff}")
    return result.data
```
Example 2: Agent with Retry Logic
```python
from pydantic import BaseModel, field_validator
from pydantic_ai import Agent, ModelRetry

class StrictJson(BaseModel):
    value: int

    @field_validator('value')
    def must_be_positive(cls, v):
        if v <= 0:
            raise ValueError('value must be positive')
        return v

agent = Agent('openai:gpt-4o-mini', result_type=StrictJson)

@agent.result_validator
async def validate_result(ctx, result: StrictJson) -> StrictJson:
    if result.value > 1000:
        raise ModelRetry('Value must be under 1000. Try again with a smaller number.')
    return result
```
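ModelRetry loops are bounded by the agent's retry budget; capping it explicitly keeps persistent validation failures from looping. A minimal sketch (the limit of 3 is an illustrative choice):

```python
# Fail fast after three validation retries instead of looping indefinitely
agent = Agent('openai:gpt-4o-mini', result_type=StrictJson, retries=3)
```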
Example 3: Multi-Agent Pipeline
```python
from pydantic import BaseModel
from pydantic_ai import Agent

class ResearchSummary(BaseModel):
    key_points: list[str]
    conclusion: str

class BlogPost(BaseModel):
    title: str
    body: str
    meta_description: str

researcher = Agent('openai:gpt-4o', result_type=ResearchSummary)
writer = Agent('anthropic:claude-sonnet-4-6', result_type=BlogPost)

async def research_and_write(topic: str) -> BlogPost:
    # Stage 1: research
    research = await researcher.run(f'Research the topic: {topic}')

    # Stage 2: write based on research
    post = await writer.run(
        f'Write a blog post about: {topic}\n\nResearch:\n'
        + '\n'.join(f'- {p}' for p in research.data.key_points)
        + f'\n\nConclusion: {research.data.conclusion}'
    )
    return post.data
```
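Driving the pipeline from synchronous code is a single asyncio.run call; the topic string is illustrative:

```python
import asyncio

post = asyncio.run(research_and_write('type-safe LLM agents in Python'))
print(post.title)
```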
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- ✅ Always define result_type with a Pydantic model — avoid returning raw strings in production
- ✅ Use deps_type with a dataclass for dependency injection — makes agents testable
- ✅ Use TestModel in unit tests — never hit a real LLM in CI
- ✅ Add @agent.result_validator for business-logic checks beyond Pydantic validation
- ✅ Use run_stream for long outputs in user-facing applications to show progressive results
- ❌ Don't put secrets (API keys) in Agent() arguments — use environment variables
- ❌ Don't share a single Agent instance across async tasks if deps differ — create per-request instances or use agent.run() with per-call deps (see the sketch below)
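A minimal sketch of the per-call deps pattern, reusing the support_agent, Deps, and get_db names from Step 5: the shared Agent stays stateless and request-specific state travels through deps.

```python
# Shared, module-level agent; fresh deps are built for each request
async def handle_request(user_id: str, message: str) -> SupportResponse:
    deps = Deps(db=get_db(), user_id=user_id)  # per-request state lives here
    result = await support_agent.run(message, deps=deps)
    return result.data
```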
Imported Operating Notes
Imported: Best Practices
- ✅ Always define result_type with a Pydantic model — avoid returning raw strings in production
- ✅ Use deps_type with a dataclass for dependency injection — makes agents testable
- ✅ Use TestModel in unit tests — never hit a real LLM in CI
- ✅ Add @agent.result_validator for business-logic checks beyond Pydantic validation
- ✅ Use run_stream for long outputs in user-facing applications to show progressive results
- ❌ Don't put secrets (API keys) in Agent() arguments — use environment variables
- ❌ Don't share a single Agent instance across async tasks if deps differ — create per-request instances or use agent.run() with per-call deps
- ❌ Don't catch ValidationError broadly — let PydanticAI retry with ModelRetry for recoverable LLM output errors
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/pydantic-ai, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
- @prompt-engineer — Use when the work is better handled by that native specialization after this imported skill establishes context.
- @prompt-engineering — Use when the work is better handled by that native specialization after this imported skill establishes context.
- @prompt-engineering-patterns — Use when the work is better handled by that native specialization after this imported skill establishes context.
- @prompt-library — Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| references | copied reference notes, guides, or background material from upstream | |
| examples | worked examples or reusable prompts copied from upstream | |
| scripts | upstream helper scripts that change execution or validation | |
| routing notes | routing or delegation notes that are genuinely part of the imported package | |
| assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Security & Safety Notes
- Set API keys via environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) — never hardcode them.
- Validate all tool inputs before passing to external systems — use Pydantic models or manual checks.
- Tools that mutate data (write to DB, send emails, call payment APIs) should require explicit user confirmation before the agent invokes them in production.
- Log result.all_messages() for audit trails when agents perform consequential actions (see the sketch after this list).
- Set retries= limits on Agent() to prevent runaway loops on persistent validation failures.
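A minimal sketch of the audit-trail point above, assuming only the standard-library logger plus the support agent names from earlier steps:

```python
import logging

logger = logging.getLogger('agent_audit')

async def run_with_audit(user_id: str, message: str):
    result = await support_agent.run(message, deps=Deps(db=get_db(), user_id=user_id))
    # Record the full exchange (prompt, tool calls, responses) for later review
    logger.info('agent_run user=%s messages=%s', user_id, result.all_messages())
    return result.data
```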
Imported: Common Pitfalls
- Problem: ValidationError on every LLM response — structured output never validates
  Solution: Simplify result_type fields. Use Optional and default where appropriate. The model may struggle with overly strict schemas.
- Problem: Tool is never called by the LLM
  Solution: Write a clear, specific docstring for the tool function — PydanticAI sends the docstring as the tool description to the LLM.
- Problem: RunContext dependency is None inside a tool
  Solution: Pass deps= when calling agent.run() or agent.run_sync(). Dependencies are not set globally.
- Problem: asyncio.run() error when calling agent.run() inside FastAPI
  Solution: Use await agent.run() directly in async FastAPI route handlers — don't wrap in asyncio.run() (see the sketch after this list).
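A minimal sketch of the FastAPI pattern from the last pitfall; the route, request model, and agent names are illustrative assumptions reusing Step 5:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SupportRequest(BaseModel):
    user_id: str
    message: str

@app.post('/support')
async def support_endpoint(req: SupportRequest):
    deps = Deps(db=get_db(), user_id=req.user_id)
    # Await the agent inside the async handler; no asyncio.run() wrapper needed
    result = await support_agent.run(req.message, deps=deps)
    return result.data
```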
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.