LLMs-Universal-Life-Science-and-Clinical-Skills- pydanticai-agents
Build typed, provider-agnostic agents with PydanticAI. Use when structured I/O, dependency injection, MCP support, and OpenTelemetry-friendly observability matter more than framework hype.
install
source · Clone the upstream repo
git clone https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills-
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/mdbabumiamssm/LLMs-Universal-Life-Science-and-Clinical-Skills- "$T" && mkdir -p ~/.claude/skills && cp -r "$T/Skills/Agentic_AI/PydanticAI_Agents" ~/.claude/skills/mdbabumiamssm-llms-universal-life-science-and-clinical-skills-pydanticai-agents && rm -rf "$T"
manifest:
Skills/Agentic_AI/PydanticAI_Agents/SKILL.md
PydanticAI Agents
Use this skill when you care about typed contracts, validation, and clean Python engineering as much as raw model output.
Workflow
- Define the agent's structured inputs, outputs, and dependencies before writing prompts.
- Choose the provider model through PydanticAI's model layer so the workflow remains portable.
- Add tools, dependency injection, and structured outputs only where they simplify the system.
- Instrument the workflow with Logfire or another OTel-compatible backend before shipping.
- Back the agent with tests and evals, especially when schema correctness matters.
Guardrails
- Do not bypass typed outputs for convenience on critical workflows.
- Keep tool schemas strict and explicit.
- Prefer MCP integration through documented interfaces rather than hidden adapters.
- Treat observability as mandatory for production agents.
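Making observability mandatory can be as small as the following configuration fragment. It assumes a Logfire project is already provisioned (token supplied via environment); since Logfire speaks OpenTelemetry, the same spans can be exported to any OTel-compatible backend instead.

```python
import logfire

# Assumes LOGFIRE_TOKEN (or equivalent config) is set in the environment.
logfire.configure()
# Instruments PydanticAI so every agent run, model request, and tool
# call is emitted as a traced span.
logfire.instrument_pydantic_ai()
```

With this in place before shipping, failed validations and slow tool calls show up as spans rather than silent anomalies.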
Output Requirements
- State the output schema strategy.
- State the provider/model path.
- State the observability path and one failure mode to test.