Babysitter opentelemetry-llm
OpenTelemetry instrumentation for LLM applications with distributed tracing
Install
Source · Clone the upstream repo
git clone https://github.com/a5c-ai/babysitter
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" && mkdir -p ~/.claude/skills && cp -r "$T/library/specializations/ai-agents-conversational/skills/opentelemetry-llm" ~/.claude/skills/a5c-ai-babysitter-opentelemetry-llm && rm -rf "$T"
manifest:
library/specializations/ai-agents-conversational/skills/opentelemetry-llm/SKILL.md
OpenTelemetry LLM Skill
Capabilities
- Configure OpenTelemetry SDK for LLM apps
- Implement LLM-specific instrumentation
- Set up trace exporters (Jaeger, OTLP)
- Design semantic conventions for LLM spans
- Configure span attributes for AI workloads
- Implement context propagation
Target Processes
- llm-observability-monitoring
- agent-deployment-pipeline
Implementation Details
Core Components
- TracerProvider: SDK configuration
- SpanProcessor: Batch/simple processors
- Exporters: Jaeger, OTLP, Console
- Instrumentation: Auto and manual
LLM Semantic Conventions
- gen_ai.system (e.g. openai, anthropic)
- gen_ai.request.model
- gen_ai.request.max_tokens
- gen_ai.response.finish_reason
- gen_ai.usage.prompt_tokens
Configuration Options
- Exporter selection
- Sampling strategies
- Resource attributes
- Span limits
- Context propagation
Best Practices
- Consistent attribute naming
- Appropriate sampling
- Record errors and exceptions on spans
- Propagate context across services
Dependencies
- opentelemetry-sdk
- opentelemetry-exporter-*
- openinference (optional)