# Phoenix Integration Snippets

Skill source: `.agents/skills/phoenix-integration-snippets/SKILL.md` in the [Arize-ai/phoenix](https://github.com/Arize-ai/phoenix) repository.

```shell
# Clone the full repo:
git clone https://github.com/Arize-ai/phoenix

# Or install just this skill into ~/.claude/skills:
T=$(mktemp -d) && git clone --depth=1 https://github.com/Arize-ai/phoenix "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.agents/skills/phoenix-integration-snippets" ~/.claude/skills/arize-ai-phoenix-phoenix-integration-snippets && rm -rf "$T"
```

Generate onboarding snippets (install + implementation) for Phoenix tracing integrations and add them to the project onboarding UI.
## Workflow

Copy this checklist and track progress:

- [ ] 1. Research: read integration docs and OpenInference repo
- [ ] 2. Determine language support (Python, TypeScript, or both)
- [ ] 3. Generate snippets following the format below
- [ ] 4. Test every language variant against Phoenix
- [ ] 5. Wire into the onboarding UI
- [ ] 6. Report results with links to trace pages
**Step 1: Research.** Read the relevant file in `docs/phoenix/integrations/` for the framework. Also check the OpenInference repo for example code: https://github.com/Arize-ai/openinference
**Step 4: Test.** See Testing below. Only proceed to wiring into the UI when traces are confirmed.
**Step 5: Wire into the onboarding UI.** After adding `docsHref` and `githubHref`, verify every URL returns HTTP 200 before committing. For GitHub links, prefer the OpenInference repo (`https://github.com/Arize-ai/openinference/tree/main/...`).
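The HTTP 200 check is easy to script. A minimal sketch with the standard library (the helper name `url_ok` is mine, not part of the codebase):

```python
import urllib.error
import urllib.request

def url_ok(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with HTTP 200.

    Tries HEAD first to avoid downloading the body, then falls back to
    GET because some hosts reject HEAD requests.
    """
    for method in ("HEAD", "GET"):
        try:
            req = urllib.request.Request(url, method=method)
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, ValueError):
            # Connection errors, non-2xx responses, and malformed URLs
            # all count as a failed check for this method.
            pass
    return False
```

Run it over every `docsHref` and `githubHref` value before committing.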
**Step 6: Report.** Provide clickable links to the Phoenix project pages (e.g., `http://localhost:6006/projects/<base64-id>/traces`).
## Snippet Format

Each snippet has two parts:

**Packages**: Array of package names. Order: `phoenix-otel` first, then the instrumentation package, then the SDK.
Do not assume the framework package bundles its model provider SDK. In a clean env, verify the exact imports used by the snippet; if the framework's OpenAI/Gemini/etc. adapter requires a separate SDK package, include it explicitly in `packages`.
**Implementation**: Working, copy-pasteable code that produces at least one trace. 10-20 lines, a meaningful example prompt, no print/log statements.
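For orientation only, a hypothetical Python snippet in this shape might look like the sketch below. The package names, model, and prompt are illustrative assumptions; the imports are wrapped in a function here only so the sketch can be defined without the packages installed (a real snippet imports at module level, SDK imports after `register()`):

```python
# packages (order matters):
#   ["arize-phoenix-otel", "openinference-instrumentation-openai", "openai"]

def run_snippet():
    # Phoenix registration comes first; auto_instrument activates any
    # installed OpenInference instrumentors without manual calls.
    from phoenix.otel import register

    register(project_name="openai-quickstart", auto_instrument=True)

    # SDK import after register(), so the instrumentor is already active.
    from openai import OpenAI

    client = OpenAI()
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "Explain span-based tracing in two sentences."}
        ],
    )
```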
## Adding to the Onboarding UI

### 1. Add implementation function

Directory: `app/src/components/project/integrationSnippets/` — read existing files to match conventions.
Do NOT pass `endpoint`/`url` in snippet code — the onboarding UI displays env vars (including `PHOENIX_COLLECTOR_ENDPOINT`) separately, and both register functions read it automatically.
**Python:** Use `auto_instrument=True` — no manual instrumentor calls. SDK imports must come after `register()`.
Exception: if the framework emits native OpenTelemetry spans and uses a mutating span processor, start with `register(...)` so Phoenix becomes the global provider the framework will use. Then add the mutating processor so it replaces Phoenix's default processor, and add the Phoenix exporter back after it.
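As a rough sketch of that exception path (the framework module and `MutatingFrameworkProcessor` are hypothetical placeholders, and the exact exporter class to re-attach should be taken from the existing snippet files, so only the ordering here is the point):

```python
def configure_native_otel_framework():
    # Hypothetical names throughout; this only illustrates the ordering.
    from phoenix.otel import register

    # 1. register() first, so Phoenix's provider becomes the global
    #    tracer provider the framework will pick up.
    tracer_provider = register(project_name="my-framework")

    # 2. Add the framework's mutating processor so it takes the place of
    #    Phoenix's default processor.
    from my_framework.telemetry import MutatingFrameworkProcessor  # hypothetical

    tracer_provider.add_span_processor(MutatingFrameworkProcessor())

    # 3. Re-attach a Phoenix exporter after it, so mutated spans still
    #    reach Phoenix. Exporter class is an assumption; match the real
    #    snippet files.
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor
    from phoenix.otel import HTTPSpanExporter  # assumption: verify the class name

    tracer_provider.add_span_processor(SimpleSpanProcessor(HTTPSpanExporter()))
```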
**TypeScript:** ESM imports are hoisted, so import ordering doesn't matter. `await provider.forceFlush()` is required in short-lived scripts.
### 2. Register the integration

File: `app/src/pages/project/integrationRegistry.tsx`
Import your function and add an entry to `ONBOARDING_INTEGRATIONS`. Pass snippet functions as direct references (they match the `getImplementationCode` type in `integrationDefinitions.ts`).
## Testing
Test snippets as written — the exact code the user will see in the onboarding UI. If any modification is required to make a snippet work, that is a bug.
### Isolated test environments
Create a fresh environment per integration with only the packages from that snippet's `packages` array. This prevents false positives from cross-contamination (e.g., an installed `openinference-instrumentation-openai` producing extra traces when testing a LangChain snippet).
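A sketch of per-snippet environment creation using only the standard library (the helper is mine; `uv venv` or plain `python -m venv` from a shell works equally well, and the `bin/` path assumes a POSIX layout):

```python
import os
import subprocess
import tempfile
import venv

def isolated_env(packages=()):
    """Create a throwaway virtualenv containing only the snippet's packages."""
    env_dir = tempfile.mkdtemp(prefix="snippet-env-")
    venv.create(env_dir, with_pip=True)
    if packages:
        # POSIX layout; on Windows the pip executable lives under Scripts/.
        pip = os.path.join(env_dir, "bin", "pip")
        subprocess.run([pip, "install", *packages], check=True)
    return env_dir
```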
Set `PHOENIX_COLLECTOR_ENDPOINT` and run the snippet code verbatim.
Use a fresh Phoenix project name per test run. Reusing an existing project can mask failures by making old traces look like the new snippet worked.
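A minimal way to get a fresh project name per run (the helper name and naming scheme are mine):

```python
import uuid

def fresh_project_name(framework: str) -> str:
    """Unique Phoenix project name per test run, so stale traces from an
    earlier run can't masquerade as output of the new snippet."""
    return f"{framework}-snippet-test-{uuid.uuid4().hex[:8]}"
```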
### Validation checklist
For each snippet, verify:
- No export errors (no `Failed to export span batch`, no `405`)
- Traces appear in Phoenix under the expected project name
- Trace kind and structure match expectations (e.g., LangChain shows `chain` spans, not just bare `llm` spans)
- Only one top-level trace per invocation (multiple top-level traces suggest instrumentor cross-contamination)
### When a snippet doesn't work as-is
If you must modify the snippet code to get traces flowing, do not silently work around it and continue. Instead:
- Fix the snippet if the change is small and clearly correct (e.g., a typo, missing import)
- Flag to the user if the fix requires a design decision (e.g., the SDK doesn't support env-var-based config, or auto-instrumentation doesn't work for this framework)