Create a new Langfuse integration page in the langfuse-docs repo. Use this skill whenever the user wants to add, create, draft, or scaffold an integration page, cookbook, or docs page for a new tool/framework/model-provider/gateway in Langfuse — triggers include "new integration", "integration page", "docs page for <X>", "cookbook for <X>", "add <X> to langfuse docs", or any request that results in a new `cookbook/integration_*.ipynb`. Also use when the user pastes working integration code, a link to a partner's docs, or rough notes and wants them turned into the standard Langfuse integration notebook. The skill produces a correctly formatted Jupyter notebook, updates `cookbook/_routes.json`, and tries to fetch the partner logo into `public/images/integrations/`.
```bash
git clone https://github.com/langfuse/langfuse-docs
```

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/langfuse/langfuse-docs "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.agents/skills/langfuse-integration-page" ~/.claude/skills/langfuse-langfuse-docs-langfuse-integration-page && rm -rf "$T"
```
`.agents/skills/langfuse-integration-page/SKILL.md`

# Langfuse integration page creator
This skill scaffolds a new integration page for the langfuse-docs site. Integration pages live as Jupyter notebooks in `cookbook/integration_<slug>.ipynb` and are converted to MDX by `scripts/update_cookbook_docs.sh` using the mapping in `cookbook/_routes.json`. Getting the notebook metadata block, the STEPS_START/STEPS_END wrapper, and the routes entry right is the whole job — once those are correct, the build does the rest.
## What to produce
Three things, always, in the user's `langfuse-docs` checkout:

- A new notebook at `cookbook/integration_<slug>.ipynb` that matches the house template (see "Notebook structure" below).
- A new entry appended to `cookbook/_routes.json` pointing at the notebook and the target `docsPath`.
- A best-effort logo download into `public/images/integrations/<slug>_icon.<ext>`. If fetching fails, leave a TODO for the user.

Do not run `scripts/update_cookbook_docs.sh` yourself — that regenerates many files and is slow (~10 min build). The user runs it when they're ready.
## Step 1 — Gather what you need, up front
Before writing anything, collect the following. Ask the user for what's missing using a single `AskUserQuestion` batch where possible. Some answers are mutually exclusive (pick-one); some can be inferred from context.
Always ask (these determine the template and the routes entry):
- Integration name — the human-readable name (e.g., "Pydantic AI", "Fireworks AI", "Temporal"). Used in the title and intro.
- Slug — kebab-case, used in the filename, logo path, and `docsPath`. Default to the name lowercased with spaces → hyphens, but confirm. Example: "Pydantic AI" → `pydantic-ai`; "Fireworks AI" → `fireworks-ai`.
- Category — one of: `model-providers`, `frameworks`, `gateways`, `other`. This is the `<category>` segment in `docsPath: "integrations/<category>/<slug>"`. Guidance:
  - `model-providers`: inference APIs (OpenAI-compatible or otherwise) — Anthropic, Cohere, Fireworks, Groq, Bedrock, Vertex, Gemini, etc.
  - `frameworks`: agent/app frameworks — LangChain, CrewAI, Pydantic AI, Google ADK, Temporal, Semantic Kernel, etc.
  - `gateways`: LLM proxies/routers — Portkey, LiteLLM proxy, TrueFoundry, OpenRouter, Kong AI, etc.
  - `other`: anything else — scraping (Firecrawl, Exa), UIs (Gradio, LibreChat), dev tools, etc.
- Language — `python` (default) or `js`. JS integrations use the filename pattern `js_integration_<slug>.ipynb` and commonly get a `-js` suffix in the slug when both exist (e.g., `anthropic-js`, `claude-agent-sdk-js`).
- Instrumentation pattern — pick one (this determines the template body). See `references/patterns.md` for full details and match it to the integration:
  - `openinference` — OpenInference instrumentor library (e.g., `openinference-instrumentation-google-adk`). Most common for agent frameworks.
  - `openai-drop-in` — the partner is OpenAI-compatible; use `from langfuse.openai import openai`. Common for inference providers (Fireworks, Groq, DeepSeek, etc.).
  - `framework-native` — the framework has a built-in instrumentation hook (e.g., `Agent.instrument_all()` for Pydantic AI).
  - `otel-direct` — the partner emits OTel natively; configure an OTLP exporter pointing at Langfuse. Less common; used for things like Temporal, some MLflow setups.
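The slug default described earlier (name lowercased, spaces turned into hyphens) is mechanical enough to sketch. `default_slug` below is a hypothetical helper for illustration, not part of the skill; the result should still be confirmed with the user:

```python
import re

def default_slug(name: str) -> str:
    """Default slug: trim, lowercase, and collapse whitespace runs into hyphens."""
    slug = name.strip().lower()
    slug = re.sub(r"\s+", "-", slug)
    return slug

print(default_slug("Pydantic AI"))   # pydantic-ai
print(default_slug("Fireworks AI"))  # fireworks-ai
```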
Ask if not obvious:
- Intro blurb about the partner — one sentence ("What is X?"). If the user didn't give one and there's a URL, you can draft it and confirm.
- Logo source — if the user provided a URL, great; if not, see Step 4 for the fetch heuristic.
- Install command, env vars beyond the Langfuse ones, and a minimal runnable example — needed for the code cells. If missing, draft from docs and mark as `TODO: confirm`.
### How to ask
Use `AskUserQuestion` with options formatted as the four categories and four patterns. Keep the number of questions ≤ 4. If the user gave full context (e.g., they pasted a complete code example and mentioned the framework), skip questions you can answer confidently from context and just confirm in your response.
## Step 2 — Generate the notebook
You have two ways to create the `.ipynb`:

- **Preferred:** use the bundled builder script `scripts/build_notebook.py`. It takes a structured JSON/YAML description of the cells and writes a properly formatted notebook. Using the script avoids subtle JSON errors (trailing commas, missing `"source"` arrays, line-split source strings) that break `nbconvert`.

  ```bash
  python3 <skill-dir>/scripts/build_notebook.py \
    --out cookbook/integration_<slug>.ipynb \
    --spec /tmp/<slug>_spec.json
  ```

  See the script's `--help` for the spec schema. There are examples at the bottom of `references/patterns.md`.

- **Fallback:** write the `.ipynb` file directly. If you do this, open an existing notebook (e.g., `cookbook/integration_pydantic_ai.ipynb`) first and mirror its JSON shape exactly. Be careful: every `source` field is a list of strings, each ending in `\n` except the last; markdown cells carry `"metadata": {"vscode": {"languageId": "raw"}}`; code cells carry `"execution_count": null, "outputs": []`.
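As an illustration of those cell-shape rules, the two cell constructors might be sketched like this. `md_cell` and `code_cell` are hypothetical names, and the metadata values only mirror the bullet above, not the full house template; a real notebook in `cookbook/` remains the authoritative reference:

```python
import json

def _source(text: str) -> list:
    # source is a list of strings, each ending in "\n" except the last
    lines = text.split("\n")
    return [line + "\n" for line in lines[:-1]] + [lines[-1]]

def md_cell(text: str) -> dict:
    return {
        "cell_type": "markdown",
        "metadata": {"vscode": {"languageId": "raw"}},
        "source": _source(text),
    }

def code_cell(text: str) -> dict:
    return {
        "cell_type": "code",
        "metadata": {},
        "execution_count": None,   # serializes as null
        "outputs": [],
        "source": _source(text),
    }

nb = {
    "cells": [md_cell("# Title\nIntro."), code_cell("print('hi')")],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 4,
}
print(json.dumps(nb)[:80])
```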
Whichever route you pick, the cell structure must match the house template.
### Notebook structure (the template)
Every integration page has the same skeleton. Section order matters because the MDX converter in `scripts/move_docs.py` reads the `NOTEBOOK_METADATA` comment from the top of the first cell and wraps everything between `STEPS_START` and `STEPS_END` in a `<Steps>` component.
**Cell 1 — markdown.** Metadata + intro.
The first line is a single-line HTML comment with all the page metadata. Attribute format is `key: "value"` (double-quoted), space-separated, on one line. Required keys:

```html
<!-- NOTEBOOK_METADATA source: "⚠️ Jupyter Notebook" title: "<Page title>" sidebarTitle: "<Short nav label>" logo: "/images/integrations/<slug>_icon.<ext>" description: "<1-sentence SEO description>" category: "Integrations" -->
```
Then the page H1, a 1-sentence intro, and two blockquote callouts:
```markdown
# Integrate Langfuse with <Partner Name>

This notebook shows how to integrate **Langfuse** with **<Partner>** to [monitor / debug / trace / evaluate] your LLM application.

> **What is <Partner>?** [<Partner>](<partner url>) is <one sentence about the partner>.

> **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source LLM engineering platform that helps teams trace, debug, and evaluate their LLM applications.
```
Title-writing notes: prefer "Observability for <Partner> with Langfuse" for model providers and inference APIs, "Integrate Langfuse with <Partner>" for frameworks, and "Trace <Partner> Workflows with Langfuse" for orchestration tools. Sidebar title is the short name (e.g., "Pydantic AI", "Fireworks AI", "Temporal").
**Cell 2 — markdown.** Start of steps.

```markdown
<!-- STEPS_START -->

## Step 1: Install Dependencies
```
**Cell 3 — code.** Install.

```python
%pip install langfuse <partner-package> -U
```
Use `-U` to upgrade. For JS notebooks, use `npm install` in a shell cell (see the JS examples in `cookbook/js_integration_*.ipynb`).
**Cell 4 — markdown.** Env var setup prose.

One short paragraph mentioning that keys come from Langfuse project settings, linking to Langfuse Cloud and https://langfuse.com/self-hosting.
**Cell 5 — code.** Env vars.

Always include the three Langfuse vars in this exact shape (both regions, US commented out) plus whatever the partner needs:

```python
import os

# Get keys for your project from the project settings page: https://langfuse.com/cloud
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_BASE_URL"] = "https://cloud.langfuse.com"  # 🇪🇺 EU region (API host)
# os.environ["LANGFUSE_BASE_URL"] = "https://us.cloud.langfuse.com"  # 🇺🇸 US region (API host)

# <Partner> API key
os.environ["<PARTNER>_API_KEY"] = "..."
```
**Cell 6 — markdown + cell 7 — code.** Initialize Langfuse client with auth check. (Skip this pair for the `openai-drop-in` pattern, which relies on the langfuse OpenAI wrapper instead.)

```python
from langfuse import get_client

langfuse = get_client()

# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
```
**Cells 8+ — instrumentation + runnable example.** These are pattern-specific. See `references/patterns.md` for the exact cell bodies for each of the four patterns.
**Final steps cell — markdown.** View traces.

```markdown
## Step N: View Traces in Langfuse

After running the example, open [Langfuse Cloud](https://langfuse.com/cloud) to see the trace, including prompts, completions, tool calls, token usage, and latency.

<!-- TODO: replace with your actual trace screenshot (upload to langfuse.com images) and example trace link -->

[Example trace in Langfuse](<example trace URL or placeholder>)

<!-- STEPS_END -->
```
**Last cell — markdown.** LearnMore.

```markdown
<!-- MARKDOWN_COMPONENT name: "LearnMore" path: "@/components-mdx/integration-learn-more.mdx" -->
```
For JS integrations use `@/components-mdx/integration-learn-more-js.mdx` instead.
### Why these shapes matter
`move_docs.py` does five specific transforms on the raw markdown that `nbconvert` produces:

- Turns the top `NOTEBOOK_METADATA` HTML comment into YAML frontmatter.
- Turns `STEPS_START`/`STEPS_END` into a `<Steps>` MDX component.
- Turns `TABS_START`/`TABS_END` (if present) into `<Tabs>`.
- Turns `CALLOUT_START`/`CALLOUT_END` (if present) into `<Callout>`.
- Turns `MARKDOWN_COMPONENT`/`COMPONENT` comments into JSX imports + usages.
Anything you write outside these transforms flows through unchanged, so standard markdown works. The three most common mistakes are: metadata not on the very first line of the first cell, `STEPS_START` or `STEPS_END` missing, and single quotes instead of double quotes in the metadata attributes.
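Those three mistakes are mechanical enough to check before handing off. `lint_notebook` below is a hypothetical sketch, not one of the skill's bundled scripts, and the single-quote check is a loose heuristic:

```python
import json

def lint_notebook(path: str) -> list:
    """Flag the three most common integration-notebook mistakes."""
    problems = []
    with open(path) as f:
        nb = json.load(f)
    first_src = "".join(nb["cells"][0]["source"])
    first_line = first_src.splitlines()[0] if first_src else ""
    if not first_line.startswith("<!-- NOTEBOOK_METADATA"):
        problems.append("NOTEBOOK_METADATA is not the very first line of the first cell")
    elif "'" in first_line:
        problems.append("metadata attributes use single quotes instead of double quotes")
    all_src = "".join("".join(cell["source"]) for cell in nb["cells"])
    for marker in ("STEPS_START", "STEPS_END"):
        if marker not in all_src:
            problems.append(marker + " is missing")
    return problems
```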
## Step 3 — Update `cookbook/_routes.json`

Read `cookbook/_routes.json`, append a new object to the JSON array, and write it back. Use this shape for integration pages:
```json
{
  "notebook": "integration_<slug>.ipynb",
  "docsPath": "integrations/<category>/<slug>",
  "isGuide": false
}
```
Notes:

- `<slug>` in `notebook` and in `docsPath` must match exactly.
- For JS integrations, use `"notebook": "js_integration_<slug>.ipynb"`; the slug in `docsPath` typically has a `-js` suffix if a Python version also exists (e.g., `anthropic` + `anthropic-js`, `claude-agent-sdk` + `claude-agent-sdk-js`).
- `isGuide: false` for dedicated integration pages. Set `isGuide: true` only if the user explicitly wants the notebook to also appear under `content/guides/cookbook/`. Most integration pages are `false`; a handful of integration-adjacent notebooks (`integration_anthropic.ipynb`, `integration_llama_index.ipynb`) are `true` because they double as general guides.
- If `docsPath` is omitted or `null`, the notebook is only published as a guide — not what you want for an integration page.
- Append the entry at the bottom of the array to keep diffs clean. Preserve 2-space indentation and the trailing newline. Be careful with the comma on the previous entry.
If you can edit JSON by hand, do that. If you'd rather not eyeball it, there's `scripts/add_route.py` in this skill that does a safe append.
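For a sense of what a safe append involves, here is a minimal sketch. The bundled `scripts/add_route.py` is authoritative; `append_route` is a hypothetical name, and `json.dump` with `indent=2` plus an explicit trailing newline matches the formatting rules above:

```python
import json

def append_route(routes_path: str, notebook: str, docs_path: str, is_guide: bool = False) -> None:
    """Append an integration entry to cookbook/_routes.json, keeping 2-space indent and trailing newline."""
    with open(routes_path) as f:
        routes = json.load(f)
    routes.append({"notebook": notebook, "docsPath": docs_path, "isGuide": is_guide})
    with open(routes_path, "w") as f:
        json.dump(routes, f, indent=2)
        f.write("\n")
```

Parsing and re-serializing the whole array sidesteps the trailing-comma pitfall entirely, at the cost of normalizing any unusual formatting in the existing file.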
## Step 4 — Fetch the logo
Heuristic, in order. Stop at the first one that succeeds:
1. If the user gave a URL to a logo file, download it directly.
2. Try the partner's marketing site favicon: `https://<partner-domain>/favicon.svg`, then `favicon.png`, then `/apple-touch-icon.png`.
3. Try a Clearbit-style lookup: `https://logo.clearbit.com/<partner-domain>` (returns a PNG).
4. Give up and leave a TODO.
Save to `public/images/integrations/<slug>_icon.<ext>` preserving the extension. SVG is preferred; PNG is fine. The `logo:` field in the notebook metadata needs to point at this path.
Use `curl -sSfL -o <dest> <url>` in bash. Check the result is non-empty and looks like a valid image before using it — if curl returns an HTML error page saved as `.svg`, that's worse than a missing file.
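One way to sketch that "looks like a valid image" check is to sniff magic bytes instead of trusting the extension. `looks_like_image` is a hypothetical helper, and the SVG branch is a loose heuristic (SVG is just XML):

```python
def looks_like_image(path: str) -> bool:
    """Reject empty files and HTML error pages masquerading as images."""
    with open(path, "rb") as f:
        head = f.read(256)
    if not head:
        return False
    if head.startswith(b"\x89PNG\r\n\x1a\n"):
        return True  # PNG magic bytes
    lowered = head.lstrip().lower()
    if lowered.startswith(b"<svg") or (lowered.startswith(b"<?xml") and b"<svg" in head.lower()):
        return True  # SVG: XML with an <svg> tag near the top
    return False  # HTML error pages, truncated downloads, etc.
```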
If the fetch fails, leave the notebook's `logo:` field pointing at the expected path anyway and tell the user they need to upload the logo manually.
## Step 5 — Summarize what you did
End your turn with a short summary listing:
- The notebook path
- The routes entry you added
- The logo status (fetched to path / TODO)
- Placeholders the user still needs to fill (trace screenshot URL, example trace link, anything you marked `TODO: confirm`)
- The command the user should run when ready: `bash scripts/update_cookbook_docs.sh` (run from the repo root)
- A reminder to check that the partner's `-U` install line makes sense and to run the notebook end-to-end once before publishing
## Reference files
- `references/patterns.md` — exact cell bodies for each of the four instrumentation patterns, with real examples from the existing notebooks.
- `references/routes-json-schema.md` — fields in `cookbook/_routes.json` and when to use `isGuide: true`.
- `references/notebook-template.md` — a fill-in-the-blanks version of the full notebook.
## Bundled scripts
- `scripts/build_notebook.py` — takes a spec JSON and produces a properly formatted `.ipynb`. Safer than hand-writing JSON.
- `scripts/add_route.py` — appends an entry to `cookbook/_routes.json` without breaking the existing formatting.