Claude-skill-registry · langchain-sdk-patterns

Install

Source · Clone the upstream repo:
git clone https://github.com/majiayu000/claude-skill-registry

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/langchain-sdk-patterns" ~/.claude/skills/majiayu000-claude-skill-registry-langchain-sdk-patterns && rm -rf "$T"

Manifest: skills/data/langchain-sdk-patterns/SKILL.md

Source content

LangChain SDK Patterns

Overview

Production-ready patterns for LangChain applications including LCEL chains, structured output, and error handling.

Prerequisites

  • Completed the langchain-install-auth setup
  • Familiarity with async/await patterns
  • Understanding of error handling best practices

Core Patterns

Pattern 1: Type-Safe Chain with Pydantic

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

class SentimentResult(BaseModel):
    """Structured output for sentiment analysis."""
    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(description="Confidence score 0-1")
    reasoning: str = Field(description="Brief explanation")

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(SentimentResult)

prompt = ChatPromptTemplate.from_template(
    "Analyze the sentiment of: {text}"
)

chain = prompt | structured_llm

# Returns typed SentimentResult
result: SentimentResult = chain.invoke({"text": "I love LangChain!"})
print(f"Sentiment: {result.sentiment} ({result.confidence})")
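A common follow-up is to gate downstream actions on the reported confidence score. A minimal, hypothetical helper (the function name and the 0.7 threshold are assumptions, not part of this skill):

```python
def accept_result(sentiment: str, confidence: float, threshold: float = 0.7) -> str:
    """Return the sentiment label if confidence clears the threshold,
    otherwise flag the result for manual review."""
    if confidence >= threshold:
        return sentiment
    return "needs_review"

# Low-confidence results fall back to manual review
label = accept_result("positive", 0.55)
```

Keeping the threshold in one place makes it easy to tune per use case without touching the chain itself.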

Pattern 2: Retry with Fallback

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

primary = ChatOpenAI(model="gpt-4o")
fallback = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Automatically retries with the fallback model when the primary raises
robust_llm = primary.with_fallbacks([fallback])

response = robust_llm.invoke("Hello!")
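To see what the fallback mechanism does, its semantics can be mimicked in plain Python (invoke_with_fallbacks, flaky, and stable are hypothetical names; the real Runnable implementation also threads configs and callbacks, and lets you narrow which exception types trigger the fallback):

```python
def invoke_with_fallbacks(primary, fallbacks, prompt):
    """Try the primary callable first; on failure, walk the fallback list."""
    errors = []
    for fn in [primary, *fallbacks]:
        try:
            return fn(prompt)
        except Exception as exc:  # the real API lets you narrow this set
            errors.append(exc)
    # Every model failed: surface the first error
    raise errors[0]

def flaky(prompt):
    raise RuntimeError("rate limited")

def stable(prompt):
    return f"echo: {prompt}"

# The primary fails, so the fallback answers
result = invoke_with_fallbacks(flaky, [stable], "Hello!")
```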

Pattern 3: Async Batch Processing

import asyncio
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm

async def process_batch(texts: list[str]) -> list:
    """Process multiple texts concurrently."""
    inputs = [{"text": t} for t in texts]
    results = await chain.abatch(inputs, config={"max_concurrency": 5})
    return results

# Usage
results = asyncio.run(process_batch(["text1", "text2", "text3"]))
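abatch submits the whole input list at once (bounded by max_concurrency); for very large jobs it can help to slice the inputs into fixed-size chunks first and process them sequentially. A hypothetical stdlib helper:

```python
def chunked(items: list, size: int) -> list[list]:
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 7 texts in chunks of 3 -> chunk sizes 3, 3, 1
batches = chunked([f"text{i}" for i in range(7)], 3)
```

Each chunk can then be passed to process_batch in turn, keeping memory use and in-flight requests predictable.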

Pattern 4: Streaming with Callbacks

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import StreamingStdOutCallbackHandler

# Option A: a callback handler prints tokens to stdout as they arrive
llm = ChatOpenAI(
    model="gpt-4o-mini",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm.invoke("Tell me a story")

# Option B: iterate over the stream directly; each chunk holds partial content
for chunk in llm.stream("Tell me a story"):
    print(chunk.content, end="", flush=True)
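Streamed chunks are usually accumulated into the final message as well as displayed. A sketch with a simulated token stream (fake_stream is a stand-in for llm.stream, not a LangChain API):

```python
def fake_stream(text: str):
    """Simulated token stream: yields the text word by word."""
    for word in text.split():
        yield word + " "

# Accumulate chunks into the complete response while streaming
pieces = []
for chunk in fake_stream("Once upon a time"):
    pieces.append(chunk)  # display-to-user would happen here too
full_text = "".join(pieces).rstrip()
```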

Pattern 5: Caching for Cost Reduction

from langchain_openai import ChatOpenAI
from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

# Enable SQLite caching
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))

llm = ChatOpenAI(model="gpt-4o-mini")

# First call hits API
response1 = llm.invoke("What is 2+2?")

# Second identical call uses cache (no API cost)
response2 = llm.invoke("What is 2+2?")
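The cache is keyed on the exact prompt plus model parameters, so any change in wording is a cache miss. The behaviour can be illustrated with a plain dict memo (cached_invoke is a hypothetical sketch, not the LangChain cache API):

```python
calls = 0

def cached_invoke(cache: dict, prompt: str) -> str:
    """Exact-string cache: identical prompts skip the 'API call'."""
    global calls
    if prompt not in cache:
        calls += 1  # stands in for a billable API call
        cache[prompt] = f"answer to: {prompt}"
    return cache[prompt]

cache = {}
cached_invoke(cache, "What is 2+2?")    # miss: one call
cached_invoke(cache, "What is 2+2?")    # hit: no new call
cached_invoke(cache, "What is 2 + 2?")  # different string: another call
```

This is why caching pays off most for repeated, templated prompts rather than free-form user input.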

Output

  • Type-safe chains with Pydantic models
  • Robust error handling with fallbacks
  • Efficient async batch processing
  • Cost-effective caching strategies

Error Handling

Standard Error Pattern

import time

from langchain_core.exceptions import OutputParserException
from openai import APIError, RateLimitError

def safe_invoke(chain, input_data, max_retries=3):
    """Invoke chain with error handling."""
    for attempt in range(max_retries):
        try:
            return chain.invoke(input_data)
        except RateLimitError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise
        except OutputParserException as e:
            # Handle parsing failures
            return {"error": str(e), "raw": e.llm_output}
        except APIError as e:
            raise RuntimeError(f"API error: {e}")
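The retry loop above sleeps 2 ** attempt seconds between attempts; adding jitter spreads out retries so many clients hitting the same rate limit don't all retry in lockstep. A sketch of the delay schedule (capped_delay is a hypothetical helper; the cap value is an assumption):

```python
import random

def capped_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    exp = min(cap, base * (2 ** attempt))
    return random.uniform(0, exp)

# The uncapped schedule grows 1, 2, 4, 8, 16 seconds over five attempts
bounds = [min(30.0, 2 ** a) for a in range(5)]
```

Swapping time.sleep(2 ** attempt) for time.sleep(capped_delay(attempt)) in safe_invoke keeps the same worst-case wait while desynchronizing concurrent retriers.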

Next Steps

Proceed to langchain-core-workflow-a for the chains and prompts workflow.