Claude-skill-registry langchain-agents
Building LLM agents with LangChain and LangGraph, covering tool-calling model initialization, state management, and observability with LangSmith. Triggers: langchain, langgraph, langsmith, agent-executor, chat-model-tools.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/langchain-agents" ~/.claude/skills/majiayu000-claude-skill-registry-langchain-agents && rm -rf "$T"
manifest:
skills/data/langchain-agents/SKILL.md · source content
LangChain Agents
Overview
LangChain provides a standard interface for building LLM agents that can use tools. Modern agent development is moving toward LangGraph to handle stateful, multi-turn, and non-linear logic that simple loops cannot capture.
When to Use
- Multi-Provider Apps: When you want to swap between OpenAI, Anthropic, and local models without changing business logic.
- Stateful Agents: When you need human-in-the-loop, long-term persistence, or complex execution graphs.
- Observability: Using LangSmith to debug exactly where an agentic chain failed.
Decision Tree
- Is it a simple tool-calling loop?
  - YES: Use the `create_agent` abstraction.
- Does it require cycles, complex state transitions, or human approval?
  - YES: Build using LangGraph.
- Do you need to track exactly how much an agent run cost or where it hallucinated?
  - YES: Enable LangSmith tracing.
Workflows
1. Creating a Simple Tool-Enabled Agent
- Define a Python function with a docstring to serve as a tool.
- Initialize a ChatModel (e.g., `ChatAnthropic`).
- Call `create_agent(model, tools=[...])` to generate the agent.
- Execute the agent using `agent.invoke({"messages": [...]})`.
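The four steps above can be sketched in Python. The `create_agent` import path and the model name are assumptions based on the LangChain v1 API; verify them against your installed versions.

```python
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""  # the docstring becomes the tool description
    return len(word)


def build_and_run_agent():
    # Hypothetical sketch: imports are kept inside the function so the tool
    # above remains usable even without langchain installed.
    from langchain.agents import create_agent          # assumed v1 import path
    from langchain_anthropic import ChatAnthropic      # requires ANTHROPIC_API_KEY

    model = ChatAnthropic(model="claude-3-5-sonnet-latest")  # assumed model name
    agent = create_agent(model, tools=[get_word_length])
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "How long is 'observability'?"}]}
    )
    return result["messages"][-1].content
```

Plain functions with type hints and docstrings are enough for tool definition; LangChain infers the tool schema from the signature.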
2. Debugging with LangSmith
- Enable the LangSmith environment variables (`LANGSMITH_API_KEY`, `LANGSMITH_TRACING`).
- Run the agent as usual; traces are automatically captured.
- Visualize the execution path and captured state transitions in the LangSmith UI to identify where the agent went wrong.
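A minimal way to set those variables from Python before the agent runs. The API key is a placeholder, and `LANGSMITH_PROJECT` is an assumed optional variable; check the LangSmith docs for your version.

```python
import os

# Enable tracing; every subsequent chain/agent run is sent to LangSmith.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-langsmith-api-key>"  # placeholder

# Optional: group runs under a named project in the LangSmith UI.
os.environ["LANGSMITH_PROJECT"] = "langchain-agents-demo"     # assumed variable name

# Run the agent as usual; no code changes are needed for traces to appear.
```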
3. Adding Memory to an Agent
- Initialize a `Checkpointer` or `Memory` object.
- Attach the memory to the agent, and pass a thread ID in the invoke call to maintain state across turns.
- The agent will automatically append tool outputs and model responses to the conversation thread.
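A sketch of the memory workflow, assuming LangGraph's `InMemorySaver` checkpointer and a `checkpointer` parameter on `create_agent` (both assumptions to verify against your versions). The thread ID is what scopes saved state to one conversation.

```python
def thread_config(thread_id: str) -> dict:
    # The checkpointer keys saved state by this ID, so reusing the same
    # thread_id across invokes continues the same conversation.
    return {"configurable": {"thread_id": thread_id}}


def build_stateful_agent(model):
    # Hypothetical sketch; imports live inside the function so the helper
    # above works without langgraph installed.
    from langgraph.checkpoint.memory import InMemorySaver  # assumed import path
    from langchain.agents import create_agent

    agent = create_agent(model, tools=[], checkpointer=InMemorySaver())
    # First turn:
    #   agent.invoke({"messages": [...]}, config=thread_config("user-42"))
    # A later invoke with the same thread ID picks up the prior messages.
    return agent
```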
Non-Obvious Insights
- Abstraction Layer: LangChain's primary value is standardizing the interface across providers, preventing vendor lock-in.
- Simple vs. Complex: For many basic tasks, you don't need LangGraph; the high-level `agent_executor` is often enough in under 10 lines of code.
- Durable Execution: Using LangGraph allows for "checkpoints," meaning an agent can stop, wait for human input, and resume hours later without losing state.
Evidence
- "Standardizes how you interact with models so that you can seamlessly swap providers..." - LangChain Docs
- "LangChain agents are built on top of LangGraph in order to provide durable execution, streaming... persistence." - LangChain Docs
- "You do not need to know LangGraph for basic LangChain agent usage." - LangChain Docs
Scripts
- `scripts/langchain-agents_tool.py`: Python script for tool definition and agent invocation.
- `scripts/langchain-agents_tool.js`: Equivalent logic using LangChain.js.
Dependencies
langchain · langgraph · langsmith