Claude-code-plugins-plus-skills langchain-core-workflow-b
install

source · Clone the upstream repo

```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```

Claude Code · Install into ~/.claude/skills/

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/langchain-pack/skills/langchain-core-workflow-b" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-langchain-core-workflow-b && rm -rf "$T"
```
manifest: plugins/saas-packs/langchain-pack/skills/langchain-core-workflow-b/SKILL.md
LangChain Core Workflow B: Agents & Tools
Overview
Build autonomous agents that use tools, make decisions, and execute multi-step tasks. Covers tool definition with Zod schemas,
createToolCallingAgent, AgentExecutor, streaming agent output, and conversation memory.
Prerequisites
- Completed langchain-core-workflow-a (chains)
- Installed dependencies:

```bash
npm install langchain @langchain/core @langchain/openai zod
```
Step 1: Define Tools (TypeScript)
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Tool with Zod schema validation
const calculator = tool(
  async ({ expression }) => {
    try {
      // Use a safe math parser in production (e.g., mathjs)
      const result = Function(`"use strict"; return (${expression})`)();
      return String(result);
    } catch (e) {
      return `Error: invalid expression "${expression}"`;
    }
  },
  {
    name: "calculator",
    description: "Evaluate a mathematical expression. Input: a math expression string.",
    schema: z.object({
      expression: z.string().describe("Math expression like '2 + 2' or '100 * 0.15'"),
    }),
  }
);

const weatherLookup = tool(
  async ({ city }) => {
    // Replace with real API call
    const data: Record<string, string> = {
      "New York": "72F, sunny",
      "London": "58F, cloudy",
      "Tokyo": "80F, humid",
    };
    return data[city] ?? `No weather data for ${city}`;
  },
  {
    name: "weather",
    description: "Get current weather for a city.",
    schema: z.object({
      city: z.string().describe("City name"),
    }),
  }
);

const tools = [calculator, weatherLookup];
```
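The comment above recommends a real math parser in production; as a stopgap, a character allow-list can reject obviously unsafe input before it reaches `Function()`. A minimal sketch — `safeEval` and `SAFE_EXPR` are illustrative names, not LangChain APIs:

```typescript
// Allow only digits, whitespace, and basic arithmetic characters (illustrative guard).
const SAFE_EXPR = /^[0-9+\-*/().\s%]+$/;

function safeEval(expression: string): string {
  if (!SAFE_EXPR.test(expression)) {
    // Anything with letters (e.g. "process.exit(1)") is rejected up front
    return `Error: invalid expression "${expression}"`;
  }
  try {
    const result = Function(`"use strict"; return (${expression})`)();
    return String(result);
  } catch {
    return `Error: invalid expression "${expression}"`;
  }
}

console.log(safeEval("25 * 4"));          // "100"
console.log(safeEval("process.exit(1)")); // rejected by the allow-list
```

This is defense in depth, not a substitute for a proper parser: the allow-list blocks identifier access but still delegates evaluation to the JS engine.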
Step 2: Create Agent with AgentExecutor
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createToolCallingAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Use tools when needed."],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = createToolCallingAgent({ llm, tools, prompt });

const executor = new AgentExecutor({
  agent,
  tools,
  verbose: true,       // Log reasoning steps
  maxIterations: 10,   // Prevent infinite loops
  returnIntermediateSteps: true,
});
```
Step 3: Run the Agent
```typescript
// Simple invocation
const result = await executor.invoke({
  input: "What's 25 * 4, and what's the weather in Tokyo?",
  chat_history: [],
});

console.log(result.output);
// "25 * 4 = 100. The weather in Tokyo is 80F and humid."
// The agent decided to call both tools, then composed the answer.

console.log(result.intermediateSteps);
// Shows each tool call and its result
```
Step 4: Agent with Conversation Memory
```typescript
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const messageHistory = new ChatMessageHistory();

const agentWithHistory = new RunnableWithMessageHistory({
  runnable: executor,
  getMessageHistory: (_sessionId) => messageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

// First call
await agentWithHistory.invoke(
  { input: "My name is Alice" },
  { configurable: { sessionId: "user-1" } }
);

// Second call -- agent remembers
const res = await agentWithHistory.invoke(
  { input: "What's my name?" },
  { configurable: { sessionId: "user-1" } }
);
console.log(res.output); // "Your name is Alice!"
```
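Note that `getMessageHistory` above returns the same history object for every `sessionId`, so distinct sessions would share memory. The usual fix is a per-session map. A minimal self-contained sketch of that pattern — the plain-array store here is illustrative, standing in for `ChatMessageHistory`:

```typescript
// Minimal per-session message store (illustrative, not the LangChain class).
type StoredMessage = { role: "human" | "ai"; content: string };

const sessions = new Map<string, StoredMessage[]>();

function getSessionHistory(sessionId: string): StoredMessage[] {
  // Lazily create one history per session so sessions never share state
  let history = sessions.get(sessionId);
  if (!history) {
    history = [];
    sessions.set(sessionId, history);
  }
  return history;
}

getSessionHistory("user-1").push({ role: "human", content: "My name is Alice" });
getSessionHistory("user-2").push({ role: "human", content: "My name is Bob" });

console.log(getSessionHistory("user-1").length); // 1 -- user-2's message is separate
```

With LangChain, the same shape applies: keep a `Map<string, ChatMessageHistory>` and have `getMessageHistory` create-or-return the entry for the given `sessionId`.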
Step 5: Stream Agent Events
```typescript
const eventStream = executor.streamEvents(
  { input: "Calculate 15% tip on $85", chat_history: [] },
  { version: "v2" }
);

for await (const event of eventStream) {
  if (event.event === "on_chat_model_stream") {
    process.stdout.write(event.data.chunk.content ?? "");
  } else if (event.event === "on_tool_start") {
    console.log(`\n[Calling tool: ${event.name}]`);
  } else if (event.event === "on_tool_end") {
    console.log(`[Tool result: ${event.data.output}]`);
  }
}
```
Step 6: Bind Tools Directly (Without AgentExecutor)
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const modelWithTools = model.bindTools(tools);

const response = await modelWithTools.invoke([
  new HumanMessage("What's 42 * 17?"),
]);

// Check if model wants to call a tool
if (response.tool_calls && response.tool_calls.length > 0) {
  for (const tc of response.tool_calls) {
    console.log(`Tool: ${tc.name}, Args: ${JSON.stringify(tc.args)}`);
    // Execute tool manually
    const toolResult = await tools
      .find((t) => t.name === tc.name)!
      .invoke(tc.args);
    console.log(`Result: ${toolResult}`);
  }
}
```
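The find-by-name lookup above can also be written as a dispatch table, which avoids the non-null assertion and handles unknown tool names gracefully. A self-contained sketch — the `toolCalls` input is fabricated for illustration and the handlers are synchronous for brevity:

```typescript
// Stand-in for the tool_calls array a model response might contain (illustrative).
type ToolCall = { name: string; args: Record<string, unknown> };

// Map tool names to handlers instead of scanning the tools array each time.
const toolHandlers: Record<string, (args: any) => string> = {
  calculator: ({ expression }) =>
    String(Function(`"use strict"; return (${expression})`)()),
  weather: ({ city }) => `No weather data for ${city}`,
};

function dispatch(toolCalls: ToolCall[]): string[] {
  return toolCalls.map((tc) => {
    const handler = toolHandlers[tc.name];
    // Unknown tool names yield an error string instead of throwing
    return handler ? handler(tc.args) : `Error: unknown tool "${tc.name}"`;
  });
}

console.log(dispatch([{ name: "calculator", args: { expression: "42 * 17" } }]));
// ["714"]
```

In a full manual loop you would then wrap each result in a `ToolMessage` and send the conversation back to the model for its final answer.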
Python Equivalent
```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))

tools = [calculator]
llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({"input": "What is 25 * 4?", "chat_history": []})
```
Error Handling
| Error | Cause | Fix |
|---|---|---|
| Agent stuck in loop | Ambiguous task or unclear tool descriptions | Increase `maxIterations` or improve system prompt |
| Tool name mismatch | Different `tools` arrays given to agent and executor | Verify the same `tools` array is passed to both `createToolCallingAgent` and `AgentExecutor` |
| Prompt missing placeholder | Prompt lacks the `agent_scratchpad` slot | Add `new MessagesPlaceholder("agent_scratchpad")` to the prompt |
| Tool execution error | Tool throws exception | Wrap tool body in try/catch, return error string |
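The "stuck in loop" row corresponds to the `maxIterations` option set in Step 2. Conceptually, the executor bounds its reason/act loop with an iteration cap; a simplified sketch of that behavior (this is not the real `AgentExecutor` internals, and the stop message is illustrative):

```typescript
// Simplified agent loop with an iteration cap (illustrative, not AgentExecutor source).
type Step = { done: boolean; output: string };

function runWithCap(step: (i: number) => Step, maxIterations: number): string {
  for (let i = 0; i < maxIterations; i++) {
    const s = step(i);
    if (s.done) return s.output; // agent produced a final answer
  }
  // Stop with a message instead of looping forever (AgentExecutor behaves similarly)
  return "Agent stopped due to max iterations.";
}

// A step that never finishes hits the cap:
console.log(runWithCap(() => ({ done: false, output: "" }), 3));
// A step that finishes on the second iteration returns normally:
console.log(runWithCap((i) => ({ done: i === 1, output: "done" }), 10)); // "done"
```

Raising the cap only buys more attempts; if the agent loops because tool descriptions are ambiguous, fixing the descriptions or system prompt is the real remedy.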
Resources
Next Steps
Proceed to langchain-common-errors for debugging guidance.