Claude-code-plugins-plus langchain-hello-world
install
source · Clone the upstream repo
```bash
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
```
Claude Code · Install into ~/.claude/skills/
```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/plugins/saas-packs/langchain-pack/skills/langchain-hello-world" \
       ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-langchain-hello-world \
  && rm -rf "$T"
```
manifest: plugins/saas-packs/langchain-pack/skills/langchain-hello-world/SKILL.md
LangChain Hello World
Overview
Minimal working examples demonstrating LCEL (LangChain Expression Language) -- the
.pipe() chain syntax that is the foundation of all LangChain applications.
Prerequisites
- Completed setup: `langchain-install-auth`
- Valid LLM provider API key configured
Example 1: Simplest Chain (TypeScript)
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Three components: prompt -> model -> parser
const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();

// LCEL: chain them with .pipe()
const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({ topic: "TypeScript" });
console.log(result);
// "Why do TypeScript developers wear glasses? Because they can't C#!"
```
Example 2: Chat with System Prompt
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a {persona}. Keep answers under 50 words."],
  ["human", "{question}"],
]);

const chain = prompt
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
  .pipe(new StringOutputParser());

const answer = await chain.invoke({
  persona: "senior DevOps engineer",
  question: "What is the most important Kubernetes concept?",
});
console.log(answer);
```
Example 3: Structured Output with Zod
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const ReviewSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  confidence: z.number().min(0).max(1),
  summary: z.string().describe("One-sentence summary"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const structuredModel = model.withStructuredOutput(ReviewSchema);

const prompt = ChatPromptTemplate.fromTemplate(
  "Analyze the sentiment of this review:\n\n{review}"
);

const chain = prompt.pipe(structuredModel);

const result = await chain.invoke({
  review: "LangChain makes building AI apps surprisingly straightforward.",
});
console.log(result);
// { sentiment: "positive", confidence: 0.92, summary: "..." }
```
Example 4: Streaming
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = ChatPromptTemplate.fromTemplate("Write a haiku about {topic}")
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
  .pipe(new StringOutputParser());

// Stream tokens as they arrive
const stream = await chain.stream({ topic: "coding" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
Example 5: Python Equivalent
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# LCEL uses the | operator in Python
chain = prompt | model | parser

result = chain.invoke({"topic": "LangChain"})
print(result)
```
How LCEL Works
Every component in an LCEL chain implements the `Runnable` interface:
| Method | Purpose |
|---|---|
| `.invoke()` | Single input, single output |
| `.batch()` | Process an array of inputs |
| `.stream()` | Yield output chunks |
| `.pipe()` | Chain to the next runnable |
The `.pipe()` method (or `|` in Python) creates a `RunnableSequence` where each step's output feeds the next step's input. Every LangChain component -- prompts, models, parsers, retrievers -- is a `Runnable`.
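The composition pattern can be illustrated without LangChain at all. The sketch below is a simplified stand-in, not LangChain's actual implementation; the names `MiniRunnable` and its synchronous signatures are illustrative only:

```typescript
// A minimal stand-in for the Runnable pattern: each step exposes
// invoke(), batch(), and pipe(), and pipe() returns a new composed step.
class MiniRunnable<In, Out> {
  constructor(private fn: (input: In) => Out) {}

  // Single input, single output
  invoke(input: In): Out {
    return this.fn(input);
  }

  // Process an array of inputs
  batch(inputs: In[]): Out[] {
    return inputs.map((i) => this.invoke(i));
  }

  // Chain to the next runnable: this step's output feeds the next step's input
  pipe<Next>(next: MiniRunnable<Out, Next>): MiniRunnable<In, Next> {
    return new MiniRunnable((input) => next.invoke(this.invoke(input)));
  }
}

// prompt -> model -> parser, as pure functions
const prompt = new MiniRunnable((vars: { topic: string }) => `Tell me about ${vars.topic}`);
const model = new MiniRunnable((text: string) => `ECHO: ${text}`);
const parser = new MiniRunnable((msg: string) => msg.toLowerCase());

const chain = prompt.pipe(model).pipe(parser);
console.log(chain.invoke({ topic: "LCEL" })); // "echo: tell me about lcel"
```

The real `RunnableSequence` adds async execution, streaming, and tracing on top of this shape, but the core idea is the same: `pipe()` returns another runnable, so chains compose indefinitely.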
Error Handling
| Error | Cause | Fix |
|---|---|---|
| `Missing value for input variable` | Template variable not in invoke args | Match invoke keys to template variables |
| `Promise { <pending> }` logged | Chain not awaited | Add `await` before `.invoke()` |
| `429 Too Many Requests` | Too many API calls | Add a delay between calls, or use a fake model for testing |
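For the rate-limit row, one mitigation is simply spacing out sequential calls. The helper below is a hedged sketch: `invokeWithDelay` and `callModel` are illustrative names, with `callModel` standing in for a real `chain.invoke` call:

```typescript
// Pause between calls so bursts of requests don't trigger 429s.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Invoke a model function once per input, sequentially, with a delay between calls.
// callModel is a placeholder for whatever actually hits the API.
async function invokeWithDelay<T>(
  inputs: T[],
  callModel: (input: T) => Promise<string>,
  delayMs = 500
): Promise<string[]> {
  const results: string[] = [];
  for (const input of inputs) {
    results.push(await callModel(input)); // sequential, not parallel
    await sleep(delayMs); // spacing between calls avoids rate limits
  }
  return results;
}
```

In real code you would pass `(input) => chain.invoke(input)` as `callModel`; for tests, avoiding live API calls entirely (e.g. with a fake model from LangChain's testing utilities) is usually the better fix.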
Next Steps
Proceed to `langchain-core-workflow-a` for advanced chain composition.