Claude-skill-registry langchain-hello-world

install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/langchain-hello-world" ~/.claude/skills/majiayu000-claude-skill-registry-langchain-hello-world-f01f30 && rm -rf "$T"
manifest: skills/data/langchain-hello-world/SKILL.md
source content

LangChain Hello World

Overview

Minimal working example demonstrating core LangChain functionality with chains and prompts.

Prerequisites

  • Completed langchain-install-auth setup
  • Valid LLM provider API credentials configured
  • Python 3.9+ or Node.js 18+ environment ready
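Before running the examples, it can help to confirm the credentials are actually visible to the process. A minimal check, assuming OpenAI as the provider (so the OPENAI_API_KEY variable; adjust the name for other providers, e.g. ANTHROPIC_API_KEY):

```python
import os

def check_credentials(var_name: str = "OPENAI_API_KEY") -> bool:
    """Return True if the provider API key environment variable is set and non-empty."""
    return bool(os.environ.get(var_name))

if not check_credentials():
    print("OPENAI_API_KEY is not set -- export it before running the examples")
```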

Instructions

Step 1: Create Entry File

Create a new file named hello_langchain.py for your hello world example.

Step 2: Import and Initialize

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")

Step 3: Create Your First Chain

from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm | StrOutputParser()

response = chain.invoke({"input": "Hello, LangChain!"})
print(response)
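The | operator in chain = prompt | llm | StrOutputParser() is LangChain's expression-language pipe: each component's output becomes the next component's input. The idea can be sketched in plain Python with a toy Step class (hypothetical, not the LangChain API) to show why the composition reads left to right:

```python
class Step:
    """Toy runnable: wraps a function and supports | composition."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed its output into other.
        return Step(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.fn(x)

# Three toy stages mirroring prompt | llm | parser
prompt = Step(lambda d: f"System: be helpful\nUser: {d['input']}")
llm = Step(lambda text: {"content": f"Echo: {text}"})
parser = Step(lambda msg: msg["content"])

chain = prompt | llm | parser
print(chain.invoke({"input": "Hello, LangChain!"}))
```

The real LangChain runnables add batching, streaming, and async on top of this same composition pattern.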

Output

  • Working Python file with LangChain chain
  • Successful LLM response confirming connection
  • Console output showing:
Hello! I'm your LangChain-powered assistant. How can I help you today?

Error Handling

Error            Cause                 Solution
Import Error     SDK not installed     Run pip install langchain langchain-openai
Auth Error       Invalid credentials   Check that the API key environment variable is set
Timeout          Network issues        Increase the timeout or check connectivity
Rate Limit       Too many requests     Wait and retry with exponential backoff
Model Not Found  Invalid model name    Check available models in the provider docs
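For the rate-limit case, exponential backoff can be sketched in plain Python. The retry_with_backoff helper below is hypothetical (not part of LangChain; LangChain runnables also expose built-in retry support via .with_retry()):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Call fn, retrying on exception with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            # Delay doubles each attempt (1s, 2s, 4s, ...) with up to 100% jitter
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)

# Usage sketch (hypothetical): wrap the chain call from Step 3
# result = retry_with_backoff(lambda: chain.invoke({"input": "Hello!"}))
```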

Examples

Simple Chain (Python)

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"topic": "programming"})
print(result)

With Memory (Python)

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}")
])

chain = prompt | llm

history = []
response = chain.invoke({"input": "Hi! My name is Sam.", "history": history})

# Persist the turn so the next call sees it as context
history.append(HumanMessage(content="Hi! My name is Sam."))
history.append(AIMessage(content=response.content))

response = chain.invoke({"input": "What is my name?", "history": history})
print(response.content)
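A history list grows with every turn, which eventually inflates prompt size and cost. A simple way to bound it is to keep only the most recent messages; the trim_history helper below is a hypothetical sketch (LangChain also ships a token-aware trim_messages utility in langchain_core.messages):

```python
def trim_history(history, max_messages=10):
    """Keep only the most recent messages to bound prompt size.

    A naive window: drops the oldest messages once the list exceeds
    max_messages. Token-aware trimming is better for production use.
    """
    return history[-max_messages:]

# Usage sketch: trim before each invoke
# response = chain.invoke({"input": user_text, "history": trim_history(history)})
```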

TypeScript Example

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ modelName: "gpt-4o-mini" });
const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: "LangChain" });
console.log(result);


Next Steps

Proceed to langchain-local-dev-loop for development workflow setup.