Awesome-omni-skill developing-langgraph-js-agents

Build, audit, review, and update LangGraph.js agents. Use PROACTIVELY when working with LangGraph, @langchain/langgraph, agent graphs, state machines, or AI workflows in TypeScript/JavaScript. Covers creating new agents, adding features, debugging, testing, and optimizing. (user)

install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-agents/developing-langgraph-js-agents" ~/.claude/skills/diegosouzapw-awesome-omni-skill-developing-langgraph-js-agents && rm -rf "$T"
manifest: skills/ai-agents/developing-langgraph-js-agents/SKILL.md
source content

<essential_principles>

How LangGraph.js Agents Work

LangGraph decomposes agents into discrete nodes (functions) connected through shared state. Execution flows through a graph where nodes do work and edges determine what runs next.

1. State-First Design

State is the shared memory accessible to all nodes. Design state before nodes:

import { Annotation, MessagesAnnotation } from "@langchain/langgraph";

const AgentState = Annotation.Root({
  ...MessagesAnnotation.spec,
  // Add custom fields with reducers
  context: Annotation<string[]>({
    reducer: (x, y) => x.concat(y),
    default: () => [],
  }),
});

Critical rules:

  • Store raw data, not formatted text (format in nodes)
  • Use reducers for fields that accumulate (messages, lists)
  • Keep state minimal - only persist what's needed across steps
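The reducer rule above can be sketched in plain TypeScript (illustrative only; this is not LangGraph's internal implementation): a field with a reducer merges each node's partial update into the existing value, while a field without one is simply overwritten by the latest update.

```typescript
// Sketch of how channel updates combine (hypothetical State shape).
type State = { context: string[]; summary: string };

// context accumulates via a concat reducer; summary is last-value-wins.
function applyUpdate(state: State, update: Partial<State>): State {
  return {
    context: update.context ? state.context.concat(update.context) : state.context,
    summary: update.summary ?? state.summary,
  };
}

let state: State = { context: [], summary: "" };
state = applyUpdate(state, { context: ["doc A"] });
state = applyUpdate(state, { context: ["doc B"], summary: "two docs" });
// state.context is now ["doc A", "doc B"]; summary holds only "two docs"
```

This is why accumulating fields (messages, retrieved context) need an explicit reducer: without one, each node's return value would replace the field instead of appending to it.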

2. Nodes Do Work, Edges Route

import { END } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

// Node: receives state, returns a partial state update
async function callModel(state: typeof AgentState.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

// Conditional edge: inspects state and returns the name of the next node
function shouldContinue(state: typeof AgentState.State) {
  const lastMessage = state.messages.at(-1) as AIMessage | undefined;
  if (lastMessage?.tool_calls?.length) return "tools";
  return END;
}

3. Always Compile Before Use

import { StateGraph, START } from "@langchain/langgraph";

const graph = new StateGraph(AgentState)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent")
  .compile(); // Required! An uncompiled StateGraph cannot be invoked.

4. Checkpointers Enable Persistence

For conversation memory, human-in-the-loop, or fault tolerance:

import { MemorySaver } from "@langchain/langgraph";

const graph = workflow.compile({
  checkpointer: new MemorySaver()
});

// Invoke with thread_id
await graph.invoke(input, {
  configurable: { thread_id: "user-123" }
});
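Conceptually, a checkpointer keys saved state by thread_id, so a second invoke with the same thread_id resumes the prior conversation. A minimal sketch (hypothetical TinyCheckpointer class; the real MemorySaver stores per-step checkpoints, not just final state):

```typescript
// Illustrative in-memory checkpointer keyed by thread_id.
type Checkpoint = { messages: string[] };

class TinyCheckpointer {
  private store = new Map<string, Checkpoint>();

  load(threadId: string): Checkpoint {
    return this.store.get(threadId) ?? { messages: [] };
  }

  save(threadId: string, checkpoint: Checkpoint): void {
    this.store.set(threadId, checkpoint);
  }
}

const saver = new TinyCheckpointer();

// First "invoke": starts a fresh thread
const first = saver.load("user-123");
first.messages.push("Hi, I'm Ada.");
saver.save("user-123", first);

// Second "invoke" with the same thread_id resumes the earlier messages
const second = saver.load("user-123");
second.messages.push("What's my name?");
saver.save("user-123", second);
// A different thread_id would start from an empty checkpoint
```

Swapping MemorySaver for a persistent backend changes where checkpoints live, not this thread-scoped resume behavior.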

</essential_principles>

<intake>

What would you like to do?

  1. Build a new agent from scratch
  2. Add a feature to an existing agent
  3. Audit/review an agent's architecture
  4. Debug an agent issue
  5. Write tests for an agent
  6. Optimize agent performance
  7. Something else

Wait for the response, then read the matching workflow from workflows/ and follow it.

</intake>

<routing>

| Response | Workflow |
| --- | --- |
| 1, "new", "create", "build", "start", "scaffold" | workflows/build-new-agent.md |
| 2, "add", "feature", "implement", "extend" | workflows/add-feature.md |
| 3, "audit", "review", "check", "assess", "evaluate" | workflows/audit-agent.md |
| 4, "debug", "fix", "broken", "error", "bug", "issue" | workflows/debug-agent.md |
| 5, "test", "tests", "testing", "coverage" | workflows/write-tests.md |
| 6, "optimize", "performance", "slow", "fast", "improve" | workflows/optimize-agent.md |
| 7, other | Clarify intent, then select the appropriate workflow |

</routing>

<verification_loop>

After Every Change

# 1. TypeScript compiles?
npx tsc --noEmit

# 2. Tests pass?
npm test

# 3. Agent runs?
npx ts-node src/agent.ts

Report:

  • "Build: ✓" or "Build: ✗ [error]"
  • "Tests: X pass, Y fail"
  • "Agent executed successfully" or "Runtime error: [details]"

</verification_loop>

<reference_index>

Domain Knowledge

All in references/:

LangChain Fundamentals:

  • langchain-fundamentals.md - Messages, chat models, structured output, retrieval, guardrails

Architecture:

  • graph-api.md - StateGraph, nodes, edges, compilation
  • functional-api.md - Tasks, entrypoints, when to use
  • state-management.md - Annotations, reducers, state design

Features:

  • tools.md - Creating and binding tools
  • persistence.md - Checkpointers, memory, threads
  • streaming.md - Real-time output modes
  • interrupts.md - Human-in-the-loop patterns
  • subgraphs.md - Composing multi-agent systems
  • agent-chat-ui.md - Chat UI setup and integration
  • agent-inbox.md - Inbox UI for interrupt management, ambient agents
  • deployment.md - Local server, LangSmith Cloud, Studio, observability, time-travel

Patterns:

  • common-patterns.md - ReAct, RAG, routing patterns
  • multi-agent.md - Supervisor, hierarchical, network architectures
  • agent-skills.md - Modular capabilities and skill loading
  • anti-patterns.md - Common mistakes to avoid

</reference_index>

<workflows_index>

Workflows

All in workflows/:

| File | Purpose |
| --- | --- |
| build-new-agent.md | Create a new LangGraph.js agent from scratch |
| add-feature.md | Add capabilities to an existing agent |
| audit-agent.md | Review architecture and identify issues |
| debug-agent.md | Find and fix agent bugs |
| write-tests.md | Test nodes, graphs, and integrations |
| optimize-agent.md | Improve performance and reduce latency |

</workflows_index>

<templates_index>

Templates

All in templates/:

| File | Purpose |
| --- | --- |
| basic-agent.ts | Minimal ReAct agent scaffold |
| rag-agent.ts | Retrieval-augmented agent |
| multi-agent.ts | Multi-agent system with subgraphs |

</templates_index>

<external_docs>

Official Documentation

For topics not fully covered here, consult:

</external_docs>