How to Build a Stateful Agent with LangGraph
State, nodes, conditional edges, and SQLite checkpointing — build a long-running LangGraph agent you can pause, resume, and debug.
LangGraph is a library for building stateful, long-running agents as explicit graphs — nodes are steps, edges are transitions, and there’s a state object that carries context through the whole run. It’s the framework you reach for when a task has branches, loops, or needs to survive a restart.
This guide is a from-scratch walkthrough: install, define a graph, wire in tools, persist state, and run it. If you’ve already tried a sequential framework like CrewAI and hit the wall where “what if X happens” becomes ugly, LangGraph is the next step up.
Mental model
Forget “multi-agent conversation.” Think state machine. You define:
- A State — a typed dict that holds everything the agent has learned so far.
- Nodes — Python functions that read the state and return updates to it.
- Edges — rules for which node runs next, possibly conditional on the state.
Because the state is explicit, LangGraph can checkpoint it, resume after a crash, retry a node, and branch — things a sequential crew can’t express cleanly.
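Before any LangGraph code, the pattern can be sketched in plain Python. The node functions, routing table, and state fields below are invented for illustration; LangGraph adds reducers, checkpointing, and concurrency on top of exactly this shape.

```python
# Plain-Python sketch of the state-machine mental model: nodes return
# partial updates, a routing table picks the next node, and a loop merges
# each update into the state. No LangGraph involved.
def research(state):
    return {"notes": state["notes"] + [f"fact about {state['question']}"]}

def answer(state):
    return {"answer": f"Answer based on {len(state['notes'])} note(s)"}

NODES = {"research": research, "answer": answer}
EDGES = {"research": "answer", "answer": None}  # answer is the last node

def run(state):
    node = "research"
    while node is not None:
        state = {**state, **NODES[node](state)}  # merge the partial update
        node = EDGES[node]
    return state

result = run({"question": "what is MCP?", "notes": [], "answer": ""})
print(result["answer"])  # Answer based on 1 note(s)
```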
Step 1 — Install
python -m venv .venv && source .venv/bin/activate
pip install langgraph langchain-openai

LangGraph is model-agnostic. We’ll use OpenAI here; swap to Anthropic by replacing langchain-openai with langchain-anthropic. The SQLite checkpointer used in step 7 ships separately, so also install langgraph-checkpoint-sqlite if you plan to persist state.
Step 2 — Define the state
from typing import TypedDict, Annotated, List
from operator import add
class State(TypedDict):
    question: str
    notes: Annotated[List[str], add]  # accumulates across nodes
    answer: str

The Annotated[..., add] means “when a node returns notes, append it to the list instead of replacing.” This is LangGraph’s reducer pattern — you say how updates merge, per field.
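To see what the reducer declaration buys you, here is a rough sketch of the merge semantics — not LangGraph's actual internals, just the idea — repeating the State definition so the snippet stands alone:

```python
from operator import add
from typing import Annotated, List, TypedDict, get_type_hints

class State(TypedDict):
    question: str
    notes: Annotated[List[str], add]  # reducer: concatenate lists
    answer: str                       # no reducer: last write wins

def merge(state: dict, update: dict) -> dict:
    """Apply a node's partial update, honoring per-field reducers."""
    hints = get_type_hints(State, include_extras=True)
    out = dict(state)
    for key, value in update.items():
        metadata = getattr(hints[key], "__metadata__", ())
        if metadata:                      # a reducer was declared
            out[key] = metadata[0](out[key], value)
        else:                             # default: replace
            out[key] = value
    return out

s = {"question": "q", "notes": ["a"], "answer": ""}
s = merge(s, {"notes": ["b"], "answer": "done"})
print(s["notes"])  # ['a', 'b'] — accumulated, while answer was replaced
```

With no reducer declared, a field like answer is simply overwritten; with add, successive notes updates concatenate across nodes.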
Step 3 — Define two nodes
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-5-mini")
def research(state: State):
    prompt = f"List 3 facts relevant to: {state['question']}"
    out = llm.invoke(prompt).content
    return {"notes": [out]}

def answer(state: State):
    joined = "\n".join(state["notes"])
    prompt = f"Using these notes:\n{joined}\nAnswer: {state['question']}"
    out = llm.invoke(prompt).content
    return {"answer": out}

Step 4 — Wire the graph
from langgraph.graph import StateGraph, START, END
builder = StateGraph(State)
builder.add_node("research", research)
builder.add_node("answer", answer)
builder.add_edge(START, "research")
builder.add_edge("research", "answer")
builder.add_edge("answer", END)
graph = builder.compile()

Step 5 — Run it
result = graph.invoke({"question": "Why is MCP becoming the agent tool standard?"})
print(result["answer"])

That’s a working LangGraph agent. Linear, but it’s the shape everything else builds on.
Step 6 — Add a conditional edge
The power of LangGraph is conditional routing. Say you only want to call the answer node if you got at least one note.
def route(state: State):
    return "answer" if state["notes"] else "research"  # loop until notes exist

builder.add_conditional_edges("research", route, {"research": "research", "answer": "answer"})

This is a loop: use it in place of the static research-to-answer edge from step 4, and the graph will keep running research until the condition flips. Put a hard iteration cap (see step 8) so it can’t run forever.
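Why the cap matters can be seen in a stub version of that loop, in plain Python with an invented node and state fields: if research keeps coming back empty, only the iteration limit stops the run.

```python
# Stub of the loop semantics (no LangGraph): keep running `research`
# until notes exist, but stop hard at MAX_ITERS, mirroring what
# recursion_limit does for a real graph.
MAX_ITERS = 25

def research(state):
    # a flaky step: pretend the first two attempts return nothing useful
    state["attempts"] += 1
    if state["attempts"] >= 3:
        state["notes"].append("a fact")
    return state

def run(state):
    for _ in range(MAX_ITERS):
        state = research(state)
        if state["notes"]:          # route(state) would say "answer"
            return state
    raise RuntimeError("hit iteration cap without producing notes")

final = run({"notes": [], "attempts": 0})
print(final["attempts"])  # 3
```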
Step 7 — Persist state (checkpointing)
The killer feature. Add a checkpointer and the graph can pause, restart, and be inspected at any node.
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

saver = SqliteSaver(sqlite3.connect("agent.sqlite", check_same_thread=False))
graph = builder.compile(checkpointer=saver)
config = {"configurable": {"thread_id": "task-42"}}
graph.invoke({"question": "..."}, config=config)

The graph’s state after every node is saved to SQLite (or Postgres, or Redis). If the process crashes, you can resume the same thread_id and pick up where it stopped. This is why LangGraph is the default choice for long-running workloads. (Note: SqliteSaver lives in the separate langgraph-checkpoint-sqlite package, and recent versions expose from_conn_string as a context manager; constructing the saver from a sqlite3 connection, as above, keeps it open for the whole run.)
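What a checkpointer is doing conceptually can be mimicked with just the stdlib. The schema and helper names below are invented for illustration; LangGraph's actual tables and serialization differ.

```python
# Sketch of checkpointing: save the state after every node, keyed by
# thread_id and step, and resume from the latest row for that thread.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE checkpoints
                (thread_id TEXT, step INTEGER, state TEXT,
                 PRIMARY KEY (thread_id, step))""")

def save(thread_id, step, state):
    conn.execute("INSERT OR REPLACE INTO checkpoints VALUES (?, ?, ?)",
                 (thread_id, step, json.dumps(state)))
    conn.commit()

def resume(thread_id):
    """Return (last_step, state), or (0, None) for an unknown thread."""
    row = conn.execute(
        "SELECT step, state FROM checkpoints WHERE thread_id = ? "
        "ORDER BY step DESC LIMIT 1", (thread_id,)).fetchone()
    return (row[0], json.loads(row[1])) if row else (0, None)

save("task-42", 1, {"notes": ["fact"]})
save("task-42", 2, {"notes": ["fact"], "answer": "done"})
step, state = resume("task-42")
print(step, state["answer"])  # 2 done
```

A crash between step 1 and step 2 loses nothing: the next process calls resume("task-42") and continues from the last committed row, which is exactly the contract the thread_id config gives you.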
Step 8 — Safety rails
- Cap total iterations: graph.invoke(..., config={"recursion_limit": 25}).
- Log every node with a decorator; you’ll debug graphs three times more often than you expect.
- Budget tokens up front. Run a single end-to-end pass through our token counter before looping it.
- Use LangSmith or OpenTelemetry for traces. Looking at the graph in a viewer is a different experience from reading logs.
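The "log every node" rail from the list above can be a ten-line decorator. This is a hypothetical helper, not a LangGraph API; it works because a node is just a function from state to a partial update.

```python
# Wrap each node so entry, returned update keys, and duration land in logs.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("graph")

def logged_node(fn):
    @functools.wraps(fn)
    def wrapper(state):
        start = time.perf_counter()
        update = fn(state)
        log.info("node=%s keys=%s took=%.3fs",
                 fn.__name__, sorted(update), time.perf_counter() - start)
        return update
    return wrapper

@logged_node
def research(state):
    return {"notes": ["a fact"]}

# builder.add_node("research", research) works unchanged: the wrapper
# preserves the node's name and return value.
print(research({"question": "q"}))  # {'notes': ['a fact']}
```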
When NOT to use LangGraph
- Your agent is one LLM call. Just call the LLM.
- Your agent is a straight pipeline of 2–4 agents. Use CrewAI — simpler mental model.
- You need autocomplete-in-editor help. LangGraph is an ops framework, not a coding assistant. Use Cursor or Claude Code.