How to Use LangChain
Installing langchain, LCEL runnables, retrievers, memory, output parsers, and when to pick LangGraph instead.
LangChain is the sprawling but battle-tested framework for composing LLM calls, retrievers, tools, and agents in Python or JavaScript.
LangChain gives you a vocabulary — prompts, chat models, output parsers, retrievers, vector stores, agents — and the glue to wire them together. Its modern composition layer, LCEL (LangChain Expression Language), uses a pipe operator to chain Runnables: prompt | model | parser reads like the data flow itself and unlocks streaming, batching, and async for free.
What it is
LangChain is MIT-licensed and maintained by LangChain Inc. (Harrison Chase and team). In 2024 it split into langchain-core (Runnables and interfaces), langchain (high-level chains), partner packages like langchain-openai and langchain-anthropic, and langchain-community for third-party integrations. JavaScript lives in a separate monorepo with equivalent modules.
Install
pip install langchain langchain-openai langchain-community

# JavaScript
npm install langchain @langchain/openai
First run
A three-step LCEL chain that answers a question and parses the reply down to a plain string:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
prompt = ChatPromptTemplate.from_template("Answer briefly: {q}")
model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model | StrOutputParser()
print(chain.invoke({"q": "Why is the sky blue?"}))
Everyday workflows
- Build RAG with a Chroma or pgvector retriever piped into a prompt; add a reranker for quality.
- Expose the chain over HTTP with LangServe or Flask; trace every run in LangSmith.
- For agents, prefer LangGraph (the sibling project) over the legacy AgentExecutor — it is more controllable.
Gotchas and tips
LangChain’s surface area is enormous and documentation lags behind code. Pin versions, read the source when docs conflict, and avoid deeply nested chains you cannot trace. A 5-line chain you understand beats a 50-line chain you copied from a tutorial.
Production caveats matter: many integrations in langchain-community are volunteer-maintained, meaning patchy reliability. Wrap external tools with retries, timeouts, and circuit breakers; never trust a retriever to return within SLA without measuring it first.
Who it’s for
Teams that want the broadest integration ecosystem and are willing to pay the complexity tax. Tip: LangSmith tracing is the single biggest quality-of-life upgrade — turn it on before you write your second chain.