Free Tool Arena


How to Use Pydantic AI

Installing pydantic-ai, the Agent class, result_type, structured output, dependencies, tools, and streaming gotchas.

Updated April 2026 · 6 min read

Pydantic AI is a Python agent framework from the team behind Pydantic. It treats LLM output like any other untrusted input — validate it against a schema, retry on failure, and let the type checker catch your mistakes. If you already use Pydantic for FastAPI request bodies, Pydantic AI feels like the obvious extension to agents and tool calls.


What Pydantic AI actually is

A thin, typed wrapper around model APIs (OpenAI, Anthropic, Gemini, Groq, Ollama, Bedrock) that forces every response through a Pydantic model. You define an Agent with a result_type, bind tools as decorated Python functions, and the framework handles JSON-schema generation, validation, and retry loops. The result is an object you access by attribute, with full IDE autocomplete, instead of digging through response["choices"][0][...].

Compared to LangChain it is smaller, more opinionated, and actually typed. Compared to raw API calls it gives you structured output, automatic retries on schema mismatch, and a standard place to hang dependencies (database sessions, API clients) via its deps_type system.

Installing

pip install pydantic-ai

# or the slim package with a specific model provider extra
pip install "pydantic-ai-slim[openai]"
pip install "pydantic-ai-slim[anthropic]"

Set the provider API key in your environment (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). Pydantic AI requires Python 3.9 or newer.
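The providers read their keys from the environment at run time, so for local development exporting them in your shell is enough (the values below are placeholders):

```shell
export OPENAI_API_KEY="sk-..."        # used by openai:* models
export ANTHROPIC_API_KEY="sk-ant-..." # used by anthropic:* models
```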

First working example

from pydantic import BaseModel
from pydantic_ai import Agent

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str
    due_date: str

agent = Agent(
    "openai:gpt-4o-mini",
    result_type=Invoice,
    system_prompt="Extract invoice fields from the user message.",
)

result = agent.run_sync(
    "Acme Corp billed us 1,249.00 EUR, due 2026-05-15."
)
print(result.data)
# Invoice(vendor='Acme Corp', total=1249.0, currency='EUR', due_date='2026-05-15')

No JSON parsing, no try/except around json.loads, no “the model returned prose again.” If the model emits invalid JSON or the wrong shape, Pydantic AI feeds the validation error back to the model and tries again, up to retries=1 by default (configurable on the Agent).
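The feedback in that retry loop is an ordinary Pydantic validation error. You can see exactly what the model would receive by validating a malformed payload against the Invoice model from the example above, with no model call involved:

```python
from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str
    due_date: str

# A response with a comma-formatted number and a missing field fails validation;
# the error text below is the kind of feedback sent back on the retry turn.
bad = {"vendor": "Acme Corp", "total": "1,249.00", "due_date": "2026-05-15"}
try:
    Invoice.model_validate(bad)
except ValidationError as e:
    feedback = str(e)

print(feedback)  # names both the unparseable `total` and the missing `currency`
```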

A real workflow — tools and dependencies

Agents become useful when they can call functions. Register tools with @agent.tool; Pydantic AI derives the JSON schema from the signature.

from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    db: "Database"

support_agent = Agent(
    "anthropic:claude-sonnet-4",
    deps_type=Deps,
    system_prompt="You are a support agent. Use tools to look up customers.",
)

@support_agent.tool
async def get_customer(ctx: RunContext[Deps], email: str) -> dict:
    """Fetch a customer row by email."""
    return await ctx.deps.db.fetch_one(
        "SELECT id, plan, mrr FROM customers WHERE email = $1", email
    )

async def handle_ticket(db, question: str):
    result = await support_agent.run(question, deps=Deps(db=db))
    return result.data

The RunContext gives tools typed access to the shared deps. No global state, no monkey-patching, no LangChain callback handlers — just a dataclass you pass in.
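Because deps are plain Python objects, they are also easy to stub in tests. A minimal sketch of that idea, with no Pydantic AI involved — FakeDb, the row data, and the email are all illustrative names, not part of the framework:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeDb:
    """Stand-in for the real Database dependency in tests."""
    rows: dict

    async def fetch_one(self, query: str, email: str) -> dict:
        # Ignores the SQL and serves canned rows keyed by email.
        return self.rows.get(email)

@dataclass
class Deps:
    db: FakeDb

async def main() -> dict:
    deps = Deps(db=FakeDb(rows={"ada@example.com": {"id": 1, "plan": "pro", "mrr": 49}}))
    # In a real test you would pass these deps to support_agent.run(...);
    # here we just exercise the stub the way a tool would.
    return await deps.db.fetch_one("SELECT id, plan, mrr FROM customers WHERE email = $1",
                                   "ada@example.com")

row = asyncio.run(main())
print(row)
```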

Gotchas

Streaming and structured output don’t mix cleanly. If you want token streaming, drop the result_type and stream plain strings, or use run_stream with its partial-validation API and accept that early chunks may not validate.

Retries hide costs. A validation failure doubles your token bill for that turn. Watch the usage field on results when you’re tuning prompts, especially with expensive models.
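To make the doubling concrete, a back-of-envelope sketch — the per-token price here is hypothetical, so check your provider's current rates:

```python
# Hypothetical price: $2.50 per million input tokens.
PRICE_PER_MTOK = 2.50
prompt_tokens = 2_000

cost_one_try = prompt_tokens / 1_000_000 * PRICE_PER_MTOK
cost_with_retry = 2 * cost_one_try  # the failed attempt is billed too

print(f"single try: ${cost_one_try:.4f}, with one retry: ${cost_with_retry:.4f}")
```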

Tool docstrings are the prompt. The function docstring and parameter types become the JSON schema the model sees. Sloppy docstrings produce sloppy tool calls. Treat them like API docs.
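To see why the docstring matters, here is a rough stdlib-only sketch of the kind of schema a framework can derive from a signature. Pydantic AI's actual generation is richer (it handles Pydantic types, nested models, and parameter descriptions); this only illustrates the mapping from Python names, annotations, and defaults to what the model sees:

```python
import inspect
from typing import get_type_hints

def get_customer(email: str, limit: int = 1) -> dict:
    """Fetch a customer row by email."""
    ...

PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a JSON-schema-like dict from a function's signature and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",  # empty docstring -> empty prompt
        "parameters": {
            "type": "object",
            "properties": {n: {"type": PY_TO_JSON[t]} for n, t in hints.items()},
            # Parameters without defaults become required.
            "required": [
                n for n, p in sig.parameters.items()
                if p.default is inspect.Parameter.empty
            ],
        },
    }

schema = tool_schema(get_customer)
print(schema)
```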

When NOT to use it

Skip Pydantic AI if you need a huge pre-built tool ecosystem (LangChain’s integrations are still an order of magnitude bigger), if you’re staying in JavaScript/TypeScript, or if you’re doing pure RAG over documents — LlamaIndex handles that with less glue code. For small typed extract-and-tool-call services, though, Pydantic AI is the least-painful option in Python today.

Sketch your agent graph and tool flow with the flowchart maker, validate sample JSON payloads against your Pydantic schemas in the JSON formatter, and count prompt tokens before you ship with the token counter.

