Free Tool Arena


How to Use Vercel AI SDK

Installing the ai SDK, generateText + streamText, tools, useChat hook, multi-step, edge runtime, provider flexibility.

Updated April 2026 · 6 min read

The Vercel AI SDK is a TypeScript toolkit for calling LLMs, streaming their responses, and orchestrating tool calls from Node, edge runtimes, or the browser.


The AI SDK (package name ai) normalizes the wire formats of OpenAI, Anthropic, Google, Mistral, Groq, Cohere, Amazon Bedrock, and dozens more behind a single API. You get generateText, streamText, generateObject (for Zod-validated structured output), and a set of React/Svelte/Vue hooks that plug straight into streaming UIs. It is the de facto standard for TypeScript AI apps.

What it is

The SDK is Apache-2.0 licensed, maintained by Vercel, and split across ai (core), @ai-sdk/openai and siblings (providers), and @ai-sdk/react (UI hooks). It targets Node 18+, works on the Vercel Edge Runtime, and runs in the browser for providers that allow it.

Install

npm install ai @ai-sdk/openai zod
# plus whichever UI package you need
npm install @ai-sdk/react

First run

Stream a response from an API route and render it in a React component:

// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai"
import { streamText } from "ai"

export async function POST(req: Request) {
  const { messages } = await req.json()
  const result = streamText({
    model: openai("gpt-4o-mini"),
    messages,
  })
  return result.toDataStreamResponse()
}

Everyday workflows

  • Use generateObject with a Zod schema to get validated JSON instead of parsing strings.
  • Pass tools: a record of tool definitions to let the model call your functions; the SDK handles the loop.
  • In the client, wire useChat() to your /api/chat route — streaming tokens, tool calls, and errors come for free.

Gotchas and tips

The SDK had a major version bump (v4 to v5) that changed message shape and tool-call semantics. Blog posts from 2024 often target v3; check the package version before copy-pasting. Also remember that toDataStreamResponse uses a custom protocol — if you consume it outside the built-in hooks, read the stream spec first.

Edge runtime is fast but limited. No Node APIs, 1MB bundle cap, and some providers’ SDKs pull in fs or crypto transitively. Check your bundle with next build before deploying a chat route to the edge.
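For reference, opting a Next.js App Router route into the edge runtime is a one-line segment config (a sketch; verify the exact flag against your Next.js version):

```typescript
// app/api/chat/route.ts -- add alongside the POST handler
export const runtime = "edge" // run this route on the Edge Runtime instead of Node
```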

Who it’s for

Any TypeScript developer shipping an LLM feature into a web app. Tip: put all model selection behind one environment variable — swapping gpt-4o for claude-sonnet-4 then becomes a config change, not a refactor.

