You're building an AI-powered feature for your application and facing a critical decision: which JavaScript SDK should you use? The choice between LangChain JS, Vercel AI SDK, and OpenAI SDK determines your development velocity, bundle size, deployment options, and long-term flexibility.
This guide provides a clear decision framework matched to your specific project requirements.
In brief:
Each framework makes different trade-offs between abstraction level, bundle size, and deployment flexibility. Here's how they compare across the factors that matter most.
| Feature | LangChain JS | Vercel AI SDK | OpenAI SDK |
|---|---|---|---|
| Current Version | 1.2.7 | 6.0.27 | 6.15.0 |
| Bundle Size (gzipped) | 101.2 kB | 67.5 kB | 34.3 kB |
| Weekly Downloads | 1.3M | Data unavailable | 8.8M |
| Provider Support | 50+ providers | 25+ providers (LLM + audio) | OpenAI native + limited external |
| Edge Runtime | ❌ Incompatible | ✅ Native support | ⚠️ Requires variant |
| React Hooks | ❌ Manual integration | ✅ Built-in | ❌ Manual integration |
| RAG Support | ✅ Comprehensive built-in | ⚠️ Via LangChain/LlamaIndex adapters | ❌ Manual implementation |
| Agent Architectures | ✅ Pre-built (ReAct, Plan-and-Execute, ReWOO, LLMCompiler) | ⚠️ Pattern support | ❌ Manual loops |
| Best For | Complex workflows, RAG, autonomous agents | Next.js apps, streaming chat | Direct API access, simple integrations |
LangChain JS is an open-source framework for building LLM-powered applications with provider abstraction for vendor independence, LangGraph-first architecture for low-level control, and a pre-built integration ecosystem. Current version 1.2.7 provides developer-friendly abstractions for autonomous agents, retrieval-augmented generation, and multi-step workflows.
LangChain JS fits when:

- Your application needs autonomous agents, retrieval-augmented generation, or complex multi-step workflows
- You want provider abstraction across 50+ LLM providers for vendor independence
- You deploy to Node.js environments, since edge runtimes are ruled out by the framework's fs module dependency (GitHub Issue #212)

Core architecture pattern:
```typescript
// config/agents/weather-agent.ts
import * as z from "zod";
import { createAgent, tool } from "langchain";

const getWeather = tool(({ city }) => `It's always sunny in ${city}!`, {
  name: "get_weather",
  description: "Get the weather for a given city",
  schema: z.object({ city: z.string() }),
});

const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [getWeather],
});

console.log(await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }]
}));
```

This 30-50 line LangChain implementation gives you autonomous tool selection, error handling via middleware, and streaming capabilities through configuration, compared to 150+ lines with manual OpenAI SDK implementations.
Vercel AI SDK standardizes AI model integration across 25+ providers with a TypeScript-native, streaming-by-default architecture. Version 6.0.27 provides unified APIs for text generation, structured objects, and tool calls through AI SDK Core, while AI SDK UI offers framework-agnostic hooks for chat interfaces and generative UI.
Vercel AI SDK excels for:

- Next.js and React applications with real-time streaming chat interfaces
- Edge deployments requiring low-latency global distribution
- Teams that want provider flexibility across 25+ providers without code rewrites
Streaming chat implementation:
```tsx
// app/components/Chat.tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });
  const [input, setInput] = useState('');

  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, index) =>
            part.type === 'text' ? <span key={index}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form onSubmit={e => {
        e.preventDefault();
        if (input.trim()) {
          sendMessage({ text: input });
          setInput('');
        }
      }}>
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          disabled={status !== 'ready'}
        />
        <button type="submit" disabled={status !== 'ready'}>Submit</button>
      </form>
    </>
  );
}
```

The OpenAI Node SDK provides direct access to OpenAI's REST API with strongly-typed inputs and outputs. Version 6.15.0 supports the latest models including modern Responses API patterns while maintaining backward compatibility with Chat Completions.
OpenAI SDK suits:

- Direct API access with granular control over parameters, requests, and streaming
- Simple integrations where abstraction layers add unnecessary overhead
- Client-side applications where the smallest bundle footprint matters
Basic chat completion:
```typescript
// lib/openai-client.ts
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Are semicolons optional in JavaScript?' }
  ],
});

console.log(completion.choices[0].message.content);
```

The SDK's streaming support uses standard Server-Sent Events (SSE):
```typescript
const stream = await openai.responses.create({
  model: 'gpt-5',
  input: [{ role: 'user', content: 'Say "double bubble bath" ten times fast.' }],
  stream: true,
});

for await (const event of stream) {
  if (event.type === 'response.output_text.delta') {
    process.stdout.write(event.delta);
  }
}
```

The three frameworks represent distinct positions on the abstraction spectrum.
OpenAI SDK takes the most direct approach, with minimal abstraction and exact API mirroring. The client retries transient failures automatically, but you implement streaming state management and higher-level error handling yourself.
Vercel AI SDK optimizes for UI integration with purpose-built React hooks. The useChat() and useCompletion() hooks reduce boilerplate to approximately 20 lines for a complete chat interface versus 100+ lines with direct API usage, though this creates framework lock-in.
LangChain JS provides the highest-level abstractions, with object-oriented patterns built on its LangGraph-first architecture. It carries the steepest learning curve but simplifies complex workflows once understood.
TypeScript support is comprehensive across all three. OpenAI SDK generates types from OpenAPI specifications. Both Vercel AI SDK and LangChain JS integrate Zod for schema validation with automatic type inference, giving you compile-time safety for tool parameters and structured outputs.
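As a minimal illustration of how a Zod schema doubles as a compile-time type (the schema and field names here are hypothetical):

```typescript
import { z } from 'zod';

// Define the schema once; z.infer derives the TypeScript type from it.
const weatherParams = z.object({
  city: z.string().describe('The city name'),
  units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
});

type WeatherParams = z.infer<typeof weatherParams>;
// => { city: string; units: 'celsius' | 'fahrenheit' }

// Invalid input fails at parse time; valid input is fully typed.
const params: WeatherParams = weatherParams.parse({ city: 'Tokyo' });
```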
Tool calling implementation reveals the abstraction differences clearly:
```typescript
import { z } from 'zod';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get current weather data for a city',
      inputSchema: z.object({
        city: z.string().describe('The city name')
      }),
      execute: async ({ city }) => {
        const response = await fetch(`https://api.weather.service/current?city=${city}`);
        const data = await response.json();
        return {
          temperature: data.temp,
          condition: data.condition,
          humidity: data.humidity
        };
      }
    })
  },
  prompt: 'What is the weather in San Francisco?'
});
```

Community feedback consistently favors Vercel AI SDK for "cleaner APIs, solid TypeScript support, and better streaming," while developers appreciate OpenAI SDK for "stability and debugging simplicity." LangChain JS is recognized as "powerful but sometimes overly complex" for straightforward use cases.
Streaming capabilities separate these frameworks significantly. Vercel AI SDK provides the highest-level abstraction with purpose-built React hooks, LangChain JS uses async iterators requiring manual React integration, and OpenAI SDK offers event-driven streaming with the most granular control.
Its hooks (useChat, useCompletion, useAssistant) eliminate manual state management entirely. The framework returns AsyncIterable<string> for server-side streaming and ReadableStream for Edge Functions, with the hooks handling connection management, chunk processing, and re-renders automatically.
For teams building production chatbots, these built-in streaming hooks deliver significant boilerplate reduction compared to implementing equivalent functionality with other SDKs.
LangChain JS uses async iterators for streaming with the stream(input) method, which returns an async iterator you can consume in a loop, while streamEvents() provides granular control over individual events.
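A minimal sketch of the async-iterator pattern, assuming @langchain/openai is installed and OPENAI_API_KEY is set:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

// stream() returns an async iterator of message chunks
const stream = await model.stream("Explain streaming in one sentence.");

for await (const chunk of stream) {
  // Each chunk carries an incremental slice of the response text
  process.stdout.write(chunk.content as string);
}
```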
OpenAI SDK offers the most granular control through semantic event-driven patterns with events like response.created, response.output_text.delta, and response.completed for precise rendering control. For ultra-low latency real-time applications, the Realtime API delivers WebRTC (browser) and WebSocket (server) connectivity bypassing standard HTTP.
Edge runtime compatibility creates a critical deployment constraint. Vercel AI SDK offers explicit edge support with native V8 compatibility. LangChain JS is fundamentally incompatible due to Node.js fs module usage, which isn't available in edge environments. OpenAI SDK requires the openai-edge variant for edge compatibility.
If your architecture requires edge deployment for low latency or global distribution, LangChain JS cannot be used regardless of other factors.
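For example, a minimal Next.js route handler opting into the Edge Runtime with the Vercel AI SDK might look like this; it's a sketch assuming AI SDK v5+ APIs, and it pairs with the useChat client shown earlier:

```typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

// Opt this route into the Edge Runtime for low-latency global distribution
export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  // Stream the response in the format useChat expects
  return result.toUIMessageStreamResponse();
}
```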
Performance characteristics vary by use case. Vercel's edge infrastructure processes billions of tokens daily with single-digit millisecond round-trip times. Edge functions show approximately 9x faster cold starts compared to serverless. OpenAI SDK performance depends primarily on model characteristics rather than SDK overhead.
Provider flexibility determines your ability to switch between AI services and avoid vendor lock-in.
For applications requiring true multi-provider flexibility, particularly when you want to A/B test providers or distribute load across services, Vercel AI SDK stands out with 25+ integrated providers and single-line model switching, while LangChain JS offers strong support with 50+ providers. OpenAI SDK requires more manual configuration for external providers, making it less ideal for frequent provider experimentation.
Provider switching becomes critical during API outages or rate limiting. With Vercel AI SDK, changing from OpenAI to Anthropic requires modifying only the model identifier, while maintaining identical streaming, tool calling, and error handling code paths. This operational resilience matters when production systems face service degradation.
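As a sketch of what that looks like in practice (the prompt and model choices are illustrative):

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const prompt = 'Summarize our release notes in two sentences.';

// Primary provider
const result = await generateText({ model: openai('gpt-4o'), prompt });

// During an outage, only the model identifier changes; streaming,
// tool calling, and error handling code paths stay identical.
const fallback = await generateText({
  model: anthropic('claude-sonnet-4-5-20250929'),
  prompt,
});
```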
LangChain JS provides the most mature agent infrastructure with pre-built architectures including ReAct (reasoning and acting iteratively) and Plan-and-Execute (multi-step workflows). You get autonomous tool selection, error handling via middleware through wrapToolCall, and streaming through config.streamWriter.
```typescript
import * as z from "zod";
import { createAgent, tool } from "langchain";

const getWeather = tool(({ city }) => `It's always sunny in ${city}!`, {
  name: "get_weather",
  description: "Get the weather for a given city",
  schema: z.object({ city: z.string() }),
});

const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [getWeather],
});

console.log(await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }]
}));
```

Vercel AI SDK provides flexible primitives for agentic patterns through tool calling. You implement patterns like ReAct or Plan-and-Execute by composing tool calls with custom orchestration logic (approximately 80-120 lines of code). The needsApproval flag enables human-in-the-loop workflows.
OpenAI SDK requires manual implementation of agent loops. The SDK supports function calling with JSON schema definitions, but you write the decision logic, execution loop, and state management yourself, typically requiring 150-200 lines for comparable functionality.
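A condensed sketch of such a manual loop (the tool and stubbed result are illustrative; a production version adds error handling and a step limit, which is where the extra lines come from):

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [{
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get the weather for a given city',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city'],
    },
  },
}];

const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
  { role: 'user', content: "What's the weather in Tokyo?" },
];

// The agent loop you own: call the model, run any requested tools,
// feed results back, and repeat until the model answers directly.
while (true) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools,
  });
  const message = completion.choices[0].message;
  messages.push(message);

  if (!message.tool_calls?.length) break; // final answer reached

  for (const call of message.tool_calls) {
    if (call.type !== 'function') continue;
    const { city } = JSON.parse(call.function.arguments);
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: `It's always sunny in ${city}!`, // stubbed tool result
    });
  }
}

console.log(messages.at(-1)?.content);
```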
RAG support shows significant differences. LangChain JS provides comprehensive built-in capabilities with native vector store integrations, document loaders for various formats, chunking strategies, and pre-built retrieval chains. The framework supports traditional two-step retrieval, agentic RAG where agents decide what to retrieve, and hybrid approaches; an AI FAQ implementation, for example, can leverage this architecture for structured content retrieval.
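A minimal sketch of the built-in vector store path (import paths can vary by LangChain version, and the documents here are illustrative):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Embed a few documents into an in-memory vector store
const store = await MemoryVectorStore.fromTexts(
  ["Strapi is a headless CMS.", "LangChain JS orchestrates LLM workflows."],
  [{ source: "docs" }, { source: "docs" }],
  new OpenAIEmbeddings()
);

// Retrieve the most relevant chunk for a query
const results = await store.similaritySearch("What is Strapi?", 1);
console.log(results[0].pageContent);
```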
Vercel AI SDK uses an adapter-based approach, with RAG implementations typically integrating LangChain or LlamaIndex adapters. OpenAI SDK requires building all vector storage, document loading, and chunking infrastructure separately.
| Feature | LangChain JS | Vercel AI SDK | OpenAI SDK |
|---|---|---|---|
| Agent Types | Pre-built (ReAct, Plan-and-Execute) | Pattern support | Manual |
| Native RAG | ✅ Built-in | Via adapters | External |
| Vector Stores | Multiple built-in | Via integrations | Developer-implemented |
| Lines of Code | 30-50 | 80-120 | 150-200 |
Bundle sizes reveal the cost of abstraction layers:

- LangChain JS: 101.2 kB gzipped
- Vercel AI SDK: 67.5 kB gzipped
- OpenAI SDK: 34.3 kB gzipped
For modern web performance budgets targeting 100-200 kB gzipped initial loads, OpenAI SDK is ideal for client-side applications where every kilobyte matters. Vercel AI SDK's 67.5 kB remains reasonable for most use cases, while LangChain JS approaches the upper limit of that budget, though tree-shaking with granular imports can reduce its footprint, as sketched below.
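Granular imports are the lever here: pulling in only the provider and core modules you use keeps the rest of the framework out of your bundle (a sketch; exact subpaths depend on your LangChain version):

```typescript
// Import from scoped provider and core packages rather than a
// monolithic entry point, so bundlers can tree-shake the rest.
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
```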
Edge runtime compatibility creates a critical decision point:

- LangChain JS: ❌ Incompatible with edge runtimes. The fs module it depends on is unavailable in edge environments (GitHub Issue #212).
- Vercel AI SDK: ✅ Full edge runtime support.
- OpenAI SDK: ⚠️ Edge compatible with modifications. Use the openai-edge variant; the standard SDK historically relied on axios, which isn't edge-compatible.

This constraint dramatically changes decision trees. Projects requiring edge deployment must exclude LangChain JS entirely, leaving only Vercel AI SDK (native edge support) or OpenAI SDK with the openai-edge variant.
Production adoption, measured by weekly npm downloads, favors OpenAI SDK (8.8M) over LangChain JS (1.3M); comparable download data for Vercel AI SDK is unavailable. Developer engagement tells a different story: Vercel AI SDK leads with 20.8k GitHub stars. All three frameworks are actively maintained, with major updates shipped in recent quarters.
All three maintain comprehensive official documentation with extensive guides, framework-specific patterns, and complete API references. When integrating headless CMS capabilities, official documentation quality becomes critical for troubleshooting integration points.
Consider Vercel AI SDK when:
You're building an AI-powered Next.js application requiring real-time chat interfaces. The useChat() and useCompletion() hooks deliver production-ready streaming with automatic message state management. Edge runtime support enables global distribution with low latency. Provider flexibility across 25+ providers maintains optionality without code rewrites.
For deploying AI features to Vercel's edge network, native integration provides the optimal path.
Turn to LangChain JS when:
Your application requires complex multi-step reasoning with autonomous agents, retrieval-augmented generation with vector stores, or sophisticated workflow orchestration. LangChain JS offers pre-built agent architectures and comprehensive RAG tooling with native vector store integrations that significantly reduce implementation effort.
Provider abstraction across 50+ LLM providers enables sophisticated multi-model workflows, though edge runtime compatibility limitations restrict deployment to Node.js serverless environments.
Choose the OpenAI SDK when:
You need direct API access to OpenAI with granular control over parameters, request lifecycle, error handling, and streaming behavior. The smallest bundle footprint (34.3 kB gzipped) makes it ideal for client-side applications where performance matters. Exclusive OpenAI API access without multi-provider flexibility requirements eliminates abstraction overhead.
No single framework wins across all dimensions. OpenAI SDK dominates production adoption (8.8M weekly downloads) with the smallest bundle (34.3 kB gzipped)—ideal for straightforward OpenAI integration. Vercel AI SDK leads developer engagement (20.8k GitHub stars) with unmatched React/Next.js experience through built-in hooks and edge runtime support. LangChain JS offers the most comprehensive agent and RAG infrastructure despite edge incompatibility and larger bundle size.
Key decision factors:

- Edge deployment requirements (rules out LangChain JS)
- Bundle size budget (OpenAI SDK is smallest at 34.3 kB gzipped)
- Need for built-in agents and RAG (LangChain JS leads)
- React/Next.js streaming UX (Vercel AI SDK leads)
- Provider flexibility versus direct, single-provider access
AI application development with Strapi as your content backend maintains flexibility across all three frameworks through standard REST and GraphQL APIs. Strapi's AI-powered content type builder and LLM Translator plugin work with any OpenAI-compatible provider.
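For instance, here's a sketch of pulling Strapi content over REST to feed whichever SDK you choose (the STRAPI_URL variable and articles collection are assumptions):

```typescript
// Fetch published content from a Strapi collection via the REST API
const res = await fetch(`${process.env.STRAPI_URL}/api/articles`);
const { data } = await res.json();

// Flatten titles into context for any of the three SDKs
// (Strapi v4 nests fields under `attributes`; v5 returns them flat)
const context = data
  .map((article: any) => article.attributes?.title ?? article.title)
  .join('\n');
```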
The AI landscape evolves rapidly—all three frameworks shipped major updates in Q4 2025 and Q1 2026. Monitor provider support, edge compatibility, and community momentum for long-term decisions.
Run `npx create-strapi-app@latest` in your terminal and follow our Quick Start Guide to build your first Strapi project.