Vercel AI SDK Integration
Automatically track all Vercel AI SDK calls with full input/output logging, token usage, and tool executions. Enable OpenTelemetry and set experimental_telemetry on your calls; no wrapper or extra instrumentation is needed.
What Gets Tracked Automatically
| Category | Captured |
|---|---|
| LLM calls | Model, provider, prompt, messages, system prompt |
| Responses | Generated text, finish reason |
| Token usage | Input tokens, output tokens, total tokens |
| Tool calls | All tool executions with input/output |
| Latency | Duration in milliseconds |
Installation
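A minimal install sketch. The Cascade package name below is an assumption (this section does not name the npm package); substitute the actual one:

```shell
# Install the Vercel AI SDK plus the Cascade telemetry helper.
# @cascade/ai-telemetry is an assumed package name, not confirmed here.
npm install ai @cascade/ai-telemetry
```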
Configuration
Add CASCADE_API_KEY to .env (or the Vercel dashboard) and call configureVercelAiTelemetry at app startup. Traces export to Cascade Cloud automatically.
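A startup sketch, assuming configureVercelAiTelemetry is exported from a package named @cascade/ai-telemetry and accepts an apiKey option (both assumptions; only registerOtel is named in this guide):

```typescript
// Run once at app startup, before any AI SDK calls.
// Import path and apiKey option are assumptions; check your package's docs.
import { configureVercelAiTelemetry } from "@cascade/ai-telemetry";

configureVercelAiTelemetry({
  // Typically read from .env or the Vercel dashboard.
  apiKey: process.env.CASCADE_API_KEY,
  // Registers an OpenTelemetry SDK instance for you.
  registerOtel: true,
});
```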
Basic Usage
- Call configureVercelAiTelemetry at startup (with registerOtel: true).
- Enable experimental_telemetry on your AI calls.
- Traces export to Cascade Cloud automatically.
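The steps above can be sketched as follows (generateText and experimental_telemetry are standard Vercel AI SDK APIs; the model choice is illustrative):

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { text, usage } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Write a haiku about observability.",
  // This flag is all that's needed: the span carries the prompt,
  // response, token usage, and latency automatically.
  experimental_telemetry: { isEnabled: true },
});

console.log(text, usage.totalTokens);
```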
With Tools
Tool calls are automatically traced as child spans, capturing tool.name, tool.input, and tool.output.
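For example, a tool defined with the AI SDK's tool helper (the weather tool and its zod schema are illustrative; parameters is the AI SDK v4 field name, newer versions call it inputSchema):

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is the weather in Berlin?",
  tools: {
    // Each execution appears as a child span with
    // tool.name, tool.input, and tool.output attributes.
    getWeather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 18 }),
    }),
  },
  experimental_telemetry: { isEnabled: true },
});
```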
Streaming
streamText is fully supported. Spans are emitted as chunks complete.
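A streaming sketch using the AI SDK's streamText (the model and prompt are illustrative):

```typescript
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

const result = streamText({
  model: openai("gpt-4o-mini"),
  prompt: "Stream a short poem.",
  experimental_telemetry: { isEnabled: true },
});

// Spans are finalized once the stream completes.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```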
Next.js
Create instrumentation.ts at the project root (or in src/):
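A sketch of that file, assuming the same configureVercelAiTelemetry helper and package name as above (the import path is an assumption). Note that older Next.js versions require enabling instrumentation via experimental.instrumentationHook in next.config.js:

```typescript
// instrumentation.ts — Next.js calls register() once per server startup.
// The import path is an assumption; use your actual Cascade package name.
import { configureVercelAiTelemetry } from "@cascade/ai-telemetry";

export function register() {
  configureVercelAiTelemetry({ registerOtel: true });
}
```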
Then call generateText or streamText in your API routes with experimental_telemetry: { isEnabled: true }:
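A minimal App Router route sketch (the route path and model are illustrative):

```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
    // Enables Cascade tracing for this call.
    experimental_telemetry: { isEnabled: true },
  });

  return Response.json({ text });
}
```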