Installation

npm install @runcascade/cascade-sdk
Or from source (monorepo):
cd typescript/cascade-sdk
npm install
npm run build

Configuration

Set environment variables before starting your app:
Variable           Required   Description
CASCADE_API_KEY    Yes        Organization API key (csk_live_...)
CASCADE_ENDPOINT   Yes        OTLP endpoint
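
If you want your app to fail fast when configuration is missing, a minimal startup check helps (a sketch; the SDK itself reads these variables):
for (const name of ['CASCADE_API_KEY', 'CASCADE_ENDPOINT']) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}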

Initialize tracing

Call initTracing() once at the top of your application:
import { initTracing } from '@runcascade/cascade-sdk';

initTracing({ project: 'my_agent' });
That’s it. Traces are exported to Cascade in the background.

Basic instrumentation

1. Wrap your entry point with traceRun

Wrap your agent’s main execution in traceRun so every span is part of the same trace:
import { initTracing, traceRun } from '@runcascade/cascade-sdk';

initTracing({ project: 'my_agent' });

async function main() {
  await traceRun('MyAgent', undefined, async () => {
    // Your agent logic here
    const result = await myAgent.execute(task);
    return result;
  });
}

main();
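
The second argument to traceRun carries run metadata. Assuming it accepts a plain key-value object (the fields below are hypothetical), tagging a run looks like:
await traceRun('MyAgent', { userId: 'u_123', version: '1.0.0' }, async () => {
  // Same agent logic; the metadata is attached to the root span.
  return myAgent.execute(task);
});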

2. Wrap LLM clients

Wrap your OpenAI or Anthropic client for automatic LLM span tracing:
import { initTracing, traceRun, wrapLlmClient } from '@runcascade/cascade-sdk';
import OpenAI from 'openai';

initTracing({ project: 'my_agent' });
const client = wrapLlmClient(new OpenAI());

await traceRun('MyAgent', undefined, async () => {
  const response = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
  return response;
});
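
The same wrapper covers Anthropic clients. A sketch, assuming wrapLlmClient passes calls through unchanged:
import Anthropic from '@anthropic-ai/sdk';

const anthropic = wrapLlmClient(new Anthropic());

await traceRun('MyAgent', undefined, async () => {
  // Each messages.create call is recorded as an LLM span.
  return anthropic.messages.create({
    model: 'claude-3-5-sonnet-latest',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Hello!' }],
  });
});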

3. Wrap tools with tool

Wrap functions that the LLM calls as tools:
import { initTracing, traceRun, tool } from '@runcascade/cascade-sdk';

initTracing({ project: 'my_agent' });

const searchDb = tool({ name: 'search_db' }, async (query: string) => {
  return db.search(query);
});

await traceRun('MyAgent', undefined, async () => {
  const results = await searchDb('user query');
  return results;
});
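
Traced tools compose with the wrapped LLM client. A sketch of dispatching a single OpenAI tool call to searchDb (the tool schema below is illustrative):
// Inside traceRun('MyAgent', ...), with client = wrapLlmClient(new OpenAI()):
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Find active users' }],
  tools: [{
    type: 'function',
    function: {
      name: 'search_db',
      description: 'Search the database',
      parameters: {
        type: 'object',
        properties: { query: { type: 'string' } },
        required: ['query'],
      },
    },
  }],
});

const call = response.choices[0].message.tool_calls?.[0];
if (call) {
  const { query } = JSON.parse(call.function.arguments);
  const results = await searchDb(query); // emits a tool span under the run
}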

Multi-turn sessions

For chat-like agents, group traces under a session:
import { initTracing, traceRun, traceSession, setSessionId, endSession } from '@runcascade/cascade-sdk';

initTracing({ project: 'chatbot' });

const sessionId = `chat-${Date.now()}`;
setSessionId(sessionId);

try {
  await traceRun('Turn1', undefined, async () => { /* ... */ });
  await traceRun('Turn2', undefined, async () => { /* ... */ });
} finally {
  endSession(sessionId);
}
Or use traceSession as a context wrapper:
await traceSession('chat-123', async () => {
  await traceRun('Turn1', undefined, async () => { /* ... */ });
  await traceRun('Turn2', undefined, async () => { /* ... */ });
});
endSession('chat-123');
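
In a real chat loop, each user turn becomes one run under the session. A sketch, where getUserMessage and respond are hypothetical stand-ins for your I/O and agent call:
const sessionId = `chat-${Date.now()}`;

await traceSession(sessionId, async () => {
  for (let turn = 1; turn <= 3; turn++) {
    const userMessage = await getUserMessage(); // hypothetical input source
    await traceRun(`Turn${turn}`, undefined, async () => {
      return respond(userMessage); // hypothetical agent call
    });
  }
});
endSession(sessionId);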

Sub-agents with traceAgent

Use traceAgent to name sub-agents within a run:
import { traceRun, traceAgent } from '@runcascade/cascade-sdk';

await traceRun('Orchestrator', undefined, async () => {
  const plan = await traceAgent('Planner', undefined, async () => {
    return planner.run(task);
  });
  await traceAgent('Executor', undefined, async () => {
    return executor.run(plan);
  });
});

Auto-evaluation

Pass scorer names to initTracing for automatic evaluation on every trace:
initTracing({
  project: 'my_agent',
  evals: ['helpfulness', 'hallucination'],
});
For multi-turn sessions, add sessionEvals to run when the session ends:
initTracing({
  project: 'chatbot',
  evals: ['helpfulness'],
  sessionEvals: ['Proceeding Despite Explicit User Refusal'],
});

Quick reference

API                                              Purpose
initTracing({ project, evals, sessionEvals })    Initialize once at startup
traceRun(name, metadata, fn)                     Root span for a run
traceAgent(name, metadata, fn)                   Named sub-agent span
traceSession(sessionId, fn)                      Scope traces to a session
setSessionId(id) / endSession(id)                Session context for multi-turn
tool(options, fn)                                Wrap a function as a traced tool
wrapLlmClient(client)                            Auto-trace OpenAI/Anthropic calls