Get up and running with Cascade in minutes. This guide covers installation, API key setup, and sending your first trace.
Install the SDK
Install the Cascade SDK from PyPI:
Requires Python 3.8 or higher.
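A typical install looks like the following (assuming the package is published on PyPI under the name cascade; check your dashboard's install snippet for the exact package name):

```shell
pip install cascade
```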
Set your credentials
Get your API key and endpoint from the Cascade Dashboard. Both are provided per customer.
Create a .env file in the root of your project:
CASCADE_API_KEY="csk_live_..."
CASCADE_ENDPOINT="https://your-endpoint.runcascade.com/v1/traces"
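If your process does not load .env files automatically, load the file into the environment before initializing. A minimal stdlib sketch (in practice you would likely use the python-dotenv package; this assumes the SDK reads these variables from the environment, as the .env setup suggests):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY="value" lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and anything without an '='
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Load credentials before calling init_tracing() so the SDK can see them
if os.path.exists(".env"):
    load_env()
```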
Or pass them directly in code:
from cascade import init_tracing
init_tracing(
    api_key="csk_live_...",
    endpoint="https://your-endpoint.runcascade.com/v1/traces",
)
Initialize tracing
Call init_tracing() once at the top of your application. Everything else is automatic.
from cascade import init_tracing
# With a project name (recommended: organizes traces in the dashboard)
init_tracing(project="customer_support_chatbot")
Send your first trace
Wrap your agent’s entry point with trace_run() and use wrap_llm_client() to automatically capture all LLM interactions:
from cascade import init_tracing, trace_run, wrap_llm_client
from anthropic import Anthropic

# Initialize tracing
init_tracing(project="my_first_agent")

# Wrap your LLM client - all calls are now automatically traced
client = wrap_llm_client(Anthropic())

# Create a traced agent run
with trace_run("SimpleAgent"):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=100,
        messages=[
            {"role": "user", "content": "Say hello and introduce yourself in one sentence."}
        ],
    )

print(response.content[0].text)
Run the script:
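Assuming you saved the example above as quickstart.py (a hypothetical filename):

```shell
python quickstart.py
```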
Open the Cascade Dashboard to see your trace with full details including prompts, completions, token counts, latency, and cost.
Next steps
Sessions & Tracing: Learn about tracing runs, sub-agents, tools, and LLM clients.
Evaluation SDK: Score your agent's performance with built-in and custom evaluators.
Integrations: Auto-instrument LangGraph, OpenAI Agents, and Claude Agents.
Model providers: Trace any LLM provider through a unified wrapper.