Initialize tracing
Call init_tracing() once at the top of your application. Everything else is automatic.
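A minimal sketch (the package name tracing_sdk below is a placeholder for the SDK's actual import path):

```python
# tracing_sdk is a placeholder import path; substitute the SDK's real package name.
from tracing_sdk import init_tracing

# Call once, before any agent code runs; span export and context
# propagation are handled automatically from this point on.
init_tracing()
```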
Trace a run
Wrap your agent’s entry point with trace_run() to create the root span. Any execution triggered from within the block is captured, including LLM calls, tool invocations, and sub-agent delegations that occur in nested functions or framework code. For example, if your agent logic lives in a single call (e.g. plan = planner.run(task)), placing that call inside trace_run is enough; everything inside run() is traced as child spans.
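A sketch of this pattern, assuming trace_run() can be used as a context manager with no required arguments (the import path and the Planner class are placeholders):

```python
from tracing_sdk import trace_run  # placeholder import path

class Planner:
    """Stand-in for your real agent; its internals would make LLM and tool calls."""
    def run(self, task: str) -> str:
        return f"plan for: {task}"

planner = Planner()

def main(task: str) -> str:
    # The root span starts here; everything triggered inside planner.run(),
    # including nested LLM calls and tool invocations, becomes a child span.
    with trace_run():
        return planner.run(task)

if __name__ == "__main__":
    print(main("book a flight to Lisbon"))
```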
Trace sub-agents
If you have multiple agents or sub-agents, use trace_agent() to create a named sub-agent span. All tool calls and LLM calls inside the block are automatically tagged with the agent name.
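A sketch, assuming trace_agent() takes the agent name as its argument (the import path and the helper functions are placeholders):

```python
from tracing_sdk import trace_run, trace_agent  # placeholder import path

def research(task: str) -> str:      # placeholder sub-agent logic
    return f"notes on {task}"

def write(notes: str) -> str:        # placeholder sub-agent logic
    return f"report based on {notes}"

def handle(task: str) -> str:
    with trace_run():
        # Tool and LLM calls made inside each block are tagged with that agent name.
        with trace_agent("researcher"):
            notes = research(task)
        with trace_agent("writer"):
            return write(notes)
```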
Trace multi-turn sessions
For multi-turn conversations, group traces under a session so they appear together in the dashboard. Create a session ID, call set_session_id() to set it in context, and call end_session() when the conversation ends.
Example:
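A sketch, assuming the session helpers are importable alongside trace_run() (the import path and respond() are placeholders):

```python
import uuid
from tracing_sdk import set_session_id, end_session, trace_run  # placeholder import path

def respond(message: str) -> str:    # placeholder agent turn
    return f"echo: {message}"

session_id = str(uuid.uuid4())
set_session_id(session_id)           # every trace from here on joins this session

for user_message in ["hi", "book a flight", "thanks, bye"]:
    with trace_run():                # picks up the session ID from context
        print(respond(user_message))

end_session()                        # close the session when the conversation ends
```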
trace_run() inherits the session ID from context.
Trace tools
Decorate any function with @tool to trace it as a tool call. Works with both sync and async functions; see the sketch after the list below.
The @tool decorator automatically records:
- Input parameters
- Output value
- Execution time
- Errors (with full exception info)
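A sketch, assuming the decorator is importable from the SDK's top-level package (the import path and the example tools are placeholders):

```python
from tracing_sdk import tool  # placeholder import path

@tool
def search_flights(origin: str, destination: str) -> list[str]:
    # Inputs, the returned value, execution time, and any raised
    # exception are recorded on the tool span.
    return [f"{origin} -> {destination} 09:15", f"{origin} -> {destination} 14:40"]

@tool
async def fetch_weather(city: str) -> str:
    # Async functions are traced the same way.
    return f"Sunny in {city}"
```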
Trace functions
Use @function for internal utility functions that support your tools. These appear as distinct “function” spans in the trace tree.
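A sketch contrasting the two decorators (the import path and helpers are placeholders):

```python
from tracing_sdk import tool, function  # placeholder import path

@function
def normalize_date(raw: str) -> str:
    # Shows up as a "function" span nested under whatever calls it.
    return raw.strip().replace("/", "-")

@tool
def lookup_booking(date: str) -> str:
    # The tool span contains the normalize_date function span as a child.
    return f"booking found for {normalize_date(date)}"
```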
Wrap LLM clients
Use wrap_llm_client() to automatically trace every LLM call (prompts, completions, token counts, latency, and cost) with zero changes to your existing code. Supported providers (a usage sketch follows the list):
- Anthropic
- OpenAI
- OpenRouter
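A sketch with the Anthropic client, assuming wrap_llm_client() takes the client instance and returns it with tracing attached (the tracing import path is a placeholder; the Anthropic call itself uses the standard SDK):

```python
from anthropic import Anthropic
from tracing_sdk import wrap_llm_client  # placeholder import path

# Wrap once; the returned client is used exactly like the original.
client = wrap_llm_client(Anthropic())

# Prompts, completions, token counts, latency, and estimated cost are
# recorded automatically on every call made through the wrapped client.
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize today's itinerary."}],
)
print(response.content[0].text)
```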
What gets captured per LLM call
| Attribute | Description |
|---|---|
| llm.model | Model name (e.g. claude-sonnet-4-20250514, gpt-4) |
| llm.provider | Provider (anthropic, openai) |
| llm.prompt | Input prompt/messages |
| llm.completion | Full completion text |
| llm.input_tokens | Input/prompt token count |
| llm.output_tokens | Output/completion token count |
| llm.total_tokens | Total tokens |
| llm.latency_ms | Latency in milliseconds |
| llm.cost_usd | Estimated cost in USD |