Tracing
Monitor and understand the behavior of your LLM applications with tracing
Tracing is a core feature of the Patronus SDK that allows you to monitor and understand the behavior of your LLM applications. This page covers how to set up and use tracing in your code.
The simplest way to add tracing is with the @traced() decorator:
import patronus
from patronus import traced

patronus.init()

@traced()
def generate_response(prompt: str) -> str:
    # Your LLM call or processing logic here
    return f"Response to: {prompt}"

# Call the traced function
result = generate_response("Tell me about machine learning")
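When one traced function calls another, the spans typically nest, giving you a hierarchical view of the call. A minimal sketch of a two-step pipeline; retrieve_context and generate_answer are illustrative names, not part of the SDK:

import patronus
from patronus import traced

patronus.init()

@traced()
def retrieve_context(query: str) -> str:
    # Placeholder for a retrieval step (e.g. a vector search)
    return f"Context for: {query}"

@traced()
def generate_answer(query: str) -> str:
    # Calling another traced function should create a child span,
    # so the trace shows retrieve_context nested under generate_answer
    context = retrieve_context(query)
    return f"Answer using {context}"

generate_answer("What is machine learning?")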
The @traced() decorator accepts several parameters for customization:
@traced(
    span_name="Custom span name",   # Default: function name
    log_args=True,                  # Whether to log function arguments
    log_results=True,               # Whether to log function return values
    log_exceptions=True,            # Whether to log exceptions
    disable_log=False,              # Completely disable logging (maintains spans)
    attributes={"key": "value"},    # Custom attributes to add to the span
)
def my_function():
    pass
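These parameters let you keep a span while redacting its payload. A sketch, assuming you want to trace a function that handles sensitive input without recording arguments or return values (summarize_record is a hypothetical function, not part of the SDK):

import patronus
from patronus import traced

patronus.init()

@traced(
    span_name="Summarize medical record",
    log_args=False,        # Don't record the (sensitive) input text
    log_results=False,     # Don't record the generated summary
    log_exceptions=True,   # Still capture errors for debugging
    attributes={"domain": "healthcare"},
)
def summarize_record(text: str) -> str:
    # Placeholder for an LLM summarization call
    return text[:100]

summarize_record("Patient presents with...")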
To trace a specific block of code rather than an entire function, use the start_span() context manager. It accepts these parameters:
with start_span(
    "Span name",                         # Name of the span (required)
    record_exception=False,              # Whether to record exceptions
    attributes={"custom": "attribute"},  # Custom attributes to add
) as span:
    # Your code here
    # You can also add attributes during execution:
    span.set_attribute("dynamic_value", 42)
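start_span() pairs naturally with @traced() when you want finer-grained steps inside a traced function. A minimal sketch, assuming start_span is importable from the top-level patronus package like traced (check the SDK reference if your version differs):

import patronus
from patronus import traced, start_span

patronus.init()

@traced()
def answer_question(question: str) -> str:
    with start_span("Retrieve documents", attributes={"source": "faq"}) as span:
        docs = ["doc1", "doc2"]  # Placeholder retrieval
        span.set_attribute("num_docs", len(docs))

    with start_span("Generate answer"):
        # Placeholder for the actual LLM call
        return f"Answer to '{question}' using {len(docs)} documents"

answer_question("How do I enable tracing?")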