
Tracing

Monitor and understand the behavior of your LLM applications with tracing

Tracing is a core feature of the Patronus SDK that allows you to monitor and understand the behavior of your LLM applications. This page covers how to set up and use tracing in your code.

Configuration

For information about configuring observability features, including exporter protocols and endpoints, see the Observability Configuration guide.

Getting started with tracing

Tracing in Patronus works through two main mechanisms:

  • Function decorators: Easily trace entire functions
  • Context managers: Trace specific code blocks within functions

Using the @traced() decorator

The simplest way to add tracing is with the @traced() decorator:

import patronus
from patronus import traced
 
patronus.init()
 
@traced()
def generate_response(prompt: str) -> str:
    # Your LLM call or processing logic here
    return f"Response to: {prompt}"
 
# Call the traced function
result = generate_response("Tell me about machine learning")

Decorator options

The @traced() decorator accepts several parameters for customization:

@traced(
    span_name="Custom span name",   # Default: function name
    log_args=True,                  # Whether to log function arguments
    log_results=True,               # Whether to log function return values
    log_exceptions=True,            # Whether to log exceptions
    disable_log=False,              # Disable logging entirely (spans are still created)
    attributes={"key": "value"}     # Custom attributes to add to the span
)
def my_function():
    pass

See the API documentation for complete details.

Using the start_span() context manager

For more granular control, use the start_span() context manager to trace specific blocks of code:

import patronus
from patronus.tracing import start_span
 
patronus.init()
 
def complex_workflow(data):
    # First phase
    with start_span("Data preparation", attributes={"data_size": len(data)}):
        prepared_data = preprocess(data)
 
    # Second phase
    with start_span("Model inference"):
        results = run_model(prepared_data)
 
    # Third phase
    with start_span("Post-processing"):
        final_results = postprocess(results)
 
    return final_results

Context manager options

The start_span() context manager accepts these parameters:

with start_span(
    "Span name",                        # Name of the span (required)
    record_exception=False,             # Whether to record exceptions
    attributes={"custom": "attribute"}  # Custom attributes to add
) as span:
    # Your code here
    # You can also add attributes during execution:
    span.set_attribute("dynamic_value", 42)
