Our Python SDK got smarter, and we've developed a TypeScript SDK too. We are updating our SDK code blocks. Python SDK here. TypeScript SDK here.
Tracing and Logging

Tracing & Logging With OpenTelemetry

You can use the Patronus OpenTelemetry collector to export logs and traces, enabling features such as annotations and visualizations.

We do not convert your logs or spans, allowing you to send logs in any format. However, logs sent in unsupported formats may not be compatible with some features. Therefore, we strongly recommend adhering to our semantics for evaluation logs.

By design, you can store any standard OpenTelemetry logs and traces.

Using the Patronus SDK for Tracing

The Patronus SDK provides built-in support for OpenTelemetry tracing, making it easy to trace your LLM applications and evaluations with minimal configuration.

Installation

Install the Patronus SDK using your preferred package manager:

# Using pip
pip install patronus
 
# Using uv
uv add patronus

Quick Start Example

Here's a simple example of how to set up tracing with the Patronus SDK:

import os
 
import patronus
from patronus import traced
 
# Initialize the SDK with your API key
patronus.init(
    # This is the default and can be omitted
    api_key=os.environ.get("PATRONUS_API_KEY")
)
 
 
# Use the traced decorator to automatically trace a function execution
@traced()
def generate_response(task_context: str, user_query: str) -> str:
    # Your LLM call or processing logic here
    return (
        "To even qualify for our car insurance policy, "
        "you need to have a valid driver's license that expires "
        "later than 2028."
    )
 
 
@traced()
def retrieve_context(user_query: str) -> str:
    return (
        "To qualify for our car insurance policy, you need a way to "
        "show competence in driving which can be accomplished through "
        "a valid driver's license. You must have multiple years of "
        "experience and cannot be graduating from driving school before "
        "or on 2028."
    )
 
 
# Evaluations with the SDK are automatically traced
from patronus.evals import RemoteEvaluator
 
# Create a Patronus evaluator
hallucination_detector = RemoteEvaluator("lynx", "patronus:hallucination")
 
# Trace specific blocks with the context manager
from patronus.tracing import start_span
 
 
def process_user_query(query):
    with start_span("process_query", attributes={"query_length": len(query)}):
        # Processing logic
        task_context = retrieve_context(query)
        task_output = generate_response(task_context, query)
 
        # Evaluations are automatically traced
        evaluation = hallucination_detector.evaluate(
            task_input=query,
            task_context=task_context,
            task_output=task_output,
        )
 
        return task_output, evaluation
 
 
if __name__ == '__main__':
    query = "What is the car insurance policy"
    response, evaluation = process_user_query(query)
    print(f"Response: {response}")
    print(f"Evaluation: {evaluation.format()}")

For more detailed information about tracing using the Patronus SDK, visit the SDK documentation.

Manual OpenTelemetry Configuration

If you prefer to configure OpenTelemetry directly or are using a language other than Python, you can integrate Patronus with your existing OpenTelemetry setup.

Integration

To integrate your OTel logs and traces with Patronus, configure your system to export data to our collection endpoint. Specify an API key, and optionally provide a project name and an app name. These details are passed in the request headers.

1. Setup OTel SDK

You can install and set up an OTel SDK by following the instructions for your preferred language. Once you have an SDK installed, you can configure the exporter to use the Patronus collector. You will also need to install an OTLP exporter.

For Python:

pip install opentelemetry-api
pip install opentelemetry-sdk
pip install opentelemetry-exporter-otlp

2. Configure OTLP Exporter

Patronus infrastructure hosts an OTel Collector for OpenTelemetry integration. You can set up your SDK exporter to export data to the Patronus collector.

export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.patronus.ai:4317"
export OTEL_EXPORTER_OTLP_HEADERS='x-api-key=<YOUR_API_KEY>'

Remember to replace <YOUR_API_KEY> with your actual API key.
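
If you prefer not to rely on environment variables, the endpoint and headers can also be passed programmatically when constructing the exporter. The sketch below uses the same endpoint and x-api-key header shown above; the pat-project-name and pat-app header keys are illustrative placeholders for the optional project and app headers, so confirm the exact header names in the Patronus documentation.

import os
 
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
 
# Configure the exporter with the Patronus endpoint and headers directly
# instead of via the OTEL_EXPORTER_OTLP_* environment variables.
exporter = OTLPSpanExporter(
    endpoint="https://otel.patronus.ai:4317",
    headers={
        "x-api-key": os.environ["PATRONUS_API_KEY"],
        # Optional project and app headers; these key names are
        # illustrative placeholders, not confirmed header names.
        "pat-project-name": "my-project",
        "pat-app": "my-app",
    },
)
 
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)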

Once you configure your OTel exporter, your logs and traces will be sent to the Patronus platform.

Python Example with Manual Configuration

Here's how to set up OpenTelemetry and propagate trace context to Patronus using Python:

# trace_with_otel.py
import os
 
import requests
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace.span import format_span_id, format_trace_id
 
# Initialize Tracer Provider
trace_provider = TracerProvider()
trace.set_tracer_provider(trace_provider)
 
# Configure exporter pointing to Patronus collector
trace_processor = BatchSpanProcessor(OTLPSpanExporter())
trace_provider.add_span_processor(trace_processor)
tracer = trace.get_tracer("my.tracer")
 
# Start a span
with tracer.start_as_current_span("my-span") as span:
    # Create headers with trace context
    headers = {
        "X-API-Key": os.environ.get("PATRONUS_API_KEY"),
        "Content-Type": "application/json",
    }
 
    # Make request to Patronus API
    response = requests.post(
        "https://api.patronus.ai/v1/evaluate",
        headers=headers,
        json={
            "evaluators": [{"evaluator": "lynx", "criteria": "patronus:hallucination"}],
            "evaluated_model_input": "What is the car insurance policy?",
            "evaluated_model_retrieved_context": (
                "To qualify for our car insurance policy, you need a way to "
                "show competence in driving which can be accomplished through "
                "a valid driver's license. You must have multiple years of "
                "experience and cannot be graduating from driving school before "
                "or on 2028."
            ),
            "evaluated_model_output": (
                "To even qualify for our car insurance policy, "
                "you need to have a valid driver's license that expires "
                "later than 2028."
            ),
            "trace_id": format_trace_id(span.get_span_context().trace_id),
            "span_id": format_span_id(span.get_span_context().span_id),
        },
    )
    print(f"Evaluation response: {response}")
    response.raise_for_status()

Before running the script, remember to export the required environment variables.

export PATRONUS_API_KEY='<YOUR_API_KEY>'
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.patronus.ai:4317"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=$PATRONUS_API_KEY"
 
python ./trace_with_otel.py
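
The example above exports traces; as noted earlier, the Patronus collector also accepts standard OpenTelemetry logs. Below is a minimal sketch using the OpenTelemetry Python logging SDK, which currently ships its logs modules under experimental (underscore-prefixed) paths, so import paths may change between SDK versions. It assumes the same OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS environment variables configured above.

import logging
 
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
 
# Configure a logger provider that exports log records over OTLP.
# The exporter reads the endpoint and headers from the environment
# variables configured above.
logger_provider = LoggerProvider()
set_logger_provider(logger_provider)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))
 
# Bridge the standard library logging module to OpenTelemetry.
handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
 
logging.getLogger(__name__).info("Hello from my application")
 
# Flush pending log records before the process exits.
logger_provider.shutdown()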
