Our Python SDK got smarter, and we have developed a TypeScript SDK as well. We are updating our SDK code blocks. Python SDK: here. TypeScript SDK: here.

Tracing & Logging With OpenTelemetry

You can use the Patronus OpenTelemetry collector to export logs and traces, enabling features such as annotations and visualizations.

We do not convert your logs or spans, allowing you to send logs in any format. However, logs sent in unsupported formats may not be compatible with some features. Therefore, we strongly recommend adhering to our semantics for evaluation logs.

By design, you can store any standard OpenTelemetry logs and traces.

Using the Patronus SDK for Tracing

The Patronus SDK provides built-in support for OpenTelemetry tracing, making it easy to trace your LLM applications and evaluations with minimal configuration.

Installation

Install the Patronus SDK using your preferred package manager:

# Using pip
pip install patronus
 
# Using uv
uv add patronus

Quick Start Example

Here's a simple example of how to set up tracing with the Patronus SDK:

import os
 
import patronus
from patronus import traced
 
# Initialize the SDK with your API key
patronus.init(
    # This is the default and can be omitted
    api_key=os.environ.get("PATRONUS_API_KEY")
)
 
 
# Use the traced decorator to automatically trace a function execution
@traced()
def generate_response(task_context: str, user_query: str) -> str:
    # Your LLM call or processing logic here
    return (
        "To even qualify for our car insurance policy, "
        "you need to have a valid driver's license that expires "
        "later than 2028."
    )
 
 
@traced()
def retrieve_context(user_query: str) -> str:
    return (
        "To qualify for our car insurance policy, you need a way to "
        "show competence in driving which can be accomplished through "
        "a valid driver's license. You must have multiple years of "
        "experience and cannot be graduating from driving school before "
        "or on 2028."
    )
 
 
# Evaluations with the SDK are automatically traced
from patronus.evals import RemoteEvaluator
 
# Create a Patronus evaluator
hallucination_detector = RemoteEvaluator("lynx", "patronus:hallucination")
 
# Trace specific blocks with the context manager
from patronus.tracing import start_span
 
 
def process_user_query(query):
    with start_span("process_query", attributes={"query_length": len(query)}):
        # Processing logic
        task_context = retrieve_context(query)
        task_output = generate_response(task_context, query)
 
        # Evaluations are automatically traced
        evaluation = hallucination_detector.evaluate(
            task_input=query,
            task_context=task_context,
            task_output=task_output,
        )
 
        return task_output, evaluation
 
 
if __name__ == '__main__':
    query = "What is the car insurance policy?"
    response, evaluation = process_user_query(query)
    print(f"Response: {response}")
    print(f"Evaluation: {evaluation.format()}")

For more detailed information about tracing using the Patronus SDK, visit the SDK documentation.

Manual OpenTelemetry Configuration

If you prefer to configure OpenTelemetry directly or are using a language other than Python, you can integrate Patronus with your existing OpenTelemetry setup.

Integration

To integrate your OTel logs and traces with Patronus, configure your system to export data to our collection endpoint. Specify an API key and, optionally, a project name and an app name; these details are passed in the request headers.
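For example, the standard OTLP environment variables accept comma-separated `key=value` pairs for headers. The project and app header names below are illustrative assumptions, not confirmed values; the API key header is shown in the configuration section that follows.

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.patronus.ai:4317"
# x-api-key is required; the project/app header names below are assumptions
# shown for illustration only — check your Patronus settings for exact names.
export OTEL_EXPORTER_OTLP_HEADERS='x-api-key=<YOUR_API_KEY>,pat-project-name=<YOUR_PROJECT>,pat-app=<YOUR_APP>'
```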

1. Setup OTel SDK

You can install and set up an OTel SDK by following the instructions for your preferred language. Once the SDK is installed, configure its exporter to use the Patronus collector. You will also need to install an OTLP exporter.

For Python:

pip install opentelemetry-api
pip install opentelemetry-sdk
pip install opentelemetry-exporter-otlp

For JavaScript:

npm i @opentelemetry/api
npm i @opentelemetry/sdk-node
npm i @opentelemetry/exporter-trace-otlp-grpc

2. Configure OTLP Exporter

Patronus infrastructure hosts an OTel collector for OpenTelemetry integration. You can set up your SDK exporter to send data to the Patronus collector.

export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.patronus.ai:4317"
export OTEL_EXPORTER_OTLP_HEADERS='x-api-key=<YOUR_API_KEY>'
Remember to replace <YOUR_API_KEY> with your own API key.

Once you configure your OTel exporter, your logs and traces will be sent to the Patronus platform.

Python Example with Manual Configuration

Here's how to set up OpenTelemetry and propagate trace context to Patronus using Python:

# trace_with_otel.py
import os
 
import requests
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace.span import format_span_id, format_trace_id
 
# Initialize Tracer Provider
trace_provider = TracerProvider()
trace.set_tracer_provider(trace_provider)
 
# Configure exporter pointing to Patronus collector
trace_processor = BatchSpanProcessor(OTLPSpanExporter())
trace_provider.add_span_processor(trace_processor)
tracer = trace.get_tracer("my.tracer")
 
# Start a span
with tracer.start_as_current_span("my-span") as span:
    # Create headers with trace context
    headers = {
        "X-API-Key": os.environ.get("PATRONUS_API_KEY"),
        "Content-Type": "application/json",
    }
 
    # Make request to Patronus API
    response = requests.post(
        "https://api.patronus.ai/v1/evaluate",
        headers=headers,
        json={
            "evaluators": [{"evaluator": "lynx", "criteria": "patronus:hallucination"}],
            "evaluated_model_input": "What is the car insurance policy?",
            "evaluated_model_retrieved_context": (
                "To qualify for our car insurance policy, you need a way to "
                "show competence in driving which can be accomplished through "
                "a valid driver's license. You must have multiple years of "
                "experience and cannot be graduating from driving school before "
                "or on 2028."
            ),
            "evaluated_model_output": (
                "To even qualify for our car insurance policy, "
                "you need to have a valid driver's license that expires "
                "later than 2028."
            ),
            "trace_id": format_trace_id(span.get_span_context().trace_id),
            "span_id": format_span_id(span.get_span_context().span_id),
        },
    )
    print(f"Evaluation response: {response}")
    response.raise_for_status()

Before running the script, remember to export the required environment variables.

export PATRONUS_API_KEY='<YOUR_API_KEY>'
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.patronus.ai:4317"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=$PATRONUS_API_KEY"
 
python ./trace_with_otel.py

JavaScript Example with Manual Configuration

Here's how you can propagate traces in JavaScript (Node.js) to Patronus:

  1. Export your Patronus API key:
export PATRONUS_API_KEY='<YOUR_API_KEY>'
  2. Create a tracing.js file in your repository.
// tracing.js
"use strict";

const { NodeSDK } = require("@opentelemetry/sdk-node");
const {
  OTLPTraceExporter,
} = require("@opentelemetry/exporter-trace-otlp-grpc");
const { Metadata } = require("@grpc/grpc-js");
const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");

const traceExporter = new OTLPTraceExporter({
  url: "https://otel.patronus.ai:4317",
  metadata: (() => {
    const metadata = new Metadata();
    metadata.set("x-api-key", process.env.PATRONUS_API_KEY || "");
    return metadata;
  })(),
});

const batchProcessor = new BatchSpanProcessor(traceExporter);

const sdk = new NodeSDK({
  spanProcessors: [batchProcessor],
});

try {
  sdk.start();
  console.log("OpenTelemetry SDK started successfully");
} catch (error) {
  console.error("Failed to start OpenTelemetry SDK:", error);
}

module.exports = { sdk, batchProcessor };
  3. In the file containing your application code, import the tracing.js file.
// index.js
const { sdk, batchProcessor } = require("./tracing");
const { trace, SpanStatusCode } = require("@opentelemetry/api");

async function run_example() {
  const tracer = trace.getTracer("patronus-example");

  await tracer.startActiveSpan("evaluate_lynx", async (span) => {
    try {
      const headers = {
        "X-API-Key": process.env.PATRONUS_API_KEY,
        "Content-Type": "application/json",
      };

      const body = {
        evaluators: [
          { evaluator: "lynx", criteria: "patronus:hallucination" },
        ],
        evaluated_model_input: "What is the car insurance policy?",
        evaluated_model_retrieved_context:
          "To qualify for our car insurance policy, you need a way to show competence in driving which can be accomplished through a valid driver's license. You must have multiple years of experience and cannot be graduating from driving school before or on 2028.",
        evaluated_model_output:
          "To even qualify for our car insurance policy, you need to have a valid driver's license that expires later than 2028.",
        trace_id: span.spanContext().traceId,
        span_id: span.spanContext().spanId,
      };

      const res = await fetch("https://api.patronus.ai/v1/evaluate", {
        method: "POST",
        headers,
        body: JSON.stringify(body),
      });

      console.log(`Evaluation response: ${res.status} ${res.statusText}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);

      span.setStatus({ code: SpanStatusCode.OK });
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
      console.error("Error during evaluation:", err);
    } finally {
      span.end();
    }
  });

  await batchProcessor.forceFlush();
  await sdk.shutdown();
}

run_example();

Before running the script, remember to export environment variables.

export PATRONUS_API_KEY='<YOUR_API_KEY>'
 
node index.js
