Tracing and Logging
Tracing & Logging With OpenTelemetry
You can use the Patronus OpenTelemetry collector to export logs and traces, enabling features such as annotations and visualizations.
We do not convert your logs or spans, so you can send them in any format. However, logs sent in unsupported formats may not be compatible with some features, so we strongly recommend adhering to our semantics for evaluation logs.
By design, you can store any standard OpenTelemetry logs and traces.
The Patronus SDK provides built-in support for OpenTelemetry tracing, making it easy to trace your LLM applications and evaluations with minimal configuration.
Here's a simple example of how to set up tracing with the Patronus SDK:
```python
import os

import patronus
from patronus import traced

# Initialize the SDK with your API key
patronus.init(
    # This is the default and can be omitted
    api_key=os.environ.get("PATRONUS_API_KEY")
)

# Use the traced decorator to automatically trace a function execution
@traced()
def generate_response(task_context: str, user_query: str) -> str:
    # Your LLM call or processing logic here
    return (
        "To even qualify for our car insurance policy, "
        "you need to have a valid driver's license that expires "
        "later than 2028."
    )

@traced()
def retrieve_context(user_query: str) -> str:
    return (
        "To qualify for our car insurance policy, you need a way to "
        "show competence in driving which can be accomplished through "
        "a valid driver's license. You must have multiple years of "
        "experience and cannot be graduating from driving school before "
        "or on 2028."
    )

# Evaluations with the SDK are automatically traced
from patronus.evals import RemoteEvaluator

# Create a Patronus evaluator
hallucination_detector = RemoteEvaluator("lynx", "patronus:hallucination")

# Trace specific blocks with the context manager
from patronus.tracing import start_span

def process_user_query(query):
    with start_span("process_query", attributes={"query_length": len(query)}):
        # Processing logic
        task_context = retrieve_context(query)
        task_output = generate_response(task_context, query)
        # Evaluations are automatically traced
        evaluation = hallucination_detector.evaluate(
            task_input=query,
            task_context=task_context,
            task_output=task_output,
        )
        return task_output, evaluation

if __name__ == '__main__':
    query = "What is the car insurance policy"
    response, evaluation = process_user_query(query)
    print(f"Response: {response}")
    print(f"Evaluation: {evaluation.format()}")
```
For more detailed information about tracing using the Patronus SDK, visit the SDK documentation.
If you prefer to configure OpenTelemetry directly or are using a language other than Python, you can integrate Patronus with your existing OpenTelemetry setup.
To integrate your OTel logs and traces with Patronus, configure your system to export data to our collection endpoint. Specify an API key and, optionally, a project name and an app; these details are passed in the request headers.
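As a sketch, these headers can be supplied through the standard OpenTelemetry exporter environment variables. The collector host and the extra header names below are illustrative placeholders, not confirmed values; check your Patronus account settings for the real ones:

```shell
# OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are
# standard OTel SDK settings. The host and the project/app header
# names are placeholders for this sketch.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<patronus-collector-host>:4317"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your-api-key>,pat-project-name=<project>,pat-app=<app>"
```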
You can install and set up an OTel SDK by following the instructions for your preferred language. Once the SDK is installed, configure the exporter to use the Patronus collector. You will also need to install an OTLP exporter.
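For Python, for example, the OTel SDK and an OTLP exporter can be installed with pip:

```shell
# Installs the OpenTelemetry SDK plus the OTLP exporter packages
pip install opentelemetry-sdk opentelemetry-exporter-otlp
```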
Patronus infrastructure hosts an OTel Collector for OpenTelemetry integration. You can configure your SDK exporter to send data to the Patronus collector.
Here's how to set up OpenTelemetry and propagate trace context to Patronus using Python:
```python
# trace_with_otel.py
import os

import requests
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace.span import format_span_id, format_trace_id

# Initialize Tracer Provider
trace_provider = TracerProvider()
trace.set_tracer_provider(trace_provider)

# Configure exporter pointing to Patronus collector
trace_processor = BatchSpanProcessor(OTLPSpanExporter())
trace_provider.add_span_processor(trace_processor)

tracer = trace.get_tracer("my.tracer")

# Start a span
with tracer.start_as_current_span("my-span") as span:
    # Create headers with trace context
    headers = {
        "X-API-Key": os.environ.get("PATRONUS_API_KEY"),
        "Content-Type": "application/json",
    }

    # Make request to Patronus API
    response = requests.post(
        "https://api.patronus.ai/v1/evaluate",
        headers=headers,
        json={
            "evaluators": [{"evaluator": "lynx", "criteria": "patronus:hallucination"}],
            "evaluated_model_input": "What is the car insurance policy?",
            "evaluated_model_retrieved_context": (
                "To qualify for our car insurance policy, you need a way to "
                "show competence in driving which can be accomplished through "
                "a valid driver's license. You must have multiple years of "
                "experience and cannot be graduating from driving school before "
                "or on 2028."
            ),
            "evaluated_model_output": (
                "To even qualify for our car insurance policy, "
                "you need to have a valid driver's license that expires "
                "later than 2028."
            ),
            "trace_id": format_trace_id(span.get_span_context().trace_id),
            "span_id": format_span_id(span.get_span_context().span_id),
        },
    )
    print(f"Evaluation response: {response}")
    response.raise_for_status()
```
Before running the script, remember to export the required environment variables.
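For example, the script above reads `PATRONUS_API_KEY`, and its filename matches the comment at the top of the listing:

```shell
# PATRONUS_API_KEY is the only variable the script reads from the environment
export PATRONUS_API_KEY="<your-api-key>"
python trace_with_otel.py
```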