Tracing & Logging With OpenTelemetry
You can use the Patronus OpenTelemetry collector to export logs and traces, enabling features such as annotations and visualizations.
We do not convert your logs or spans, allowing you to send logs in any format. However, logs sent in unsupported formats may not be compatible with some features. Therefore, we strongly recommend adhering to our semantics for evaluation logs.
By design, you can store any standard OpenTelemetry logs and traces.
Follow our Integration guide to start using tracing.
To integrate your OTel logs and traces with our platform, configure your system to export data to our collection endpoint. An API key is required; optionally, you may also provide a project name and an app. These details are passed in the request headers.
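As a minimal sketch, the OTLP exporter can be pointed at the collection endpoint and given these headers directly when it is constructed. The endpoint value and the header names below are placeholders, not the exact values used by the Patronus collector; check the Integration guide for the real ones.

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Sketch only: the endpoint and header names are placeholders (assumptions),
# not the exact values expected by the Patronus collector.
exporter = OTLPSpanExporter(
    endpoint="<patronus-collection-endpoint>",      # our collection endpoint
    headers=(
        ("x-api-key", "<your-api-key>"),            # required API key (hypothetical header name)
        ("x-project-name", "<your-project-name>"),  # optional project name (hypothetical header name)
        ("x-app", "<your-app>"),                    # optional app (hypothetical header name)
    ),
)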
Install and set up the OTel SDK by following the instructions for your selected language. Once the SDK is installed, configure the exporter to use the Patronus Collector. You will also need to install the OTLP exporter.
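Alternatively, because the examples below construct the exporters with no arguments, you can supply the endpoint and headers through the standard OpenTelemetry environment variables. The variable names are part of the OTel specification; the values shown here are placeholders, and the header name is an assumption.

import os

# Standard OTel exporter configuration via environment variables.
# Set these before constructing the exporters; the values are placeholders.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "<patronus-collection-endpoint>"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-api-key=<your-api-key>"  # header name is an assumption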
The required steps depend on your language selection. This section contains the code responsible for each step; you will need to merge these code blocks to get a fully working example. For ready-to-use examples, see the Examples section.
# Tracing:
with tracer.start_as_current_span("my.span") as span:
    print("Let's trace again!")

# Logging:
from typing import Any
from opentelemetry import trace
from opentelemetry.sdk._logs import LogRecord
from opentelemetry._logs import SeverityNumber
from time import time_ns

# Helper method to emit a log
def log_to_patronus(body: Any):
    # If tracing is also in use, this creates the log in the current span context
    span_context = trace.get_current_span().get_span_context()
    logger.emit(
        record=LogRecord(
            trace_flags=span_context.trace_flags,
            timestamp=time_ns(),
            trace_id=span_context.trace_id,
            span_id=span_context.span_id,
            body=body,
            severity_number=SeverityNumber.INFO,
        )
    )

log_to_patronus("Let's trace this!")
When you use Patronus evaluations (via the SDK or API), we automatically create a Span and a Log for each evaluation. With tracing, you can wrap these within your own trace context.
You must initialize your tracer first.
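The snippet below assumes a tracer already exists. A condensed version of the tracer setup, taken from the full example at the end of this section, looks like this:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Register a tracer provider that exports spans through the OTLP exporter
trace_provider = TracerProvider()
trace_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(trace_provider)
tracer = trace.get_tracer("Demo Tracing")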
from patronus import Client

client = Client()

# Start OpenTelemetry tracing
with tracer.start_as_current_span("span-name") as span:
    # Run evaluation
    result = client.evaluate(
        evaluator="lynx-small",
        criteria="patronus:hallucination",
        evaluated_model_input="What is the largest animal in the world?",
        evaluated_model_output="The giant sandworm.",
        evaluated_model_retrieved_context="The blue whale is the largest known animal.",
    )
The code above will create your trace context and add the Patronus evaluation Span and Log to your trace.
from time import time_ns
from typing import Any

from opentelemetry import trace
from opentelemetry._logs import set_logger_provider, SeverityNumber
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LogRecord, LoggerProvider
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

logger_provider = LoggerProvider()
set_logger_provider(logger_provider)
logs_processor = BatchLogRecordProcessor(OTLPLogExporter(insecure=False))
logger_provider.add_log_record_processor(logs_processor)
logger = logger_provider.get_logger("my.logger")

# Helper method to emit a log
def log_to_patronus(body: Any):
    # If tracing is also in use, this creates the log in the current span context
    span_context = trace.get_current_span().get_span_context()
    logger.emit(
        record=LogRecord(
            trace_flags=span_context.trace_flags,
            timestamp=time_ns(),
            trace_id=span_context.trace_id,
            span_id=span_context.span_id,
            body=body,
            severity_number=SeverityNumber.INFO,
        )
    )

log_to_patronus("Hello universe!")
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from patronus import Client

trace_provider = TracerProvider()
trace_processor = BatchSpanProcessor(OTLPSpanExporter())
trace_provider.add_span_processor(trace_processor)
trace.set_tracer_provider(trace_provider)
tracer = trace.get_tracer("Demo Tracing")

client = Client()

# Start OpenTelemetry tracing
with tracer.start_as_current_span("span-name") as span:
    # Run evaluation within your trace context
    result = client.evaluate(
        evaluator="lynx-small",
        criteria="patronus:hallucination",
        evaluated_model_input="What is the largest animal in the world?",
        evaluated_model_output="The giant sandworm.",
        evaluated_model_retrieved_context="The blue whale is the largest known animal.",
    )