
Explanations

What are explanations?

Explanations are natural language justifications that describe why an evaluator gave a particular score or pass/fail result. They provide human-readable reasoning that helps you understand evaluation outcomes.

Most Patronus evaluators generate explanations by default, giving you insight into the decision-making process behind each evaluation result.

Configure explanation strategy

The explain_strategy parameter controls when explanations are generated. This is useful for optimizing performance in high-volume or latency-sensitive scenarios.

Available options:

  • always: Generate explanations for all evaluation results (most informative but slower)
  • on-fail: Only generate explanations for failed evaluations (balances insight with performance)
  • on-success: Only generate explanations for passed evaluations (less common)
  • never: No explanations are generated (fastest, for high-volume scenarios)
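The gating behavior described above can be sketched as a small pure-Python helper. This is a hypothetical illustration of the logic, not part of the Patronus SDK:

```python
def should_explain(strategy: str, passed: bool) -> bool:
    """Return True if an explanation would be generated for a result.

    strategy: one of "always", "on-fail", "on-success", "never"
    passed:   the evaluation's pass/fail outcome
    """
    if strategy == "always":
        return True
    if strategy == "on-fail":
        return not passed
    if strategy == "on-success":
        return passed
    if strategy == "never":
        return False
    raise ValueError(f"unknown explain_strategy: {strategy!r}")
```

For example, with "on-fail" a failed evaluation produces an explanation while a passed one does not, which is how the strategy trades insight for speed.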

Example

Here's how to set the explain strategy when calling an evaluator:

import os
import patronus
from patronus.evals import RemoteEvaluator
 
patronus.init(
    api_key=os.environ.get("PATRONUS_API_KEY")
)
 
patronus_evaluator = RemoteEvaluator(
    "lynx",
    "patronus:hallucination",
    explain_strategy="on-fail"  # Only generate explanations for failures
)
 
result = patronus_evaluator.evaluate(
    task_input="What is the largest animal in the world?",
    task_context="The blue whale is the largest known animal.",
    task_output="The giant sandworm."
)

When to use each strategy

Use always when:

  • You need to understand every evaluation decision
  • You're debugging or analyzing model behavior
  • Transparency is critical (e.g., compliance, auditing)

Use on-fail when:

  • You want to focus on understanding failures
  • You need to balance insight with performance
  • You're monitoring production traffic

Use on-success when:

  • You want to understand why things pass
  • You're analyzing edge cases that unexpectedly pass

Use never when:

  • You're running high-volume batch evaluations
  • Latency is critical (real-time guardrails)
  • You only need pass/fail signals without reasoning
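One way to apply this guidance is to map each deployment scenario to a strategy once and reuse it when constructing evaluators. The scenario names and helper below are hypothetical, chosen only to mirror the lists above:

```python
# Hypothetical mapping from deployment scenario to explain_strategy,
# following the guidance above.
STRATEGY_BY_SCENARIO = {
    "debugging": "always",
    "compliance_audit": "always",
    "production_monitoring": "on-fail",
    "batch_evaluation": "never",
    "realtime_guardrail": "never",
}


def pick_strategy(scenario: str) -> str:
    # Default to the most informative option when the scenario is unknown.
    return STRATEGY_BY_SCENARIO.get(scenario, "always")
```

The chosen value can then be passed as explain_strategy when creating a RemoteEvaluator, as in the example above.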

Important notes

  • Not all evaluators support explanations
  • For real-time monitoring with strict latency requirements, use explain_strategy="never" or explain_strategy="on-fail" to reduce response times
  • See optimizing evaluation performance for additional performance tips
