Explanations
What are explanations?
Explanations are natural language justifications that describe why an evaluator gave a particular score or pass/fail result. They provide human-readable reasoning that helps you understand evaluation outcomes.
Most Patronus evaluators generate explanations by default, giving you insight into the decision-making process behind each evaluation result.
Configure explanation strategy
The explain_strategy parameter controls when explanations are generated. This is useful for optimizing performance in high-volume or latency-sensitive scenarios.
Available options:
- always: Generate explanations for all evaluation results (most informative but slower)
- on-fail: Only generate explanations for failed evaluations (balances insight with performance)
- on-success: Only generate explanations for passed evaluations (less common)
- never: No explanations are generated (fastest, for high-volume scenarios)
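The semantics of the four options can be sketched as a small helper that decides, for a given strategy and pass/fail outcome, whether an explanation would be generated (illustrative only; this is not the SDK's internal implementation):

```python
def should_explain(strategy: str, passed: bool) -> bool:
    """Decide whether an explanation is generated for a result,
    following the explain_strategy semantics described above."""
    if strategy == "always":
        return True
    if strategy == "on-fail":
        return not passed
    if strategy == "on-success":
        return passed
    if strategy == "never":
        return False
    raise ValueError(f"unknown explain_strategy: {strategy}")

print(should_explain("on-fail", passed=False))  # True: a failed result gets an explanation
print(should_explain("on-fail", passed=True))   # False: a passing result skips it
```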
Example
Here's how to set the explain strategy when calling an evaluator:
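A minimal sketch of passing the parameter in an evaluation request. The request is shown as the JSON-style body a client would send; field names other than explain_strategy, and the evaluator/criteria values, are assumed for illustration and should be checked against the Patronus API reference:

```python
import json

# Build an evaluation request that only asks for explanations on failures.
# Field names besides "explain_strategy" are assumed for illustration.
request = {
    "evaluators": [{"evaluator": "judge", "criteria": "patronus:is-concise"}],
    "evaluated_model_input": "Summarize the report.",
    "evaluated_model_output": "The report covers Q3 results.",
    "explain_strategy": "on-fail",  # one of: always, on-fail, on-success, never
}

print(json.dumps(request, indent=2))
```

Swapping "on-fail" for "never" in the same request disables explanations entirely, which is the lowest-latency configuration.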
When to use each strategy
Use always when:
- You need to understand every evaluation decision
- You're debugging or analyzing model behavior
- Transparency is critical (e.g., compliance, auditing)
Use on-fail when:
- You want to focus on understanding failures
- You need to balance insight with performance
- You're monitoring production traffic
Use on-success when:
- You want to understand why things pass (less common)
- You're analyzing edge cases that unexpectedly pass
Use never when:
- You're running high-volume batch evaluations
- Latency is critical (e.g., real-time guardrails)
- You only need pass/fail signals without reasoning
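The guidance above can be condensed into a simple selection helper. The context flags (debugging, latency_critical, production) are illustrative names, not SDK concepts:

```python
def pick_explain_strategy(debugging: bool, latency_critical: bool, production: bool) -> str:
    """Map a deployment context to an explain_strategy value,
    following the guidance above. Flags are illustrative."""
    # Debugging, analysis, or auditing: explain every decision.
    if debugging:
        return "always"
    # Real-time guardrails or high-volume batches: skip explanations.
    if latency_critical:
        return "never"
    # Production monitoring: focus explanations on failures.
    if production:
        return "on-fail"
    # Default to the most informative option.
    return "always"

print(pick_explain_strategy(debugging=False, latency_critical=False, production=True))  # on-fail
```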
Important notes
- Not all evaluators support explanations
- For real-time monitoring with strict latency requirements, use explain_strategy="never" or explain_strategy="on-fail" to reduce response times
- See optimizing evaluation performance for additional performance tips
