Evaluator Reference Guide

Overview

Patronus supports a suite of high-quality evaluators available through the evaluation API and Python SDK. To use any of these evaluators, specify the evaluator name in the "evaluator" field of your API request or SDK call.

All evaluators return a binary PASS/FAIL result. When provided, raw scores are continuous and linearized on a 0-1 scale, where FAIL = 0 and PASS = 1.
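
For reference, a request takes roughly the shape sketched below. The endpoint path, header name, and response keys are assumptions based on common REST conventions rather than the authoritative API reference, so verify them against the API documentation before use.

```python
import os
import requests

# Minimal sketch of an evaluation API request; endpoint path, header name,
# and response shape are assumptions, not the authoritative reference.
response = requests.post(
    "https://api.patronus.ai/v1/evaluate",
    headers={"X-API-KEY": os.environ["PATRONUS_API_KEY"]},
    json={
        "evaluators": [{"evaluator": "toxicity"}],  # any evaluator name from the table below
        "evaluated_model_output": "Thanks for asking! Our support hours are 9am-5pm.",
    },
)
response.raise_for_status()
for result in response.json().get("results", []):  # assumed response key
    # Each result carries the binary PASS/FAIL and, where provided, a 0-1 raw score.
    print(result)
```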

| Evaluator | Definition | Required Fields | Context Window (tokens) | Raw Scores Provided |
| --- | --- | --- | --- | --- |
| phi | Checks for protected health information (PHI), defined broadly as any information about an individual's health status or provision of healthcare. | evaluated_model_output | - | No |
| pii | Checks for personally identifiable information (PII). PII is information that, in conjunction with other data, can identify an individual. | evaluated_model_output | - | No |
| toxicity | Checks output for abusive and hateful messages. | evaluated_model_output | 1024 | Yes |
| hallucination | Checks whether the LLM response is hallucinatory, i.e. the output is not grounded in the provided context. | evaluated_model_input, evaluated_model_output, evaluated_model_retrieved_context | 128k | Yes |
| hallucination-small | Checks whether the LLM response is hallucinatory, i.e. the output is not grounded in the provided context. | evaluated_model_input, evaluated_model_output, evaluated_model_retrieved_context | 128k | Yes |
| lynx | Checks whether the LLM response is hallucinatory, i.e. the output is not grounded in the provided context. Uses Patronus Lynx to power the evaluation. See the research paper here. | evaluated_model_input, evaluated_model_output, evaluated_model_retrieved_context | 8k | Yes |
| lynx-small | Checks whether the LLM response is hallucinatory, i.e. the output is not grounded in the provided context. Uses Patronus Lynx to power the evaluation. See the research paper here. | evaluated_model_input, evaluated_model_output, evaluated_model_retrieved_context | 8k | Yes |
| answer-relevance | Checks whether the answer is on-topic to the input question. Does not measure correctness. | evaluated_model_input, evaluated_model_output | 128k | Yes |
| answer-relevance-small | Checks whether the answer is on-topic to the input question. Does not measure correctness. | evaluated_model_input, evaluated_model_output | 128k | Yes |
| context-relevance | Checks whether the retrieved context is on-topic to the input. | evaluated_model_input, evaluated_model_retrieved_context | 128k | Yes |
| context-relevance-small | Checks whether the retrieved context is on-topic to the input. | evaluated_model_input, evaluated_model_retrieved_context | 128k | Yes |
| context-sufficiency | Checks whether the retrieved context is sufficient to generate an output similar in meaning to the label. The label should be the correct evaluation result. | evaluated_model_input, evaluated_model_retrieved_context, evaluated_model_output, evaluated_model_gold_answer | 128k | Yes |
| context-sufficiency-small | Checks whether the retrieved context is sufficient to generate an output similar in meaning to the label. The label should be the correct evaluation result. | evaluated_model_input, evaluated_model_retrieved_context, evaluated_model_output, evaluated_model_gold_answer | 128k | Yes |
| nlp | Computes common NLP metrics on the output and label fields to measure semantic overlap and similarity. Currently supports bleu and rouge metrics. | evaluated_model_output, evaluated_model_gold_answer | - | Yes |
| judge | Checks against user-defined criteria definitions, such as "MODEL OUTPUT should be free from brackets." LLM-based and uses active learning to improve the criteria definition based on user feedback. | No required fields | 128k | Yes |
| judge-small | Checks against user-defined criteria definitions, such as "MODEL OUTPUT should be free from brackets." LLM-based and uses active learning to improve the criteria definition based on user feedback. | No required fields | 128k | Yes |
| system | Patronus-created evaluation metrics. | evaluated_model_input, evaluated_model_retrieved_context, evaluated_model_output, evaluated_model_gold_answer | 128k | Yes |

Evaluator families group together evaluators that perform the same function. Evaluators in the same family share evaluator profiles and accept the same set of inputs. Note, however, that the performance and cost of evaluators within a family may differ. Each evaluator family is described below.

Glider

Glider is a 3B-parameter evaluator model that can be used to set up any custom evaluation. It performs the evaluation based on pass criteria and score rubrics. To learn more about how to use GLIDER, check out our detailed documentation page.
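
As a rough illustration, the sketch below shows how a GLIDER evaluation might be invoked from the Python SDK. The Client.evaluate interface and the result attributes are assumptions (only the evaluated_model_* field names come from this page), and "demo-glider-criteria" is a hypothetical criteria name whose pass criteria and score rubric would be defined ahead of time.

```python
# Minimal sketch, not authoritative SDK usage: assumes the Python SDK exposes
# Client.evaluate with the evaluated_model_* field names documented here.
from patronus import Client

client = Client()  # assumes PATRONUS_API_KEY is set in the environment

result = client.evaluate(
    evaluator="glider",  # resolves to glider-2024-12-11 (see aliases below)
    criteria="demo-glider-criteria",  # hypothetical criteria holding the pass criteria and score rubric
    evaluated_model_input="Summarize the refund policy in one sentence.",
    evaluated_model_output="Refunds are issued within 30 days of purchase.",
)
print(result.pass_)      # assumed attribute: binary PASS/FAIL
print(result.score_raw)  # assumed attribute: raw score on the 0-1 scale
```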

Required Input Fields

No required fields

Optional Input Fields

  • evaluated_model_input
  • evaluated_model_output
  • evaluated_model_gold_answer
  • evaluated_model_retrieved_context

Aliases

| Alias | Target |
| --- | --- |
| glider | glider-2024-12-11 |

Judge

Judge evaluators perform evaluations based on pass criteria defined in natural language, such as "The MODEL OUTPUT should be free from brackets". Judge evaluators also support active learning, which means that you can improve their performance by annotating historical evaluations with thumbs up or thumbs down. To learn more about Judge Evaluators, visit their documentation page.
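
For illustration, a judge call could look like the following sketch, under the same assumed Client.evaluate interface as in the GLIDER section; "is-free-from-brackets" is a hypothetical criteria name whose natural-language definition would already exist in the platform.

```python
from patronus import Client

client = Client()  # assumes PATRONUS_API_KEY is set in the environment

# Hypothetical criteria name; its definition ("The MODEL OUTPUT should be
# free from brackets") is assumed to be registered in the platform already.
result = client.evaluate(
    evaluator="judge",  # resolves to judge-large-2024-08-08 (see aliases below)
    criteria="is-free-from-brackets",
    evaluated_model_output="The answer is 42.",
)
print(result.pass_)  # assumed attribute for the binary PASS/FAIL result
```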

Required Input Fields

No required fields

Optional Input Fields

  • evaluated_model_input
  • evaluated_model_output
  • evaluated_model_gold_answer
  • evaluated_model_retrieved_context

Aliases

| Alias | Target |
| --- | --- |
| judge | judge-large-2024-08-08 |
| judge-large | judge-large-2024-08-08 |
| judge-small | judge-small-2024-08-08 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| judge-large-2024-08-08 | The most sophisticated evaluator in the family, using advanced reasoning to achieve high correctness. |
| judge-small-2024-08-08 | The fastest and cheapest evaluator in the family. |

PHI (Protected Health Information)

Checks for protected health information (PHI), defined broadly as any information about an individual's health status or provision of healthcare.

Required Input Fields

  • evaluated_model_output

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| phi | phi-2024-05-31 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| phi-2024-05-31 | PHI detection in model outputs |

NLP

Computes common NLP metrics on the output and label fields to measure semantic overlap and similarity. Currently supports the bleu and rouge metrics.
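
A minimal sketch of an nlp call, again assuming the Client.evaluate interface used in the earlier sketches; only the output and gold answer fields are required.

```python
from patronus import Client

client = Client()  # assumes PATRONUS_API_KEY is set in the environment

result = client.evaluate(
    evaluator="nlp",
    evaluated_model_output="The Eiffel Tower is in Paris.",
    evaluated_model_gold_answer="The Eiffel Tower is located in Paris, France.",
)
print(result.score_raw)  # assumed attribute: overlap/similarity score on the 0-1 scale
```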

Required Input Fields

  • evaluated_model_output
  • evaluated_model_gold_answer

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| nlp | metrics-2024-05-16 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| nlp-2024-05-16 | Computes NLP metrics like bleu and rouge |

Exact Match

Checks that your model output is the same as the provided gold answer. Useful for checking boolean or multiple-choice model outputs.

Required Input Fields

  • evaluated_model_output
  • evaluated_model_gold_answer

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| exact-match | exact-match-2024-05-31 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| exact-match-2024-05-31 | Checks that model output and gold answer are the same |

PII (Personally Identifiable Information)

Checks for personally identifiable information (PII). PII is information that, in conjunction with other data, can identify an individual.

Required Input Fields

  • evaluated_model_output

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| pii | pii-2024-05-31 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| pii-2024-05-31 | PII detection in model outputs |

Answer Relevance

Checks whether the model output is on-topic to the input question. Does not measure correctness.

Required Input Fields

  • evaluated_model_input
  • evaluated_model_output

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| answer-relevance | answer-relevance-large-2024-07-23 |
| answer-relevance-large | answer-relevance-large-2024-07-23 |
| answer-relevance-small | answer-relevance-small-2024-07-23 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| answer-relevance-large-2024-07-23 | The most sophisticated evaluator in the family, using advanced reasoning to achieve high correctness. |
| answer-relevance-small-2024-07-23 | The fastest and cheapest evaluator in the family. |

Context Sufficiency

Checks whether the retrieved context is sufficient to generate an output similar in meaning to the label. The label should be the correct evaluation result.

Required Input Fields

  • evaluated_model_input
  • evaluated_model_gold_answer
  • evaluated_model_retrieved_context

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| context-sufficiency | context-sufficiency-large-2024-07-23 |
| context-sufficiency-large | context-sufficiency-large-2024-07-23 |
| context-sufficiency-small | context-sufficiency-small-2024-07-23 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| context-sufficiency-large-2024-07-23 | The most sophisticated evaluator in the family, using advanced reasoning to achieve high correctness. |
| context-sufficiency-small-2024-07-23 | The fastest and cheapest evaluator in the family. |

Hallucination

Checks whether the LLM response is hallucinatory, i.e. the output is not grounded in the provided context.
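
In a typical RAG pipeline, a hallucination check could look like the sketch below, assuming the same Client.evaluate interface as the earlier sketches; passing the retrieved context as a list of text chunks is also an assumption.

```python
from patronus import Client

client = Client()  # assumes PATRONUS_API_KEY is set in the environment

result = client.evaluate(
    evaluator="hallucination",  # resolves to hallucination-large-2024-07-23 (see aliases below)
    evaluated_model_input="When was the company founded?",
    evaluated_model_output="The company was founded in 1999.",
    # Assumed: retrieved context is supplied as a list of text chunks.
    evaluated_model_retrieved_context=[
        "The company was founded in 1999 in Austin, Texas.",
    ],
)
print(result.pass_)  # FAIL would indicate the output is not grounded in the context
```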

Required Input Fields

  • evaluated_model_input
  • evaluated_model_output
  • evaluated_model_retrieved_context

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| hallucination | hallucination-large-2024-07-23 |
| hallucination-large | hallucination-large-2024-07-23 |
| hallucination-small | hallucination-small-2024-07-23 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| hallucination-large-2024-07-23 | The most sophisticated evaluator in the family, using advanced reasoning to achieve high correctness. |
| hallucination-small-2024-07-23 | The fastest and cheapest evaluator in the family. |

Lynx

Checks whether the LLM response is hallucinatory, i.e. the output is not grounded in the provided context. Uses Patronus Lynx to power the evaluation. See the research paper here.

Required Input Fields

  • evaluated_model_input
  • evaluated_model_output
  • evaluated_model_retrieved_context

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| lynx | lynx-large-2024-07-23 |
| lynx-large | lynx-large-2024-07-23 |
| lynx-small | lynx-small-2024-07-23 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| lynx-large-2024-07-23 | The most sophisticated evaluator in the family, using a large, 70B-parameter model to achieve advanced reasoning and high correctness. |
| lynx-small-2024-07-23 | The cheapest evaluator in the family, using an 8B-parameter model to generate reliable and quick evaluations. |

Context Relevance

Checks whether the retrieved context is on-topic or relevant to the input question.

Required Input Fields

  • evaluated_model_input
  • evaluated_model_retrieved_context

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| context-relevance | context-relevance-large-2024-07-23 |
| context-relevance-large | context-relevance-large-2024-07-23 |
| context-relevance-small | context-relevance-small-2024-07-23 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| context-relevance-large-2024-07-23 | The most sophisticated evaluator in the family, using advanced reasoning to achieve high correctness. |
| context-relevance-small-2024-07-23 | The fastest and cheapest evaluator in the family. |

Toxicity

Checks output for abusive and hateful messages.

Required Input Fields

  • evaluated_model_output

Optional Input Fields

None

Aliases

| Alias | Target |
| --- | --- |
| toxicity | toxicity-2024-05-16 |

Evaluators

| Evaluator ID | Description |
| --- | --- |
| toxicity-2024-05-16 | Detects toxicity in model outputs |