GLIDER is a 3.8B-parameter custom evaluator model trained by Patronus AI. GLIDER can score any text input and associated context on arbitrary, user-defined criteria.

  • It shows a higher Pearson correlation than GPT-4o on FLASK.
  • It outperforms prior judge models, achieving performance comparable to LLMs 17× its size.
  • It supports fine-grained scoring, multilingual reasoning, and span highlighting.
GLIDER is capable of outputting high-quality reasoning chains, scores, and explainable highlight spans.


We train and align a Phi-3.5-mini-instruct model on synthetic data spanning 183 different research and industrial evaluation metrics across 685 relevant application domains to show that Grading LLM Interactions and Decisions using Explainable Ranking (GLIDER) can improve evaluation performance. GLIDER can evaluate arbitrary inputs and produce scores on 0-1, 1-3, and 1-5 Likert scales, along with high-quality reasoning chains and text highlight spans for improved analysis of failures.
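As a quick illustration, the sketch below shows one way GLIDER could be prompted through the Hugging Face `transformers` pipeline. The model identifier and the rubric-style prompt wording here are assumptions for illustration only; the exact prompt template is documented in the model card and paper.

```python
# Minimal sketch of calling GLIDER as a judge via transformers.
# NOTE: the model id and prompt wording below are illustrative assumptions,
# not the official template; see the model card for the exact format.
from transformers import pipeline

judge = pipeline(
    "text-generation",
    model="PatronusAI/glider",  # assumed model identifier
    device_map="auto",
)

prompt = """Analyze the pass criteria below and score the text.

<INPUT>What is the boiling point of water at sea level?</INPUT>
<OUTPUT>Water boils at 100 degrees Celsius at sea level.</OUTPUT>

Pass criteria: The response must answer the question accurately and concisely.
Rubric: 0 = fails the criteria, 1 = fully satisfies the criteria.

Return your reasoning, highlighted spans, and a final score."""

result = judge(prompt, max_new_tokens=512, return_full_text=False)
print(result[0]["generated_text"])  # reasoning chain, highlight spans, score
```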

Training

We use a mixture of synthetic datasets and openly available datasets to train the model.

We created a detailed taxonomy of evaluation metrics and their definitions covering 685 unique domains, ranging from finance, medicine, and technology to more creative domains such as art, fashion, and film. To ensure that the model does not overfit to a single evaluation field such as user input or model output, we diversify our dataset by arbitrarily associating random tag names with inputs, outputs, contexts, and gold answers. We create pointwise data points and then prompt the model to output both a correct and an incorrect score, each with its own reasoning, for the generated instance. This pairwise data is used in the RLAIF alignment training phase, where we lower the probabilities of the rejected samples and increase the probabilities of the chosen samples.
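To make the pairwise construction concrete, here is a minimal schematic sketch, in our own notation rather than the actual Patronus AI pipeline, of how one pointwise instance with a correct and an incorrect judgment could be packaged as a chosen/rejected pair for preference training. All field names are illustrative assumptions.

```python
# Illustrative only: packaging one synthetic pointwise instance into a
# chosen/rejected preference pair (field names are assumptions, not the real schema).
def build_preference_pair(instance: dict) -> dict:
    """Turn one generated evaluation instance into a chosen/rejected pair."""
    prompt = (
        f"Pass criteria: {instance['criteria']}\n"
        f"Rubric: {instance['rubric']}\n"
        f"<{instance['input_tag']}>{instance['input']}</{instance['input_tag']}>\n"
        f"<{instance['output_tag']}>{instance['output']}</{instance['output_tag']}>"
    )
    return {
        "prompt": prompt,
        # Correct score + reasoning: probability is increased during alignment.
        "chosen": f"{instance['correct_reasoning']}\nScore: {instance['correct_score']}",
        # Incorrect score + reasoning: probability is lowered during alignment.
        "rejected": f"{instance['incorrect_reasoning']}\nScore: {instance['incorrect_score']}",
    }
```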

We choose Phi-3.5-mini-instruct as our base model and perform supervised fine-tuning (SFT) for one epoch. We then align the model with the APO-zero loss, since our synthetic data contains noise and APO has been shown to be more robust in such settings. In addition to this preference-optimization loss, we add a standard cross-entropy term, ensuring that the model continues to capture data nuances during the alignment phase.
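As a rough sketch in our own notation (the exact weighting is not stated here), the combined alignment objective pairs the standard APO-zero preference loss with a cross-entropy term on the chosen responses, where $h_w$ and $h_l$ are the policy-to-reference log-probability ratios of the chosen and rejected responses, $\beta$ is the preference-scaling hyperparameter, and $\lambda$ is an assumed weighting coefficient:

$$
\mathcal{L} \;=\; \underbrace{\bigl[1 - \sigma(\beta h_w)\bigr] + \sigma(\beta h_l)}_{\text{APO-zero}} \;+\; \lambda\,\mathcal{L}_{\text{CE}},
\qquad h_{w/l} \;=\; \log \frac{\pi_\theta(y_{w/l}\mid x)}{\pi_{\mathrm{ref}}(y_{w/l}\mid x)}
$$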

For more details on the data generation and training, refer to our paper: https://arxiv.org/abs/2412.14140

Results

GLIDER achieves state-of-the-art performance on the FLASK benchmark, beating GPT-4o, while performing close to models 17× its size on the Feedback Collection dataset.

Pearson correlation for various models on ranking tasks against human ratings

| Model | BigGen Bench | FLASK | Feedback Bench | Summeval (Relevance) | Summeval (Consistency) | Summeval (Coherence) | Summeval (Fluency) | Average |
|---|---|---|---|---|---|---|---|---|
| GPT-4o | 0.614 | 0.610 | 0.810 | 0.312 | 0.550 | 0.419 | 0.522 | 0.548 |
| GPT-4o-mini | 0.231 | 0.565 | 0.803 | 0.431 | 0.425 | 0.423 | 0.283 | 0.452 |
| Claude-3.5-Sonnet | 0.592 | 0.592 | 0.812 | 0.464 | 0.620 | 0.497 | 0.496 | 0.582 |
| Llama-3.1-70B | 0.580 | 0.572 | 0.792 | 0.391 | 0.497 | 0.527 | 0.391 | 0.536 |
| Qwen-2.5-72B | 0.560 | 0.581 | 0.791 | 0.457 | 0.443 | 0.431 | 0.534 | 0.542 |
| Phi-3.5-mini-instruct | 0.294 | 0.331 | 0.731 | 0.245 | 0.166 | 0.261 | 0.266 | 0.328 |
| Prometheus-2-8x7B | 0.524 | 0.555 | 0.898 | 0.287 | 0.320 | 0.328 | 0.293 | 0.458 |
| Prometheus-2-7B | 0.392 | 0.545 | 0.882 | 0.216 | 0.188 | 0.236 | 0.134 | 0.370 |
| FlowAI Judge 3.8B | 0.460 | 0.400 | 0.787 | 0.286 | 0.358 | 0.351 | 0.309 | 0.422 |
| GLIDER 3.8B (w/o highlights) | 0.490 | 0.570 | 0.759 | 0.367 | 0.418 | 0.433 | 0.321 | 0.480 |
| GLIDER 3.8B | 0.604 ±0.005 | 0.615 ±0.01 | 0.774 ±0.01 | 0.398 ±0.02 | 0.522 ±0.01 | 0.462 ±0.01 | 0.365 ±0.03 | 0.534 |

Table 1: Bolded text indicates best overall and underline indicates best open-source judge model.


Performance (F1 score) comparison of models on pairwise ranking datasets.

| Model | Live Bench (IF) | HH Eval (Harm) | HH Eval (Help) | HH Eval (Hon) | MT Bench | Reward Bench (Chat) | Reward Bench (Chat-Hard) | Reward Bench (Safe) | Reward Bench (Reason) | Reward Bench (Average) | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4o | 0.661 | 0.983 | 0.898 | 0.831 | 0.813 | 0.950 | 0.697 | 0.861 | 0.893 | 0.850 | 0.843 |
| GPT-4o-mini | 0.481 | 0.948 | 0.863 | 0.812 | 0.786 | 0.943 | 0.566 | 0.802 | 0.859 | 0.793 | 0.784 |
| Claude-3.5-Sonnet | 0.632 | 0.944 | 0.915 | 0.868 | 0.807 | 0.618 | 0.827 | 0.898 | 0.821 | 0.849 | 0.814 |
| Llama-3.1-70B | 0.651 | 0.913 | 0.898 | 0.898 | 0.802 | 0.577 | 0.800 | 0.877 | 0.802 | 0.826 | 0.802 |
| Qwen-2.5-72B | 0.485 | 0.965 | 0.915 | 0.847 | 0.798 | 0.949 | 0.612 | 0.839 | 0.888 | 0.822 | 0.810 |
| Phi-3.5-mini-instruct | 0.344 | 0.775 | 0.745 | 0.672 | 0.223 | 0.844 | 0.451 | 0.717 | 0.759 | 0.693 | 0.614 |
| Prometheus-2-8x7B | - | 0.966 | 0.848 | 0.820 | 0.551 | 0.930 | 0.471 | 0.835 | 0.774 | 0.753 | - |
| Prometheus-2-7B | - | 0.793 | 0.728 | 0.771 | 0.504 | 0.855 | 0.491 | 0.771 | 0.765 | 0.720 | - |
| FlowAI Judge 3.8B | 0.592 | 0.896 | 0.779 | 0.734 | 0.549 | 0.895 | 0.572 | 0.786 | 0.657 | 0.728 | 0.719 |
| GLIDER 3.8B (w/o highlights) | 0.542 | 0.946 | 0.829 | 0.783 | 0.577 | 0.835 | 0.577 | 0.797 | 0.904 | 0.778 | 0.754 |
| GLIDER 3.8B | 0.654 ±0.04 | 0.946 ±0.003 | 0.830 ±0.005 | 0.778 ±0.002 | 0.628 ±0.06 | 0.876 ±0.005 | 0.575 ±0.002 | 0.797 ±0.01 | 0.888 ±0.01 | 0.784 ±0.006 | 0.776 |

Table 2: Bolded text indicates best overall and underline indicates best open-source judge model. Prometheus models do not support the binary rating required for LiveBench.