Quick Start - Log your first eval
Follow these steps to log your first evaluation result within minutes!
1. Create an API Key
If you do not have an account yet, sign up at app.patronus.ai.
To create an API key, click API Keys. Store the key securely, as you will not be able to view it again.
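To avoid hardcoding the key in your scripts, you can export it as an environment variable and read it at runtime. A minimal sketch, assuming you have exported a variable named PATRONUS_API_KEY (the name here is illustrative):

```python
import os

# Read the API key from the environment instead of hardcoding it.
# PATRONUS_API_KEY is an illustrative variable name; use whatever you exported.
PATRONUS_API_KEY = os.environ["PATRONUS_API_KEY"]
```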
2. Log an evaluation
You can log an evaluation through our Evaluation API. An evaluation consists of the following pieces:
- Inputs to your LLM application, e.g. "What is Patronus AI?"
- Outputs of your LLM application, e.g. "Patronus AI is an LLM evaluation and testing platform."
- Evaluation criteria, e.g. hallucination, conciseness, toxicity, and more!
You can log evaluations through the Python SDK or through an API request. If using the Python SDK, first install the library with `pip install patronus`.
To run your first evaluation, run the following code, replacing YOUR_API_KEY with the API key you just created.
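A minimal sketch of what that code might look like with the Python SDK is below. The client and method names (Client, evaluate) and the keyword arguments are assumptions about the SDK interface, so verify them against the SDK reference:

```python
from patronus import Client  # assumed import path; check the SDK docs

client = Client(api_key="YOUR_API_KEY")

# Ask Lynx whether the model output is grounded in the retrieved context.
# The keyword arguments below are assumptions modeled on the Evaluation API;
# adjust them to match the current SDK signature.
result = client.evaluate(
    evaluator="lynx",
    criteria="patronus:hallucination",
    evaluated_model_input="What is the largest animal in the world?",
    evaluated_model_output="The largest animal in the world is the giant sandworm.",
    evaluated_model_retrieved_context="The blue whale is the largest animal in the world.",
)
print(result)
```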
In the above example, we are evaluating whether the output contains a hallucination using Lynx, Patronus AI's hallucination detection model. The evaluation result is automatically logged to the Logs dashboard.
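If you would rather call the Evaluation API directly, a roughly equivalent request is sketched below. The endpoint path, header name, and payload fields mirror the SDK sketch above and are likewise assumptions to verify against the API reference:

```python
import requests

# Endpoint, header, and field names are assumptions mirroring the SDK sketch;
# confirm them against the Patronus Evaluation API reference.
response = requests.post(
    "https://api.patronus.ai/v1/evaluate",
    headers={"X-API-KEY": "YOUR_API_KEY", "Content-Type": "application/json"},
    json={
        "evaluators": [{"evaluator": "lynx", "criteria": "patronus:hallucination"}],
        "evaluated_model_input": "What is the largest animal in the world?",
        "evaluated_model_output": "The largest animal in the world is the giant sandworm.",
        "evaluated_model_retrieved_context": [
            "The blue whale is the largest animal in the world."
        ],
    },
)
print(response.json())
```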
3. View Evaluation Logs in UI
Now head to app.patronus.ai/logs, where you can view the results of your most recent evaluation!
Evaluation results consist of the following fields:
- Result: PASS or FAIL, indicating whether the output passed the evaluation
- Score: A value between 0 and 1 measuring the evaluator's confidence in the result
- Explanation: A natural-language explanation of why the result and score were assigned
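If you want these fields in code rather than in the UI, you can pull them out of the API response from the request above. The response shape here (a results list with pass, score, and explanation keys) is an illustrative guess, not a confirmed schema:

```python
# Illustrative response parsing; match the keys to the actual response shape.
evaluation = response.json()["results"][0]
print(evaluation["pass"])         # PASS/FAIL result
print(evaluation["score"])        # confidence between 0 and 1
print(evaluation["explanation"])  # natural-language rationale
```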
In this case, Lynx scored the evaluation as FAIL because the context states that the largest animal is the blue whale, not the giant sandworm. We just flagged our first hallucination!
Now that you've logged your first evaluation, you can explore additional API fields, define your own evaluator, or run a batched evaluation experiment.