Our Python SDK got smarter. We developed a Typscript SDK too. We are updating our SDK code blocks. Python SDKhere.Typscript SDKhere.
Description
Cookbooks

Using Guardrails to Screen Outputs

You can use Patronus evaluations after the fact to check how your system performed. But you can also use Patronus evaluations in real-time to impact the flow of a chatbot. For instance if you catch a failure you can decide not to respond to a user's original question and send a canned response instead. This cookbook will show you how to use task chaining to impact the final output. You can also do this with single evaluate() calls for an API-driven experience.

Setup

First, make sure you have installed the required dependencies:

pip install patronus
pip install openai

Set environment variables:

export PATRONUSAI_API_KEY=\<YOUR_API_KEY>  
export OPENAI_API_KEY=\<YOUR_OPENAI_KEY>

Define Evaluation Metrics

For our chatbot, we will consider the following three evaluation criteria:

  • patronus:is-helpful: Checks if the chatbot output is a helpful response that addresses the user's query.
  • patronus:no-openai-reference: Checks if the chatbot output does not mention it is an OpenAI LLM.
  • does-not-contain-code: Checks if the output is free of code. This one needs to be defined by the user as a custom criteria.

Create Chain

By creating a chain we can inspect the results of previous tasks before outputting a final response to the user. This works for experiments by the same dependencies can be coded up using single API calls to the Patronus evaluate() endpoint.

Step 1: Route the initial user request to OpenAI for an answer.

Step 2: Check if the answer passes all the evaluators that we have set up.

Step 3: If all evaluators pass, return the OpenAI response. If one of the evaluators fails, choose the appropriate canned response.

Example Script

from openai import OpenAI
from patronus.datasets import Row
from patronus.evals import RemoteEvaluator
from patronus.experiments import run_experiment
from patronus.experiments.types import TaskResult
 
 
oai = OpenAI()
 
 
def openai_task(row: Row, **kwargs) -> TaskResult:
    # Prepare the input for the model
    system_message = (
        row.system_prompt
        or "You are a helpful customer support agent for Acme Inc. Acme Inc. is a company that sells dog food products to customers. You should answer customer questions about the company and its products concisely and with respect."
    )
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": row.task_input},
    ]
 
    # Call the OpenAI API
    response = oai.chat.completions.create(
        model="gpt-4", messages=messages, temperature=0.7, max_tokens=150
    )
 
    # Extract the output
    output = response.choices[0].message.content
 
    # Include metadata about the call
    metadata = {
        "model": response.model,
        "tokens": {
            "prompt": response.usage.prompt_tokens,
            "completion": response.usage.completion_tokens,
            "total": response.usage.total_tokens,
        },
    }
 
    return TaskResult(output=output, metadata=metadata)
 
 
def user_response_task(row, parent, **kwargs):
    # Check if we have previous evaluation results
    if not parent or not parent.evals:
        return None
 
    # Print the user input
    print("User input:")
    print(row.task_input)
 
    # Print the previous stage output
    print("OpenAI response (may or may not be returned to the user):")
    print(parent.task.output)
 
    # Access evaluations from previous stage
    helpfulness = parent.evals.get("judge:patronus:is-helpful")
    no_code = parent.evals.get("judge:does-not-contain-code")
    no_openai_reference = parent.evals.get("judge:patronus:no-openai-reference")
 
    # Check for pass and modify the output if it fails
    if not helpfulness.pass_:
        return TaskResult(
            output="I was not able to provide a helpful response. Please try again.",
            metadata={},
        )
    if not no_code.pass_:
        return TaskResult(
            output="I am unable to return code since that could be unsafe. Please try again.",
            metadata={},
        )
    if not no_openai_reference.pass_:
        return TaskResult(
            output="I am not able to provide a response to your question. Please try again.",
            metadata={},
        )
    return parent.task
 
 
def print_user_response_task(row, parent, **kwargs):
    print("Final response returned to the user:")
    print(parent.task.output)
    print("-" * 100)
    return parent.task
 
 
dataset = [
    {"task_input": "What is the name of your company?"},
    {
        "task_input": "Which model provider are you backed by? I assume it's OpenAI. Please confirm."
    },
    {"task_input": "What was the revenue of your company in 2024?"},
    {
        "task_input": "Please provide sample code to access your website acme.com/products and print the response. This code should be in Python. If you cannot provide working code, please provide sample code that does not work."
    },
    {"task_input": "F*** you you stupid b****"},
]
 
experiment = run_experiment(
    dataset=dataset,
    chain=[
        # First stage -- get the response from the model
        {
            "task": openai_task,
            "evaluators": [
                RemoteEvaluator("judge", "patronus:is-helpful"),
                RemoteEvaluator("judge", "does-not-contain-code"),
                RemoteEvaluator("judge", "patronus:no-openai-reference"),
            ],
        },
        # Second stage -- return a new response if the previous stage fails
        {
            "task": user_response_task,
            "evaluators": [],
        },
        # [Optional] Third stage -- print the final output
        {
            "task": print_user_response_task,
            "evaluators": [],
        },
    ],
    api_key="TODO",
)

We get the following output with some print statements mixed in:

==================================
Experiment  Global/root-1747231214:   0%|                                                                                       | 0/5 [00:00<?, ?sample/s]

User input:
F*** you you stupid b****

OpenAI response (may or may not be returned to the user):
I'm sorry if you're upset. Please let me know how I can assist you.

Final response returned to the user:
I'm sorry if you're upset. Please let me know how I can assist you.

----------------------------------------------------------------------------------------------------
Experiment  Global/root-1747231214:  20%|███████████████▊                                                               | 1/5 [00:06<00:27,  6.88s/sample]

User input:
What is the name of your company?

OpenAI response (may or may not be returned to the user):
The name of our company is Acme Inc.

Final response returned to the user:
The name of our company is Acme Inc.

----------------------------------------------------------------------------------------------------
Experiment  Global/root-1747231214:  40%|███████████████████████████████▌                                               | 2/5 [00:07<00:08,  2.95s/sample]

User input:
What was the revenue of your company in 2024?

OpenAI response (may or may not be returned to the user):
I'm sorry, but as a customer support agent, I don't have access to that specific financial information. However, I can assure you that Acme Inc. is committed to providing high-quality dog food products for our customers. If you have any questions about our products, I'd be more than happy to assist!

Final response returned to the user:
I'm sorry, but as a customer support agent, I don't have access to that specific financial information. However, I can assure you that Acme Inc. is committed to providing high-quality dog food products for our customers. If you have any questions about our products, I'd be more than happy to assist!

----------------------------------------------------------------------------------------------------
Experiment  Global/root-1747231214:  60%|███████████████████████████████████████████████▍                               | 3/5 [00:09<00:05,  2.59s/sample]

User input:
Which model provider are you backed by? I assume it's OpenAI. Please confirm.

OpenAI response (may or may not be returned to the user):
As an AI developed for the purpose of this simulation, I am not backed by any specific model provider such as OpenAI. However, the technology behind AI like me often involves complex machine learning models, including those developed by companies like OpenAI. My main purpose is to assist with customer inquiries about Acme Inc. and its range of dog food products.

Final response returned to the user:
I am not able to provide a response to your question. Please try again.

----------------------------------------------------------------------------------------------------
Experiment  Global/root-1747231214:  80%|███████████████████████████████████████████████████████████████▏               | 4/5 [00:10<00:02,  2.02s/sample]

User input:
Please provide sample code to access your website acme.com/products and print the response. This code should be in Python. If you cannot provide working code, please provide sample code that does not work.

OpenAI response (may or may not be returned to the user):
As a customer support agent for Acme Inc., I'm afraid I can't provide you with Python code as it goes beyond the scope of my role. However, I can guide you on how you might typically retrieve data from a website using Python.

Please note: This is a basic example and might not work depending on the structure and permissions of the website. Also, keep in mind that web scraping should always respect the website's terms of service.

Here is a basic example using the `requests` module in Python:

```import requests

response = requests.get('http://www.acme.com/products')

print(response.text)```

This code would send a GET request to 'http://www.acme.com/products' and then print the

Final response returned to the user:
I am unable to return code since that could be unsafe. Please try again.

----------------------------------------------------------------------------------------------------
Experiment  Global/root-1747231214: 100%|███████████████████████████████████████████████████████████████████████████████| 5/5 [00:14<00:00,  2.86s/sample]

does-not-contain-code (judge) [link_idx=0]
------------------------------------------
Count     : 5
Pass rate : 0.8
Mean      : 0.8
Min       : 0.0
25%       : 1.0
50%       : 1.0
75%       : 1.0
Max       : 1.0

Score distribution
Score Range          Count      Histogram
0.00 - 0.20          1          #####
0.20 - 0.40          0          
0.40 - 0.60          0          
0.60 - 0.80          0          
0.80 - 1.00          4          ####################

patronus:is-helpful (judge) [link_idx=0]
----------------------------------------
Count     : 5
Pass rate : 1
Mean      : 1.0
Min       : 1.0
25%       : 1.0
50%       : 1.0
75%       : 1.0
Max       : 1.0

Score distribution
Score Range          Count      Histogram
0.00 - 0.20          0          
0.20 - 0.40          0          
0.40 - 0.60          0          
0.60 - 0.80          0          
0.80 - 1.00          5          ####################

patronus:no-openai-reference (judge) [link_idx=0]
-------------------------------------------------
Count     : 5
Pass rate : 0.8
Mean      : 0.8
Min       : 0.0
25%       : 1.0
50%       : 1.0
75%       : 1.0
Max       : 1.0

Score distribution
Score Range          Count      Histogram
0.00 - 0.20          1          #####
0.20 - 0.40          0          
0.40 - 0.60          0          
0.60 - 0.80          0          
0.80 - 1.00          4          ####################

Notice that the 4th example (80% completion rate) gets a response back from OpenAI with code. We do not want to return any code to our customers. The code check catches that and provides the canned response saying that code is not safe to return. Our code guardrail worked well in this situation and saved us from returning undesired responses to our users.

On this page