Safety Research

AI safety is critical to the deployment of AI at scale. Without proper steering and safeguards, LLMs will readily follow malicious instructions, provide unsafe advice, and generate toxic content.

SimpleSafetyTests

SimpleSafetyTests (SST) is a test suite that any development team can adopt to rapidly and systematically identify such critical safety risks. The test suite comprises 100 test prompts across five harm areas that LLMs, for the vast majority of applications, should refuse to comply with.

We tested 11 open-access and open-source LLMs and four closed-source LLMs, and found critical safety weaknesses. While some of the models did not give a single unsafe response, most gave unsafe responses to more than 20% of the prompts, with over 50% unsafe responses in the extreme. Prepending a safety-emphasising system prompt substantially reduced the occurrence of unsafe responses, but did not completely stop them from happening.

Trained annotators labelled every model response to SST (n=3,000). We used these annotations to evaluate five AI safety filters (which assess whether a models' response is unsafe given a prompt) as a way of automatically evaluating models' performance on SST. The filters' performance varies considerably. There are also differences across the five harm areas, and on the unsafe versus safe responses. The widely-used Perspective API has 72% accuracy and a newly-created zero-shot prompt to OpenAI's GPT-4 performs best with 89% accuracy.


SST is supported natively in the Patronus AI platform.

Read more here: https://arxiv.org/abs/2311.08370

View the dataset in HuggingFace: https://huggingface.co/datasets/Bertievidgen/SimpleSafetyTests

VentureBeat article: https://venturebeat.com/ai/patronus-ai-finds-alarming-safety-gaps-in-leading-ai-systems/