Running Lynx Locally with Ollama

  1. Install Ollama: https://ollama.com/download
  2. Download the .gguf version of Lynx-8B-Instruct (this might take 1-2 minutes): https://huggingface.co/PatronusAI/Lynx-8B-Instruct-Q4_K_M-GGUF
  3. Create a file named Modelfile with the following:
FROM "./patronus-lynx-8b-instruct-q4_k_m.gguf"
 PARAMETER stop "<|im_start|>"
 PARAMETER stop "<|im_end|>"
 TEMPLATE """
 <|im_start|>system
 {{ .System }}<|im_end|>
 <|im_start|>user
 {{ .Prompt }}<|im_end|>
 <|im_start|>assistant
 """

Make sure the .gguf path points to your downloaded model.
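If you prefer to script the download, here is a minimal Python sketch using the huggingface_hub package (it assumes the file in the repo is named the same as in the Modelfile above):

# Sketch: fetch the quantized Lynx GGUF into the current directory.
# Assumes `pip install huggingface_hub` and that the repo file is named
# as referenced in the Modelfile above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="PatronusAI/Lynx-8B-Instruct-Q4_K_M-GGUF",
    filename="patronus-lynx-8b-instruct-q4_k_m.gguf",
    local_dir=".",
)
print(path)  # local path to pass to the Modelfile's FROM line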

  4. Run ollama create lynx-8b -f Modelfile
  5. Run ollama run lynx-8b

You can now start chatting with Lynx-8B-Instruct locally!

For best results on hallucination detection, use the following prompt template:

"""Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. 
The ANSWER also must not contradict information provided in the DOCUMENT. 
Output your final verdict by strictly following this format: 
"PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. 
Show your reasoning.  

--
QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION):{question}

--
DOCUMENT:
[{document}]

--
ANSWER:
{answer}

--

Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE":
{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}
"""


To query the model via the Ollama HTTP API:

curl http://localhost:11434/api/generate -d '{
  "model": "lynx-8b",
  "prompt":"What are hallucinations in language models?"
}'

curl http://localhost:11434/api/chat -d '{
  "model": "lynx-8b",
  "messages": [
    {"role": "user", "content": "What are hallucinations in language models?"}
  ]
}'
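If you would rather call the model from Python than curl, a minimal sketch using the ollama Python package (assuming pip install ollama) looks like this; by default the package returns the full, non-streamed reply:

# Sketch: same chat call as the curl example above, via the `ollama` package.
# Assumes `pip install ollama` and Ollama running locally on the default port.
import ollama

response = ollama.chat(
    model="lynx-8b",
    messages=[
        {"role": "user", "content": "What are hallucinations in language models?"}
    ],
)
print(response["message"]["content"])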

Note that these curl requests return streaming responses by default, so you need to buffer the chunks (or set "stream": false in the request body) to get a complete reply:

โฏ curl http://localhost:11434/api/chat -d '{
  "model": "lynx-8b",
  "messages": [
    {"role": "user", "content": "What are hallucinations in language models?"}
  ]
}'

{"model":"lynx-8b","created_at":"2024-07-04T22:06:08.593007Z","message":{"role":"assistant","content":"Hall"},"done":false}
{"model":"lynx-8b","created_at":"2024-07-04T22:06:08.626172Z","message":{"role":"assistant","content":"uc"},"done":false}
{"model":"lynx-8b","created_at":"2024-07-04T22:06:08.659228Z","message":{"role":"assistant","content":"inations"},"done":false}
{"model":"lynx-8b","created_at":"2024-07-04T22:06:08.691982Z","message":{"role":"assistant","content":" in"},"done":false}