Loading and Rendering Prompts

Retrieve and render prompts with variables and template engines

Use load_prompt to retrieve prompts from the Patronus platform and render them with dynamic variables.

Loading prompts

Basic loading

import patronus
from patronus.prompts import load_prompt
 
patronus.init()
 
# Load the latest version
prompt = load_prompt(name="content/writing/blog-instructions")
rendered = prompt.render()
print(rendered)

Async loading

from patronus.prompts import aload_prompt
 
prompt = await aload_prompt(name="content/writing/blog-instructions")
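
Note that the bare await above only runs in an async REPL or notebook. As a standalone script, a minimal sketch might look like this (assuming patronus.init() plays the same role as in the synchronous example):

import asyncio

import patronus
from patronus.prompts import aload_prompt

async def main():
    patronus.init()
    prompt = await aload_prompt(name="content/writing/blog-instructions")
    print(prompt.render())

asyncio.run(main())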

Loading specific versions

By revision number

# Load a specific revision
prompt = load_prompt(name="content/blog/technical-explainer", revision=3)

By label

# Load by label (e.g., production environment)
prompt = load_prompt(name="legal/contracts/privacy-policy", label="production")

Rendering prompts

Render prompts with variables using keyword arguments:

prompt = load_prompt(name="support/troubleshooting/diagnostic")
 
rendered = prompt.render(
    user_query="How do I optimize database performance?",
    expertise_level="intermediate",
    product_version="v2.3"
)
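
For illustration, if the stored prompt body were the hypothetical f-string template shown below, render substitutes each keyword argument into the matching placeholder:

# Hypothetical template body:
#   "Assist a user ({expertise_level}, {product_version}): {user_query}"
# The render call above would then produce:
#   "Assist a user (intermediate, v2.3): How do I optimize database performance?"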

Template engines

Patronus supports multiple template engines for rendering prompts. Select one per prompt with with_engine, or configure a default at initialization (see below).

F-string (default)

# F-string templating uses Python's {variable} placeholder syntax
rendered = prompt.with_engine("f-string").render(**kwargs)

Mustache

# Mustache templating uses {{variable}} syntax
rendered = prompt.with_engine("mustache").render(**kwargs)

Jinja2

# Jinja2 templating uses {{variable}} syntax plus conditionals, loops, and filters
rendered = prompt.with_engine("jinja2").render(**kwargs)

Setting default engine

Configure the default template engine during initialization:

import patronus
 
patronus.init(
    prompt_templating_engine="mustache"
)
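
With mustache configured as the default, plain render() calls interpret {{variable}} placeholders; with_engine should still take precedence for an individual prompt (a sketch reusing the hypothetical greeting prompt from above):

prompt = load_prompt(name="examples/greeting")  # hypothetical prompt name

# Uses the mustache default configured in init()
rendered = prompt.render(user_name="Alex", product_name="CloudWorks")

# Overrides the default for this prompt only
rendered = prompt.with_engine("jinja2").render(user_name="Alex", product_name="CloudWorks")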

Using multiple prompts

Complex applications often combine multiple prompts:

import patronus
from patronus.prompts import load_prompt
import openai
 
patronus.init()
 
# Load different prompt components
system_prompt = load_prompt(name="support/chat/system")
user_query_template = load_prompt(name="support/chat/user-message")
response_formatter = load_prompt(name="support/chat/response-format")
 
# Create OpenAI client
client = openai.OpenAI()
 
# Combine prompts in a chat completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt.render(
            product_name="CloudWorks Pro",
            available_features=["file sharing", "collaboration", "automation"],
            knowledge_cutoff="2024-05-01"
        )},
        {"role": "user", "content": user_query_template.render(
            user_name="Alex",
            user_tier="premium",
            user_query="How do I share files with external users?"
        )}
    ],
    temperature=0.7,
    max_tokens=500
)
 
# Post-process the response
formatted_response = response_formatter.render(
    raw_response=response.choices[0].message.content,
    user_name="Alex",
    add_examples=True
)

Using with LLM providers

OpenAI

import openai
from patronus.prompts import load_prompt
 
system_prompt = load_prompt(name="support/chat/system")
 
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt.render(
            product_name="CloudWorks",
            user_tier="enterprise"
        )},
        {"role": "user", "content": "How do I configure SSO?"}
    ]
)

Anthropic

import anthropic
from patronus.prompts import load_prompt
 
system_prompt = load_prompt(name="support/knowledge-base/technical-assistance")
 
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-opus-20240229",
    system=system_prompt.render(
        product_name="CloudWorks Pro",
        user_tier="enterprise",
        available_features=["advanced monitoring", "auto-scaling"]
    ),
    messages=[
        {"role": "user", "content": "How do I configure the load balancer?"}
    ]
)
