
Labels and Metadata

Manage prompt versions with labels and attach metadata

Labels provide stable references to specific prompt revisions, while metadata allows you to attach arbitrary configuration and documentation to prompts.

Labels

Labels allow you to create stable references to specific prompt revisions. This is useful for managing prompts across environments like development, staging, and production.

A label is unique within a prompt definition — applying a label to a new revision moves it off the previous revision.

Obtaining a revision_id

Labels operate on a specific prompt revision, identified by its revision_id (a UUID). List the revisions for a prompt definition by name to find the one you need:

from patronus import context
 
client = context.get_api_client()
 
revisions = client.prompts.list_revisions(prompt_name="support/chat/system")
for r in revisions.prompt_revisions:
    print(r.id, r.revision, r.labels)

r.id is the revision_id to pass to the calls below.
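
If you need the id for one specific revision number rather than scanning the printed output, a small helper can do the lookup. This is a sketch, not part of the SDK — `find_revision_id` is a hypothetical name, and it assumes the revision objects returned by `list_revisions` above:

```python
def find_revision_id(prompt_revisions, revision_number):
    # Hypothetical helper: return the id of the revision whose
    # `revision` number matches, or raise if none does.
    for r in prompt_revisions:
        if r.revision == revision_number:
            return r.id
    raise ValueError(f"revision {revision_number} not found")
```

Usage: `revision_id = find_revision_id(revisions.prompt_revisions, 3)`.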

Adding labels

# Apply the "production" label to a specific revision.
client.prompts.set_labels(
    revision_id=revision_id,
    labels=["production"],
)

Updating labels

set_labels is the only call you need: applying the same label to a different revision moves it off the previous one.

# Move "production" from one revision to another.
client.prompts.set_labels(
    revision_id=new_revision_id,
    labels=["production"],
)

Removing labels

client.prompts.remove_labels(
    revision_id=revision_id,
    labels=["production"],
)

Loading by label

from patronus.prompts import load_prompt
 
# Load whichever revision currently carries the "production" label.
prompt = load_prompt(
    name="support/chat/system",
    label="production"
)

Common label patterns

Environment management

client.prompts.set_labels(revision_id=dev_revision_id, labels=["development"])
client.prompts.set_labels(revision_id=staging_revision_id, labels=["staging"])
client.prompts.set_labels(revision_id=prod_revision_id, labels=["production"])

Audience targeting

client.prompts.set_labels(revision_id=technical_revision_id, labels=["technical-audience"])
client.prompts.set_labels(revision_id=general_revision_id, labels=["general-audience"])
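
With audience labels in place, the caller can pick which label to load at runtime. A minimal sketch — the `is_developer` field and the `audience_label` helper are illustrative, not part of the SDK:

```python
def audience_label(user_profile: dict) -> str:
    # Map a (hypothetical) user attribute to one of the labels set above.
    if user_profile.get("is_developer"):
        return "technical-audience"
    return "general-audience"
```

Then load the matching revision with `load_prompt(name="support/chat/system", label=audience_label(profile))`.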

Metadata

Metadata allows you to attach arbitrary key-value pairs to prompts for configuration, documentation, or other purposes.

Adding metadata

from patronus.prompts import Prompt, push_prompt
 
prompt = Prompt(
    name="research/data-analysis/summarize-findings",
    body="Analyze the {data_type} data and summarize key {metric_type} trends in {time_period}.",
    metadata={
        "models": ["gpt-4", "claude-3"],
        "created_by": "data-team",
        "tags": ["data", "analysis"],
        "temperature": 0.7,
        "max_tokens": 1000
    }
)
 
loaded_prompt = push_prompt(prompt)

Accessing metadata

from patronus.prompts import load_prompt
 
prompt = load_prompt(name="research/data-analysis/summarize-findings")
 
# Access specific metadata fields
supported_models = prompt.metadata.get("models", [])
creator = prompt.metadata.get("created_by", "unknown")
temperature = prompt.metadata.get("temperature", 0.5)
 
print(f"Prompt supports models: {', '.join(supported_models)}")
print(f"Created by: {creator}")
print(f"Temperature: {temperature}")

Common metadata patterns

Model configuration

metadata = {
    "temperature": 0.7,
    "max_tokens": 500,
    "top_p": 0.9,
    "frequency_penalty": 0.0
}

Documentation

metadata = {
    "created_by": "ai-team",
    "created_at": "2024-01-15",
    "purpose": "Customer support responses",
    "version": "2.0"
}

Categorization

metadata = {
    "tags": ["support", "troubleshooting", "technical"],
    "category": "customer-service",
    "priority": "high"
}

Model compatibility

metadata = {
    "compatible_models": ["gpt-4", "gpt-4-turbo", "claude-3-opus"],
    "recommended_model": "gpt-4-turbo",
    "min_model_version": "gpt-3.5-turbo"
}
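
Metadata like this can back a simple validation step before calling a model. A sketch, assuming the `compatible_models` / `recommended_model` keys from the example above (`check_model` is a hypothetical helper, not an SDK call):

```python
def check_model(metadata: dict, requested_model: str) -> str:
    # Return the requested model if the metadata lists it as compatible,
    # otherwise fall back to the recommended model (or the request as-is
    # when no recommendation is set).
    compatible = metadata.get("compatible_models", [])
    if requested_model in compatible:
        return requested_model
    return metadata.get("recommended_model", requested_model)
```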

Best practices

Use labels for environments

Create consistent labels across all prompts for environment management:

# In your deployment pipeline
if environment == "production":
    prompt = load_prompt(name=prompt_name, label="production")
elif environment == "staging":
    prompt = load_prompt(name=prompt_name, label="staging")
else:
    prompt = load_prompt(name=prompt_name, label="development")
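
The same branching can be table-driven, which keeps the environment-to-label mapping in one place. A sketch that mirrors the chain above, with unknown environments falling back to `"development"`:

```python
ENV_LABELS = {
    "production": "production",
    "staging": "staging",
}

def label_for(environment: str) -> str:
    # Unknown environments fall back to the development label.
    return ENV_LABELS.get(environment, "development")
```

Usage: `prompt = load_prompt(name=prompt_name, label=label_for(environment))`.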

Store configuration in metadata

Keep model parameters and settings in metadata rather than hardcoding them:

prompt = load_prompt(name="support/chat/system")
 
# Use metadata for model configuration. Note: `client` here is an LLM
# client (e.g. the OpenAI SDK), not the Patronus API client used above.
response = client.chat.completions.create(
    model=prompt.metadata.get("model", "gpt-4"),
    temperature=prompt.metadata.get("temperature", 0.7),
    max_tokens=prompt.metadata.get("max_tokens", 500),
    messages=[...]
)

Document ownership and purpose

Use metadata to track who created prompts and why:

metadata = {
    "created_by": "ai-team",
    "reviewed_by": "product-team",
    "purpose": "Handle complex technical support queries",
    "last_updated": "2024-01-15"
}