
Labels and Metadata

Manage prompt versions with labels and attach metadata

Labels provide stable references to specific prompt revisions, while metadata allows you to attach arbitrary configuration and documentation to prompts.

Labels

Labels allow you to create stable references to specific prompt revisions. This is useful for managing prompts across environments like development, staging, and production.

Adding labels

from patronus import context
 
client = context.get_api_client().prompts
 
# Add a label to a specific revision
client.add_label(
    prompt_id="prompt_123",
    revision=3,
    label="production"
)

Updating labels

Adding a label that already exists moves it to point at the new revision:

# Label points to revision 3
client.add_label(
    prompt_id="prompt_123",
    revision=3,
    label="production"
)
 
# Update label to point to revision 5
client.add_label(
    prompt_id="prompt_123",
    revision=5,
    label="production"
)

Loading by label

from patronus.prompts import load_prompt
 
# Load the production version
prompt = load_prompt(
    name="support/chat/system",
    label="production"
)

Common label patterns

Environment management

# Development environment
client.add_label(prompt_id="prompt_123", revision=5, label="development")
 
# Staging environment
client.add_label(prompt_id="prompt_123", revision=4, label="staging")
 
# Production environment
client.add_label(prompt_id="prompt_123", revision=3, label="production")

Audience targeting

# Technical audience variant
client.add_label(prompt_id="prompt_456", revision=2, label="technical-audience")
 
# General audience variant
client.add_label(prompt_id="prompt_456", revision=3, label="general-audience")

Metadata

Metadata allows you to attach arbitrary key-value pairs to prompts for configuration, documentation, or other purposes.

Adding metadata

from patronus.prompts import Prompt, push_prompt
 
prompt = Prompt(
    name="research/data-analysis/summarize-findings",
    body="Analyze the {data_type} data and summarize key {metric_type} trends in {time_period}.",
    metadata={
        "models": ["gpt-4", "claude-3"],
        "created_by": "data-team",
        "tags": ["data", "analysis"],
        "temperature": 0.7,
        "max_tokens": 1000
    }
)
 
loaded_prompt = push_prompt(prompt)
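The body above uses `{placeholder}` fields. The SDK resolves these when a prompt is rendered; for illustration only, plain `str.format` shows how the placeholders fill in:

```python
# The same body string as in the push_prompt example above.
body = "Analyze the {data_type} data and summarize key {metric_type} trends in {time_period}."

# Substituting values for each placeholder (illustrative values).
rendered = body.format(
    data_type="sales",
    metric_type="revenue",
    time_period="Q1 2024",
)

print(rendered)
# Analyze the sales data and summarize key revenue trends in Q1 2024.
```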

Accessing metadata

from patronus.prompts import load_prompt
 
prompt = load_prompt(name="research/data-analysis/summarize-findings")
 
# Access specific metadata fields
supported_models = prompt.metadata.get("models", [])
creator = prompt.metadata.get("created_by", "unknown")
temperature = prompt.metadata.get("temperature", 0.5)
 
print(f"Prompt supports models: {', '.join(supported_models)}")
print(f"Created by: {creator}")
print(f"Temperature: {temperature}")

Common metadata patterns

Model configuration

metadata = {
    "temperature": 0.7,
    "max_tokens": 500,
    "top_p": 0.9,
    "frequency_penalty": 0.0
}

Documentation

metadata = {
    "created_by": "ai-team",
    "created_at": "2024-01-15",
    "purpose": "Customer support responses",
    "version": "2.0"
}

Categorization

metadata = {
    "tags": ["support", "troubleshooting", "technical"],
    "category": "customer-service",
    "priority": "high"
}
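Tags like these make prompts easy to filter client-side once loaded. A hypothetical helper (not part of the SDK) that selects prompt names carrying a given tag:

```python
# Hypothetical client-side index of prompt name -> metadata.
prompts = {
    "support/chat/system": {"tags": ["support", "technical"], "priority": "high"},
    "marketing/email/subject": {"tags": ["marketing"], "priority": "low"},
}

def prompts_with_tag(prompts, tag):
    """Return the names of all prompts whose metadata includes `tag`."""
    return [name for name, meta in prompts.items() if tag in meta.get("tags", [])]

print(prompts_with_tag(prompts, "support"))  # ['support/chat/system']
```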

Model compatibility

metadata = {
    "compatible_models": ["gpt-4", "gpt-4-turbo", "claude-3-opus"],
    "recommended_model": "gpt-4-turbo",
    "min_model_version": "gpt-3.5-turbo"
}
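Compatibility metadata like this can drive model selection at call time. A hypothetical helper (the `select_model` function is not an SDK API) that prefers the recommended model and falls back to the first compatible one available:

```python
metadata = {
    "compatible_models": ["gpt-4", "gpt-4-turbo", "claude-3-opus"],
    "recommended_model": "gpt-4-turbo",
}

def select_model(metadata, available):
    """Pick the recommended model if available, else the first compatible one."""
    recommended = metadata.get("recommended_model")
    if recommended in available:
        return recommended
    for model in metadata.get("compatible_models", []):
        if model in available:
            return model
    raise ValueError("no compatible model available")

# gpt-4-turbo is unavailable here, so the first compatible model wins.
print(select_model(metadata, {"gpt-4", "claude-3-opus"}))  # gpt-4
```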

Best practices

Use labels for environments

Create consistent labels across all prompts for environment management:

# In your deployment pipeline
if environment == "production":
    prompt = load_prompt(name=prompt_name, label="production")
elif environment == "staging":
    prompt = load_prompt(name=prompt_name, label="staging")
else:
    prompt = load_prompt(name=prompt_name, label="development")
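The same branching can be condensed into a lookup table, with unknown environments falling back to the development label. A sketch (the dict and helper are illustrative, not SDK API):

```python
# Map deployment environment -> prompt label; anything else uses "development".
ENV_LABELS = {"production": "production", "staging": "staging"}

def label_for(environment):
    return ENV_LABELS.get(environment, "development")

print(label_for("staging"))  # staging
print(label_for("ci"))       # development
```

The result feeds straight into the loader, e.g. `load_prompt(name=prompt_name, label=label_for(environment))`.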

Store configuration in metadata

Keep model parameters and settings in metadata rather than hardcoding them:

from patronus.prompts import load_prompt
from openai import OpenAI
 
openai_client = OpenAI()
prompt = load_prompt(name="support/chat/system")
 
# Read model configuration from metadata instead of hardcoding it
response = openai_client.chat.completions.create(
    model=prompt.metadata.get("model", "gpt-4"),
    temperature=prompt.metadata.get("temperature", 0.7),
    max_tokens=prompt.metadata.get("max_tokens", 500),
    messages=[...]
)

Document ownership and purpose

Use metadata to track who created prompts and why:

metadata = {
    "created_by": "ai-team",
    "reviewed_by": "product-team",
    "purpose": "Handle complex technical support queries",
    "last_updated": "2024-01-15"
}