Prompts

Concepts

Understanding prompt management in Patronus

What is prompt management?

Prompt management in Patronus helps you version, store, and deploy prompts as centralized assets instead of hardcoding them in your application. Think of it as version control for prompts: you can track changes, manage different versions across environments, and update prompts without redeploying your code.

This approach separates prompt content from application logic, making it easier to iterate on prompts and collaborate across teams.

Key concepts

Prompts

A prompt is a named, versioned asset that contains your LLM instructions. Each prompt includes:

  • Name: A hierarchical identifier (e.g., support/chat/system) that helps organize prompts
  • Body: The actual prompt text, optionally with template variables
  • Description: Human-readable documentation explaining the prompt's purpose
  • Metadata: Configuration and settings stored as key-value pairs

Prompts are stored centrally in Patronus and loaded at runtime by your application.
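The four parts of a prompt can be sketched as a simple data structure. This is an illustrative model only, not the actual SDK class; the names `Prompt` and the example field values are assumptions for the sketch:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Prompt:
    """Illustrative model of a prompt asset (not the actual SDK class)."""
    name: str                 # hierarchical identifier, e.g. "support/chat/system"
    body: str                 # the prompt text, optionally with template variables
    description: str = ""     # human-readable documentation of the prompt's purpose
    metadata: dict = field(default_factory=dict)  # configuration as key-value pairs

prompt = Prompt(
    name="support/chat/system",
    body="You are a helpful support agent. Greet {user_name} politely.",
    description="System prompt for the support chat assistant",
    metadata={"model": "gpt-4o", "temperature": 0.2},
)
```

The hierarchical name (`support/chat/system`) doubles as an organizational scheme, so related prompts group naturally by prefix.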

Revisions

Each time you update a prompt's content, a new revision is created. Revisions are:

  • Numbered sequentially: Revision 1, 2, 3, etc.
  • Immutable: Once created, revisions never change
  • Selectable: You can load specific revisions or always use the latest

This revision system gives you a complete history of how your prompts have evolved over time, making it easy to roll back if needed.

Labels

Labels provide stable, human-friendly references to specific prompt revisions. Instead of referencing "revision 47," you can use meaningful labels like "production" or "staging."

Common label patterns:

  • Environment management: development, staging, production
  • Audience targeting: technical-audience, general-audience
  • Feature flags: feature-beta, feature-stable

Labels can be moved to point to different revisions as your prompts evolve, making environment promotion straightforward.
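Conceptually, a label is just a named pointer to a revision number that can be re-pointed. A minimal sketch of environment promotion (the `promote` helper and revision numbers here are hypothetical):

```python
# Labels map a stable, human-friendly name to a revision number.
labels = {"production": 3, "staging": 4}

def promote(labels: dict, from_label: str, to_label: str) -> None:
    """Point to_label at the revision from_label currently references."""
    labels[to_label] = labels[from_label]

# After validating revision 4 in staging, promote it to production:
promote(labels, "staging", "production")
```

Applications that load by label never need a code change when a label moves; they simply pick up the newly referenced revision on the next load.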

Template variables

Prompts support dynamic content through template variables. You can use placeholders in your prompt that get filled in at runtime with specific values.

Example: A prompt template might include {user_name} and {request_type} variables that get replaced with actual values when the prompt is rendered.

Patronus supports multiple template engines like f-string, Mustache, and Jinja2, so you can choose the format that fits your workflow.
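For the f-string style, rendering amounts to substituting named values into `{placeholder}` slots. A minimal sketch using Python's built-in `str.format` (the `render` helper and template text are illustrative, not SDK functions):

```python
template = "Hello {user_name}, I see you need help with a {request_type} issue."

def render(template: str, **variables) -> str:
    """f-string-style rendering: fill {placeholders} with runtime values."""
    return template.format(**variables)

message = render(template, user_name="Ada", request_type="billing")
# One template, many rendered messages: just pass different variables.
```

Mustache (`{{variable}}`) and Jinja2 (`{{ variable }}`, plus loops and conditionals) follow the same idea with richer syntax.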

How prompt management works

The typical workflow involves four steps:

1. Creating prompts

During development, you create prompts and push them to Patronus. This establishes the initial version and makes it available for your application to use.

2. Loading prompts

At runtime, your application loads prompts from Patronus. You can load by name (gets latest), by specific revision number, or by label.

3. Rendering prompts

If your prompt includes template variables, you render it with specific values for each request. This lets one prompt template serve many use cases.

4. Managing versions

As you iterate, you create new revisions and use labels to manage which revisions are active in different environments. This makes it easy to test changes in staging before promoting to production.
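The four steps above can be tied together in one toy sketch. `ManagedPrompts` is a hypothetical in-memory stand-in for the service, showing create, load, and label-based promotion in sequence (not the real SDK):

```python
class ManagedPrompts:
    """Toy sketch of the revise -> stage -> promote loop."""

    def __init__(self):
        self.revisions = []  # immutable bodies; revision N lives at index N-1
        self.labels = {}     # label -> revision number

    def push(self, body: str) -> int:
        """Step 1: create a new revision and return its number."""
        self.revisions.append(body)
        return len(self.revisions)

    def set_label(self, label: str, revision: int) -> None:
        """Step 4: point a label at a revision."""
        self.labels[label] = revision

    def load(self, label: str) -> str:
        """Step 2: load whichever revision the label references."""
        return self.revisions[self.labels[label] - 1]

p = ManagedPrompts()
rev1 = p.push("You are a support agent.")
p.set_label("production", rev1)

# Iterate: push a new revision and test it behind the staging label first.
rev2 = p.push("You are a friendly support agent.")
p.set_label("staging", rev2)

# After validation in staging, promote by moving the production label.
p.set_label("production", p.labels["staging"])
```

Production traffic picks up the new revision the moment the label moves; no application deployment is involved.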

Benefits

Version control

Track all prompt changes over time with a complete audit trail. See what changed, when, and by whom.

Separation of concerns

Update prompts without touching application code. This means faster iteration and fewer deployments.

Environment management

Run different prompt versions across dev, staging, and production. Test changes safely before rolling out.

Centralized storage

Single source of truth for all prompts, shared across teams. No more duplicate or outdated prompts scattered across codebases.
