Concurrency and Retries
Retries
Patronus Evaluators are automatically retried in case of a failure. If you want to implement retries for tasks or evaluations that may fail due to exceptions, you can use the built-in retry() helper decorator provided by the framework. Note that retry() only supports asynchronous functions. You can also implement your own retry mechanism.
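For example, a task that calls an unreliable upstream service could be wrapped like this. This is a minimal sketch: the import path and the max_attempts parameter name are assumptions, so check the SDK reference for the exact signature.

```python
import random

from patronus import retry, task  # import path is an assumption; adjust to your SDK version


@task
@retry(max_attempts=3)  # max_attempts is an assumed parameter name; retry() only wraps async functions
async def flaky_task(evaluated_model_input: str) -> str:
    # Simulate an unreliable upstream call; any exception raised here
    # triggers the retry logic before the task is finally marked as failed.
    if random.random() < 0.3:
        raise RuntimeError("transient upstream error")
    return f"echo: {evaluated_model_input}"
```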
Enabling Debug Logging
To increase verbosity and capture more detailed, debug-level logs from the Patronus Experimentation Framework, configure Python's standard logging module. Here's an example of how to configure logging:
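A minimal sketch using Python's built-in logging module; the "patronus" logger name is an assumption, so adjust it to the logger you want to inspect.

```python
import logging

# Emit debug-level records with timestamps to stderr.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# Optionally narrow verbosity to the framework's logger only
# ("patronus" is an assumed logger name).
logging.getLogger("patronus").setLevel(logging.DEBUG)
```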
Change Concurrency Settings
You can control how many concurrent calls are made to Patronus through the max_concurrency setting when creating an experiment. The default max_concurrency is 10. See below for an example:
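A minimal sketch of passing max_concurrency when creating an experiment; apart from max_concurrency itself, the entry point, decorators, and field names here are assumptions based on the SDK's experiment API, and the task, evaluator, and dataset are placeholders.

```python
from patronus import Client, task, evaluator  # exact import path is an assumption


@task
async def echo_task(evaluated_model_input: str) -> str:
    # Placeholder task that simply echoes the input.
    return f"echo: {evaluated_model_input}"


@evaluator
def not_empty(evaluated_model_output: str) -> bool:
    # Placeholder evaluator: pass if the output is non-empty.
    return bool(evaluated_model_output.strip())


client = Client()
client.experiment(
    "My Project",
    data=[{"evaluated_model_input": "Hello"}],
    task=echo_task,
    evaluators=[not_empty],
    max_concurrency=4,  # limit concurrent calls to Patronus; the default is 10
)
```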
Logging Individual Evaluation Calls to an Experiment
Although this approach is not recommended, there may be situations where you want to log individual evaluation calls to an experiment. To do so, create a project and then create an experiment within that project; this returns an experiment ID. Pass that experiment ID on your evaluator calls, and those results will be populated into the experiment, allowing you to work with it through the Experiments UI afterwards. A sketch of this flow is shown below.
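This is a rough sketch only: the create_project and create_experiment method names and the experiment_id parameter on the evaluation call are hypothetical placeholders for the corresponding project, experiment, and evaluation APIs, so verify the exact names against the API reference.

```python
from patronus import Client  # exact import path is an assumption

client = Client()

# Create (or look up) a project and an experiment within it.
# These method names are hypothetical placeholders for the corresponding API calls.
project = client.create_project(name="my-project")
experiment = client.create_experiment(project_id=project.id, name="manual-logging-run")

# Pass the experiment ID on each individual evaluation call so the
# result is populated into that experiment and visible in the Experiments UI.
client.evaluate(
    evaluators=["judge"],
    evaluated_model_input="What is the capital of France?",
    evaluated_model_output="Paris",
    experiment_id=experiment.id,  # assumed parameter name
)
```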