This endpoint allows you to create a dataset with a given name from a provided file.

The dataset can contain the following attributes:

  • evaluated_model_system_prompt - text
  • evaluated_model_retrieved_context - text array
  • evaluated_model_input - text
  • evaluated_model_output - text
  • evaluated_model_gold_answer - text
  • meta_evaluated_model_name - text
  • meta_evaluated_model_provider - text
  • meta_evaluated_model_selected_model - text
  • meta_evaluated_model_params - map (string -> text | number)

All attributes are optional; however, at least one of evaluated_model_input or evaluated_model_output
should be provided.

If you start an Evaluation Run with Model Integration,
evaluated_model_output is not required:
the Evaluation Run will call an LLM to generate the output before evaluation.

Whether other fields are required depends on the evaluations you plan to perform.
Some evaluators require retrieved context; for those, evaluated_model_retrieved_context must be provided.
For exact field requirements, see the evaluators' documentation.

File Format

The uploaded file must be in CSV or JSONL format.

  • A CSV file should contain a header row with the attribute names defined above.
    The fields evaluated_model_retrieved_context and meta_evaluated_model_params must be JSON-encoded.
    The CSV should use commas as separators.

  • A JSONL file should contain one JSON-encoded object per line, with keys set to the attributes
    defined above.
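As a minimal sketch of both formats, the following writes a small JSONL dataset and an equivalent one-row CSV. The file names and field values are illustrative placeholders; note how evaluated_model_retrieved_context is a JSON array in JSONL, and a JSON-encoded string (with CSV-escaped quotes) in the CSV cell:

```shell
# A two-sample dataset in JSONL format: one JSON object per line.
cat > dataset.jsonl <<'EOF'
{"evaluated_model_input": "What is the capital of France?", "evaluated_model_output": "Paris", "evaluated_model_gold_answer": "Paris"}
{"evaluated_model_input": "Summarize the context.", "evaluated_model_retrieved_context": ["Doc 1 text", "Doc 2 text"], "evaluated_model_output": "A short summary."}
EOF

# The same idea in CSV: header row, commas as separators, and the
# retrieved-context array JSON-encoded inside a quoted cell.
cat > dataset.csv <<'EOF'
evaluated_model_input,evaluated_model_output,evaluated_model_retrieved_context
"What is the capital of France?","Paris","[""France is a country in Europe."",""Its capital is Paris.""]"
EOF
```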

Limits

The file size cannot be larger than 2 MiB. The file cannot contain more than 1000 samples.
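A simple pre-flight check against these limits can save a round trip to the API. This is a sketch only; the function name and sample file below are hypothetical, and it assumes one sample per line (i.e. a JSONL file, or a CSV with no embedded newlines):

```shell
# Reject a file locally if it exceeds the documented limits:
# 2 MiB in size, or more than 1000 samples (lines).
check_dataset() {
  size=$(wc -c < "$1")
  lines=$(wc -l < "$1")
  [ "$size" -le $((2 * 1024 * 1024)) ] || { echo "too large: $size bytes" >&2; return 1; }
  [ "$lines" -le 1000 ] || { echo "too many samples: $lines" >&2; return 1; }
  echo "OK: $size bytes, $lines samples"
}

# Demonstrate on a tiny one-sample file.
printf '%s\n' '{"evaluated_model_input": "Hi"}' > sample.jsonl
check_dataset sample.jsonl
```

For a CSV file, remember the header row also counts as a line, so subtract one before comparing against the sample limit.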

Example request

curl -X POST https://api.patronus.ai/v1/datasets \
     -H "x-api-key: <your_api_key>" \
     -F "file=@<path_to_your_file>" \
     -F "dataset_name=<name for created dataset>"