Evaluate Datasets
Learn how to evaluate your AI outputs against expected results using Maxim’s Dataset evaluation tools
Get started with Dataset evaluation
Have a dataset ready for evaluation? Maxim lets you evaluate your AI’s performance directly, without setting up workflows.
1
Prepare Your Dataset
Include these columns in your dataset (see the sketch after this list):
- Input queries or prompts
- Your AI’s actual outputs
- Expected outputs (ground truth)
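For reference, here is a minimal sketch of such a dataset as a CSV file, written with Python’s standard `csv` module. The column names `input`, `output`, and `expected_output` are illustrative, not required by Maxim; use whatever column names your Dataset already defines.

```python
import csv

# Illustrative rows; the column names below are assumptions, not a Maxim requirement.
rows = [
    {
        "input": "What is the capital of France?",           # input query or prompt
        "output": "The capital of France is Paris.",         # your AI's actual output
        "expected_output": "Paris is the capital of France.",  # ground truth
    },
    {
        "input": "Convert 100 Fahrenheit to Celsius.",
        "output": "100 F is about 37.8 C.",
        "expected_output": "37.78 C",
    },
]

# Write the dataset to a CSV file you can upload or sync as a Dataset.
with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "output", "expected_output"])
    writer.writeheader()
    writer.writerows(rows)
```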
2
Configure the Test Run
- On the Dataset page, click the “Test” button in the top-right corner
- Your Dataset should already be pre-selected in the drop-down. Attach the evaluators and context sources (if any) you want to use.
- Click “Trigger Test Run”
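Once triggered, each attached evaluator scores your AI’s actual output against the expected output for every row. The sketch below illustrates that idea with a simple string-similarity check over the hypothetical `dataset.csv` from step 1; it is not Maxim’s evaluator implementation, and the 0.8 pass threshold is an arbitrary assumption.

```python
import csv
from difflib import SequenceMatcher

def similarity(actual: str, expected: str) -> float:
    """Return a similarity ratio in [0, 1] between actual and expected output."""
    return SequenceMatcher(None, actual.lower().strip(), expected.lower().strip()).ratio()

# Score each row of the dataset, mimicking what a per-row evaluator does.
with open("dataset.csv", newline="") as f:
    for row in csv.DictReader(f):
        score = similarity(row["output"], row["expected_output"])
        verdict = "PASS" if score >= 0.8 else "FAIL"  # 0.8 is an arbitrary threshold
        print(f"{verdict}  score={score:.2f}  input={row['input']!r}")
```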