Learn how to quickly get started evaluating agents exposed via HTTP endpoints using the Maxim SDK
Get started with evaluating AI agents that are accessible via HTTP endpoints using the Maxim SDK. This guide walks you through setting up the SDK and running your first agent evaluation test run.
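Before running the example below, install the SDK and initialize a client. The following is a minimal setup sketch: it assumes the Python package is published as `maxim-py` and that your API key is available through the `MAXIM_API_KEY` environment variable; check the SDK reference for explicit-configuration options if your version differs.

```python
# Install the SDK first (package name assumed): pip install maxim-py
from maxim import Maxim

# Assumes MAXIM_API_KEY is set in your environment; create a key
# from your Maxim workspace settings if you don't have one yet.
maxim = Maxim()
```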
Here’s how to create and run a basic agent evaluation test run using a workflow stored on the Maxim platform:
```python
# Create a test run using a workflow stored on the Maxim platform
result = (
    maxim.create_test_run(
        name="Basic Agent HTTP Evaluation",
        in_workspace_id="your-workspace-id",
    )
    .with_data_structure({"input": "INPUT", "expected_output": "EXPECTED_OUTPUT"})
    .with_data("your-dataset-id")
    .with_evaluators("Bias")
    .with_workflow_id("your-workflow-id")  # Your agent workflow ID on the Maxim platform
    .run()
)

print(f"Test run completed! View results: {result.test_run_result.link}")
```
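The fluent builder accepts more than the minimal configuration shown above. The variant below is an illustrative sketch only: the extra `context` column, its `CONTEXT_TO_EVALUATE` type, and the additional `Toxicity` evaluator are assumptions about your dataset and workspace configuration; substitute the columns and evaluators you have actually set up.

```python
# A variant sketch reusing the `maxim` client initialized earlier.
# The "context" column, the CONTEXT_TO_EVALUATE type, and the extra
# evaluator name are assumptions; adapt them to your workspace.
result = (
    maxim.create_test_run(
        name="Agent HTTP Evaluation with Context",
        in_workspace_id="your-workspace-id",
    )
    .with_data_structure({
        "input": "INPUT",
        "expected_output": "EXPECTED_OUTPUT",
        "context": "CONTEXT_TO_EVALUATE",  # assumed column type
    })
    .with_data("your-dataset-id")
    .with_evaluators("Bias", "Toxicity")  # multiple evaluators
    .with_workflow_id("your-workflow-id")
    .run()
)

print(f"View results: {result.test_run_result.link}")
```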