Trigger simulation test run

POST /v1/test-runs/simulations

curl --request POST \
  --url https://api.getmaxim.ai/v1/test-runs/simulations \
  --header 'Content-Type: application/json' \
  --header 'x-maxim-api-key: <api-key>' \
  --data '
{
  "workspaceId": "<string>",
  "entityId": "<string>",
  "datasetId": "<string>",
  "simulationType": "PROMPT_SIMULATION",
  "simulationConfig": {
    "type": "MAXIM",
    "persona": "<string>",
    "maxTurns": 2,
    "tools": [
      "<string>"
    ],
    "context": {
      "type": "CONTEXT_SOURCE",
      "payload": "<string>"
    }
  },
  "datasetSplitId": "<string>",
  "evaluators": [
    {
      "id": "<string>",
      "variableMapping": {}
    }
  ],
  "contextToEvaluate": [
    {
      "type": "DATASOURCE",
      "payload": "<string>"
    }
  ]
}
'

Example response:

{
  "data": {
    "id": "<string>",
    "workspaceId": "<string>",
    "status": "<string>",
    "entityType": "<string>",
    "createdAt": "<string>",
    "entityId": "<string>"
  }
}

Authorizations

x-maxim-api-key (string, header, required)
API key for authentication.

Body

application/json

Payload for triggering a simulation test run.

workspaceId (string, required)
Workspace ID where the simulation will run.

entityId (string, required)
ID of the entity to simulate (a prompt version ID or workflow ID).

datasetId (string, required)
Dataset ID to use for the simulation.

simulationType (enum<string>, required)
Available options: PROMPT_SIMULATION
simulationConfig (object, required)
Configuration for the prompt simulation.
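
The shape of simulationConfig matches the request example above. A minimal Python sketch, in which the persona text, tool ID, and context payload are illustrative placeholders rather than values defined on this page:

# Sketch of a MAXIM-type simulationConfig; all concrete values below are
# illustrative placeholders, not values prescribed by this API reference.
simulation_config = {
    "type": "MAXIM",
    "persona": "Impatient customer chasing a delayed order",  # assumed example persona
    "maxTurns": 2,                                            # number of simulated turns
    "tools": ["<tool-id>"],                                   # tool identifiers, as strings
    "context": {
        "type": "CONTEXT_SOURCE",
        "payload": "<context-source-id>",                     # assumed to reference a context source
    },
}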

datasetSplitId (string)
Optional dataset split ID.

evaluators (object[])
Evaluators to run on the simulation results, each with an optional variable mapping.

contextToEvaluate (object[])
Context sources to include in the evaluation.
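
Putting the body fields together, a minimal end-to-end sketch in Python, assuming the requests library is installed, the API key is available in the MAXIM_API_KEY environment variable, and every "<...>" value is a placeholder to be replaced with real IDs from your workspace:

import os

import requests

payload = {
    "workspaceId": "<workspace-id>",
    "entityId": "<prompt-version-or-workflow-id>",
    "datasetId": "<dataset-id>",
    "simulationType": "PROMPT_SIMULATION",
    "simulationConfig": {
        "type": "MAXIM",
        "persona": "<persona description>",
        "maxTurns": 2,
        "tools": ["<tool-id>"],
        "context": {"type": "CONTEXT_SOURCE", "payload": "<context-source-id>"},
    },
    # Optional fields documented above.
    "datasetSplitId": "<dataset-split-id>",
    "evaluators": [{"id": "<evaluator-id>", "variableMapping": {}}],
    "contextToEvaluate": [{"type": "DATASOURCE", "payload": "<datasource-id>"}],
}

resp = requests.post(
    "https://api.getmaxim.ai/v1/test-runs/simulations",
    headers={"x-maxim-api-key": os.environ["MAXIM_API_KEY"]},
    json=payload,  # requests sets Content-Type: application/json automatically
)
resp.raise_for_status()
print(resp.json())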

Response

Simulation test run triggered successfully.

data (object, required)
Response from triggering a simulation.
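
The data object can be unpacked directly from the JSON body. A small sketch using the field names from the response example above (the helper name and placeholder values are ours, not part of the API):

# Summarize the trigger response; field names come from the response schema above.
def summarize_test_run(response_body: dict) -> str:
    data = response_body["data"]
    return (
        f"Test run {data['id']} for entity {data['entityId']} "
        f"({data['entityType']}) is {data['status']}, created at {data['createdAt']}"
    )

# Example with placeholder values in the documented shape:
example = {
    "data": {
        "id": "<run-id>",
        "workspaceId": "<workspace-id>",
        "status": "<status>",          # status values are not enumerated on this page
        "entityType": "<entity-type>",
        "createdAt": "<timestamp>",
        "entityId": "<entity-id>",
    }
}
print(summarize_test_run(example))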