Get trace by ID
Get a specific trace by ID
GET /v1/log-repositories/logs/traces
curl --request GET \
--url https://api.getmaxim.ai/v1/log-repositories/logs/traces \
--header 'x-maxim-api-key: <api-key>'
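The same request can be issued from Python. A minimal sketch using the `requests` library; note that the reference above does not name the query parameter, so `traceId` here is an assumption, not a confirmed parameter name:

```python
import requests

BASE_URL = "https://api.getmaxim.ai/v1"

def get_trace(api_key: str, trace_id: str) -> dict:
    """Fetch a single trace from the log-repository logs endpoint.

    NOTE: the query-parameter name "traceId" is assumed; the reference
    only describes it as "unique identifier for the trace".
    """
    resp = requests.get(
        f"{BASE_URL}/log-repositories/logs/traces",
        headers={"x-maxim-api-key": api_key},  # authentication header from the curl example
        params={"traceId": trace_id},
        timeout=30,
    )
    resp.raise_for_status()  # surface 4xx/5xx errors instead of parsing an error body
    return resp.json()["data"]
```

A timeout and `raise_for_status()` are included so failures surface as exceptions rather than silently returning an error payload.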
{
  "data": {
    "id": "<string>",
    "name": "<string>",
    "tags": {},
    "startTimestamp": "<string>",
    "sessionId": "<string>",
    "type": "<string>",
    "input": "<any>",
    "output": "<any>",
    "feedback": {
      "score": 123,
      "comment": "<string>"
    },
    "duration": 123,
    "endTimestamp": "<string>",
    "version": 123,
    "timeline": [
      {
        "traceId": "<string>",
        "name": "<string>",
        "id": "<string>",
        "type": "<string>",
        "startTimestamp": "<string>",
        "tags": {},
        "timeline": [
          {
            "spanId": "<string>",
            "modelParameters": {
              "presencePenalty": 123,
              "maxTokens": 123,
              "temperature": 123,
              "frequencyPenalty": 123,
              "topP": 123
            },
            "provider": "<string>",
            "name": "<string>",
            "messages": [
              {
                "role": "<string>",
                "content": "<string>"
              }
            ],
            "model": "<string>",
            "id": "<string>",
            "type": "<string>",
            "startTimestamp": "<string>",
            "completionResult": {
              "cost": {
                "output": 123,
                "input": 123,
                "total": 123
              },
              "provider": "<string>",
              "created": "<any>",
              "usage": {
                "completion_tokens": 123,
                "prompt_tokens": 123,
                "total_tokens": 123
              },
              "model": "<string>",
              "model_params": "<any>",
              "id": "<string>",
              "choices": [
                {
                  "index": 123,
                  "message": {
                    "role": "<string>",
                    "content": "<string>",
                    "function_call": "<any>",
                    "tool_calls": "<any>"
                  },
                  "finish_reason": "<string>",
                  "logprobs": "<any>"
                }
              ],
              "error": "<any>"
            },
            "duration": 123,
            "endTimestamp": "<string>",
            "timestamp": "<string>",
            "tags": {},
            "input": "<string>",
            "docs": [
              "<string>"
            ]
          }
        ]
      }
    ]
  }
}
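The response nests generations inside spans: the trace's `timeline` holds spans, and each span's own `timeline` holds generation entries carrying `usage` and `cost` under `completionResult`. A hedged client-side sketch that aggregates tokens and cost from a payload shaped like the example above (field names are taken from the schema shown; entries without a `completionResult` are assumed skippable):

```python
def summarize_trace(trace: dict) -> dict:
    """Sum token usage and cost across all generation entries in a trace.

    Walks trace["timeline"] (spans), then each span's nested "timeline"
    (generations), reading the usage/cost fields from the response schema.
    Entries without a completionResult are skipped.
    """
    totals = {"prompt_tokens": 0, "completion_tokens": 0,
              "total_tokens": 0, "cost": 0.0}
    for span in trace.get("timeline", []):
        for gen in span.get("timeline", []):
            result = gen.get("completionResult")
            if not result:
                continue
            usage = result.get("usage", {})
            totals["prompt_tokens"] += usage.get("prompt_tokens", 0)
            totals["completion_tokens"] += usage.get("completion_tokens", 0)
            totals["total_tokens"] += usage.get("total_tokens", 0)
            totals["cost"] += result.get("cost", {}).get("total", 0.0)
    return totals
```

The input to `summarize_trace` is the object under the `data` key of the response body.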
Authorizations
x-maxim-api-key (header, required): API key for authentication.
Query Parameters
Unique identifier for the trace
Response
200 (application/json)
Trace retrieved successfully. The response is of type object.