Maxim APIs: client utilities for interacting with Maxim services.
requests.Session
- The HTTP session object

Name | Description |
---|---|
base_url | The base URL for the Maxim API |
api_key | The API key for authentication |
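The attribute tables above imply a small client object holding the base URL, the API key, and an HTTP session. A minimal sketch of that state, assuming a constructor that takes `base_url` and `api_key`; the auth header name is an assumption, and a plain dict stands in for `requests.Session` so the sketch has no third-party dependency:

```python
class MaximAPI:
    """Sketch of the client state described in the tables above."""

    def __init__(self, base_url: str, api_key: str) -> None:
        # Normalize so later path joins never produce double slashes.
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key
        # The real client keeps a requests.Session here; a header dict
        # stands in for it in this dependency-free sketch.
        self.headers = {"x-maxim-api-key": api_key}  # header name assumed

client = MaximAPI("https://app.getmaxim.ai/", "sk-test")
print(client.base_url)  # https://app.getmaxim.ai
```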
Name | Description |
---|---|
id | The prompt ID |
Name | Description |
---|---|
[VersionAndRulesWithPromptId](/sdk/python/references/models/prompt) | The prompt details |
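A usage sketch for the prompt lookup described above. The fake client and the `get_prompt` name are illustrative assumptions; the point is how a caller might handle the documented `Exception` on failure:

```python
class FakeMaximAPI:
    """Stand-in mimicking the documented behavior: prompt details on
    success, Exception when the request fails."""

    _prompts = {"p-123": {"promptId": "p-123", "versions": []}}

    def get_prompt(self, id):
        if id not in self._prompts:
            raise Exception("request failed")
        return self._prompts[id]

def fetch_prompt_or_none(api, prompt_id):
    # Convert the documented Exception into an Optional for callers.
    try:
        return api.get_prompt(prompt_id)
    except Exception:
        return None

api = FakeMaximAPI()
print(fetch_prompt_or_none(api, "p-123")["promptId"])  # p-123
print(fetch_prompt_or_none(api, "p-999"))              # None
```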
Exception
- If the request fails

Name | Description |
---|---|
List[[VersionAndRulesWithPromptId](/sdk/python/references/models/prompt)] | List of all prompts |
Exception
- If the request fails

Name | Description |
---|---|
id | The prompt chain ID |
Name | Description |
---|---|
[VersionAndRulesWithPromptChainId](/sdk/python/references/models/prompt_chain) | The prompt chain details |
Exception
- If the request fails

Name | Description |
---|---|
List[[VersionAndRulesWithPromptChainId](/sdk/python/references/models/prompt_chain)] | List of all prompt chains |
Exception
- If the request fails

Name | Description |
---|---|
model | The model to use |
messages | List of chat messages |
tools | Optional list of tools to use |
**kwargs | Additional parameters to pass to the API |
Name | Description |
---|---|
[PromptResponse](/sdk/python/references/models/prompt) | The response from the model |
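The parameter table above maps naturally onto a request payload. A sketch of how `model`, `messages`, `tools`, and `**kwargs` might be assembled; the payload key names are assumptions, not confirmed by this reference:

```python
def build_chat_payload(model, messages, tools=None, **kwargs):
    # **kwargs carries extra API parameters (e.g. temperature) through as-is.
    payload = {"model": model, "messages": messages, **kwargs}
    if tools is not None:  # tools are optional per the table above
        payload["tools"] = tools
    return payload

payload = build_chat_payload(
    "gpt-4o-mini",
    [{"role": "user", "content": "Summarize this ticket."}],
    temperature=0.2,
)
print(sorted(payload))  # ['messages', 'model', 'temperature']
```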
Exception
- If the request fails

Name | Description |
---|---|
prompt_version_id | The ID of the prompt version to run |
input | The input text for the prompt |
image_urls | Optional list of image URLs to include |
variables | Optional dictionary of variables to use |
Name | Description |
---|---|
Optional[[PromptResponse](/sdk/python/references/models/prompt)] | The response from the prompt |
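A sketch of how the optional `image_urls` and `variables` parameters above might be folded into a request body, including them only when supplied; the body key names are illustrative assumptions:

```python
def build_run_prompt_body(prompt_version_id, input, image_urls=None, variables=None):
    # Only include the optional fields when the caller provides them.
    body = {"promptVersionId": prompt_version_id, "input": input}
    if image_urls is not None:
        body["imageUrls"] = image_urls
    if variables is not None:
        body["variables"] = variables
    return body

body = build_run_prompt_body(
    "pv-42",
    "Describe the attached image.",
    image_urls=["https://example.com/cat.png"],
)
print("variables" in body)  # False
```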
Exception
- If the request fails

Name | Description |
---|---|
prompt_chain_version_id | The ID of the prompt chain version to run |
input | The input text for the prompt chain |
variables | Optional dictionary of variables to use |
Name | Description |
---|---|
Optional[[AgentResponse](/sdk/python/references/models/prompt_chain)] | The response from the prompt chain |
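Because the return type above is `Optional`, callers should treat `None` as "no response" rather than an error. A sketch with a fake client (the method name and response shape are assumptions):

```python
class FakeChainAPI:
    def run_prompt_chain_version(self, chain_version_id, input, variables=None):
        # Mimics the Optional return: None when the chain produced nothing.
        if chain_version_id == "pcv-empty":
            return None
        return {"output": f"echo: {input}", "variables": variables or {}}

api = FakeChainAPI()
resp = api.run_prompt_chain_version("pcv-1", "hello", variables={"tone": "formal"})
print(resp["output"])                                       # echo: hello
print(api.run_prompt_chain_version("pcv-empty", "hello"))   # None
```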
Exception
- If the request fails

Name | Description |
---|---|
id | The folder ID |
Name | Description |
---|---|
[Folder](/sdk/python/references/models/folder) | The folder details |
Exception
- If the request fails

Name | Description |
---|---|
List[[Folder](/sdk/python/references/models/folder)] | List of all folders |
Exception
- If the request fails

Name | Description |
---|---|
dataset_id | The ID of the dataset |
dataset_entries | List of dataset entries to add |
Exception
- If the request fails

Name | Description |
---|---|
dataset_id | The ID of the dataset |
Name | Description |
---|---|
int | The total number of rows |
Exception
- If the request fails

Name | Description |
---|---|
dataset_id | The ID of the dataset |
row_index | The index of the row to retrieve |
Name | Description |
---|---|
Optional[[DatasetRow](/sdk/python/references/models/dataset)] | The dataset row, or None if not found |
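The row-count and row-lookup methods above combine into a simple iteration pattern. A sketch with a fake client; a 0-based `row_index` is an assumption, and the `None` check mirrors the `Optional` return:

```python
class FakeDatasetAPI:
    _rows = {"ds-1": [{"input": "a"}, {"input": "b"}, {"input": "c"}]}

    def get_dataset_total_rows(self, dataset_id):
        return len(self._rows.get(dataset_id, []))

    def get_dataset_row(self, dataset_id, row_index):
        rows = self._rows.get(dataset_id, [])
        # Optional return: None for a missing row instead of raising.
        return rows[row_index] if 0 <= row_index < len(rows) else None

def iter_dataset_rows(api, dataset_id):
    for i in range(api.get_dataset_total_rows(dataset_id)):
        row = api.get_dataset_row(dataset_id, i)
        if row is not None:  # row may be missing; skip rather than fail
            yield row

api = FakeDatasetAPI()
print([row["input"] for row in iter_dataset_rows(api, "ds-1")])  # ['a', 'b', 'c']
```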
Exception
- If the request fails

Name | Description |
---|---|
dataset_id | The ID of the dataset |
Exception
- If the request fails

Name | Description |
---|---|
logger_id | The ID of the logger |
Name | Description |
---|---|
bool | True if the repository exists, False otherwise |
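The existence check above pairs naturally with the log-push method that follows: verifying the repository first avoids the failure path entirely. A sketch with a fake client (method names mirror the tables; the fake's internals are assumptions):

```python
class FakeLogAPI:
    _repositories = {"log-repo-1"}

    def does_log_repository_exist(self, logger_id):
        return logger_id in self._repositories

    def push_logs(self, repository_id, logs):
        if repository_id not in self._repositories:
            raise Exception("request failed")
        return len(logs)

def push_logs_if_possible(api, repo_id, logs):
    # Use the bool check to avoid the Exception path entirely.
    if not api.does_log_repository_exist(repo_id):
        return False
    api.push_logs(repo_id, logs)
    return True

api = FakeLogAPI()
print(push_logs_if_possible(api, "log-repo-1", ["log-a", "log-b"]))  # True
print(push_logs_if_possible(api, "missing", ["log-a"]))              # False
```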
Name | Description |
---|---|
repository_id | The ID of the repository |
logs | The logs to push |
Exception
- If the request fails

Name | Description |
---|---|
name | The name of the evaluator |
in_workspace_id | The workspace ID |
Name | Description |
---|---|
[Evaluator](/sdk/python/references/models/evaluator) | The evaluator details |
Exception
- If the request fails

Name | Description |
---|---|
name | The name of the test run |
workspace_id | The workspace ID |
workflow_id | Optional workflow ID |
prompt_version_id | Optional prompt version ID |
prompt_chain_version_id | Optional prompt chain version ID |
run_type | The type of run |
evaluator_config | List of evaluators to use |
requires_local_run | Whether the test run requires local execution |
human_evaluation_config | Optional human evaluation configuration |
Name | Description |
---|---|
[TestRun](/sdk/python/references/models/test_run) | The created test run |
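The test-run parameters above include three optional targets (`workflow_id`, `prompt_version_id`, `prompt_chain_version_id`). A request-builder sketch that validates a plausible constraint, assumed and not stated in this reference, that exactly one target is set per run:

```python
def build_test_run_request(name, workspace_id, run_type, evaluator_config,
                           workflow_id=None, prompt_version_id=None,
                           prompt_chain_version_id=None,
                           requires_local_run=False,
                           human_evaluation_config=None):
    targets = {
        "workflow_id": workflow_id,
        "prompt_version_id": prompt_version_id,
        "prompt_chain_version_id": prompt_chain_version_id,
    }
    given = {k: v for k, v in targets.items() if v is not None}
    # Assumption: a run is driven by exactly one of the three targets.
    if len(given) != 1:
        raise ValueError("set exactly one of workflow_id, prompt_version_id, "
                         "prompt_chain_version_id")
    request = {
        "name": name,
        "workspace_id": workspace_id,
        "run_type": run_type,
        "evaluator_config": evaluator_config,
        "requires_local_run": requires_local_run,
        **given,
    }
    if human_evaluation_config is not None:
        request["human_evaluation_config"] = human_evaluation_config
    return request

req = build_test_run_request("nightly", "ws-1", "single", ["bias-check"],
                             prompt_version_id="pv-42")
print(req["prompt_version_id"])  # pv-42
```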
Exception
- If the request fails

Name | Description |
---|---|
test_run_id | The ID of the test run |
dataset_id | The ID of the dataset |
Exception
- If the request fails

Name | Description |
---|---|
test_run | The test run |
entry | The test run entry to push |
run_config | Optional run configuration |
Exception
- If the request fails

Name | Description |
---|---|
test_run_id | The ID of the test run |
Exception
- If the request fails

Name | Description |
---|---|
test_run_id | The ID of the test run |
Exception
- If the request fails

Name | Description |
---|---|
test_run_id | The ID of the test run |
Name | Description |
---|---|
[TestRunStatus](/sdk/python/references/models/test_run) | The status of the test run |
Exception
- If the request fails

Name | Description |
---|---|
test_run_id | The ID of the test run |
Name | Description |
---|---|
[TestRunResult](/sdk/python/references/models/test_run) | The final result of the test run |
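The status and final-result methods above suggest a poll-then-fetch loop. A sketch with a fake client; the terminal status names (`COMPLETE`, `FAILED`, `STOPPED`) and the result shape are assumptions:

```python
import time

class FakeRunAPI:
    """Returns RUNNING twice, then COMPLETE, to exercise the loop."""
    def __init__(self):
        self._polls = 0

    def get_test_run_status(self, test_run_id):
        self._polls += 1
        return "RUNNING" if self._polls < 3 else "COMPLETE"

    def get_test_run_final_result(self, test_run_id):
        return {"testRunId": test_run_id, "passed": 10, "failed": 0}

def wait_for_final_result(api, test_run_id, poll_seconds=5.0, timeout=600.0):
    # Poll the status endpoint until a terminal state, then fetch the result.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if api.get_test_run_status(test_run_id) in ("COMPLETE", "FAILED", "STOPPED"):
            return api.get_test_run_final_result(test_run_id)
        time.sleep(poll_seconds)
    raise TimeoutError(f"test run {test_run_id} did not finish in {timeout}s")

api = FakeRunAPI()
result = wait_for_final_result(api, "tr-1", poll_seconds=0.0, timeout=5.0)
print(result["passed"])  # 10
```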
Exception
- If the request fails

Name | Description |
---|---|
key | The key (filename) for the upload |
mime_type | The MIME type of the file |
size | The size of the file in bytes |
Name | Description |
---|---|
[SignedURLResponse](/sdk/python/references/models/attachment) | A dictionary containing the signed URL for upload |
Exception
- If the request fails

Name | Description |
---|---|
url | The signed URL to upload to |
data | The binary data to upload |
mime_type | The MIME type of the data |
Name | Description |
---|---|
bool | True if upload was successful, False otherwise |
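The last two tables describe a two-step attachment flow: request a signed URL sized to the payload, then upload the bytes and check the boolean result. A sketch with a fake client; the `"url"` key inside `SignedURLResponse` is an assumption (the table only says it is a dictionary containing the signed URL):

```python
class FakeUploadAPI:
    def __init__(self):
        self.store = {}

    def get_upload_url(self, key, mime_type, size):
        # "url" as the dictionary key is assumed, not documented above.
        return {"url": f"https://storage.example.com/{key}?sig=abc"}

    def upload_to_signed_url(self, url, data, mime_type):
        self.store[url] = (data, mime_type)
        return True  # bool result per the table above

def upload_attachment(api, key, data, mime_type):
    signed = api.get_upload_url(key=key, mime_type=mime_type, size=len(data))
    if not api.upload_to_signed_url(signed["url"], data, mime_type):
        raise RuntimeError(f"upload of {key} failed")
    return signed["url"]

api = FakeUploadAPI()
url = upload_attachment(api, "report.pdf", b"%PDF-1.7 ...", "application/pdf")
print(url)  # https://storage.example.com/report.pdf?sig=abc
```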