1. Set up your environment

First, configure your AI model providers:

1. Go to `Settings` → `Models` and click on the tab of the provider for which you want to add an API key.
2. Configure the model provider: click Add New and fill in the required details. Maxim requires at least one provider with access to GPT-3.5 and GPT-4 models. We use industry-standard encryption to securely store your API keys.

To learn more about API keys, inviting users, and managing roles, refer to our Workspace and roles guide.

2. Create your first prompt or workflow

Create prompts to experiment with and evaluate a single call to a model, with attached context or tools. Use workflows to test your complex AI agents through your application’s HTTP endpoint, without writing any integration code.

Prompt

1. Create prompt: Navigate to the Prompts tab under the Evaluate section and click Single prompts. Click Create prompt or Try sample to get started.
2. Write your first prompt: Write your system prompt and user prompt in the respective fields.
3. Configure model and parameters: Configure additional settings like model, temperature, and max tokens (the sketch after these steps shows how these settings map onto a direct model API call).
4. Iterate: Click Run to test your prompt and see the AI’s response. Iterate on your prompt based on the results.
5. Save prompt and publish a version: When satisfied, click Save to create a new version of your prompt.
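The settings you configure here correspond to the standard chat-completion parameters exposed by model providers. The snippet below is a minimal sketch using the OpenAI Python SDK purely as an illustration; the model name, prompt text, and parameter values are placeholders, not anything Maxim requires.

```python
# Illustrative only: the system prompt, user prompt, and parameters you set
# in the Maxim playground, expressed as a direct OpenAI Chat Completions call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # the model you selected
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},  # system prompt
        {"role": "user", "content": "How do I reset my password?"},              # user prompt
    ],
    temperature=0.7,  # controls randomness of the output
    max_tokens=256,   # upper bound on the length of the response
)

print(response.choices[0].message.content)
```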

To learn more about prompts, refer to our detailed guide on prompts.

Workflow

1. Create workflow: Navigate to the Workflows tab under the Evaluate section. Click Create Workflow or Try sample.
2. Configure agent endpoint: Enter your API endpoint URL in the URL field and configure any necessary headers or parameters. You can use dynamic variables such as {input} to easily reference static context in any part of your workflow by wrapping the variable name in {}.
3. Test your agent: Click Run to test your endpoint in the playground.
4. Configure workflow for testing: In the Output Mapping section, select the part of the response you want to evaluate (e.g., data.response), then click Save to create your workflow. A sketch of what such an endpoint and its response might look like follows these steps.
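To make output mapping concrete, here is a minimal sketch of an agent endpoint a workflow could call. Everything about it is an assumption for illustration: the Flask framework, the /agent route, and the idea that the workflow sends the dataset value as an "input" field in the JSON body; the only detail taken from the step above is the data.response path selected in Output Mapping.

```python
# Hypothetical agent endpoint, used only to illustrate output mapping.
# Selecting "data.response" in Output Mapping would pick out the generated
# answer from the JSON body returned below.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/agent")  # assumed route; use whatever your service actually exposes
def agent():
    payload = request.get_json(force=True)
    question = payload.get("input", "")   # e.g., the value substituted for {input}
    answer = f"You asked: {question}"     # placeholder for your real agent logic
    return jsonify({"data": {"response": answer}})

if __name__ == "__main__":
    app.run(port=8000)
```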

To learn more about workflows, refer to our detailed guide on Workflows.

3. Prepare your dataset

Organize and manage the data you’ll use for testing and evaluation:

1. Create dataset: Navigate to the Datasets tab under the Library section. Click Create New or Upload CSV. We also have a sample dataset created for you; click View our sample dataset to get started.
2. Edit dataset: If creating a new dataset, enter a name and description, then add columns to your dataset (e.g., ‘input’ and ‘expected_output’). A minimal example CSV is sketched after these steps.
3. Save: Add entries to your dataset, filling in the values for each column, then click Save to create your dataset.
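For example, a minimal CSV with the two columns above could be generated (or written by hand) as follows; the rows are made-up placeholders, and the resulting file could be uploaded via Upload CSV.

```python
# Write a tiny example dataset with "input" and "expected_output" columns.
# The rows are placeholder data for illustration only.
import csv

rows = [
    {"input": "What is the capital of France?", "expected_output": "Paris"},
    {"input": "Summarize: The cat sat on the mat.", "expected_output": "A cat sat on a mat."},
]

with open("sample_dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "expected_output"])
    writer.writeheader()
    writer.writerows(rows)
```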

To learn more about datasets, refer to our detailed guide on Datasets.

4. Add evaluators

Set up evaluators to assess your prompt or workflow’s performance:

1. Add evaluators from the store: Navigate to the Evaluators tab under the Library section. Click Add Evaluator to browse the available evaluators.
2. Configure added evaluators: Choose an evaluator type (e.g., AI, Programmatic, API, or Human) and configure its settings as needed. Click Save to add the evaluator to your workspace. A sketch of the kind of check a programmatic evaluator encodes follows these steps.
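To give a feel for what a programmatic evaluator does, here is a minimal sketch of the kind of check one might encode: a keyword-overlap score comparing the model’s output against the expected output. This is a standalone illustration of the idea, not Maxim’s evaluator interface; the function name and scoring rule are assumptions.

```python
# Conceptual sketch of a programmatic evaluator: score the fraction of
# expected keywords that appear in the model's output. Not Maxim's API.
import re


def tokenize(text: str) -> set[str]:
    """Lowercase and split text into alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def keyword_overlap_score(output: str, expected_output: str) -> float:
    """Return the fraction of expected keywords found in the output (0.0 to 1.0)."""
    expected = tokenize(expected_output)
    if not expected:
        return 0.0
    return len(expected & tokenize(output)) / len(expected)


if __name__ == "__main__":
    print(keyword_overlap_score("The capital of France is Paris.", "Paris"))  # prints 1.0
```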

To learn more about evaluators, refer to our detailed guide on Evaluators.

5. Run your first test

Execute a test run to evaluate your prompt or workflow:

1. Select the prompt or workflow to test: Navigate to your saved prompt or workflow and click Test in the top right corner.
2. Configure the test run: Select the dataset you created earlier and choose the evaluators you want to use for this test run.
3. Trigger: Click Trigger Test Run to start the evaluation process. If you’ve added human evaluators, you’ll be prompted to set up human annotation on the report or via email.

6. Analyze test results

Review and analyze the results of your test run:

1. View report: Navigate to the Runs tab in the left navigation menu. Find your recent test run and click on it to view details.
2. Review performance: Review the overall performance metrics and the scores for each evaluator. Drill down into individual queries to see specific scores and reasoning.
3. Iterate: Use these insights to identify areas for improvement in your prompt or workflow.

Next steps

Now that you’ve completed your first cycle on the Maxim platform, consider exploring these additional capabilities:

  1. Prompt comparisons: Evaluate different prompts side-by-side to determine which ones produce the best results for a given task.
  2. Prompt chains: Create complex, multi-step AI workflows. Learn how to connect prompts, code, and APIs to build powerful, real-world AI systems using our intuitive, no-code editor.
  3. Context sources: Integrate Retrieval-Augmented Generation (RAG) into your workflows.
  4. Prompt tools: Enhance your prompts with custom functions and agentic behaviors.
  5. Observability: Use our stateless SDK to monitor real-time production logs and run periodic quality checks.

By following this guide, you’ve learned how to set up your environment, create prompts and workflows, prepare datasets, add evaluators, run tests, and analyze results. This foundational knowledge will help you leverage Maxim’s powerful features to develop and improve your AI applications efficiently.