Last Week at Maxim: Week 1 of May

We're back with another round of powerful updates to help you build, test, and observe AI agents more effectively. Here's what we rolled out:
Agent Mode in Prompt Playground
You can now simulate full agentic behavior in the playground and in test runs, with the LLM calling tools automatically. This is ideal for testing complex tool-use flows, especially multi-step ones.
Max Tool Calls for Agent Mode
Set an execution limit on agentic loops to prevent runaway calls, ensuring your agents don't get stuck in infinite tool-calling cycles. (This setting only applies when Agent Mode is active.)
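To illustrate why a tool-call cap matters, here is a minimal, SDK-agnostic sketch of an agentic loop with a limit. This is not the Maxim API; `fake_llm`, `lookup`, and `MAX_TOOL_CALLS` are illustrative stand-ins for a model that gets stuck requesting the same tool:

```python
# Conceptual sketch: an agentic loop with a tool-call cap.
# NOT the Maxim SDK -- fake_llm and lookup are stand-ins used
# to show how a Max Tool Calls limit stops a runaway loop.

MAX_TOOL_CALLS = 5  # analogous to the Max Tool Calls setting

def fake_llm(messages):
    """Stand-in model: always requests the same tool,
    simulating an agent stuck in a tool-calling cycle."""
    return {"tool_call": {"name": "lookup", "args": {"q": "status"}}}

def lookup(q):
    return f"result for {q}"

TOOLS = {"lookup": lookup}

def run_agent(prompt):
    messages = [{"role": "user", "content": prompt}]
    calls = 0
    while True:
        reply = fake_llm(messages)
        tool_call = reply.get("tool_call")
        if tool_call is None:
            return reply.get("content")  # model produced a final answer
        if calls >= MAX_TOOL_CALLS:
            # Cap reached: bail out instead of spinning forever.
            return f"stopped after {calls} tool calls"
        calls += 1
        result = TOOLS[tool_call["name"]](**tool_call["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("check status"))  # → stopped after 5 tool calls
```

Without the `calls >= MAX_TOOL_CALLS` check, this loop would never terminate, which is exactly the failure mode the execution limit guards against.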
New Model Providers
Mistral AI & Fireworks AI
You can now evaluate your agents across even more LLM providers: both Mistral AI and Fireworks AI are now live on the Maxim platform.
File + URL Attachments (Beta)
You can now attach images, audio, PDFs, and links to traces and spans, enabling multimodal observability.
🔸 Requires maxim-py >= 5.6.x
🔸 Great for audio-based workflows, grounding examples, and debugging complex inputs
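If you want to verify the `maxim-py >= 5.6.x` floor programmatically before relying on attachments, a simple dotted-version comparison is enough; the sketch below is generic (the version strings are illustrative, and it does not handle pre-release suffixes):

```python
def meets_floor(installed: str, floor: str = "5.6.0") -> bool:
    """Compare dotted version strings numerically (no pre-release handling)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(floor)

# In practice you'd fetch the installed version with
# importlib.metadata.version("maxim-py") and pass it in.
print(meets_floor("5.6.1"))  # → True
print(meets_floor("5.5.9"))  # → False
```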
Tool Call Column in Test Run Reports
We've added a new column to test run reports that shows the sequence of tool calls made during agentic flows, helping you understand what was called and in what order. For full arguments and tool call details, see the Overview tab in the test run entry's Sheet View.
That’s it for this week! As always, we’re committed to building the most reliable and flexible evaluation platform for agent workflows. Stay tuned for more updates next week.