How to test local prompts?
Local prompt testing allows you to evaluate custom prompt implementations using the yieldsOutput function. This approach is ideal when you want to test your own prompt logic, integrate with specific LLM providers, or implement complex prompt workflows.
Basic Local Prompt Testing
Use the yieldsOutput function to define custom prompt logic that will be executed for each test case:
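As a minimal sketch, the output function receives a dataset entry, runs your prompt logic, and returns the model output plus usage metadata. Everything below is illustrative, not the Maxim SDK API: call_llm is a fake provider call so the example runs offline, and the dictionary shapes are assumptions you would adapt to your SDK version.

```python
# Hypothetical sketch of a custom output function for local prompt testing.
# `call_llm` stands in for a real provider client (OpenAI, Anthropic, etc.);
# none of these names come from the Maxim SDK.

def call_llm(prompt: str) -> dict:
    """Fake LLM call so the example runs offline. Replace with a real provider."""
    n = len(prompt.split())
    return {"text": f"Echo: {prompt}", "prompt_tokens": n, "completion_tokens": n + 2}

def yields_output(entry: dict) -> dict:
    """Build the prompt from a dataset entry, call the model, and return the
    output along with usage metadata for token and cost tracking."""
    prompt = f"Answer concisely: {entry['input']}"
    response = call_llm(prompt)
    return {
        "data": response["text"],
        "usage": {
            "prompt_tokens": response["prompt_tokens"],
            "completion_tokens": response["completion_tokens"],
            "total_tokens": response["prompt_tokens"] + response["completion_tokens"],
        },
    }

result = yields_output({"input": "What is local prompt testing?"})
print(result["data"])
```

Returning usage alongside the output text is what lets the test run report tokens and cost per test case.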
Advanced Prompt Testing with Context
You can also test prompts that use additional context or implement RAG (Retrieval-Augmented Generation).
Best Practices
- Error Handling: Always include proper error handling in your yieldsOutput function
- Token Tracking: Include usage and cost metadata when possible for better insights
- Context Management: Use Retrieved Context to Evaluate when evaluating prompts that use RAG systems
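Putting the RAG pattern and these practices together, an output function can retrieve context, guard the provider call with error handling, and return the retrieved context alongside usage metadata so evaluators can score it. This is a hedged sketch under stated assumptions: the in-memory corpus and keyword retriever stand in for a real vector store, generate stands in for a real LLM call, and none of the names are Maxim SDK API.

```python
# Hypothetical sketch: testing a RAG-style prompt locally. The corpus and
# retriever are stand-ins for a real vector store; `generate` is a fake
# model call. Names are illustrative, not part of the Maxim SDK.

CORPUS = [
    "Maxim supports local prompt testing via a custom output function.",
    "RAG pipelines retrieve documents before calling the model.",
    "Token usage metadata enables cost tracking across test runs.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Naive keyword retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda doc: -len(words & set(doc.lower().split())))
    return ranked[:top_k]

def generate(prompt: str) -> str:
    """Fake model call; replace with your provider client."""
    return f"Answer based on context: {prompt[:40]}..."

def rag_yields_output(entry: dict) -> dict:
    """Retrieve context, build the prompt, call the model inside a try/except,
    and return output plus retrieved context and usage metadata."""
    context = retrieve(entry["input"])
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {entry['input']}"
    try:
        text = generate(prompt)
    except Exception as exc:  # always guard the provider call
        return {"data": f"ERROR: {exc}", "retrieved_context": context}
    return {
        "data": text,
        "retrieved_context": context,  # pass this to RAG evaluators
        "usage": {"total_tokens": len(prompt.split()) + len(text.split())},
    }

print(rag_yields_output({"input": "How does RAG retrieval work?"})["data"])
```

Returning the retrieved context from the output function is what makes it available to context-aware evaluators, mirroring the Context Management practice above.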
Example Repository
For more complex examples including multi-turn conversations and advanced RAG implementations, check out our cookbooks repository for Python or TypeScript.
Next Steps
- Testing Maxim Prompts - Use prompts stored on the Maxim platform
- Prompt Management - Retrieve prompts for production use
- CI/CD Integration - Automate your prompt testing