Learn how to integrate Maxim observability and online evaluation with your LiteLLM Proxy in just one line of configuration.
## Prerequisites

Install the required Python packages: the Maxim Python SDK (`maxim-py`) and LiteLLM.

## Project Layout
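The layout listing was stripped from this page; based on the files referenced in the steps below, it is roughly:

```text
.
├── maxim_proxy_tracer.py   # Maxim callback handler (step 1)
├── config.yml              # LiteLLM proxy configuration (step 2)
├── .env                    # Maxim credentials (step 3)
├── Dockerfile              # container build (step 5)
├── docker-compose.yml      # container setup (step 5)
└── proxy_logs.log          # runtime logs
```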
## 1. Define the Tracer
Create a file `maxim_proxy_tracer.py` next to your proxy entrypoint:
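The body of this file was lost in extraction. A minimal sketch, assuming the Maxim SDK exposes a LiteLLM proxy tracer under `maxim.logger.litellm_proxy` (the import path and class name are assumptions; verify against the `maxim-py` documentation):

```python
# maxim_proxy_tracer.py
# NOTE: the import path and MaximLiteLLMProxyTracer class name below are
# assumptions based on Maxim SDK conventions; check the maxim-py docs.
from maxim import Maxim
from maxim.logger.litellm_proxy import MaximLiteLLMProxyTracer

# Reads MAXIM_API_KEY and MAXIM_LOG_REPO_ID from the environment (step 3).
logger = Maxim().logger()

# LiteLLM resolves the callbacks entry in config.yml to this module-level
# instance, i.e. maxim_proxy_tracer.litellm_handler (step 2).
litellm_handler = MaximLiteLLMProxyTracer(logger)
```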
## 2. Update `config.yml`
Point LiteLLM’s callback at your tracer:
(`model_list` and `general_settings` remain unchanged.)
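The YAML snippet itself was stripped from this page; reconstructed from the description above, the callback wiring presumably looks like:

```yaml
# config.yml: only litellm_settings changes; model_list and
# general_settings stay exactly as they were.
litellm_settings:
  callbacks: maxim_proxy_tracer.litellm_handler
```

LiteLLM resolves the `module.attribute` string to the handler instance defined in `maxim_proxy_tracer.py`.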
## 3. Configure Environment Variables
Add the following to a `.env` file or export them in your shell:
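The variable list was stripped; based on the Maxim SDK's usual configuration, it likely includes (all values are placeholders):

```bash
# .env: values are placeholders
MAXIM_API_KEY=your-maxim-api-key       # Maxim API key
MAXIM_LOG_REPO_ID=your-log-repo-id     # Maxim log repository for traces
LITELLM_MASTER_KEY=your-master-key     # optional: LiteLLM proxy auth key
```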
## 4. Run the Proxy Locally
You can start the proxy directly via the LiteLLM CLI:
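The command block was stripped; assuming `litellm[proxy]` is installed and `config.yml` sits in the working directory, the invocation is presumably:

```bash
# Port 8000 matches the Docker setup described below.
litellm --config config.yml --port 8000
```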
## 5. Run with Docker Compose

If you prefer Docker, use the provided `Dockerfile` and `docker-compose.yml`:
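The compose file itself was stripped from this page; a minimal sketch consistent with the port, health check, and log file listed below (service name and intervals are assumptions):

```yaml
# docker-compose.yml: sketch only; adapt to the provided Dockerfile
services:
  litellm-proxy:
    build: .                  # builds from the provided Dockerfile
    ports:
      - "8000:8000"
    env_file:
      - .env                  # MAXIM_API_KEY, MAXIM_LOG_REPO_ID, ...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    # proxy_logs.log is written inside the container; mount a volume
    # here if you want it on the host.
```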
- Port: `8000`
- Health check: `GET /health`
- Logs: streamed to `proxy_logs.log`