LiteLLM Proxy one-line integration
Learn how to integrate Maxim observability and online evaluation with your LiteLLM Proxy in just one line of configuration.
Prerequisites
Install the required Python packages:
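A typical install pulls in LiteLLM's proxy extra plus Maxim's Python SDK (assumed here to be published as maxim-py):

```bash
pip install "litellm[proxy]" maxim-py
```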
Project Layout
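A layout along these lines works; only the files named in the steps below matter, and the Docker files are optional:

```
.
├── maxim_proxy_tracer.py   # Maxim tracer (step 1)
├── config.yml              # LiteLLM proxy config (step 2)
├── .env                    # environment variables (step 3)
├── Dockerfile              # optional, for Docker (step 5)
└── docker-compose.yml      # optional, for Docker (step 5)
```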
1. Define the Tracer
Create a file maxim_proxy_tracer.py next to your proxy entrypoint:
maxim_proxy_tracer.py
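A minimal sketch, assuming your maxim-py version exposes the proxy tracer at maxim.logger.litellm_proxy (check the SDK reference if the import path differs):

```python
# maxim_proxy_tracer.py
# Assumed import path -- verify against your installed maxim-py version.
from maxim.logger.litellm_proxy import MaximLiteLLMProxyTracer

# LiteLLM resolves "maxim_proxy_tracer.litellm_handler" from config.yml,
# so the tracer instance only needs to exist at module level.
litellm_handler = MaximLiteLLMProxyTracer()
```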
2. Update config.yml
Point LiteLLM’s callback at your tracer:
config.yml
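A sketch of the relevant section; the model_list and general_settings entries below are placeholders for whatever you already run:

```yaml
litellm_settings:
  callbacks: maxim_proxy_tracer.litellm_handler  # module.attribute from step 1

# Illustrative -- keep your existing entries.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
```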
(Your existing model_list and general_settings remain unchanged.)
3. Configure Environment Variables
Add the following to a .env file or export them in your shell:
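For example (MAXIM_API_KEY and MAXIM_LOG_REPO_ID are the variables the Maxim SDK reads by default; the provider keys depend on your model_list):

```bash
# Maxim credentials, from your Maxim dashboard
MAXIM_API_KEY=your-maxim-api-key
MAXIM_LOG_REPO_ID=your-log-repository-id

# Provider keys referenced in config.yml
OPENAI_API_KEY=your-openai-api-key
LITELLM_MASTER_KEY=sk-1234
```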
4. Run the Proxy Locally
You can start the proxy directly via the LiteLLM CLI:
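For example, binding to the same port the Docker setup below exposes:

```bash
litellm --config config.yml --port 8000
```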
5. Run with Docker Compose
If you prefer Docker, use the provided Dockerfile and docker-compose.yml:
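A minimal compose sketch matching the port, health check, and log file listed below; the build and volume details are illustrative, so adapt them to your Dockerfile:

```yaml
# docker-compose.yml -- illustrative sketch
services:
  litellm-proxy:
    build: .                     # the provided Dockerfile
    ports:
      - "8000:8000"
    env_file:
      - .env
    volumes:
      - ./proxy_logs.log:/app/proxy_logs.log   # assumed log location
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```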
- Port: 8000
- Health check: GET /health
- Logs: streamed to proxy_logs.log
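Bring the stack up and verify the health endpoint:

```bash
docker compose up --build -d
curl http://localhost:8000/health
```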
That's it: no additional code changes are required. Every request through your LiteLLM Proxy will now be traced, logged, and evaluated in Maxim.