Prerequisites
- Python 3.8+
- OpenTelemetry Python SDK and OTLP exporter
- OpenTelemetry OpenAI instrumentation (opentelemetry-instrumentation-openai)
- OpenAI Python SDK
- Maxim API key and Log Repository ID
- python-dotenv (optional, for loading .env files)
1. Install Dependencies
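The prerequisites above can be installed with pip. This sketch assumes the HTTP variant of the OTLP exporter; if you use gRPC, install opentelemetry-exporter-otlp-proto-grpc instead:

```shell
pip install openai \
    opentelemetry-sdk \
    opentelemetry-exporter-otlp-proto-http \
    opentelemetry-instrumentation-openai \
    python-dotenv
```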
2. Set Up Environment Variables
Create a .env file in your project root:
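For example (the variable names are assumptions; use whichever names your configuration code reads):

```
MAXIM_API_KEY=your-maxim-api-key
MAXIM_LOG_REPO_ID=your-log-repository-id
OPENAI_API_KEY=your-openai-api-key
```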
Create a Log Repository in the Maxim Dashboard under Logs > Repositories if you don’t have one yet.
3. Configure OpenTelemetry and Export to Maxim
Set up the OpenTelemetry SDK with an OTLP exporter that sends traces to Maxim. Use BatchSpanProcessor in production (it batches spans before sending); for development or debugging, use SimpleSpanProcessor instead for immediate span export.

4. Make an OpenAI Call
Use the standard OpenAI client. Traces are automatically captured and sent to Maxim.

5. Visualize in Maxim
All instrumented OpenAI calls are traced and appear in your Maxim dashboard. Navigate to your Log Repository to view:
- Input and output messages
- Token usage and model information
- Latency and timing
Quick Test with curl
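A quick test might look like the sketch below. The endpoint URL and header names are assumptions, so substitute the values from your Maxim setup; the request is guarded so the sketch is safe to run without credentials:

```shell
# Minimal OTLP/HTTP JSON body: one resource carrying a single span.
PAYLOAD='{
  "resourceSpans": [{
    "resource": {
      "attributes": [{"key": "service.name", "value": {"stringValue": "curl-test"}}]
    },
    "scopeSpans": [{
      "spans": [{
        "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
        "spanId": "051581bf3cb55c13",
        "name": "curl-test-span",
        "kind": 1,
        "startTimeUnixNano": "1700000000000000000",
        "endTimeUnixNano": "1700000001000000000"
      }]
    }]
  }]
}'

# Endpoint URL and header names are assumptions -- check Maxim's docs.
if [ -n "$MAXIM_API_KEY" ]; then
  curl --max-time 10 -X POST "https://api.getmaxim.ai/v1/otel/v1/traces" \
    -H "Content-Type: application/json" \
    -H "x-maxim-api-key: $MAXIM_API_KEY" \
    -H "x-maxim-repo-id: $MAXIM_LOG_REPO_ID" \
    -d "$PAYLOAD"
fi
```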
You can also send a minimal OTLP JSON payload directly to Maxim's OTLP endpoint.

Enriching Traces with Maxim Attributes
You can add tags and metrics to traces using Maxim-specific attributes (maxim-trace-tags, maxim-tags, maxim-trace-metrics, maxim-metrics). Place them inside maxim.metadata or metadata depending on your convention. For the full attribute reference and OpenInference support, see Ingesting via OTLP.
For more details, see the OpenTelemetry Python documentation and Ingesting via OTLP.