Automatic Token and Cost Tracking
When you log LLM generations with the Maxim SDK, token usage and cost are captured automatically from the model response. When recording an LLM response, include the `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`:
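A minimal sketch of that payload is shown below. The field names follow the OpenAI-style convention the text describes; the surrounding logging call is omitted because its exact signature may vary by SDK version.

```typescript
// Illustrative usage object attached to a logged LLM generation.
// Maxim reads these fields to compute token counts and cost automatically.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

const usage: Usage = {
  prompt_tokens: 512,
  completion_tokens: 128,
  total_tokens: 640, // must equal prompt_tokens + completion_tokens
};
```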
Custom Metric Tracking via SDK
For more granular control, you can log token usage and cost metrics explicitly at different levels using the `addMetric` method.
Track metrics at the trace level:
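The call pattern looks roughly like the sketch below. The `Trace` class here is an illustrative stand-in, not the real Maxim trace object; treat the method names and metric keys as assumptions.

```typescript
// Illustrative stand-in showing the addMetric call pattern at the trace level.
class Trace {
  private metrics: Record<string, number> = {};

  addMetric(name: string, value: number): void {
    this.metrics[name] = value;
  }

  getMetric(name: string): number | undefined {
    return this.metrics[name];
  }
}

const trace = new Trace();
trace.addMetric("prompt_tokens", 512);     // tokens sent to the model
trace.addMetric("completion_tokens", 128); // tokens generated
trace.addMetric("cost_usd", 0.0042);       // explicit cost override
```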
Configuring Custom Token Pricing
To ensure cost calculations reflect your actual expenses (such as negotiated enterprise rates), configure custom pricing structures:

1. Navigate to Settings > Models > Pricing
2. Enter a model name pattern (string or regex) that matches your model names
3. Enter your cost per 1,000 tokens for both input and output tokens
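The arithmetic behind a per-1,000-token pricing structure can be sketched as follows. The rates and the model-name regex are placeholders for illustration, not real prices.

```typescript
// A pricing row: a pattern that matches model names plus per-1k rates.
interface PricingRow {
  pattern: RegExp;     // string or regex pattern matching model names
  inputPer1k: number;  // USD per 1,000 input (prompt) tokens
  outputPer1k: number; // USD per 1,000 output (completion) tokens
}

function computeCost(
  model: string,
  promptTokens: number,
  completionTokens: number,
  table: PricingRow[],
): number {
  const row = table.find((r) => r.pattern.test(model));
  if (!row) throw new Error(`no pricing structure matches ${model}`);
  return (
    (promptTokens / 1000) * row.inputPer1k +
    (completionTokens / 1000) * row.outputPer1k
  );
}

// Placeholder rates for any model name starting with "gpt-4".
const pricing: PricingRow[] = [
  { pattern: /^gpt-4/, inputPer1k: 0.03, outputPer1k: 0.06 },
];
```

With these placeholder rates, a call using 2,000 prompt tokens and 1,000 completion tokens costs 2 × 0.03 + 1 × 0.06 = 0.12 USD.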
Applying Pricing to Model Configs
1. Go to Settings > Models > Model Configs
2. Select a model config to edit
3. Locate the Pricing structure section
4. Choose your pricing structure from the dropdown
Applying Pricing to Log Repositories
1. Open Logs from the sidebar
2. Select the log repository you want to configure
3. Find the Pricing structure section
4. Choose your pricing structure from the dropdown
Setting Up Cost and Token Alerts
Monitor token usage and costs in real time by configuring alerts:

1. Navigate to your log repository and select the Alerts tab
2. Click Create alert and select Log metrics as the alert type
3. Configure thresholds for:
   - Token Usage: alert when consumption exceeds limits (e.g., trigger when hourly usage exceeds 1 million tokens)
   - Cost: alert when expenses exceed budgets (e.g., trigger when daily costs exceed $100)
4. Select notification channels (Slack or PagerDuty)
5. Click Create alert
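The threshold logic those alerts apply can be sketched as below. Maxim evaluates alerts server-side; the names and window shape here are illustrative assumptions.

```typescript
// Aggregated usage over one alert window (e.g., an hour or a day).
interface WindowUsage {
  tokens: number;  // total tokens consumed in the window
  costUsd: number; // total spend in the window
}

// Fire when either metric crosses its configured threshold.
function shouldAlert(
  w: WindowUsage,
  maxTokens: number,
  maxCostUsd: number,
): boolean {
  return w.tokens > maxTokens || w.costUsd > maxCostUsd;
}
```

For example, an hour with 1.2 million tokens against a 1-million-token threshold fires the alert even if cost stays under budget.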
Dashboard Visibility
Once logging is set up, you can view aggregated token and cost data in your log repository dashboard, including:

- Total usage over time
- Cost per trace
- Token counts for each log entry (visible in the logs table)
- Latency and performance metrics
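The cost-per-trace rollup is the kind of aggregation the dashboard computes for you; it can be sketched as follows. The log-entry shape below is an assumption for illustration, not the actual stored schema.

```typescript
// Minimal log-entry shape: each entry carries its trace id, tokens, and cost.
interface LogEntry {
  traceId: string;
  totalTokens: number;
  costUsd: number;
}

// Sum cost per trace across all log entries in the repository.
function costPerTrace(entries: LogEntry[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of entries) {
    totals.set(e.traceId, (totals.get(e.traceId) ?? 0) + e.costUsd);
  }
  return totals;
}
```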