Maxim SDK
This is the JS/TS SDK for enabling Maxim observability. Maxim is an enterprise-grade evaluation and observability platform.
How to integrate
Install
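A minimal sketch, assuming the published package name is `@maximai/maxim-js` (the name used in the examples below):

```bash
npm install @maximai/maxim-js
```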
Initialize Maxim logger
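A minimal sketch, assuming an API key from your Maxim dashboard and a log repository ID (exact constructor options may differ by SDK version):

```ts
import { Maxim } from "@maximai/maxim-js";

// Initialize the SDK with your Maxim API key
const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });

// Create a logger bound to a log repository in your Maxim workspace
const logger = await maxim.logger({ id: "<log-repository-id>" });
```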
Start Sending Traces
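A sketch using the `logger` created above; the trace `id` and `name` here are illustrative:

```ts
// Start a trace, record its input/output, then end it
const trace = logger.trace({ id: "trace-1", name: "user-query" });
trace.input("Hello, how are you?");
trace.output("I'm doing well, thanks!");
trace.end();

// Explicitly flush pending logs (e.g., before the process exits)
await logger.flush();
```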
Integrations with other frameworks
LangChain
You can use the built-in `MaximLangchainTracer` to integrate Maxim observability with your LangChain and LangGraph applications.
Installation
The LangChain integration is available as an optional dependency. Install the required LangChain package:
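```bash
npm install @langchain/core
```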
⚡ 2-Line Integration

Add comprehensive observability to your existing LangChain code with just 2 lines:
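A sketch of the two added lines, assuming a `logger` created as shown earlier and an existing chain; the exact import path for `MaximLangchainTracer` may vary by SDK version:

```ts
import { MaximLangchainTracer } from "@maximai/maxim-js/langchain";

const maximTracer = new MaximLangchainTracer(logger); // line 1: create the tracer
const result = await chain.invoke(input, { callbacks: [maximTracer] }); // line 2: pass it as a callback
```

Complete Setup Example

A fuller sketch under the same assumptions, using `@langchain/openai` for the model (any supported provider works):

```ts
import { Maxim } from "@maximai/maxim-js";
import { MaximLangchainTracer } from "@maximai/maxim-js/langchain";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });
const logger = await maxim.logger({ id: "<log-repository-id>" });
const maximTracer = new MaximLangchainTracer(logger);

// Any LCEL chain works; the tracer rides along as a callback
const prompt = ChatPromptTemplate.fromTemplate("Summarize in one line: {text}");
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const chain = prompt.pipe(model);

const result = await chain.invoke(
  { text: "Maxim is an evaluation and observability platform." },
  { callbacks: [maximTracer] },
);
console.log(result.content);
```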
LangGraph Integration
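LangGraph graphs accept the same callbacks config at invocation time, so the tracer attaches the same way; a sketch, assuming `app` is a compiled graph:

```ts
// `app` is any compiled LangGraph graph, e.g. new StateGraph(...).compile()
const output = await app.invoke(
  { messages: [{ role: "user", content: "What can you do?" }] },
  { callbacks: [maximTracer] },
);
```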
What gets tracked
The `MaximLangchainTracer` automatically captures:
- Traces: Top-level executions with input/output
- Spans: Chain executions (sequences, parallel operations, etc.)
- Generations: LLM calls with messages, model parameters, and responses
- Retrievals: Vector store and retriever operations
- Tool Calls: Function/tool executions
- Errors: Failed operations with error details
Supported Providers
The tracer automatically detects and supports:

- OpenAI (including Azure OpenAI)
- Anthropic
- Google (Vertex AI, Gemini)
- Amazon Bedrock
- Hugging Face
- Together AI
- Groq
- And more…
Custom Metadata
You can pass custom metadata through LangChain’s metadata system to customize how your operations appear in Maxim. All Maxim-specific metadata should be nested under the `maxim` key:
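For example (the field names are the ones documented below; `chain`, `input`, and `maximTracer` are from the earlier setup):

```ts
const result = await chain.invoke(input, {
  callbacks: [maximTracer],
  metadata: {
    maxim: {
      traceName: "support-ticket-triage",
      traceTags: { team: "support", priority: "p1" },
    },
  },
});
```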
Available Metadata Fields
Entity Naming:

- `traceName` - Override the default trace name
- `chainName` - Override the default chain/span name
- `generationName` - Override the default LLM generation name
- `retrievalName` - Override the default retrieval operation name
- `toolCallName` - Override the default tool call name

Tagging:

- `traceTags` - Add custom tags to the trace (object: `{key: value}`)
- `chainTags` - Add custom tags to chains/spans (object: `{key: value}`)
- `generationTags` - Add custom tags to LLM generations (object: `{key: value}`)
- `retrievalTags` - Add custom tags to retrieval operations (object: `{key: value}`)
- `toolCallTags` - Add custom tags to tool calls (object: `{key: value}`)

Session and Trace Linking:

- `sessionId` - Link this trace to an existing session
- `traceId` - Use a specific trace ID
- `spanId` - Use a specific span ID
Complete Example
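A sketch combining the fields above; all names and tag values are illustrative:

```ts
const result = await chain.invoke(
  { input: "Find recent orders for user 42" },
  {
    callbacks: [maximTracer],
    metadata: {
      maxim: {
        sessionId: "session-user-42", // link the trace to an existing session
        traceName: "order-lookup",
        traceTags: { feature: "orders" },
        generationName: "order-llm-call",
        retrievalName: "orders-vector-search",
        toolCallName: "orders-db-query",
      },
    },
  },
);
```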
Per-Component Examples
For LLM calls:
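A sketch, assuming `model` is any LangChain chat model:

```ts
const answer = await model.invoke("Explain vector embeddings in one sentence", {
  callbacks: [maximTracer],
  metadata: {
    maxim: {
      generationName: "embeddings-explainer",
      generationTags: { feature: "glossary" },
    },
  },
});
```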
Notes

- Automatic fallbacks: If you don’t provide custom names, the tracer uses sensible defaults based on the LangChain component names
- Session linking: Use `sessionId` to group multiple traces under the same user session for better analytics
Legacy LangChain Integration
For projects still using our separate Maxim Langchain Tracer package (now deprecated in favor of the built-in tracer above), you can switch to the built-in tracer as-is by replacing the import and installing `@langchain/core`.
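The swap is a one-line import change; a sketch, assuming the deprecated package name below (check your package.json for the exact name):

```ts
// Before: deprecated standalone package (name assumed for illustration)
// import { MaximLangchainTracer } from "@maximai/maxim-js-langchain";

// After: built-in tracer (requires: npm install @langchain/core)
import { MaximLangchainTracer } from "@maximai/maxim-js/langchain";
```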
Version changelog
v6.5.0
- ⚠️ BREAKING CHANGES:
  - `Prompt.messages` type changed: The `messages` field type has been updated for better type safety
    - Before: `{ role: string; content: string | CompletionRequestContent[] }[]`
    - After: `(CompletionRequest | ChatCompletionMessage)[]`
    - Migration: Update your code to use the new `CompletionRequest` interface, which has more specific role types (`"user" | "system" | "tool" | "function"`) instead of a generic `string`
  - `GenerationConfig.messages` type changed: For better type safety and tool call support
    - Before: `messages: CompletionRequest[]`
    - After: `messages: (CompletionRequest | ChatCompletionMessage)[]`
    - Migration: Your existing `CompletionRequest[]` arrays will still work, but you can now also pass `ChatCompletionMessage[]` for assistant responses with tool calls
  - `Generation.addMessages()` method signature changed:
    - Before: `addMessages(messages: CompletionRequest[])`
    - After: `addMessages(messages: (CompletionRequest | ChatCompletionMessage)[])`
    - Migration: Your existing calls will still work, but you can now also pass assistant messages with tool calls
  - `MaximLogger.generationAddMessage()` method signature changed:
    - Before: `generationAddMessage(generationId: string, messages: CompletionRequest[])`
    - After: `generationAddMessage(generationId: string, messages: (CompletionRequest | ChatCompletionMessage)[])`
    - Migration: Your existing calls will still work, but you can now also pass assistant messages with tool calls
- feat: Added LangChain integration with `MaximLangchainTracer`
  - Comprehensive tracing support for LangChain and LangGraph applications
  - Automatic detection of 8+ LLM providers (OpenAI, Anthropic, Google, Bedrock, etc.)
  - Support for chains, agents, retrievers, and tool calls
  - Custom metadata and tagging capabilities
  - Added `@langchain/core` as an optional dependency
- feat: Enhanced prompt and prompt chain execution capabilities
  - NEW METHOD: `Prompt.run(input, options?)` - Execute prompts directly from Prompt objects
  - NEW METHOD: `PromptChain.run(input, options?)` - Execute prompt chains directly from PromptChain objects
  - Support for image URLs when running prompts via the `ImageUrl` type
  - Support for variables in prompt execution
- feat: New types and interfaces for improved type safety
  - NEW TYPE: `PromptResponse` - Standardized response format for prompt executions
  - NEW TYPE: `AgentResponse` - Standardized response format for prompt chain executions
  - ENHANCED TYPE: `ChatCompletionMessage` - More specific interface for assistant messages with tool call support
  - ENHANCED TYPE: `CompletionRequest` - More specific interface with type-safe roles
  - NEW TYPE: `Choice`, `Usage` - Supporting types for response data with token usage
  - NEW TYPE: `ImageUrl` - Type for image URL content in prompts (extracted from `CompletionRequestImageUrlContent`)
  - NEW TYPE: `AgentCost`, `AgentUsage`, `AgentResponseMeta` - Supporting types for agent responses
- feat: Test run improvements with prompt chain support
  - Enhanced test run execution with cost and usage tracking for prompt chains
  - Support for prompt chains alongside existing prompt and workflow support
  - NEW METHOD: `TestRunBuilder.withPromptChainVersionId(id, contextToEvaluate?)` - Add prompt chains to test runs
- feat: Enhanced exports for better developer experience
  - NEW EXPORT: `MaximLangchainTracer` - Main LangChain integration class
  - NEW EXPORTS: `ChatCompletionMessage`, `Choice`, `CompletionRequest`, `PromptResponse` - Core types now available for external use
  - Enhanced type safety and IntelliSense support for prompt handling
- feat: Standalone package configuration
  - MIGRATION: Moved from NX monorepo to standalone package (internal change, no user action needed)
  - Added comprehensive build, test, and lint scripts
  - Updated TypeScript configuration for ES2022 target
  - Added Prettier and ESLint configuration files
  - NEW EXPORT: `VariableType` from dataset models
- deps: LangChain ecosystem support (all optional)
  - NEW OPTIONAL: `@langchain/core` as an optional dependency (^0.3.0) - only needed if using `MaximLangchainTracer`

Migration guide:

- If you access `Prompt.messages` directly: Update your type annotations to use `CompletionRequest | ChatCompletionMessage` types
- If you create custom prompt objects: Ensure your `messages` array uses the new interface structure
- If you use `Generation.addMessages()`: The method now accepts `(CompletionRequest | ChatCompletionMessage)[]` - your existing code will work unchanged
- If you use `MaximLogger.generationAddMessage()`: The method now accepts `(CompletionRequest | ChatCompletionMessage)[]` - your existing code will work unchanged
- If you create `GenerationConfig` objects: The `messages` field now accepts `(CompletionRequest | ChatCompletionMessage)[]` - your existing code will work unchanged
- To use the LangChain integration: Install `@langchain/core` and import `MaximLangchainTracer`
- No action needed for: Regular SDK usage through `maxim.logger()`, test runs, or prompt management APIs

Existing `CompletionRequest[]` arrays are compatible with `(CompletionRequest | ChatCompletionMessage)[]`; you should only see TypeScript compilation errors if you have strict type checking enabled.
v6.4.0
- feat: adds a `provider` field to the `Prompt` type. This field specifies the LLM provider (e.g., ‘openai’, ‘anthropic’, etc.) for the prompt.
- feat: include the LangChain integration in the main repository
v6.3.0
- feat: adds attachments support to `Trace`, `Span`, and `Generation` for file uploads.
  - 3 attachment types are supported: file path, buffer data, and URL
  - auto-detects MIME types, file sizes, and names for attachments wherever possible
- fix: refactored message handling for Generations so that message objects are duplicated rather than held by reference, ensuring point-in-time capture.
- fix: ensures proper cleanup of resources
v6.2.2
- fix: Added support for OpenAI’s `logprobs` output in generation results (`ChatCompletionResult` and `TextCompletionResult`).
v6.2.1
- fix: Refactored message handling in the `Generation` class to prevent duplicate messages
v6.2.0
- chore: Adds a maximum payload size limit for pushes to the server
- chore: Adds a maximum in-memory size for the queue of pending commit logs. Beyond that limit, the writer automatically flushes logs to the server
v6.1.8
- Feat: Adds a new `error` component
- Chore: Adds an ID validator for each entity. It emits an error log or throws an exception, depending on the `raiseExceptions` flag.
v6.1.7
- Feat: Adds a `trace.addToSession` method for attaching a trace to a new session
v6.1.6
- Fix: minor bug fixes around queuing of logs.
v6.1.5
- Fix: updates the create-test-run API call to use the v2 API
v6.1.4
- Fix: Handles marking a test run as failed if it throws at any point after being created on the platform.
- Feat: Adds support for `contextToEvaluate` in `withPromptVersionId` and `withWorkflowId` (passed as the second parameter), so you can choose any variable or dataset column as the context to evaluate, rather than only a dataset column via the `CONTEXT_TO_EVALUATE` data structure mapping.
v6.1.3
- Feat: Adds `createCustomEvaluator` and `createCustomCombinedEvaluatorsFor` for adding custom evaluators to test runs.
- Feat: Adds `withCustomLogger` to the test run builder chain for plugging in a custom logger that implements the `TestRunLogger` interface.
- Feat: Adds a `createDataStructure` function to create a data structure outside the test run builder. This helps use the data structure to infer types outside the builder.
- Feat: Adds `withWorkflowId` and `withPromptVersionId` to the test run builder chain.
v6.1.2
- Fix: makes `eventId` mandatory while logging an event.
- Feat: adds an `addMetadata` method to all log components for tracking any additional metadata.
- Feat: adds an `evaluate` method to the `Trace`, `Span`, `Generation`, and `Retrieval` classes for agentic (or node-level) evaluation.
v6.1.1
- Feat: Adds support for `tool_calls` as a separate entity.
v6.1.0
- Change: Adds a new config parameter `raiseExceptions` to control exceptions thrown by the SDK. The default value is `false`. `getPrompt(s)`, `getPromptChain(s)`, and `getFolder(s)` can return `undefined` when `raiseExceptions` is `false`.
v6.0.4
- Change: Prompt management now needs to be enabled via config.
- Chore: The SDK warns the user on multiple initializations. This will start throwing exceptions in future releases.
v6.0.3
- Chore: removed optional dependencies
v6.0.2
- Feat: Adds a new `logger.flush` method for explicitly flushing logs
v6.0.1
- Fix: fixes logger cleanup
v6.0.0
- Feat: Jinja 2.0 variables support
v5.2.6
- Fix: fixes incorrect message format for OpenAI structured output params
v5.2.5
- Fix: fixes incorrect mapping of messages for the old LangChain SDK
v5.2.4
- Fix: config fixes for static classes
v5.2.3
- Improvement: Adds AWS Lambda support for the Maxim SDK.
v5.2.2
- Fix: fixes a critical bug in the HTTP POST implementation where some payloads were getting truncated.
v5.2.1
- Fix: When ending any entity, we now make sure `endTimestamp` is captured client-side. Previously this was not the case in some scenarios.
- Fix: The data payload will always be valid JSON
v5.2.0
- Improvement: Adds retries with exponential backoff to API calls to the Maxim server.
v5.1.2
- Improvement: README updates.
v5.1.1
- Improvement: Detailed logs in debug mode
v5.1.0
- Adds scaffolding to support `LangchainTracer` in the Maxim SDK.
v5.0.3
- Exposes `MaximLogger` for writing wrappers for different developer SDKs.
v5.0.2
- Adds more type safety for generation messages
v5.0.1
- Adds input/output support for traces
v5.0.0
- Adds support for Node 12+
v4.0.2
- Fixed a critical bug related to pushing generation results to the Maxim platform
- Improved error handling for network connectivity issues
- Enhanced performance when logging large volumes of data
v4.0.1
- Adds retrieval updates
- Adds `ChatMessage` support
v4.0.0 (Breaking changes)
- Adds prompt chain support
- Adds vision model support for prompts
v3.0.7
- Adds separate error reporting method for generations
v3.0.6
- Adds top level methods for easier SDK integration
v3.0.5
- Fixes logs push error
v3.0.4
- Minor bug fixes
v3.0.3
- Updates default base url
v3.0.2
- Prompt selection algorithm v2
v3.0.1
- Minor bug fixes
v3.0.0
- Moves to new base URL
- Adds all new logging support
v2.1.0
- Adds support for adding dataset entries via SDK.
v2.0.0
- Folders, Tags and advanced filtering support.
- Adds support for customizing the default matching algorithm.
v1.1.0
- Adds real-time sync for prompt deployment.
v1.0.0
- Adds support for deployment variables and custom fields. [Breaking change from earlier versions.]
v0.5.0
- Adds support for new SDK apis.
v0.4.0
- Adds support for custom fields for Prompts.