Use `MaximLangchainTracer` to integrate Maxim observability with your LangChain and LangGraph applications. `MaximLangchainTracer` automatically captures traces, chains (spans), LLM generations, retrievals, and tool calls as they execute.

You can customize what gets logged by passing metadata under the `maxim` key (a sketch of this follows the list):

- `traceName` - Override the default trace name
- `chainName` - Override the default chain/span name
- `generationName` - Override the default LLM generation name
- `retrievalName` - Override the default retrieval operation name
- `toolCallName` - Override the default tool call name
- `traceTags` - Add custom tags to the trace (object: `{key: value}`)
- `chainTags` - Add custom tags to chains/spans (object: `{key: value}`)
- `generationTags` - Add custom tags to LLM generations (object: `{key: value}`)
- `retrievalTags` - Add custom tags to retrieval operations (object: `{key: value}`)
- `toolCallTags` - Add custom tags to tool calls (object: `{key: value}`)
- `sessionId` - Link this trace to an existing session
- `traceId` - Use a specific trace ID
- `spanId` - Use a specific span ID

Use `sessionId` to group multiple traces under the same user session for better analytics. The tracer requires `@langchain/core`.
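A minimal sketch of wiring the tracer into a LangChain call and passing metadata under the `maxim` key. The import subpath, log repository id, and model setup are illustrative assumptions, not confirmed by this page:

```typescript
import { Maxim } from "@maximai/maxim-js";
import { MaximLangchainTracer } from "@maximai/maxim-js/langchain"; // assumed subpath
import { ChatOpenAI } from "@langchain/openai";

const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });
const logger = await maxim.logger({ id: "log-repository-id" }); // illustrative id
if (!logger) throw new Error("failed to create logger");

const tracer = new MaximLangchainTracer(logger);
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

const response = await model.invoke("What is LangGraph?", {
  callbacks: [tracer],
  metadata: {
    maxim: {
      traceName: "faq-query",
      traceTags: { feature: "faq" },
      sessionId: "user-session-123", // groups this trace under a session
    },
  },
});
```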
Breaking changes:

- `Prompt.messages` type changed: the `messages` field type has been updated for better type safety.
  - Before: `{ role: string; content: string | CompletionRequestContent[] }[]`
  - After: `(CompletionRequest | ChatCompletionMessage)[]`
  - The `CompletionRequest` interface has more specific role types (`"user" | "system" | "tool" | "function"`) instead of a generic `string`.
- `GenerationConfig.messages` type changed: for better type safety and tool call support.
  - Before: `messages: CompletionRequest[]`
  - After: `messages: (CompletionRequest | ChatCompletionMessage)[]`
  - Existing `CompletionRequest[]` arrays will still work, but you can now also pass `ChatCompletionMessage[]` for assistant responses with tool calls.
- `Generation.addMessages()` method signature changed (see the sketch after this list):
  - Before: `addMessages(messages: CompletionRequest[])`
  - After: `addMessages(messages: (CompletionRequest | ChatCompletionMessage)[])`
- `MaximLogger.generationAddMessage()` method signature changed:
  - Before: `generationAddMessage(generationId: string, messages: CompletionRequest[])`
  - After: `generationAddMessage(generationId: string, messages: (CompletionRequest | ChatCompletionMessage)[])`
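Under the new union, existing `CompletionRequest[]` call sites still compile, and assistant turns with tool calls can now be logged directly. A hedged sketch; the `ChatCompletionMessage` fields below follow the OpenAI chat format and should be treated as assumptions:

```typescript
import type {
  ChatCompletionMessage,
  CompletionRequest,
  Generation,
} from "@maximai/maxim-js";

declare const generation: Generation; // obtained elsewhere, e.g. from a trace

// Plain CompletionRequest[] keeps working after the upgrade.
const history: CompletionRequest[] = [
  { role: "user", content: "What is the weather in Paris?" },
];

// Assumed shape: an assistant message carrying a tool call, OpenAI-style.
const assistantTurn: ChatCompletionMessage = {
  role: "assistant",
  content: "",
  tool_calls: [
    {
      id: "call_1",
      type: "function",
      function: { name: "get_weather", arguments: '{"city":"Paris"}' },
    },
  ],
};

generation.addMessages([...history, assistantTurn]);
```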
Added:

- `MaximLangchainTracer` - Main LangChain integration class; requires `@langchain/core` as an optional dependency
- `Prompt.run(input, options?)` - Execute prompts directly from `Prompt` objects (see the sketch after this list)
- `PromptChain.run(input, options?)` - Execute prompt chains directly from `PromptChain` objects
- `PromptResponse` - Standardized response format for prompt executions
- `AgentResponse` - Standardized response format for prompt chain executions
- `ChatCompletionMessage` - More specific interface for assistant messages with tool call support
- `CompletionRequest` - More specific interface with type-safe roles
- `Choice`, `Usage` - Supporting types for response data with token usage
- `ImageUrl` - Type for image URL content in prompts (extracted from `CompletionRequestImageUrlContent`)
- `AgentCost`, `AgentUsage`, `AgentResponseMeta` - Supporting types for agent responses
- `TestRunBuilder.withPromptChainVersionId(id, contextToEvaluate?)` - Add prompt chains to test runs
- `ChatCompletionMessage`, `Choice`, `CompletionRequest`, `PromptResponse` - Core types now available for external use
- `VariableType` - Now exported from dataset models
- `@langchain/core` as an optional dependency (^0.3.0) - only needed if using `MaximLangchainTracer`
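A sketch of the new direct execution path. The prompt id is illustrative, and the response shape assumes the OpenAI-style `choices` array implied by the `Choice` type above:

```typescript
import { Maxim } from "@maximai/maxim-js";

const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });

// Illustrative id; getPrompt may return undefined when raiseExceptions is false.
const prompt = await maxim.getPrompt("prompt-id");

if (prompt) {
  // run() executes the prompt and resolves to a standardized PromptResponse.
  const response = await prompt.run("Summarize this ticket for me.");
  console.log(response.choices[0]?.message?.content);
}
```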
Migration notes:

- If you use `Prompt.messages` directly: update your type annotations to the `CompletionRequest | ChatCompletionMessage` union; the `messages` array uses the new interface structure.
- If you use `Generation.addMessages()`: the method now accepts `(CompletionRequest | ChatCompletionMessage)[]` - your existing code will work unchanged.
- If you use `MaximLogger.generationAddMessage()`: the method now accepts `(CompletionRequest | ChatCompletionMessage)[]` - your existing code will work unchanged.
- If you use `GenerationConfig` objects: the `messages` field now accepts `(CompletionRequest | ChatCompletionMessage)[]` - your existing code will work unchanged.
- To use the LangChain integration: install `@langchain/core` and import `MaximLangchainTracer`.
- No changes are needed if you only use `maxim.logger()`, test runs, or the prompt management APIs.
- Most existing code is unaffected because `CompletionRequest[]` is compatible with `(CompletionRequest | ChatCompletionMessage)[]`. You may only see TypeScript compilation errors if you have strict type checking enabled (a short sketch follows).
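The compatibility guarantee is ordinary union widening; a two-line sketch:

```typescript
import type { ChatCompletionMessage, CompletionRequest } from "@maximai/maxim-js";

const existing: CompletionRequest[] = [{ role: "user", content: "hi" }];

// An array of the narrower type is assignable to the widened parameter type,
// so pre-upgrade call sites compile without changes.
const widened: (CompletionRequest | ChatCompletionMessage)[] = existing;
```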
Earlier releases:

- Added a `provider` field to the `Prompt` type. This field specifies the LLM provider (e.g., ‘openai’ or ‘anthropic’) for the prompt.
- Added file upload support to `Trace`, `Span`, and `Generation` (a hypothetical sketch follows).
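A sketch of attaching a file to a trace. The changelog entry doesn't show the exact API, so the method name `addAttachment` and its argument shape are hypothetical:

```typescript
import type { MaximLogger } from "@maximai/maxim-js";

declare const logger: MaximLogger; // from maxim.logger(...)

const trace = logger.trace({ id: "trace-1", name: "support-query" });

// Hypothetical call: the name and fields are assumptions, not the confirmed API.
trace.addAttachment({
  path: "./screenshot.png",
  name: "screenshot.png",
  mimeType: "image/png",
});
```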
- Added `logprobs` output in generation results (`ChatCompletionResult` and `TextCompletionResult`).
- Added the `error` component.
- Added the `raiseExceptions` flag.
- Added the `trace.addToSession` method for attaching a trace to a new session.
- Added `contextToEvaluate` in `withPromptVersionId` and `withWorkflowId` (by passing it as the second parameter), so you can choose whichever variable or dataset column to use as the context to evaluate, as opposed to only having the dataset column as context through the `CONTEXT_TO_EVALUATE` data structure mapping (see the sketch below).
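A sketch of choosing the evaluation context via the second parameter. The ids, column name, and evaluator name are illustrative, and `withData` for attaching a dataset is an assumption about the builder API described in these entries:

```typescript
import { Maxim } from "@maximai/maxim-js";

const maxim = new Maxim({ apiKey: process.env.MAXIM_API_KEY! });

await maxim
  .createTestRun("nightly-regression", "workspace-id") // illustrative ids
  .withData("dataset-id")
  // Second argument: the variable/dataset column to treat as context to evaluate.
  .withPromptVersionId("prompt-version-id", "retrievedChunks")
  .withEvaluators("Faithfulness")
  .run();
```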
- Added `createCustomEvaluator` and `createCustomCombinedEvaluatorsFor` for adding custom evaluators to test runs (see the sketch after this list).
- Added `withCustomLogger` to the test run builder chain, for plugging in a custom logger that follows the `TestRunLogger` interface.
- Added the `createDataStructure` function to create a data structure outside the test run builder; this helps use the data structure to infer types outside the builder.
- Added `withWorkflowId` and `withPromptVersionId` to the test run builder chain.
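A sketch of a custom evaluator built on `createDataStructure`. The scoring-function signature and the pass-criteria option names are assumptions patterned on this builder API:

```typescript
import { createCustomEvaluator, createDataStructure } from "@maximai/maxim-js";

// Defined outside the builder so its type can drive inference elsewhere.
const dataStructure = createDataStructure({
  input: "INPUT",
  expectedOutput: "EXPECTED_OUTPUT",
});

// Assumed signature: a name, a scoring function, and pass criteria.
const exactMatch = createCustomEvaluator<typeof dataStructure>(
  "exact-match",
  (result, data) => ({
    score: result.output === data.expectedOutput,
    reasoning: "Compares the run output against the expected output column.",
  }),
  {
    onEachEntry: { scoreShouldBe: "=", value: true },
    forTestrunOverall: { overallShouldBe: ">=", value: 80, for: "percentageOfPassedResults" },
  }
);

// Later: pass it to the builder, e.g. .withEvaluators(exactMatch).
```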
- Made `eventId` mandatory while logging an event.
- Added the `addMetadata` method to all log components for tracking any additional metadata.
- Added the `evaluate` method to the `Trace`, `Span`, `Generation`, and `Retrieval` classes for agentic (or node-level) evaluation.
- Added `raiseExceptions` to control exceptions thrown by the SDK. The default value is `false`.
- `getPrompt(s)`, `getPromptChain(s)`, and `getFolder(s)` can return `undefined` if `raiseExceptions` is `false` (see the sketch after this list).
- Added the `logger.flush` method to explicitly flush logs.
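Because fetches return `undefined` instead of throwing when `raiseExceptions` is `false` (the default), guard the result. A sketch, assuming the flag is set on the SDK config:

```typescript
import { Maxim } from "@maximai/maxim-js";

const maxim = new Maxim({
  apiKey: process.env.MAXIM_API_KEY!,
  raiseExceptions: false, // assumed config placement; failures yield undefined
});

const prompt = await maxim.getPrompt("prompt-id"); // illustrative id
if (!prompt) {
  // Handle the miss explicitly: fall back, alert, or retry.
  throw new Error("Prompt not found or fetch failed");
}

const response = await prompt.run("Hello!");
```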