Tracing via SDK
Generations
Use generations to log individual calls to Large Language Models (LLMs). Each trace or span can contain multiple generations.
1. Send and record the LLM request
// Initialize a trace with a unique ID
const trace = logger.trace({ id: "trace-id" });

// Add a generation to the trace
const generation = trace.generation({
  id: "generation-id",
  name: "customer-support--gather-information",
  provider: "openai",
  model: "gpt-4o",
  modelParameters: { temperature: 0.7 },
  messages: [
    { role: "system", content: "you are a helpful assistant who helps gather customer information" },
    { role: "user", content: "My internet is not working" },
  ],
});

// Note: use span.generation instead of trace.generation when creating
// generations within an existing span.

// Execute the LLM call
// const aiCompletion = await openai.chat.completions.create({ ... })
2. Record the LLM response
generation.result({
  id: "chatcmpl-123",
  object: "chat.completion",
  created: Math.floor(Date.now() / 1000), // Unix timestamp in seconds, matching the OpenAI response format
  model: "gpt-4o",
  choices: [
    {
      index: 0,
      message: {
        role: "assistant",
        content: "Apologies for the inconvenience. Can you please share your customer id?",
      },
      finish_reason: "stop",
    },
  ],
  usage: {
    prompt_tokens: 100,
    completion_tokens: 50,
    total_tokens: 150,
  },
});
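In practice, the payload passed to generation.result will come from the LLM response rather than being hand-written. The sketch below shows one way to map an OpenAI-style chat completion object into the shape used in the example above; the helper name toGenerationResult is hypothetical and not part of the SDK, so adjust the field mapping to your SDK's actual expected types.

```javascript
// Hypothetical helper: map an OpenAI-style chat completion response to the
// payload shape used by generation.result() in the example above.
function toGenerationResult(completion) {
  return {
    id: completion.id,
    object: completion.object,
    created: completion.created, // already a Unix timestamp in seconds
    model: completion.model,
    choices: completion.choices.map((choice) => ({
      index: choice.index,
      message: {
        role: choice.message.role,
        content: choice.message.content,
      },
      finish_reason: choice.finish_reason,
    })),
    usage: {
      prompt_tokens: completion.usage.prompt_tokens,
      completion_tokens: completion.usage.completion_tokens,
      total_tokens: completion.usage.total_tokens,
    },
  };
}

// Usage (assuming `generation` and `openai` from the steps above):
// const aiCompletion = await openai.chat.completions.create({ ... });
// generation.result(toGenerationResult(aiCompletion));
```

Extracting this mapping into one place keeps the logging call identical regardless of which provider produced the completion.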