### current_generation

```python
def current_generation() -> Optional[Generation]
```

Get the current generation from the context variable.
Returns:

| Type | Description |
|---|---|
| Optional[[Generation](/sdk/python/references/logger/components/generation)] | The current generation instance if one exists, otherwise None. |
### generation

```python
def generation(logger: Optional[Logger] = None,
               id: Optional[Union[Callable, str]] = None,
               name: Optional[str] = None,
               maxim_prompt_id: Optional[str] = None,
               tags: Optional[Dict[str, str]] = None,
               evaluators: Optional[List[str]] = None,
               evaluator_variables: Optional[Dict[str, str]] = None)
```
Decorator for tracking AI model generations with Maxim logging.
This decorator wraps functions to automatically create and manage Generation
objects for tracking AI model calls, including inputs, outputs, and metadata.
The decorated function must be called within a @trace or @span decorated context.
Arguments:

| Name | Type | Description |
|---|---|---|
| logger | Optional[Logger] | Maxim logger instance. If None, uses the current logger from context. |
| id | Optional[Union[Callable, str]] | Generation ID. Can be a string or a callable that returns a string. If None, generates a UUID. |
| name | Optional[str] | Human-readable name for the generation. |
| maxim_prompt_id | Optional[str] | ID of the Maxim prompt template used. |
| tags | Optional[Dict[str, str]] | Key-value pairs for tagging the generation. |
| evaluators | Optional[List[str]] | List of evaluator names to run on this generation. |
| evaluator_variables | Optional[Dict[str, str]] | Variables to pass to evaluators. |
Returns:

| Type | Description |
|---|---|
| Callable | The decorator function that wraps the target function. |
Raises:

- ValueError - If no logger is found, or if called outside of a trace/span context when raise_exceptions is True.
Example:

```python
import maxim

logger = maxim.Logger(api_key="your-api-key")

@logger.trace()
def my_ai_workflow():
    result = generate_text("Hello world")
    return result

@generation(
    name="text_generation",
    tags={"model": "gpt-4", "temperature": "0.7"},
    evaluators=["coherence", "relevance"]
)
def generate_text(prompt: str) -> str:
    # Your AI generation logic here
    return "Generated response"
```