Requirements

"llama-index"
"llama-index-llms-openai"
"llama-index-embeddings-openai"
"maxim-py"
"python-dotenv"

Environment Variables

MAXIM_API_KEY=
MAXIM_LOG_REPO_ID=
OPENAI_API_KEY=
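
Since python-dotenv is listed in the requirements, a minimal sketch for loading these from a local .env file before anything else runs:

from dotenv import load_dotenv

# Reads MAXIM_API_KEY, MAXIM_LOG_REPO_ID, and OPENAI_API_KEY from .env
# into the process environment so os.getenv() can see them
load_dotenv()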

Initialize Maxim Logger

import os
from maxim import Config, Maxim
from maxim.logger import LoggerConfig

# Initialize Maxim logger
maxim = Maxim(Config(api_key=os.getenv("MAXIM_API_KEY")))
logger = maxim.logger(LoggerConfig(id=os.getenv("MAXIM_LOG_REPO_ID")))

Enable LlamaIndex Instrumentation

from maxim.logger.llamaindex import instrument_llamaindex

# Instrument LlamaIndex with Maxim observability
instrument_llamaindex(logger, debug=True)

This single line automatically instruments:
  • AgentWorkflow.run() - Multi-agent workflow execution
  • FunctionAgent.run() - Function-based agent interactions
  • ReActAgent.run() - ReAct reasoning agent calls
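
Putting the setup together, a minimal bootstrap looks like this (assuming the environment variables above are set in .env or the shell):

import os

from dotenv import load_dotenv
from maxim import Config, Maxim
from maxim.logger import LoggerConfig
from maxim.logger.llamaindex import instrument_llamaindex

load_dotenv()

maxim = Maxim(Config(api_key=os.getenv("MAXIM_API_KEY")))
logger = maxim.logger(LoggerConfig(id=os.getenv("MAXIM_LOG_REPO_ID")))

# Instrument before constructing agents so every subsequent run is captured
instrument_llamaindex(logger)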

Simple FunctionAgent Example

Create a basic calculator agent with automatic tracing:
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# Define calculator tools
def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

def multiply_numbers(a: float, b: float) -> float:
    """Multiply two numbers together."""
    return a * b

def divide_numbers(a: float, b: float) -> float:
    """Divide first number by second number."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# Create function tools
add_tool = FunctionTool.from_defaults(fn=add_numbers)
multiply_tool = FunctionTool.from_defaults(fn=multiply_numbers)
divide_tool = FunctionTool.from_defaults(fn=divide_numbers)

# Initialize LLM and agent
llm = OpenAI(model="gpt-4o-mini", temperature=0)

agent = FunctionAgent(
    tools=[add_tool, multiply_tool, divide_tool],
    llm=llm,
    system_prompt="""You are a helpful calculator assistant.
    Use the provided tools to perform mathematical calculations.
    Always explain your reasoning step by step."""
)

# Run the agent (automatically traced by Maxim)
async def run_calculation():
    query = "What is (15 + 25) multiplied by 2, then divided by 8?"
    response = await agent.run(query)
    print(f"Response: {response}")

asyncio.run(run_calculation())  # in a notebook, use: await run_calculation()

Multi-Modal Agent Example

Create an agent that can handle both text and images:
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.llms import ChatMessage, ImageBlock, TextBlock
from llama_index.llms.openai import OpenAI

# Multi-modal tools
def describe_image_content(description: str) -> str:
    """Analyze and describe what's in an image."""
    return f"Image analysis complete: {description}"

def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# Create multi-modal agent
multimodal_llm = OpenAI(model="gpt-4o-mini")  # Vision-capable model

multimodal_agent = FunctionAgent(
    tools=[add, describe_image_content],
    llm=multimodal_llm,
    system_prompt="You are a helpful assistant that can analyze images and perform calculations."
)

# Use with images
async def analyze_image():
    msg = ChatMessage(
        role="user",
        blocks=[
            TextBlock(text="What do you see in this image? If there are numbers, perform calculations."),
            ImageBlock(path="path/to/your/image.jpg"),  # use url=... for a remote image
        ],
    )
    response = await multimodal_agent.run(msg)
    print(f"Multi-modal response: {response}")

asyncio.run(analyze_image())  # in a notebook, use: await analyze_image()

Multi-Agent Workflow Example

Create a complex workflow with multiple specialized agents:
import asyncio

from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool

# Research agent tools
def research_topic(topic: str) -> str:
    """Research a given topic and return key findings."""
    research_data = {
        "climate change": "Climate change refers to long-term shifts in global temperatures and weather patterns...",
        "renewable energy": "Renewable energy comes from sources that are naturally replenishing...",
        "artificial intelligence": "AI involves creating computer systems that can perform tasks typically requiring human intelligence..."
    }
    
    topic_lower = topic.lower()
    for key, info in research_data.items():
        if key in topic_lower:
            return f"Research findings on {topic}: {info}"
    
    return f"Research completed on {topic}. This is an emerging area requiring further investigation."

# Analysis agent tools
def analyze_data(research_data: str) -> str:
    """Analyze research data and provide insights."""
    if "climate change" in research_data.lower():
        return "Analysis indicates climate change requires immediate action through carbon reduction..."
    elif "renewable energy" in research_data.lower():
        return "Analysis shows renewable energy is becoming cost-competitive with fossil fuels..."
    else:
        return "Analysis suggests this topic has significant implications requiring strategic planning..."

# Report writing agent tools
def write_report(analysis: str, topic: str) -> str:
    """Write a comprehensive report based on analysis."""
    return f"""
═══════════════════════════════════════
COMPREHENSIVE RESEARCH REPORT: {topic.upper()}
═══════════════════════════════════════

EXECUTIVE SUMMARY:
{analysis}

KEY FINDINGS:
- Evidence-based analysis indicates significant implications
- Multiple stakeholder perspectives must be considered
- Implementation requires coordinated approach

RECOMMENDATIONS:
1. Develop comprehensive strategy framework
2. Engage key stakeholders early in process
3. Establish clear metrics and milestones
4. Create feedback mechanisms for continuous improvement

This report provides a foundation for informed decision-making.
"""

# Initialize LLM and create specialized agents
llm = OpenAI(model="gpt-4o-mini", temperature=0)

research_agent = FunctionAgent(
    name="research_agent",
    description="This agent researches topics and returns key findings.",
    tools=[FunctionTool.from_defaults(fn=research_topic)],
    llm=llm,
    system_prompt="You are a research specialist. Use the research tool to gather comprehensive information."
)

analysis_agent = FunctionAgent(
    name="analysis_agent", 
    description="This agent analyzes research data and provides actionable insights.",
    tools=[FunctionTool.from_defaults(fn=analyze_data)],
    llm=llm,
    system_prompt="You are a data analyst. Analyze research findings and provide actionable insights."
)

report_agent = FunctionAgent(
    name="report_agent",
    description="This agent creates comprehensive, well-structured reports.",
    tools=[FunctionTool.from_defaults(fn=write_report)],
    llm=llm,
    system_prompt="You are a report writer. Create comprehensive, well-structured reports."
)

# Create multi-agent workflow
multi_agent_workflow = AgentWorkflow(
    agents=[research_agent, analysis_agent, report_agent],
    root_agent="research_agent"
)

# Run the workflow (automatically traced by Maxim)
async def run_workflow():
    query = """I need a comprehensive report on renewable energy.
    Please research the current state of renewable energy,
    analyze the key findings, and create a structured report
    with recommendations for implementation."""
    
    response = await multi_agent_workflow.run(query)
    print(f"Multi-Agent Response: {response}")

asyncio.run(run_workflow())  # in a notebook, use: await run_workflow()

What Gets Traced

With Maxim instrumentation enabled, you’ll automatically capture:

Agent Execution Traces

  • Function Calls: All tool executions with inputs and outputs
  • Agent Reasoning: Step-by-step decision making process
  • Multi-Agent Coordination: Agent handoffs and communication
  • Performance Metrics: Execution times and resource usage

LLM Interactions

  • Prompts and Responses: Complete conversation history
  • Model Parameters: Temperature, max tokens, model versions
  • Token Usage: Input/output token consumption
  • Error Handling: Failed requests and retry attempts

Workflow Orchestration

  • Agent Workflows: Complex multi-agent execution paths
  • Tool Chain Execution: Sequential and parallel tool usage
  • Multi-Modal Processing: Text, image, and mixed content handling

View Traces in Maxim Dashboard

All agent interactions, tool calls, and workflow executions are automatically traced and available in your Maxim dashboard. You can:
  • Monitor agent performance and success rates
  • Debug failed tool calls and agent reasoning
  • Analyze multi-agent coordination patterns
  • Track token usage and costs across workflows
  • Set up alerts for agent failures or performance issues

Advanced Configuration

Custom Debug Settings

# Enable detailed debug logging during development
instrument_llamaindex(logger, debug=True)

# Production setup with minimal logging
instrument_llamaindex(logger, debug=False)
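
One way to switch between the two without code changes is an environment flag; the MAXIM_DEBUG variable name below is just an example, not something Maxim reads itself:

import os

# Hypothetical flag: set MAXIM_DEBUG=true in development, leave it unset in production
debug_enabled = os.getenv("MAXIM_DEBUG", "false").lower() == "true"
instrument_llamaindex(logger, debug=debug_enabled)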

Resources

You can quickly try the LlamaIndex One Line Integration here -