# Maxim Docs

## Docs

- [Create Alert](https://www.getmaxim.ai/docs/alerts/alert/create-alert.md): Create a new alert
- [Delete Alert](https://www.getmaxim.ai/docs/alerts/alert/delete-alert.md): Delete an alert
- [Get Alerts](https://www.getmaxim.ai/docs/alerts/alert/get-alerts.md): Get alerts for a workspace
- [Update Alert](https://www.getmaxim.ai/docs/alerts/alert/update-alert.md): Update an alert
- [Building a Financial Conversational Agent with Agno and Maxim](https://www.getmaxim.ai/docs/cookbooks/integrations/agno.md): Learn how to build a multi-agent financial conversational assistant using Agno for agent orchestration and Maxim for observability and tracing.
- [Tracing Anthropic Claude with Maxim](https://www.getmaxim.ai/docs/cookbooks/integrations/anthropic.md): Learn how to integrate Anthropic's Claude models with Maxim for full observability and tracing, including both standard and streaming completions.
- [Maxim Observability with CrewAI Research Agent](https://www.getmaxim.ai/docs/cookbooks/integrations/crewai.md): Learn how to add Maxim observability and tracing to your CrewAI agent applications in just one line of code.
- [Tracing Google Gemini Based Weather Agent Using Maxim](https://www.getmaxim.ai/docs/cookbooks/integrations/gemini.md): Learn how to integrate Maxim's tracing capabilities with Google Gemini to monitor and log your GenAI app's requests and tool calls.
- [Stock Market Analysis with Groq and Maxim](https://www.getmaxim.ai/docs/cookbooks/integrations/groq.md): Learn how to add Maxim observability and tracing for the Groq client
- [Real Time Interview Voice Agent Using LiveKit](https://www.getmaxim.ai/docs/cookbooks/integrations/livekit.md): Learn how to add Maxim observability for LiveKit-based voice agents
- [Simple AI Agent with LlamaIndex](https://www.getmaxim.ai/docs/cookbooks/integrations/llamaindex.md): Learn how to add Maxim observability and tracing to your LlamaIndex applications with function agents, multi-modal capabilities, and multi-agent workflows.
- [Observe Pydantic AI Based AI Agents](https://www.getmaxim.ai/docs/cookbooks/integrations/pydantic.md): Complete examples and cookbook for integrating Pydantic AI with Maxim for comprehensive agent monitoring and observability
- [Tracing a ReAct Agent with Maxim](https://www.getmaxim.ai/docs/cookbooks/integrations/react-agent.md): Learn how to build a ReAct-style agent using OpenAI's GPT models and trace its reasoning, tool calls, and answers using Maxim's observability SDK.
- [Agent Observability for Smolagents](https://www.getmaxim.ai/docs/cookbooks/integrations/smolagents.md): Learn how to add Maxim observability and tracing to your Smolagents applications with SQL database interactions and tool calling.
- [LLM Observability for Together AI](https://www.getmaxim.ai/docs/cookbooks/integrations/together.md): Complete examples and cookbook for integrating Together AI with Maxim for comprehensive model monitoring and observability
- [Maxim Observability with Vercel AI SDK](https://www.getmaxim.ai/docs/cookbooks/integrations/vercel.md): Learn how to add Maxim observability and tracing to your Vercel AI SDK applications in just one line of code.
- [Reuse Parts of Prompts Using Maxim Prompt Partials](https://www.getmaxim.ai/docs/cookbooks/platform-features/prompt-partials.md): This cookbook demonstrates how to use Maxim's Prompt Partials feature to create reusable prompt components that maintain consistency across multiple prompts and reduce repetition.
- [Creating Custom Evaluators in Maxim via SDK](https://www.getmaxim.ai/docs/cookbooks/sdk/sdk_custom_evaluator.md): This cookbook demonstrates how to create custom evaluators for Maxim test runs using the Python SDK. You'll learn to build AI-powered evaluators, programmatic evaluators, and integrate them with hosted datasets to comprehensively evaluate your prompts and agents from your coding environment.
- [Using Local Datasets with Maxim SDK for Test Runs](https://www.getmaxim.ai/docs/cookbooks/sdk/sdk_test_run_local_dataset.md): This cookbook demonstrates how to trigger test runs using Maxim SDK with local datasets instead of hosted datasets. You'll learn to work with CSV files, manual data, SQL databases, and other local data sources while creating comprehensive evaluation pipelines with custom evaluators.
- [Custom Logs Dashboards](https://www.getmaxim.ai/docs/dashboards/custom-logs-dashboard.md): Create custom dashboards to analyze and track your AI application logs across repositories using configurable metrics, filters, and charts.
- [Test Runs Comparison Dashboard](https://www.getmaxim.ai/docs/dashboards/test-runs-comparison-dashboard.md): Learn how to create a comparison report for your test runs
- [Create Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/create-dataset-columns.md): Create dataset columns
- [Delete Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/delete-dataset-columns.md): Delete dataset columns
- [Get Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/get-dataset-columns.md): Get dataset columns
- [Update Dataset Columns](https://www.getmaxim.ai/docs/datasets/dataset-column/update-dataset-columns.md): Update dataset columns
- [Create Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/create-dataset-entries.md): Create dataset entries
- [Delete Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/delete-dataset-entries.md): Delete dataset entries
- [Get Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/get-dataset-entries.md): Get dataset entries
- [Update Dataset Entries](https://www.getmaxim.ai/docs/datasets/dataset-entry/update-dataset-entries.md): Update dataset entries
- [Create Dataset Split](https://www.getmaxim.ai/docs/datasets/dataset-split/create-dataset-split.md): Create dataset split
- [Delete Dataset Split](https://www.getmaxim.ai/docs/datasets/dataset-split/delete-dataset-split.md): Delete dataset split
- [Get Dataset Splits](https://www.getmaxim.ai/docs/datasets/dataset-split/get-dataset-splits.md): Get dataset splits
- [Update Dataset Split](https://www.getmaxim.ai/docs/datasets/dataset-split/update-dataset-split.md): Update dataset split
- [Create Dataset](https://www.getmaxim.ai/docs/datasets/dataset/create-dataset.md): Create a new dataset
- [Delete Dataset](https://www.getmaxim.ai/docs/datasets/dataset/delete-dataset.md): Delete a dataset
- [Get Datasets](https://www.getmaxim.ai/docs/datasets/dataset/get-datasets.md): Get datasets or a specific dataset
- [Update Dataset](https://www.getmaxim.ai/docs/datasets/dataset/update-dataset.md): Update a dataset
- [Execute an evaluator](https://www.getmaxim.ai/docs/evaluators/evaluator/execute-an-evaluator.md): Execute an evaluator to assess content based on predefined criteria and return grading results, reasoning, and execution logs
- [Get evaluators](https://www.getmaxim.ai/docs/evaluators/evaluator/get-evaluators.md): Get an evaluator by ID, name or fetch all evaluators for a workspace
- [Get Folder Contents](https://www.getmaxim.ai/docs/folders/folder-contents/get-folder-contents.md): Get the contents (entities) of a specific folder, identified by folderId or name+parentFolderId.
- [Create Folder](https://www.getmaxim.ai/docs/folders/folder/create-folder.md): Create a new folder for organizing entities
- [Get Folders](https://www.getmaxim.ai/docs/folders/folder/get-folders.md): Get folder details. If id or name is provided, returns a single folder object. Otherwise, lists sub-folders under the parentFolderId (or root).
- [Create a PagerDuty Integration](https://www.getmaxim.ai/docs/integrations/create-a-pagerduty-integration.md): Learn how to create a PagerDuty integration in Maxim to receive notifications when your AI application's performance metrics or quality scores exceed specified thresholds.
- [Create a Slack Integration](https://www.getmaxim.ai/docs/integrations/create-a-slack-integration.md): Learn how to create a Slack integration in Maxim to receive notifications when your AI application's performance metrics or quality scores exceed specified thresholds.
- [Create Integration](https://www.getmaxim.ai/docs/integrations/integration/create-integration.md): Create a new integration for notification channels
- [Delete Integration](https://www.getmaxim.ai/docs/integrations/integration/delete-integration.md): Delete an integration
- [Get Integrations](https://www.getmaxim.ai/docs/integrations/integration/get-integrations.md): Get integrations for a workspace
- [Update Integration](https://www.getmaxim.ai/docs/integrations/integration/update-integration.md): Update an integration
- [Overview](https://www.getmaxim.ai/docs/integrations/overview.md): Introduction to Maxim Integrations
- [Simulate AWS Bedrock Agents](https://www.getmaxim.ai/docs/integrations/simulate-bedrock-agent.md): Test and evaluate your AWS Bedrock agents with Maxim AI
- [Simulate Glean Agents](https://www.getmaxim.ai/docs/integrations/simulate-glean-agent.md): Test and evaluate your Glean agents with Maxim AI
- [Platform Overview](https://www.getmaxim.ai/docs/introduction/overview.md): Maxim AI is an end-to-end platform for the simulation, evaluation and observability of AI agents and applications, which helps development teams build and deploy reliable generative AI products faster. Our advanced evaluation and observability tools help teams maintain quality, reliability, and speed throughout the AI application lifecycle.
- [Running Your First Eval](https://www.getmaxim.ai/docs/introduction/running-your-first-eval.md): Get started with your first evaluation run in Maxim by setting up model providers, creating prompts or agent endpoints, and preparing your dataset. This page guides you step-by-step through launching and testing your first eval.
- [Library Concepts](https://www.getmaxim.ai/docs/library/concepts.md): Explore key concepts in AI evaluation, including evaluators, datasets, and custom tools for assessing model performance and output quality.
- [Context Sources](https://www.getmaxim.ai/docs/library/context-sources.md): Learn how to create, use, and evaluate context sources for your AI applications. Context sources in Maxim allow you to connect your RAG pipeline via a simple API endpoint and link it as a variable in your prompts or endpoints for evaluation.
- [Curate Datasets](https://www.getmaxim.ai/docs/library/datasets/curate-datasets.md): Learn how to curate datasets from production logs and human annotations
- [Import or Create Datasets](https://www.getmaxim.ai/docs/library/datasets/import-or-create-datasets.md): Learn how to import or create datasets in Maxim
- [Manage Datasets](https://www.getmaxim.ai/docs/library/datasets/manage-datasets.md): Learn how to manage datasets
- [Synthetic Data Generation](https://www.getmaxim.ai/docs/library/datasets/synthetic-data-generation.md): Generate synthetic datasets automatically to kickstart your evaluation process for prompt testing or agent simulation
- [Local Datasets](https://www.getmaxim.ai/docs/library/datasets/use-local-datasets.md): Learn how to add new entries to a Dataset using the Maxim SDK
- [Custom Evaluators](https://www.getmaxim.ai/docs/library/evaluators/custom-evaluators.md): Create and configure custom evaluators to meet your specific evaluation needs
- [Folders in Evaluators](https://www.getmaxim.ai/docs/library/evaluators/folders-in-evaluators.md): Organize your evaluators using folders to keep your workspace tidy and manageable.
- [Agent Trajectory](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/agent-trajectory.md): Assesses whether an agent has completed all required steps to achieve a task, evaluating the logical progression and completeness of steps taken during a session.
- [Bias](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/bias.md): Evaluates content for the presence of biased statements across dimensions like gender, race, religion, age, and other protected characteristics, identifying potentially discriminatory or prejudiced language.
- [Clarity](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/clarity.md): Evaluates how clear, understandable, and well-structured the generated text is, assessing readability, logical flow, and communication effectiveness.
- [Conciseness](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/conciseness.md): Evaluates whether the output is appropriately brief and to the point without unnecessary verbosity, assessing efficiency in communication.
- [Consistency](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/consistency.md): Evaluates whether multiple outputs generated by a language model for the same input are consistent with each other.
- [Context Precision](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/context-precision.md): Assesses whether relevant nodes in the retrieved context are prioritized over irrelevant ones for a specific input.
- [Context Recall](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/context-recall.md): Assesses how closely the retrieved context matches the expected output by identifying key statements in the expected output and evaluating whether each is represented in the retrieved context.
- [Context Relevance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/context-relevance.md): Assesses how relevant the information in the retrieved context is to the given input and history (if present).
- [Faithfulness](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/faithfulness.md): Evaluates whether claims in the output factually align with the contents of the provided context and input by checking for contradictions, ultimately ensuring factual consistency.
- [Output Relevance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/output-relevance.md): Evaluates whether each statement in the output relevantly addresses the input, breaking down the output into statements and assessing individual relevance.
- [PII Detection](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/pii-detection.md): Evaluates whether personally identifiable information (PII) has leaked into the output. This is crucial for maintaining privacy and compliance with data protection regulations
- [SQL Correctness](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/sql-correctness.md): Evaluates whether a generated SQL query correctly translates a natural language query into valid SQL based on a provided database schema.
- [Step Completion Strict Match](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/step-completion-strict-match.md): Evaluates whether an agent has completed all required steps in exactly the specified order, ensuring strict sequential compliance.
- [Step Completion Unordered Match](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/step-completion-unordered-match.md): Evaluates whether an agent has completed all required steps, considering flexible execution order.
- [Step Utility](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/step-utility.md): Evaluates the usefulness and contribution of each step in an agent's session towards achieving the overall task goal.
- [Summarization](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/summarization.md): Evaluates the quality of text summarization across multiple dimensions including content coverage, alignment with source, and information accuracy.
- [Task Success](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/task-success.md): Evaluates whether an agent has successfully accomplished the intended goal of a task based on the complete interaction.
- [Tool Selection](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/tool-selection.md): Evaluates whether an agent selected and used appropriate tools for each step in a task, including parameter configuration.
- [Toxicity](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/ai-evaluators/toxicity.md): Assesses content for harmful or toxic language, ensuring text does not contain offensive, abusive, or harmful language to individuals or groups.
- [Overview](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/overview.md): Get started quickly with ready-made evaluators for common AI evaluation scenarios
- [containsSpecialCharacters](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/contains-special-characters.md): Validates if a string contains special characters such as !@#$%^&*(),.?":{}|<>.
- [containsValidEmail](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/contains-valid-email.md): Checks whether a given string contains at least one email address.
- [containsValidPhoneNumber](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/contains-valid-phone-number.md): Validates if the given output contains exactly 10 consecutive digits, representing a standard phone number format.
- [containsValidURL](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/contains-valid-url.md): Validates if a string contains at least one valid URL with HTTP/HTTPS/FTP protocols.
- [countWordOccurrences](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/count-word-occurrences.md): Checks whether the word "test" appears at least once in the given string (case-insensitive).
- [isBetweenRange](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-between-range.md): Validates if a given output value is within the specified range (0-100, exclusive).
- [isValidBase64](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-base64.md): Validates if a string is a valid base64 encoded value.
- [isValidDate](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-date.md): Validates if a string matches supported date formats and is a valid calendar date.
- [isValidEmail](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-email.md): Validates if the provided string is a valid email address.
- [isValidHexColor](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-hex-color.md): Validates if a string is a valid hexadecimal color code. Supports 3-digit and 6-digit hex codes, with or without the leading #.
- [isValidJSON](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-json.md): Validates if a string is in valid JSON format.
- [isValidMD5](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-md5.md): Validates if the input string is a valid 32-character hexadecimal MD5 hash.
- [isValidPhoneNumber](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-phone-number.md): Validates if the given string is exactly a 10-digit phone number.
- [isValidSHA256](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-sha256.md): Validates if a string matches the SHA-256 hash pattern (64 hexadecimal characters).
- [isValidURL](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-url.md): Validates if the given string is a valid URL.
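
The programmatic validators above are simple string checks. A minimal sketch of what a few of them verify, written as standalone Python re-implementations (illustrative only; Maxim's exact patterns may differ):

```python
import json
import re

def is_valid_email(s: str) -> bool:
    # Simplified email pattern; production validators are stricter.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None

def contains_valid_url(s: str) -> bool:
    # At least one URL with an HTTP/HTTPS/FTP scheme, per the description above.
    return re.search(r"\b(?:https?|ftp)://\S+", s, re.IGNORECASE) is not None

def is_valid_phone_number(s: str) -> bool:
    # Exactly 10 digits, matching the 10-digit format described above.
    return re.fullmatch(r"\d{10}", s) is not None

def is_valid_json(s: str) -> bool:
    try:
        json.loads(s)
        return True
    except ValueError:
        return False

assert is_valid_email("a@b.co")
assert contains_valid_url("docs live at https://example.com/page")
assert is_valid_phone_number("9876543210")
assert is_valid_json('{"ok": true}')
```
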
- [isValidUUID](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/programmatic-evaluators/is-valid-uuid.md): Validates if a string matches the UUID format (8-4-4-4-12 hexadecimal characters separated by hyphens).
- [BLEU](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/bleu.md): Measures translation quality by comparing the n-gram precision of a candidate text to reference translations, penalizing overly short outputs.
- [Chebyshev Embedding Distance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/chebyshev-embedding-distance.md): Calculates the L∞ distance between two text embeddings, defined as the greatest difference along any single dimension.
- [Cosine Embedding Distance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/cosine-embedding-distance.md): Measures the cosine of the angle between two embedding vectors to evaluate semantic similarity based on orientation, not magnitude.
- [Euclidean Embedding Distance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/euclidean-embedding-distance.md): Calculates the straight-line L2 distance between two text embeddings, providing a natural measure of semantic difference in the vector space.
- [F1 Score](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/f1-score.md): Calculates the harmonic mean of precision and recall, providing a single, balanced score that is useful for imbalanced datasets.
- [Hamming Embedding Distance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/hamming-embedding-distance.md): Counts the number of positions at which two embedding vectors differ, making it suitable for comparing binary or categorical data.
- [Manhattan Embedding Distance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/manhattan-embedding-distance.md): Calculates the L1 distance between two text embeddings, representing the sum of absolute differences across all dimensions.
- [Precision](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/precision.md): Measures the accuracy of positive predictions by calculating the proportion of true positives among all predicted positives.
- [Recall](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/recall.md): Measures the completeness of positive predictions by calculating the proportion of true positives among all actual positives.
- [ROUGE-1](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/rouge1.md): Measures summary quality by calculating the overlap of unigrams (individual words) between the generated and reference texts. It focuses on basic content coverage.
- [ROUGE-2](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/rouge2.md): Measures summary quality and local fluency by calculating the overlap of bigrams (word pairs) between the generated and reference texts. It is more stringent than ROUGE-1.
- [ROUGE-L](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/rougel.md): Measures summary quality by finding the longest common subsequence (LCS) of words, capturing sentence-level structural similarity without requiring consecutive matches.
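
The classification and embedding metrics above follow standard formulas; the sketch below shows precision, recall, F1 (the harmonic mean of precision and recall), cosine embedding distance, and ROUGE-1 recall in plain Python, as a reference illustration rather than Maxim's implementation:

```python
import math

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0  # true positives / predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0     # true positives / actual positives
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def cosine_distance(a: list[float], b: list[float]) -> float:
    # 1 - cosine similarity: compares embedding orientation, not magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def rouge1_recall(reference: str, candidate: str) -> float:
    # Unigram overlap: reference words covered by the candidate, over reference length.
    ref, cand = reference.split(), set(candidate.split())
    return sum(1 for w in ref if w in cand) / len(ref)

print(precision_recall_f1(tp=8, fp=2, fn=4))           # (0.8, 0.666..., 0.727...)
print(cosine_distance([1.0, 0.0], [1.0, 1.0]))         # ≈ 0.293
print(rouge1_recall("the cat sat", "a cat sat down"))  # ≈ 0.667
```
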
- [ROUGE-Lsum](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/rougelsum.md): Adapts ROUGE-L for multi-sentence texts by computing a summary-level longest common subsequence (LCS) score, suitable for document-level evaluation.
- [Semantic Similarity](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/semantic-similarity.md): Evaluates how close two texts are in meaning by comparing their vector embeddings, typically using cosine similarity. It captures meaning beyond exact word matches.
- [SQL Query Analysis](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/sql-query-analysis.md): Evaluates a generated SQL query's correctness by comparing its structure, semantics, and execution plan against a reference query. It goes beyond simple string matching.
- [SQLite Validation](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/sqlite-validation.md): Checks if a generated SQL query is syntactically valid and executable against a given SQLite schema by actually running the query in an in-memory database.
- [Tool Call Accuracy](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/tool-call-accuracy.md): Measures whether the model generated the correct set of tool calls for a given input by comparing actual tool calls against expected tool calls.
- [Tree Similarity Editing Distance](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/statistical-evaluators/tree-similarity-editing-distance.md): Evaluates the structural similarity of code or XML by calculating the minimum number of edits required to transform one text's abstract syntax tree into another's.
- [Abrupt Disconnection](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/abrupt-disconnection.md): Detects unexpected call terminations or early termination at the end of a pre-recorded audio recording
- [AI Interrupting User](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/ai-interrupting-user.md): Identifies and counts the number of instances where the AI interrupts or cuts off user speech during conversations.
- [Sentiment Analysis](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/sentiment-analysis.md): Analyzes speech patterns, acoustic features such as pitch and vocal tone, and the overall sentiment of the user in a pre-recorded audio conversation to deliver precise sentiment classification
- [SNR (Signal-To-Noise Ratio)](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/signal-to-noise-ratio.md): Calculates a score based on the ratio between the desired signal power and background noise power in the audio.
- [User Interrupting AI](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/user-interrupting-ai.md): Identifies and counts the number of instances where the User or Simulated User interrupts or cuts off the AI during audio conversations.
- [User Satisfaction](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/user-satisfaction.md): Analyzes the vocal and emotional journey of the user throughout the conversation, the quality of service provided by the AI, and the overall value delivered to the user
- [WER (Word Error Rate)](https://www.getmaxim.ai/docs/library/evaluators/pre-built-evaluators/voice-evaluators/wer.md): Quantifies the accuracy of machine-generated transcriptions or translations by measuring the proportion of words that are incorrectly transcribed or translated compared to the reference text (a reference computation appears after this group of entries).
- [Sessions in Evaluators](https://www.getmaxim.ai/docs/library/evaluators/sessions-in-evaluators.md): Track changes to your evaluators using sessions to maintain a history of configurations.
- [Third Party Evaluators](https://www.getmaxim.ai/docs/library/evaluators/third-party-evaluators.md): A comprehensive guide to supported third-party evaluation metrics for assessing AI model outputs
- [Map Variables to Evaluators](https://www.getmaxim.ai/docs/library/evaluators/variables-mapping.md): Map variables from your prompts, workflows, or datasets to evaluator inputs using our flexible mapping system
- [Library Overview](https://www.getmaxim.ai/docs/library/overview.md): Explore Maxim's library of supporting components for AI testing and evaluation. Access evaluators, datasets, context sources, and prompt tools to enhance your testing workflow and ensure high-quality AI applications.
- [Creating Prompt Partials](https://www.getmaxim.ai/docs/library/prompt-partials.md): Learn how to create and use prompt partials in Maxim
- [Prompt Tools](https://www.getmaxim.ai/docs/library/prompt-tools.md): This section includes comprehensive documentation for creating and using different types of prompt tools in Maxim. Learn how to create code-based, schema-based, and API-based tools to enhance your prompts with custom functions and agentic behaviors.
- [Create a new log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/create-a-new-log-repository.md): Create a new log repository
- [Delete a log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/delete-a-log-repository.md): Delete a log repository
- [Get log repositories](https://www.getmaxim.ai/docs/log repositories/log-repository/get-log-repositories.md): Get log repositories
- [Get trace by ID](https://www.getmaxim.ai/docs/log repositories/log-repository/get-trace-by-id.md): Get a specific trace by ID
- [Get unique values for a tag](https://www.getmaxim.ai/docs/log repositories/log-repository/get-unique-values-for-a-tag.md): Get unique values for a tag
- [Search logs in a log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/search-logs-in-a-log-repository.md): Search logs in a log repository
- [Update log repository](https://www.getmaxim.ai/docs/log repositories/log-repository/update-log-repository.md): Update log repository
- [Push logs](https://www.getmaxim.ai/docs/logging/sdk-logging/push-logs.md): Push logs
- [Offline Evaluation Concepts](https://www.getmaxim.ai/docs/offline-evals/concepts.md): This page introduces the core concepts of offline evaluation in Maxim, including how prompts work, prompt comparisons, and best practices for leveraging these features to assess AI model performance. Learn about prompts, agents, workflows, test runs, evaluators, and datasets.
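
The WER evaluator listed above uses the standard word error rate: word-level edit distance (substitutions, deletions, insertions) divided by reference length. A minimal reference computation, for illustration only:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution (sat -> sit) and one deletion (the) over 6 reference words.
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # ≈ 0.333
```
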
- [Create a Customer Support Email Agent](https://www.getmaxim.ai/docs/offline-evals/guides/create-customer-support-agent.md): Create a workflow that automatically categorizes support emails, creates help desk tickets, and sends responses
- [Create a Product Description Generator](https://www.getmaxim.ai/docs/offline-evals/guides/create-product-description-generator.md): Build an AI workflow to generate product descriptions from images using Agents via no-code builder
- [Evaluating the Quality of AI HR Assistants](https://www.getmaxim.ai/docs/offline-evals/guides/evaluating-the-quality-of-ai-hr-assistants.md): Learn how to evaluate the quality of AI HR assistants using Maxim's evaluation suite, ensuring accurate and efficient HR processes.
- [Evaluating AI Healthcare Assistants](https://www.getmaxim.ai/docs/offline-evals/guides/evaluating-the-quality-of-healthcare-assistants-using-maxim-ai.md): Learn how to evaluate the quality and reliability of AI healthcare assistants using Maxim's evaluation suite, ensuring patient safety and clinical reliability.
- [Offline Evaluation Overview](https://www.getmaxim.ai/docs/offline-evals/overview.md): Learn how to evaluate AI application performance through prompt testing, workflow automation, and continuous log monitoring. Streamline your AI testing pipeline with comprehensive evaluation tools.
- [HTTP Agent CI/CD Integration](https://www.getmaxim.ai/docs/offline-evals/via-sdk/agent-http/ci-cd-integration.md): Learn how to integrate HTTP endpoint evaluations into your CI/CD pipeline using GitHub Actions
- [Endpoint on Maxim](https://www.getmaxim.ai/docs/offline-evals/via-sdk/agent-http/endpoint-on-maxim.md): Learn how to test AI agents using workflows stored on the Maxim platform using the Maxim SDK
- [Local Endpoint Testing](https://www.getmaxim.ai/docs/offline-evals/via-sdk/agent-http/local-endpoint.md): Learn to evaluate AI agents hosted on your own local or private endpoints using the Maxim SDK. This page shows how to call your HTTP services for agent testing with complete flexibility.
- [SDK HTTP Agent Quickstart](https://www.getmaxim.ai/docs/offline-evals/via-sdk/agent-http/quickstart.md): Learn how to quickly get started with running agent evaluations via HTTP endpoints using the Maxim SDK
- [Agent on Maxim](https://www.getmaxim.ai/docs/offline-evals/via-sdk/agent-no-code/agent-on-maxim.md): Learn how to test AI agents using no-code agents configured and stored on the Maxim platform
- [SDK No-Code Agent Quickstart](https://www.getmaxim.ai/docs/offline-evals/via-sdk/agent-no-code/quickstart.md): Learn how to quickly get started with evaluating AI agents using no-code agents and the Maxim SDK
- [Local Agent Testing](https://www.getmaxim.ai/docs/offline-evals/via-sdk/local-agent.md): This page shows how to integrate your own locally implemented AI agents with the Maxim SDK. Learn how to run and evaluate locally executed agents—including those built with frameworks like CrewAI or LangChain—using Maxim's testing and evaluation tools.
- [Curate Dataset from Logs](https://www.getmaxim.ai/docs/offline-evals/via-sdk/logging/curate-dataset.md): Learn how to create evaluation datasets from your captured production logs using the Maxim dashboard or SDK.
- [Offline Evals via Logs](https://www.getmaxim.ai/docs/offline-evals/via-sdk/logging/overview.md): Learn how to run offline evaluations on logs. Use production data to test new evaluators, compare model versions, or analyze historical performance.
- [Run Test on Logs](https://www.getmaxim.ai/docs/offline-evals/via-sdk/logging/run-test.md): Learn how to execute offline evaluations on datasets curated from your production logs using the Maxim SDK.
- [Prompt CI/CD Integration](https://www.getmaxim.ai/docs/offline-evals/via-sdk/prompts/ci-cd-integration.md): This guide shows you how to automate prompt testing using GitHub Actions in your CI/CD workflow. Learn to configure secrets, environment variables, and workflows for robust, continuous prompt evaluation.
- [Local Prompt Testing](https://www.getmaxim.ai/docs/offline-evals/via-sdk/prompts/local-prompt.md): This page shows how to run tests on prompts you define in your own code using the Maxim SDK. Learn to implement custom prompt logic and connect to any LLM provider locally.
- [Maxim Prompt Testing](https://www.getmaxim.ai/docs/offline-evals/via-sdk/prompts/maxim-prompt.md): This page shows how to evaluate prompts that are versioned and managed on the Maxim platform using the SDK. Learn to configure and run tests on your Maxim-hosted prompts for systematic evaluation.
- [Prompt Management](https://www.getmaxim.ai/docs/offline-evals/via-sdk/prompts/prompt-management.md): Learn how to retrieve and use tested prompts from the Maxim platform for your production workflows
- [SDK Prompt Quickstart](https://www.getmaxim.ai/docs/offline-evals/via-sdk/prompts/quickstart.md): This page provides a step-by-step guide to installing, configuring, and running prompt evaluations using the Maxim SDK. Follow along to quickly set up your environment and launch your first test run.
- [Customized Reports](https://www.getmaxim.ai/docs/offline-evals/via-ui/advanced/customized-reports.md): The run report is a single source of truth for you to understand exactly how your AI system is performing during your experiments or pre-release testing. You can customize reports to gain insights and make decisions.
- [Dataset Evaluation](https://www.getmaxim.ai/docs/offline-evals/via-ui/advanced/dataset-evaluation.md): Learn how to evaluate your AI outputs against expected results using Maxim's Dataset evaluation tools
- [Notifications](https://www.getmaxim.ai/docs/offline-evals/via-ui/advanced/notifications.md): Test runs are a core part of continuous testing workflows and can be triggered via the UI or in the CI/CD pipeline. Teams need visibility into triggered runs, status updates, and result summaries without having to constantly check the dashboard. Integrations with Slack and PagerDuty allow notifications to be configured for some of these events.
- [Presets](https://www.getmaxim.ai/docs/offline-evals/via-ui/advanced/presets.md): As your team starts running tests regularly on your entities, configuring tests and viewing results should be simple and quick. Test presets help you reuse your configurations with a single click, reducing the time it takes to start a run. You can create labeled presets combining a dataset and evaluators and use them with any entity you want to test.
- [Scheduled Runs](https://www.getmaxim.ai/docs/offline-evals/via-ui/advanced/scheduled-runs.md): Learn how to schedule test runs for your prompts, agents and workflows at a regular interval.
- [Tag test runs](https://www.getmaxim.ai/docs/offline-evals/via-ui/advanced/tag-reports.md): Tag your test runs to group and filter them effectively
- [HTTP Agent Evals](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-http-endpoint/agent-evals.md): Learn how to evaluate your HTTP endpoint agents by running them across datasets of test cases, measuring performance with automated evaluators. This page walks you through setting up test runs, analyzing metrics, and improving reliability.
- [Environments](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-http-endpoint/environments.md): Learn how to use environments to manage different configurations for your API requests and responses. This page covers referencing environment variables and integrating them into your HTTP endpoint workflows.
- [HTTP Endpoint Quickstart](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-http-endpoint/quickstart.md): Run your first test on an AI application via HTTP endpoint with ease, no code changes needed.
- [Scripts](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-http-endpoint/scripts.md): Learn how to use Workflow scripts to customize API request and response handling. This page shows how to transform data, set headers, and process results in Maxim HTTP endpoint agents.
- [Agent Deployment](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/agent-deployment.md): Quick iterations on agents should not require code deployments every time. With more and more stakeholders working on prompt engineering, it's critical to keep deployments of agents as easy as possible without much overhead. Agent deployments on Maxim allow conditional deployment of agent changes that can be used via the SDK.
- [No-Code Agent Evals](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/agent-evals.md): Learn how to evaluate your no-code Agents by running them across datasets of test cases, measuring performance with automated evaluators. This page walks you through setting up test runs, analyzing metrics, and improving reliability.
- [Error debugging](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/error-debugging.md): Learn how to spot, diagnose, and resolve errors in your no-code agent workflows using detailed step-by-step diagnostics and execution logs.
- [Loops](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/loops.md): Learn how to use loops in your no-code agent to repeat steps or rerun parts of your workflow multiple times. This page explains loop configuration with examples and common use cases.
- [Multi-agent System](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/multi-agent-system.md): Multi-agent systems are a powerful way to build complex applications that can handle a wide variety of tasks.
- [No-Code Agent Quickstart](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/quickstart.md): Test your agentic workflows using Agents via no-code builder with Datasets and Evaluators in minutes. View results across your test cases to find areas where it works well or needs improvement.
- [Types of Nodes](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/types-of-nodes.md): Make external API calls at any point in your agent to integrate with third-party services. The API node lets you validate data, log events, fetch information, or perform any HTTP request without leaving your agent. Simply configure the endpoint, method, and payload to connect your AI workflow with external systems.
- [Variables in Agents](https://www.getmaxim.ai/docs/offline-evals/via-ui/agents-via-no-code-builder/variables-in-agents.md): Learn how to inject and use custom variables from your Dataset in no-code agents. This guide shows how to create variables, reference them in your agent, and see their effect during testing.
- [Folders and Tags](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/folders-and-tags.md): Building AI applications collaboratively needs Prompts to be organized well for easy reference and access. Adding Prompts to folders, tagging them, and versioning on Maxim helps you maintain a holistic Prompt CMS.
- [Human Annotation](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/human-annotation.md): Human annotation is critical to improve your AI quality. Getting human raters to provide feedback on various dimensions can help measure the present status and be used to improve the system over time. Maxim's human-in-the-loop pipeline allows team members as well as external raters like subject matter experts to annotate AI outputs.
- [MCP (Model Context Protocol)](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/mcp.md): Discover how to integrate and use Model Context Protocol (MCP) servers in Maxim to test prompts with tool-assisted workflows. This page walks you through connecting MCP clients, adding tools, and leveraging agentic or non-agentic modes for prompt evaluation.
- [Prompt Deployment](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-deployment.md): Quick iterations on Prompts should not require code deployments every time. With more and more stakeholders working on prompt engineering, it's critical to keep deployments of Prompts as easy as possible without much overhead. Prompt deployments on Maxim allow conditional deployment of prompt changes that can be used via the SDK.
- [Prompt Evals](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-evals.md): Experimenting across prompt versions at scale helps you compare results for performance and quality scores. By running experiments across datasets of test cases, you can make more informed decisions, prevent regressions and push to production with confidence and speed.
- [Prompt Optimization](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-optimization.md): Learn how to use prompt optimization in Maxim to automatically generate and test improved prompt versions. This page covers configuring optimization runs, prioritizing evaluation metrics, and reviewing performance improvements.
- [Using Prompt Partials](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-partials.md): Prompt partials let you modularize and reuse snippets of prompt text across multiple prompts. This page explains how to insert, configure, and manage prompt partials efficiently within the playground.
- [Prompt Playground](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-playground.md): Learn how to use the Prompt Playground to experiment with prompts, test their effectiveness, and ensure they work well before integrating them into more complex workflows for your application.
- [Prompt Sessions](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-sessions.md): Sessions act as a history by saving your prompt's complete state as you work. This allows you to experiment freely without fear of losing your progress.
- [Prompt Versions](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/prompt-versions.md): As teams build their AI applications, a big part of experimentation is iterating on the prompt structure. In order to collaborate effectively and organize your changes clearly, Maxim allows prompt versioning and comparison runs across versions.
- [Prompt Testing Quickstart](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/quickstart.md): Test your Prompts with Datasets and Evaluators in minutes. View results across your test cases to find areas where it works well or needs improvement.
- [Prompt Retrieval Testing](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/retrieval.md): Retrieval quality directly impacts the quality of output from your AI application. While testing prompts, Maxim allows you to connect your RAG pipeline via a simple API endpoint and evaluates the retrieved context for every run. Context specific evaluators for precision, recall and relevance make it easy to see where retrieval quality is low.
- [Prompt Tool Calls](https://www.getmaxim.ai/docs/offline-evals/via-ui/prompts/tool-calls.md): Ensuring your prompt selects the accurate tool call (function) is crucial for building reliable and efficient AI workflows. Maxim's playground allows you to attach your tools (API, code or schema) and measure tool call accuracy for agentic systems.
- [Online Evaluation Overview](https://www.getmaxim.ai/docs/online-evals/overview.md): Get a quick overview of Maxim's online evaluation capabilities. Learn how you can automatically assess AI performance at multiple levels (session, trace, and node) in real time to maintain quality and reliability in production.
- [Set Up Alerts and Notifications](https://www.getmaxim.ai/docs/online-evals/set-up-alerts-and-notifications.md): Learn how to configure notification channels (Slack and PagerDuty) and set up alerts to monitor your AI application's performance and quality metrics. Maxim helps you stay updated about your AI application's performance and quality in real-time.
- [Node-Level Evaluation](https://www.getmaxim.ai/docs/online-evals/via-sdk/node-level-evaluation.md): Evaluate any component of your trace or log to gain insights into your agent's behavior. Node-level evaluation enables you to evaluate a trace or its component (a span, generation or retrieval) in isolation, providing granular insight to identify bottlenecks or low quality areas.
- [Set Up Auto Evaluation on Logs](https://www.getmaxim.ai/docs/online-evals/via-ui/set-up-auto-evaluation-on-logs.md): Evaluate captured logs automatically from the UI based on filters and sampling. Evaluation on logs helps cover cases or scenarios that might not be covered by test runs, ensuring that the LLM is performing optimally under various conditions.
- [Set Up Human Annotation on Logs](https://www.getmaxim.ai/docs/online-evals/via-ui/set-up-human-annotation-on-logs.md): Set up human evaluation to assess log quality and improve your AI applications. Automated evaluators offer initial assessments, but human evaluation adds value with rich qualitative insights, in-depth comments, and improved rewritten outputs.
- [Get Prompt Config](https://www.getmaxim.ai/docs/prompts/prompt-config/get-prompt-config.md): Get prompt configuration
- [Update Prompt Config](https://www.getmaxim.ai/docs/prompts/prompt-config/update-prompt-config.md): Update prompt configuration
- [Deploy Prompt Version](https://www.getmaxim.ai/docs/prompts/prompt-deployment/deploy-prompt-version.md): Deploy a prompt version
- [Create a prompt version](https://www.getmaxim.ai/docs/prompts/prompt-version/create-a-prompt-version.md): Create a prompt version
- [Get Prompt Versions](https://www.getmaxim.ai/docs/prompts/prompt-version/get-prompt-versions.md): Get versions of a prompt
- [Run Prompt Version](https://www.getmaxim.ai/docs/prompts/prompt-version/run-prompt-version.md): Run a specific version of a prompt
- [Create Prompt](https://www.getmaxim.ai/docs/prompts/prompt/create-prompt.md): Create a new prompt
- [Delete Prompt](https://www.getmaxim.ai/docs/prompts/prompt/delete-prompt.md): Delete a prompt
- [Get Prompts](https://www.getmaxim.ai/docs/prompts/prompt/get-prompts.md): Get prompts for a workspace
- [Update Prompt](https://www.getmaxim.ai/docs/prompts/prompt/update-prompt.md): Update an existing prompt
- [API Reference Overview](https://www.getmaxim.ai/docs/public-apis/overview.md): Welcome to the Maxim API documentation. This guide provides comprehensive information about our available APIs, their endpoints, and how to use them to integrate Maxim's capabilities into your applications.
- [Custom Metric Support](https://www.getmaxim.ai/docs/release-notes/Aug 2025/custom-metric-support.md): 19 August 2025
- [Flexi evals](https://www.getmaxim.ai/docs/release-notes/Aug 2025/flexi-evals.md): 27 August 2025
- [Google Cloud Marketplace x Maxim AI](https://www.getmaxim.ai/docs/release-notes/Aug 2025/google-cloud-marketplace.md): 23 August 2025
- [New providers: OpenRouter and Cerebras](https://www.getmaxim.ai/docs/release-notes/Aug 2025/new-providers-open-router-cerebras.md): 13 August 2025
- [OpenAI's GPT-5 model is live on Maxim](https://www.getmaxim.ai/docs/release-notes/Aug 2025/openai-gpt-5.md): 15 August 2025
- [SAML-based Single Sign-On (SSO)](https://www.getmaxim.ai/docs/release-notes/Aug 2025/saml-based-sso.md): 15 August 2025
- [Workspace Duplication](https://www.getmaxim.ai/docs/release-notes/Aug 2025/workspace-duplication.md): 20 August 2025
- [AI-powered Simulations in Prompt Playground](https://www.getmaxim.ai/docs/release-notes/July 2025/ai-simulations-prompt-playground.md): 3 July 2025
- [Datasets now support file attachments](https://www.getmaxim.ai/docs/release-notes/July 2025/datasets-file-attachments.md): 22 July 2025
- [xAI's Grok 4 model is live on Maxim!](https://www.getmaxim.ai/docs/release-notes/July 2025/grok-4-model.md): 11 July 2025
- [Human annotation on logs: Revamped](https://www.getmaxim.ai/docs/release-notes/July 2025/human-annotation-revamp.md): 27 July 2025
- [No More 1MB Log Size Limit – Unlimited Log Ingestion](https://www.getmaxim.ai/docs/release-notes/July 2025/unlimited-log-size.md): 10 July 2025
- [Flexible Evaluators on All Test Run Entities](https://www.getmaxim.ai/docs/release-notes/Oct 2025/flexible-evaluators.md): 7 October 2025
- [LiteLLM Support](https://www.getmaxim.ai/docs/release-notes/Oct 2025/litellm-support.md): 4 October 2025
- [Retroactive Evaluations on Logs](https://www.getmaxim.ai/docs/release-notes/Oct 2025/retroactive-evals.md): 14 October 2025
- [Synthetic Data Generation](https://www.getmaxim.ai/docs/release-notes/Oct 2025/synthetic-data-generation.md): 27 October 2025
- [Workspace-level RBAC](https://www.getmaxim.ai/docs/release-notes/Oct 2025/workspace-rbac.md): 22 October 2025
- [Audit logs (Maxim Enterprise)](https://www.getmaxim.ai/docs/release-notes/Sep 2025/audit-logs.md): 24 September 2025
- [Revamped Graphs and Omnibar for Logs](https://www.getmaxim.ai/docs/release-notes/Sep 2025/enhanced-graphs-and-search.md): 26 September 2025
- [Manage variables using Environments](https://www.getmaxim.ai/docs/release-notes/Sep 2025/environment-management.md): 17 September 2025
- [Sessions in Evaluator](https://www.getmaxim.ai/docs/release-notes/Sep 2025/evaluator-session-history.md): 16 September 2025
- [Responses API support](https://www.getmaxim.ai/docs/release-notes/Sep 2025/openai-responses-api.md): 5 September 2025
- [Voice simulation and evals are live on Maxim!](https://www.getmaxim.ai/docs/release-notes/Sep 2025/voice-agent-simulation.md): 1 September 2025
- [Introduction](https://www.getmaxim.ai/docs/sdk/overview.md): Dive into the Maxim SDK to supercharge your AI application development. The Maxim SDK exposes Maxim's most critical functionalities behind a simple set of function calls, allowing developers to integrate Maxim workflows into their own workflows seamlessly.
- [Maxim Integration for Agno](https://www.getmaxim.ai/docs/sdk/python/integrations/agno/agno.md): Integrate Maxim with your Agno Agents for Observability
- [Anthropic SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/anthropic/anthropic.md): Learn how to integrate Maxim observability with the Anthropic SDK in just one line of code.
- [CrewAI Integration](https://www.getmaxim.ai/docs/sdk/python/integrations/crewai/crewai.md): Start Agent monitoring, evaluation, and observability for your CrewAI applications
- [Fireworks SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/fireworks/fireworks.md): Learn how to integrate Maxim observability with the Fireworks SDK for building AI product experiences with open source AI models.
- [Google Gemini](https://www.getmaxim.ai/docs/sdk/python/integrations/gemini/gemini.md): Learn how to integrate Maxim observability with the Google Gemini SDK in just one line of code.
- [Google ADK Integration](https://www.getmaxim.ai/docs/sdk/python/integrations/google-adk/google-adk.md): Integrate Maxim with Google's Agent Development Kit (ADK) for comprehensive observability and monitoring of multi-agent systems.
- [Groq SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/groq/groq.md): Learn how to integrate Maxim observability with the Groq SDK for fast language model inference.
- [LangChain With & Without Streaming](https://www.getmaxim.ai/docs/sdk/python/integrations/langchain/langchain.md): Learn how to integrate Maxim observability with LangChain OpenAI calls.
- [LangGraph Agent with Maxim Observability using Decorators](https://www.getmaxim.ai/docs/sdk/python/integrations/langgraph/langgraph-with-decorator.md): Tutorial showing how to integrate Tavily Search API with LangChain and LangGraph, plus instrumentation using Maxim for full observability in just 5 lines.
- [LangGraph Agent with Maxim Observability without using Decorators](https://www.getmaxim.ai/docs/sdk/python/integrations/langgraph/langgraph-without-decorator.md): Creating a LangGraph agent with Tavily Search API and observing it using Maxim Single Line Integration
- [LiteLLM Proxy One-Line Integration](https://www.getmaxim.ai/docs/sdk/python/integrations/litellm/litellm-proxy.md): Learn how to integrate Maxim with the LiteLLM Proxy
- [LiteLLM SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/litellm/litellm-sdk.md): Learn how to integrate Maxim with LiteLLM for tracing and monitoring
- [LiveKit SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/livekit/livekit.md): Learn how to integrate Maxim observability with LiveKit agents for real-time voice AI applications with comprehensive tracing and monitoring.
- [LlamaIndex Integration](https://www.getmaxim.ai/docs/sdk/python/integrations/llamaindex/llamaindex.md): Learn how to integrate Maxim observability with LlamaIndex agents and workflows for comprehensive tracing and monitoring.
- [Mistral SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/mistral/mistral.md): Learn how to integrate Maxim observability with the Mistral SDK in just one line of code.
- [Agents SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/openai/agents-sdk.md): Learn how to integrate Maxim with the OpenAI Agents SDK
- [OpenAI SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/openai/one-line-integration.md): Learn how to integrate Maxim observability with the OpenAI SDK in just one line of code.
- [Pydantic AI Integration](https://www.getmaxim.ai/docs/sdk/python/integrations/pydantic-ai/pydantic_ai.md): Start agent monitoring, evaluation, and observability for your Pydantic AI applications
- [Smolagents Integration](https://www.getmaxim.ai/docs/sdk/python/integrations/smolagents/smolagents.md): Start agent monitoring, evaluation, and observability for your Smolagents applications
- [Together SDK](https://www.getmaxim.ai/docs/sdk/python/integrations/together/together.md): Learn how to integrate Maxim observability with the Together SDK in just one line of code.
- [Overview](https://www.getmaxim.ai/docs/sdk/python/overview.md): Introduction to the Maxim Python SDK.
- [MaximApis](https://www.getmaxim.ai/docs/sdk/python/references/apis/maxim_apis.md): API client utilities for interacting with Maxim services.
- [Cache](https://www.getmaxim.ai/docs/sdk/python/references/cache/cache.md): Caching mechanisms and utilities for optimizing performance.
- [Inmemory](https://www.getmaxim.ai/docs/sdk/python/references/cache/inMemory.md): In-memory caching utilities for optimizing performance.
- [dataset.Dataset](https://www.getmaxim.ai/docs/sdk/python/references/dataset/dataset.md): Utilities for dataset management and manipulation.
- [decorators.Generation](https://www.getmaxim.ai/docs/sdk/python/references/decorators/generation.md): Generation decorator utilities for automatic logging and instrumentation of functions and methods.
- [decorators.Retrieval](https://www.getmaxim.ai/docs/sdk/python/references/decorators/retrieval.md): Retrieval decorator utilities for automatic logging and instrumentation of functions and methods.
- [decorators.Span](https://www.getmaxim.ai/docs/sdk/python/references/decorators/span.md): Span decorator utilities for automatic logging and instrumentation of functions and methods.
- [decorators.ToolCall](https://www.getmaxim.ai/docs/sdk/python/references/decorators/tool_call.md): Tool call decorator for automatic logging and instrumentation of functions and methods. - [decorators.Trace](https://www.getmaxim.ai/docs/sdk/python/references/decorators/trace.md): Trace decorator for automatic logging and instrumentation of functions and methods. - [BaseEvaluator](https://www.getmaxim.ai/docs/sdk/python/references/evaluators/base_evaluator.md): Base evaluator class and utilities for assessing model performance. - [evaluators.Utils](https://www.getmaxim.ai/docs/sdk/python/references/evaluators/utils.md): Utility functions and helpers for Evaluators integration. - [Expiring Key Value Store](https://www.getmaxim.ai/docs/sdk/python/references/expiring_key_value_store.md): Expiring key-value store utilities. - [Filter Objects](https://www.getmaxim.ai/docs/sdk/python/references/filter_objects.md): Filter object utilities. - [anthropic.Client](https://www.getmaxim.ai/docs/sdk/python/references/logger/anthropic/client.md): Anthropic client implementation for API interactions and model integration. - [Message](https://www.getmaxim.ai/docs/sdk/python/references/logger/anthropic/message.md): Message utilities for Anthropic model integration and logging. - [StreamManager](https://www.getmaxim.ai/docs/sdk/python/references/logger/anthropic/stream_manager.md): Stream manager for Anthropic model integration and logging. - [anthropic.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/anthropic/utils.md): Utility functions and helpers for Anthropic integration. - [bedrock.AsyncClient](https://www.getmaxim.ai/docs/sdk/python/references/logger/bedrock/async_client.md): Async client for the AWS Bedrock integration. - [bedrock.Client](https://www.getmaxim.ai/docs/sdk/python/references/logger/bedrock/client.md): Bedrock client implementation for API interactions and model integration. - [bedrock.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/bedrock/utils.md): Utility functions and helpers for Bedrock integration. - [components.Attachment](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/attachment.md): Attachment functionality for Components integration. - [Base](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/base.md): Base functionality for Components integration. - [Error](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/error.md): Error functionality for Components integration. - [Feedback](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/feedback.md): Feedback functionality for Components integration. - [components.Generation](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/generation.md): Generation functionality for Components integration. - [components.Retrieval](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/retrieval.md): Retrieval functionality for Components integration. - [Session](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/session.md): Session functionality for Components integration. - [components.Span](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/span.md): Span functionality for Components integration.
- [components.ToolCall](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/tool_call.md): Tool Call functionality for Components integration. - [components.Trace](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/trace.md): Trace functionality for Components integration. - [Types](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/types.md): Types functionality for Components integration. - [components.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/components/utils.md): Utility functions and helpers for Components integration. - [crewai.Client](https://www.getmaxim.ai/docs/sdk/python/references/logger/crewai/client.md): CrewAI client implementation for API interactions and model integration. - [crewai.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/crewai/utils.md): Utility functions and helpers for CrewAI integration. - [gemini.AsyncClient](https://www.getmaxim.ai/docs/sdk/python/references/logger/gemini/async_client.md): Async client for Google Gemini model integration and logging. - [gemini.Client](https://www.getmaxim.ai/docs/sdk/python/references/logger/gemini/client.md): Gemini client implementation for API interactions and model integration. - [gemini.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/gemini/utils.md): Utility functions and helpers for Gemini integration. - [langchain.Tracer](https://www.getmaxim.ai/docs/sdk/python/references/logger/langchain/tracer.md): Tracing and instrumentation utilities for LangChain integration. - [langchain.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/langchain/utils.md): Utility functions and helpers for LangChain integration. - [litellm.Tracer](https://www.getmaxim.ai/docs/sdk/python/references/logger/litellm/tracer.md): Tracing and instrumentation utilities for LiteLLM integration. - [litellm_proxy.Tracer](https://www.getmaxim.ai/docs/sdk/python/references/logger/litellm_proxy/tracer.md): Tracing and instrumentation utilities for LiteLLM Proxy integration. - [AgentSession](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/agent_session.md): Agent session utilities for the LiveKit real-time communication integration. - [GeminiRealtimeSession](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/gemini/gemini_realtime_session.md): Gemini realtime session utilities for Google Gemini model integration and logging. - [Instrumenter](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/instrumenter.md): Instrumenter utilities for the LiveKit real-time communication integration. - [Handler](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/openai/realtime/handler.md): Handler functionality for Realtime integration. - [RealtimeSession](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/realtime_session.md): Realtime session utilities for the LiveKit real-time communication integration. - [Store](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/store.md): Store utilities for the LiveKit real-time communication integration. - [livekit.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/livekit/utils.md): Utility functions and helpers for LiveKit integration. - [Logger](https://www.getmaxim.ai/docs/sdk/python/references/logger/logger.md): Logging and instrumentation utilities for tracking AI model interactions.
- [mistral.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/mistral/utils.md): Utility functions and helpers for Mistral integration. - [Container](https://www.getmaxim.ai/docs/sdk/python/references/logger/models/container.md): Container utilities for data models and type definitions used throughout the Maxim SDK. - [AsyncChat](https://www.getmaxim.ai/docs/sdk/python/references/logger/openai/async_chat.md): Async chat utilities for OpenAI model integration and logging. - [openai.AsyncClient](https://www.getmaxim.ai/docs/sdk/python/references/logger/openai/async_client.md): Async client for OpenAI model integration and logging. - [AsyncCompletions](https://www.getmaxim.ai/docs/sdk/python/references/logger/openai/async_completions.md): Async completions utilities for OpenAI model integration and logging. - [Chat](https://www.getmaxim.ai/docs/sdk/python/references/logger/openai/chat.md): Chat utilities for OpenAI model integration and logging. - [openai.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/openai/utils.md): Utility functions and helpers for OpenAI integration. - [GenerationParser](https://www.getmaxim.ai/docs/sdk/python/references/logger/parsers/generation_parser.md): Generation Parser functionality for Parsers integration. - [TagsParser](https://www.getmaxim.ai/docs/sdk/python/references/logger/parsers/tags_parser.md): Tags Parser functionality for Parsers integration. - [portkey.Client](https://www.getmaxim.ai/docs/sdk/python/references/logger/portkey/client.md): Portkey client implementation for API interactions and model integration. - [Portkey](https://www.getmaxim.ai/docs/sdk/python/references/logger/portkey/portkey.md): Utilities for the Portkey integration. - [logger.Utils](https://www.getmaxim.ai/docs/sdk/python/references/logger/utils.md): Utility functions and helpers for Logger integration. - [Writer](https://www.getmaxim.ai/docs/sdk/python/references/logger/writer.md): Writer utilities for logging and instrumentation of AI model interactions. - [Maxim](https://www.getmaxim.ai/docs/sdk/python/references/maxim.md): Core Maxim Python SDK functionality and main entry point. - [models.Attachment](https://www.getmaxim.ai/docs/sdk/python/references/models/attachment.md): Attachment utilities for data models and type definitions used throughout the Maxim SDK. - [models.Dataset](https://www.getmaxim.ai/docs/sdk/python/references/models/dataset.md): Dataset utilities for data models and type definitions used throughout the Maxim SDK. - [Evaluator](https://www.getmaxim.ai/docs/sdk/python/references/models/evaluator.md): Evaluator utilities for data models and type definitions used throughout the Maxim SDK. - [Folder](https://www.getmaxim.ai/docs/sdk/python/references/models/folder.md): Folder utilities for data models and type definitions used throughout the Maxim SDK. - [Metadata](https://www.getmaxim.ai/docs/sdk/python/references/models/metadata.md): Metadata utilities for data models and type definitions used throughout the Maxim SDK. - [Prompt](https://www.getmaxim.ai/docs/sdk/python/references/models/prompt.md): Prompt utilities for data models and type definitions used throughout the Maxim SDK. - [PromptChain](https://www.getmaxim.ai/docs/sdk/python/references/models/prompt_chain.md): Prompt chain utilities for data models and type definitions used throughout the Maxim SDK.
- [QueryBuilder](https://www.getmaxim.ai/docs/sdk/python/references/models/query_builder.md): Query builder utilities for data models and type definitions used throughout the Maxim SDK. - [TestRun](https://www.getmaxim.ai/docs/sdk/python/references/models/test_run.md): Test run utilities for data models and type definitions used throughout the Maxim SDK. - [Scribe](https://www.getmaxim.ai/docs/sdk/python/references/scribe.md): Scribe module utilities. - [TestRunBuilder](https://www.getmaxim.ai/docs/sdk/python/references/test_runs/test_run_builder.md): Test run builder utilities for test execution and management. - [test_runs.Utils](https://www.getmaxim.ai/docs/sdk/python/references/test_runs/utils.md): Utility functions and helpers for test runs. - [MockWriter](https://www.getmaxim.ai/docs/sdk/python/references/tests/mock_writer.md): Mock writer used in the SDK test suite. - [TestAnthropic](https://www.getmaxim.ai/docs/sdk/python/references/tests/test_anthropic.md): Tests for the Anthropic integration. - [TestConnectionRetryLogic](https://www.getmaxim.ai/docs/sdk/python/references/tests/test_connection_retry_logic.md): Tests for connection retry logic. - [TestLoggerLangchain03x](https://www.getmaxim.ai/docs/sdk/python/references/tests/test_logger_langchain_03x.md): Tests for the logger with LangChain 0.3.x. - [TestMaximCoreSimple](https://www.getmaxim.ai/docs/sdk/python/references/tests/test_maxim_core_simple.md): Simple tests for Maxim core functionality. - [TestPortkey](https://www.getmaxim.ai/docs/sdk/python/references/tests/test_portkey.md): Tests for the Portkey integration. - [TestTestRuns](https://www.getmaxim.ai/docs/sdk/python/references/tests/test_test_runs.md): Tests for test runs. - [Upgrading to v3](https://www.getmaxim.ai/docs/sdk/python/upgrading-to-v3.md): Changes introduced in v3 of the Maxim SDK - [LangChain Integration](https://www.getmaxim.ai/docs/sdk/typescript/integrations/langchain/langchain.md): Complete guide to integrating Maxim observability with LangChain applications in TypeScript/JavaScript - [LangGraph Integration](https://www.getmaxim.ai/docs/sdk/typescript/integrations/langgraph/langgraph.md): Complete guide to integrating Maxim observability with LangGraph applications in TypeScript/JavaScript - [Vercel Integration](https://www.getmaxim.ai/docs/sdk/typescript/integrations/vercel/vercel.md): Learn how to integrate Maxim observability with the Vercel AI SDK in just one line of code.
- [BaseContainer](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/BaseContainer.md) - [CSVFile](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/CSVFile.md) - [CommitLog](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/CommitLog.md) - [Error](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Error.md) - [EvaluatableBaseContainer](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/EvaluatableBaseContainer.md) - [EvaluateContainer](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/EvaluateContainer.md) - [EventEmittingBaseContainer](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/EventEmittingBaseContainer.md) - [Generation](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Generation.md) - [LogWriter](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/LogWriter.md) - [Maxim](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Maxim.md) - [MaximLogger](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/MaximLogger.md) - [MaximLogsAPI](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/MaximLogsAPI.md) - [QueryBuilder](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/QueryBuilder.md) - [Retrieval](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Retrieval.md) - [Session](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Session.md) - [Span](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Span.md) - [ToolCall](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/ToolCall.md) - [Trace](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/classes/Trace.md) - [Entity](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/enumerations/Entity.md) - [QueryRuleType](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/enumerations/QueryRuleType.md) - [VariableType](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/enumerations/VariableType.md) - [ChatCompletionChoice](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ChatCompletionChoice.md) - [ChatCompletionMessage](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ChatCompletionMessage.md) - [ChatCompletionResult](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ChatCompletionResult.md) - [ChatCompletionToolCall](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ChatCompletionToolCall.md) - [CompletionRequest](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/CompletionRequest.md) - [GenerationError](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/GenerationError.md) - [Logprobs](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/Logprobs.md) - [MaximCache](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/MaximCache.md) - [TestRunLogger](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/TestRunLogger.md) - [TextCompletionChoice](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/TextCompletionChoice.md) - [TextCompletionResult](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/TextCompletionResult.md) - [ToolCallConfig](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ToolCallConfig.md) - [ToolCallError](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ToolCallError.md) - 
[ToolCallFunction](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/ToolCallFunction.md) - [Usage](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/interfaces/Usage.md) - [Core](https://www.getmaxim.ai/docs/sdk/typescript/reference/core/overview.md) - [MaximLangchainTracer](https://www.getmaxim.ai/docs/sdk/typescript/reference/langchain/classes/MaximLangchainTracer.md) - [LangChain](https://www.getmaxim.ai/docs/sdk/typescript/reference/langchain/overview.md) - [Modules](https://www.getmaxim.ai/docs/sdk/typescript/reference/modules.md) - [Getting Started](https://www.getmaxim.ai/docs/sdk/typescript/setup.md): Maxim SDK setup for Node.js / React Native projects - [Data Plane Deployment](https://www.getmaxim.ai/docs/self-hosting/dataplane.md): This guide details Maxim's data plane deployment process, outlining how to establish data processing infrastructure within your cloud environment. It emphasizes enhanced security, control, and data tenancy, ensuring compliance with data residency requirements while leveraging cloud-based services. - [Self-Hosting Overview](https://www.getmaxim.ai/docs/self-hosting/overview.md): Maxim offers self-hosting and flexible enterprise deployment options with either full VPC isolation (Zero Touch) or a hybrid setup with secure VPC peering (Data Plane), tailored to your security needs. - [Zero Touch Deployment](https://www.getmaxim.ai/docs/self-hosting/zerotouch.md): This guide outlines Maxim's zero-touch deployment process, covering infrastructure components, security protocols, and supported cloud providers. - [Custom Pricing](https://www.getmaxim.ai/docs/settings/custom-pricing.md): Learn how to set up custom token pricing in Maxim for accurate cost reporting in AI evaluations and logs, ensuring displayed costs match your actual expenses. - [Environment](https://www.getmaxim.ai/docs/settings/environment.md): Learn how to set up environments in Maxim. - [Maxim API keys](https://www.getmaxim.ai/docs/settings/maxim-api-keys.md): Learn how to create Maxim API keys. - [Members and Roles](https://www.getmaxim.ai/docs/settings/members-and-roles.md): Learn how to invite team members and manage organization and workspace roles in Maxim. - [Model Configuration](https://www.getmaxim.ai/docs/settings/model-configuration.md): Learn how to configure models in Maxim. - [Set up Single Sign-On (SSO) with Google](https://www.getmaxim.ai/docs/settings/setup-sso-with-google.md): Step-by-step guide to configure Google Workspace SAML 2.0 Single Sign-On (SSO) for Maxim. - [Set up Single Sign-On (SSO) with Okta](https://www.getmaxim.ai/docs/settings/setup-sso-with-okta.md): Step-by-step guide to configure Okta SAML 2.0 Single Sign-On (SSO) for Maxim AI. - [Two-Factor Authentication](https://www.getmaxim.ai/docs/settings/two-factor-authentication.md): Learn how to set up two-factor authentication in Maxim. - [Vault](https://www.getmaxim.ai/docs/settings/vault.md): Learn how to set up the Vault in Maxim.
- [Simulation Overview](https://www.getmaxim.ai/docs/simulations/text-simulation/overview.md) - [Simulation Runs](https://www.getmaxim.ai/docs/simulations/text-simulation/simulation-runs.md): Test your AI's conversational abilities with realistic, scenario-based simulations - [Voice Simulation](https://www.getmaxim.ai/docs/simulations/voice-simulation/voice-simulation.md): Test your Voice Agent's interaction capabilities with realistic voice simulations - [Get test run entries](https://www.getmaxim.ai/docs/test run entries/test-run-entries/get-test-run-entries.md): Get test run entries - [Share test run report](https://www.getmaxim.ai/docs/test run reports/test-run-report/share-test-run-report.md): Share a test run report - [Delete test runs](https://www.getmaxim.ai/docs/test runs/test-run/delete-test-runs.md): Delete test runs from a workspace - [Get test runs](https://www.getmaxim.ai/docs/test runs/test-run/get-test-runs.md): Get test runs for a workspace - [Tracing Concepts](https://www.getmaxim.ai/docs/tracing/concepts.md): Get a quick overview of Maxim's distributed tracing concepts for AI apps. Learn how logs, traces, and repositories enable deep monitoring, troubleshooting, and evaluation through Maxim's observability platform. - [Dashboard](https://www.getmaxim.ai/docs/tracing/dashboard.md): Learn how to use the dashboard to filter and sort your logs using custom criteria to streamline debugging and create saved views to quickly access your most-used search patterns. - [Exports](https://www.getmaxim.ai/docs/tracing/exports.md): Learn how to export your logs and evaluation data in Maxim. Download your logs and their associated evaluation data in a single CSV file, with options to filter exports based on your specific requirements and time ranges. - [Forwarding via Data Connectors](https://www.getmaxim.ai/docs/tracing/opentelemetry/forwarding-via-data-connectors.md): Send your traces to Maxim once and we'll forward them to your preferred observability platforms - New Relic, Snowflake, or any OpenTelemetry collector. - [Ingesting via OTLP Endpoint](https://www.getmaxim.ai/docs/tracing/opentelemetry/ingesting-via-otlp.md): Learn how to send OpenTelemetry (OTLP) traces to Maxim for AI and LLM Observability. Maxim supports OTLP ingestion, offering advanced visibility into your AI infrastructure with the widely adopted OpenTelemetry Protocol. - [Tracing Overview](https://www.getmaxim.ai/docs/tracing/overview.md): Monitor AI applications in real-time with Maxim's enterprise-grade LLM observability platform. Build and monitor reliable AI applications for consistent results with comprehensive distributed tracing, real-time monitoring, and alerting capabilities. - [Tracing Quickstart](https://www.getmaxim.ai/docs/tracing/quickstart.md): Set up distributed tracing for your GenAI applications to monitor performance and debug issues across services. This guide demonstrates distributed tracing setup using an enterprise search chatbot example with multiple microservices. - [Reporting](https://www.getmaxim.ai/docs/tracing/reporting.md): Learn how to set up reporting for your logs and evaluation data in Maxim. Monitor your log repository performance with weekly statistical email updates and receive key metrics about your repository. - [Attachments](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/attachments.md): Learn how to attach files and URLs to traces and spans for richer observability in Maxim. Attachments let you add files (audio, images, text, etc.) 
or URLs to your traces and spans, providing extra context for debugging, analytics, and audit trails. - [Custom Metrics](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/custom-metrics.md): Learn how to track and log metrics from LLM generations, traces, retrievals, and sessions in your AI application. Monitor performance, quality, and resource usage by tracking trace-level metrics like tool call counts, costs, and evaluation scores. - [Errors](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/errors.md): Learn how to effectively track and log errors from LLM results and Tool calls in your AI application traces. Improve performance and reliability by capturing error details including messages, types, and error codes for better debugging and monitoring. - [Events](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/events.md): Track application milestones and state changes using event logging. Create events to mark specific points in time during your application execution and capture additional metadata such as intermediate states and system milestones. - [Generations](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/generations.md): Use generations to log individual calls to Large Language Models (LLMs). Each trace or span can contain multiple generations, allowing you to track and monitor all LLM interactions within your AI application. - [Metadata](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/metadata.md): Add custom key-value pairs to components for enhanced observability. The metadata functionality allows you to add custom key-value pairs to all components like trace, generation, retrieval, event, and span for storing additional context, configuration, user information, or any custom data. - [Retrieval](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/retrieval.md): Retrieval-Augmented Generation (RAG) is a technique that enhances large language models by retrieving relevant information from external sources before generating responses. - [Sessions](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/sessions.md): Learn how to group related traces into sessions to track complete user interactions with your GenAI system. Sessions help you capture and review the full context of conversations or workflows spanning multiple interactions. - [Spans](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/spans.md): Spans help you organize and track requests across microservices within traces. A trace represents the entire journey of a request through your system, while spans are smaller units of work within that trace. - [Tags](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/tags.md): Tag your traces to group and filter endpoint data effectively. Add tags to any node type - spans, generations, retrievals, events, and more. Use tags to organize your data, run experiments, and quickly filter through traces based on specific criteria. - [Tool Calls](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/tool-calls.md): Track external system calls triggered by LLM responses in your agentic endpoints. Tool calls represent interactions with external services, allowing you to monitor execution time and responses. - [Traces](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/traces.md): Learn how to set up tracing using the Maxim platform. This guide covers the necessary steps to instrument your AI application and start monitoring and evaluating its performance. 
- [User Feedback](https://www.getmaxim.ai/docs/tracing/tracing-via-sdk/user-feedback.md): Track and collect user feedback in application traces using Maxim's Feedback entity. Enhance your AI applications with structured user ratings and comments to measure user satisfaction and improve your AI system's performance over time. ## Optional - [Blog](https://www.getmaxim.ai/blog) - [Cookbooks](https://github.com/maximhq/maxim-cookbooks) - [Tutorials](https://www.youtube.com/playlist?list=PLJh32rQ0yHHIC_nNZ6i2taEzAYiH8s6rP)