
llama-4-maverick-17b-128e-instruct Cost Calculator - Groq

Calculate the cost of using llama-4-maverick-17b-128e-instruct from Groq for your AI applications

llama-4-maverick-17b-128e-instruct Cost Calculator

Mode: Chat

Input limit: 131,072 tokens | Output limit: 8,192 tokens

Cost Breakdown (example: 1,000 input tokens and 1,000 output tokens)

Input Cost: $0.00020000
Output Cost: $0.00060000
Total Cost: $0.00080000

Pricing Details

Input: $0.0000002000 per token
Output: $0.0000006000 per token

Model Specifications

Capabilities

Function Calling
Vision

Limits

Max Input Tokens: 131,072
Max Output Tokens: 8,192
Max Total Tokens: 8,192

About llama-4-maverick-17b-128e-instruct

llama-4-maverick-17b-128e-instruct is a powerful chat AI model offered by Groq. This guide covers its pricing, technical specifications, and capabilities to help you understand the costs and features of using llama-4-maverick-17b-128e-instruct in your applications.

Pricing Information

Input Cost: $0.20 per 1M tokens
Output Cost: $0.60 per 1M tokens

Note: Use the interactive calculator above to estimate costs for your specific usage patterns.
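
For a concrete sense of the arithmetic, here is a minimal sketch of the same calculation in Python, using a hypothetical request of 100,000 input tokens and 2,000 output tokens (the token counts are illustrative, not defaults):

    # Illustrative only: a hypothetical request at the published rates.
    input_cost = 100_000 / 1_000_000 * 0.20    # $0.0200
    output_cost = 2_000 / 1_000_000 * 0.60     # $0.0012
    total = input_cost + output_cost           # $0.0212
    print(f"estimated cost: ${total:.4f}")     # estimated cost: $0.0212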

Technical Specifications

Maximum Input Tokens: 131,072
Maximum Output Tokens: 8,192
Maximum Total Tokens: 8,192

Pro Tip

Plan requests around the limits above: the model accepts up to 131,072 input tokens per request, and a single response is capped at 8,192 output tokens.
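
If you want to enforce these limits in code before sending a request, a minimal sketch might look like the following (clamp_request is a hypothetical helper; how you count prompt tokens depends on your tokenizer):

    # Published limits for llama-4-maverick-17b-128e-instruct on Groq.
    MAX_INPUT_TOKENS = 131_072
    MAX_OUTPUT_TOKENS = 8_192

    def clamp_request(prompt_tokens: int, requested_output_tokens: int) -> int:
        """Reject oversized prompts and cap the completion length at the model's limit."""
        if prompt_tokens > MAX_INPUT_TOKENS:
            raise ValueError(
                f"prompt is {prompt_tokens} tokens; the input limit is {MAX_INPUT_TOKENS}"
            )
        return min(requested_output_tokens, MAX_OUTPUT_TOKENS)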

Model Capabilities

Function Calling - Execute custom functions and tools (see the sketch after this list)
Vision - Process and understand images
Response Schema - Structured output formatting
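
As an illustration of the Function Calling capability, the sketch below defines a single tool in the OpenAI-compatible tools format that Groq's chat API accepts; the get_weather function and its schema are invented for this example.

    # Hypothetical tool definition in the OpenAI-compatible "tools" format.
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative function name
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ]
    # Pass this list as the `tools` parameter of a chat completion request;
    # the model can then return a tool call naming get_weather with JSON arguments.
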
When should you use llama-4-maverick-17b-128e-instruct?

llama-4-maverick-17b-128e-instruct is best suited for the following scenarios:

  • Long-context chat and document analysis
  • Agent workflows with large memory windows
  • Agentic systems with function or tool calling
  • Workflow automation and API orchestration
  • Multimodal applications requiring image processing
  • Content analysis across multiple media types

When should you avoid llama-4-maverick-17b-128e-instruct?

  • High-volume text generation where output cost dominates
  • Streaming or verbose response workloads
  • Complex multi-step reasoning or planning tasks

How does llama-4-maverick-17b-128e-instruct compare to similar models?

Compared to other models in a similar category, this model is more cost-efficient on input tokens but relatively expensive on output tokens. It is better suited for retrieval-heavy or context-rich workflows than generation-heavy use cases.

Understanding llama-4-maverick-17b-128e-instruct pricing

  • llama-4-maverick-17b-128e-instruct is a general-purpose AI model provided by Groq.
  • Input tokens are priced at $0.20 per 1M tokens.
  • Output tokens are priced at $0.60 per 1M tokens.
  • The model supports a maximum input capacity of 131,072 tokens.
  • Maximum output length is 8,192 tokens.
  • For this model, output tokens cost three times as much as input tokens, so response length drives cost more than prompt length; keeping outputs concise is the most effective lever (see the sketch after this list).
  • The model includes vision capabilities for processing and analyzing images.
  • Supports function calling for executing custom functions and tools.
  • Groq offers llama-4-maverick-17b-128e-instruct for general-purpose AI workloads.
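
To put the input-versus-output point in numbers, the sketch below (using an arbitrary 10,000-token prompt) shows how the output share of a request's cost grows with response length at the rates above:

    # Illustrative: how quickly output tokens come to dominate a request's cost.
    prompt_tokens = 10_000
    for response_tokens in (500, 2_000, 8_192):
        input_cost = prompt_tokens / 1_000_000 * 0.20
        output_cost = response_tokens / 1_000_000 * 0.60
        share = output_cost / (input_cost + output_cost)
        print(f"{response_tokens:>5} output tokens -> output is {share:.0%} of the request cost")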

How to Use This Calculator

Step 1: Enter the number of input tokens you expect to use. Input tokens include your prompt, system messages, and any context you provide to the model.

Step 2: Specify the number of output tokens you anticipate. Output tokens are the text generated by the model in response to your input.

Step 3: Review the cost breakdown to see the total estimated cost for your usage. The calculator automatically updates as you adjust the token counts.
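
If you prefer to script the same estimate, the sketch below reproduces the calculator's arithmetic with the rates from this page hard-coded (estimate_cost is just an illustrative helper):

    INPUT_RATE = 0.20 / 1_000_000   # dollars per input token
    OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Reproduce the calculator: per-token rate times token count, summed."""
        return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

    # Example: 50,000 input tokens and 1,500 output tokens.
    print(f"${estimate_cost(50_000, 1_500):.4f}")  # $0.0109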
