
meta.llama4-maverick-17b-instruct-v1:0 Cost Calculator - Bedrock Converse

Calculate the cost of using meta.llama4-maverick-17b-instruct-v1:0 from Bedrock Converse for your AI applications

meta.llama4-maverick-17b-instruct-v1:0 Cost Calculator

Mode: Chat

Input tokens (max: 128,000)

Output tokens (max: 4,096)

Cost Breakdown (for 1,000 input and 1,000 output tokens)

Input Cost: $0.00024000
Output Cost: $0.00097000
Total Cost: $0.00121000

Pricing Details

Input: $0.0000002400 per token
Output: $0.0000009700 per token
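
The arithmetic behind the calculator is straightforward to reproduce. Below is a minimal Python sketch using the per-token rates above; the function name is ours for illustration, not part of any Bifrost or AWS API:

```python
# Per-token rates for meta.llama4-maverick-17b-instruct-v1:0 (from the table above)
INPUT_RATE = 0.00000024   # USD per input token  ($0.24 per 1M)
OUTPUT_RATE = 0.00000097  # USD per output token ($0.97 per 1M)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the rates above."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# 1,000 input + 1,000 output tokens -> 0.00024 + 0.00097 = 0.00121
print(f"${estimate_cost(1_000, 1_000):.6f}")  # $0.001210
```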

Model Specifications

Capabilities

Function Calling

Limits

Max Input Tokens: 128,000
Max Output Tokens: 4,096
Max Tokens: 4,096

About meta.llama4-maverick-17b-instruct-v1:0

meta.llama4-maverick-17b-instruct-v1:0 is Meta's Llama 4 Maverick instruction-tuned chat model, available through Amazon Bedrock's Converse API. This guide covers its pricing, technical specifications, and capabilities so you can estimate what it will cost to use in your applications.

Pricing Information

Input Cost: $0.24 per 1M tokens
Output Cost: $0.97 per 1M tokens

Note: Use the interactive calculator above to estimate costs for your specific usage patterns.
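
As a worked example at these rates: 100 requests that each send 10,000 input tokens and receive 1,000 output tokens would cost 100 × (10,000 × $0.24/1M + 1,000 × $0.97/1M) = 100 × $0.00337 ≈ $0.34.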

Technical Specifications

Maximum Input Tokens: 128,000
Maximum Output Tokens: 4,096
Maximum Total Tokens: 4,096

Pro Tip

Use the maximum token limits shown above to understand the model's capacity. This model can handle up to 128,000 input tokens. The maximum output length is 4,096 tokens.
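
If you build requests programmatically, it can help to validate token budgets against these limits before calling the API. Below is a minimal sketch; the limits come from the table above, and the helper name is hypothetical:

```python
MAX_INPUT_TOKENS = 128_000   # model's maximum input capacity
MAX_OUTPUT_TOKENS = 4_096    # model's maximum output length

def clamp_output_budget(requested_output: int, prompt_tokens: int) -> int:
    """Clamp a requested output budget to the model's limits,
    failing early if the prompt itself is too large."""
    if prompt_tokens > MAX_INPUT_TOKENS:
        raise ValueError(
            f"Prompt of {prompt_tokens} tokens exceeds the "
            f"{MAX_INPUT_TOKENS}-token input limit"
        )
    return min(requested_output, MAX_OUTPUT_TOKENS)

print(clamp_output_budget(8_000, 50_000))  # -> 4096
```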

Model Capabilities

Function Calling - Execute custom functions and tools
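
Since the model supports tool use through the Converse API, a tool-enabled request might look like the sketch below, which uses boto3's bedrock-runtime converse call. The get_weather tool and its schema are illustrative only, not a real service:

```python
import boto3

# Assumes AWS credentials and a region are configured in the environment.
client = boto3.client("bedrock-runtime")

response = client.converse(
    modelId="meta.llama4-maverick-17b-instruct-v1:0",
    messages=[{"role": "user", "content": [{"text": "What's the weather in Paris?"}]}],
    inferenceConfig={"maxTokens": 512},
    toolConfig={
        "tools": [{
            "toolSpec": {
                "name": "get_weather",  # illustrative tool, not a real service
                "description": "Look up current weather for a city.",
                "inputSchema": {"json": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                }},
            }
        }]
    },
)

# If the model decides to call the tool, it appears as a toolUse content block.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print(block["toolUse"]["name"], block["toolUse"]["input"])
```
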
When should you use meta.llama4-maverick-17b-instruct-v1:0?

meta.llama4-maverick-17b-instruct-v1:0 is best suited for the following scenarios:

  • Long-context chat and document analysis
  • Agent workflows with large memory windows
  • Agentic systems with function or tool calling
  • Workflow automation and API orchestration

When should you avoid meta.llama4-maverick-17b-instruct-v1:0?

meta.llama4-maverick-17b-instruct-v1:0 is a weaker fit for the following scenarios:

  • High-volume text generation where output cost dominates
  • Streaming or verbose response workloads
  • Complex multi-step reasoning or planning tasks
  • Applications requiring image, audio, or multimodal inputs

How does meta.llama4-maverick-17b-instruct-v1:0 compare to similar models?

Compared to other models in a similar category, this model is more cost-efficient on input tokens but relatively expensive on output tokens. It is better suited for retrieval-heavy or context-rich workflows than generation-heavy use cases.

Understanding meta.llama4-maverick-17b-instruct-v1:0 pricing

  • meta.llama4-maverick-17b-instruct-v1:0 is a general-purpose AI model available through Amazon Bedrock's Converse API.
  • Input tokens are priced at $0.24 per 1M tokens.
  • Output tokens are priced at $0.97 per 1M tokens.
  • The model supports a maximum input capacity of 128,000 tokens.
  • Maximum output length is 4,096 tokens.
  • Output tokens cost roughly four times as much as input tokens for this model, so capping response length typically saves more than trimming prompts (see the tracking sketch after this list).
  • Supports function calling for executing custom functions and tools.
  • Bedrock Converse offers meta.llama4-maverick-17b-instruct-v1:0 for general-purpose AI workloads.
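
To track what you actually spend, rather than what you estimate, you can price each response from the token counts the Converse API returns. The sketch below assumes the boto3 response shape, with rates from the pricing section above:

```python
# Price one Converse API response from its usage metadata.
INPUT_RATE = 0.24 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.97 / 1_000_000  # USD per output token

def cost_of_response(response: dict) -> float:
    """Compute USD cost from the 'usage' field of a Converse response."""
    usage = response["usage"]
    return usage["inputTokens"] * INPUT_RATE + usage["outputTokens"] * OUTPUT_RATE

# Example usage metadata, shaped like a bedrock-runtime converse() response:
sample = {"usage": {"inputTokens": 12_000, "outputTokens": 800, "totalTokens": 12_800}}
print(f"${cost_of_response(sample):.6f}")  # -> $0.003656
```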

How to Use This Calculator

Step 1: Enter the number of input tokens you expect to use. Input tokens include your prompt, system messages, and any context you provide to the model.

Step 2: Specify the number of output tokens you anticipate. Output tokens are the text generated by the model in response to your input.

Step 3: Review the cost breakdown to see the total estimated cost for your usage. The calculator automatically updates as you adjust the token counts.
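
If you don't know your token counts yet, a common rule of thumb for English text is roughly four characters per token. This is a rough heuristic only; actual counts depend on the model's tokenizer:

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English).
    Use the model's own tokenizer for anything billing-critical."""
    return max(1, len(text) // 4)

prompt = "Summarize the attached contract and list the key obligations."
print(rough_token_count(prompt))  # -> ~15 tokens
```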
