
Calculate the cost of using Llama-4-Maverick-17B-128E-Instruct-FP8 from Together AI for your AI applications

Llama-4-Maverick-17B-128E-Instruct-FP8 Cost Calculator

Mode: Chat

Cost Breakdown

Input Cost: $0.00027000
Output Cost: $0.00085000
Total Cost: $0.00112000
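The figures above follow directly from the per-token rates and correspond to 1,000 input and 1,000 output tokens. A minimal sketch of the same arithmetic in Python:

```python
# Per-token rates for Llama-4-Maverick-17B-128E-Instruct-FP8 on Together AI
INPUT_RATE = 0.27 / 1_000_000   # $0.27 per 1M input tokens
OUTPUT_RATE = 0.85 / 1_000_000  # $0.85 per 1M output tokens

def cost_breakdown(input_tokens: int, output_tokens: int) -> dict:
    """Return input, output, and total cost in dollars."""
    input_cost = input_tokens * INPUT_RATE
    output_cost = output_tokens * OUTPUT_RATE
    return {"input": input_cost, "output": output_cost,
            "total": input_cost + output_cost}

# The breakdown shown above corresponds to 1,000 tokens each way:
b = cost_breakdown(1000, 1000)
print(f"Input:  ${b['input']:.8f}")   # Input:  $0.00027000
print(f"Output: ${b['output']:.8f}")  # Output: $0.00085000
print(f"Total:  ${b['total']:.6f}")   # Total:  $0.001120
```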

Pricing Details

Input: $0.0000002700 per token
Output: $0.0000008500 per token

About Llama-4-Maverick-17B-128E-Instruct-FP8

Llama-4-Maverick-17B-128E-Instruct-FP8 is a powerful chat AI model offered by Together AI. This comprehensive guide provides detailed pricing information, technical specifications, and capabilities to help you understand the costs and features of using Llama-4-Maverick-17B-128E-Instruct-FP8 in your applications.

Pricing Information

Input Cost: $0.27 per 1M tokens
Output Cost: $0.85 per 1M tokens

Note: Use the interactive calculator above to estimate costs for your specific usage patterns.
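To project spend at scale from these per-1M rates, multiply each rate by monthly token volume. A quick sketch with a hypothetical workload (the 10M-input / 2M-output split is an illustrative assumption, not a benchmark):

```python
INPUT_PER_M = 0.27   # $ per 1M input tokens
OUTPUT_PER_M = 0.85  # $ per 1M output tokens

def monthly_cost(input_millions: float, output_millions: float) -> float:
    """Estimated monthly spend; token volumes given in millions."""
    return input_millions * INPUT_PER_M + output_millions * OUTPUT_PER_M

# Hypothetical workload: 10M input tokens, 2M output tokens per month
print(f"${monthly_cost(10, 2):.2f}")  # $4.40
```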

Model Capabilities

Function Calling - Execute custom functions and tools
Parallel Function Calling - Execute multiple functions simultaneously
Response Schema - Structured output formatting
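Together AI exposes function calling through an OpenAI-compatible chat completions API. The sketch below only builds a tool-definition payload; the `get_weather` function and its fields are hypothetical examples, and the actual request call is elided:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
# The function name and parameters here are illustrative, not taken from
# Together AI's documentation.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A chat completions request would pass this as tools=[get_weather_tool]
# alongside the model name.
print(json.dumps(get_weather_tool, indent=2))
```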

When should you use Llama-4-Maverick-17B-128E-Instruct-FP8?

Llama-4-Maverick-17B-128E-Instruct-FP8 is best suited for the following scenarios:

  • Agentic systems with function or tool calling
  • Workflow automation and API orchestration

When should you avoid Llama-4-Maverick-17B-128E-Instruct-FP8?
  • High-volume text generation where output cost dominates
  • Streaming or verbose response workloads
  • Complex multi-step reasoning or planning tasks
  • Applications requiring image, audio, or multimodal inputs
  • Very large documents or long conversational histories

How does Llama-4-Maverick-17B-128E-Instruct-FP8 compare to similar models?

Compared to other models in a similar category, this model is more cost-efficient on input tokens but relatively expensive on output tokens. It is better suited for retrieval-heavy or context-rich workflows than generation-heavy use cases.

Understanding Llama-4-Maverick-17B-128E-Instruct-FP8 pricing
  • Llama-4-Maverick-17B-128E-Instruct-FP8 is a general-purpose AI model provided by Together AI.
  • Input tokens are priced at $0.27 per 1M tokens.
  • Output tokens are priced at $0.85 per 1M tokens.
  • Output tokens cost roughly three times as much as input tokens for this model, so keeping responses concise does more to manage costs than trimming prompts.
  • Supports function calling for executing custom functions and tools.
  • Together AI offers Llama-4-Maverick-17B-128E-Instruct-FP8 for general-purpose AI workloads.
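The input/output price asymmetry above can be made concrete: for the same total token budget, a retrieval-heavy split is substantially cheaper than a generation-heavy one. A small sketch:

```python
INPUT_RATE = 0.27 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.85 / 1_000_000  # $ per output token

def cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Same 100k-token budget, split two ways:
retrieval_heavy = cost(90_000, 10_000)   # long context, short answers
generation_heavy = cost(10_000, 90_000)  # short prompts, long answers
print(f"${retrieval_heavy:.4f} vs ${generation_heavy:.4f}")  # $0.0328 vs $0.0792
```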

How to Use This Calculator

Step 1: Enter the number of input tokens you expect to use. Input tokens include your prompt, system messages, and any context you provide to the model.

Step 2: Specify the number of output tokens you anticipate. Output tokens are the text generated by the model in response to your input.

Step 3: Review the cost breakdown to see the total estimated cost for your usage. The calculator automatically updates as you adjust the token counts.
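The three steps above are exactly what the calculator computes; a minimal sketch of the same logic (the 2,000-input / 500-output token counts are just an example):

```python
INPUT_RATE = 0.27 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.85 / 1_000_000  # $ per output token

def estimate(input_tokens: int, output_tokens: int) -> str:
    # Steps 1-2: token counts supplied by the user
    input_cost = input_tokens * INPUT_RATE
    output_cost = output_tokens * OUTPUT_RATE
    # Step 3: the cost breakdown
    return (f"Input: ${input_cost:.6f}  Output: ${output_cost:.6f}  "
            f"Total: ${input_cost + output_cost:.6f}")

print(estimate(2_000, 500))  # Input: $0.000540  Output: $0.000425  Total: $0.000965
```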