llama-3.1-sonar-small-128k-online Cost Calculator

Calculate the cost of using llama-3.1-sonar-small-128k-online from Perplexity for your AI applications.

Mode: Chat

Input tokens (max: 127,072)

Output tokens (max: 127,072)

Cost Breakdown

Input Cost: $0.00020000
Output Cost: $0.00020000
Total Cost: $0.00040000

Pricing Details

Input: $0.0000002000 per token
Output: $0.0000002000 per token
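
For reference, the cost breakdown above corresponds to 1,000 input and 1,000 output tokens: 1,000 × $0.0000002 = $0.0002 for each side, or $0.0004 in total.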

Model Specifications

Limits

Max Input Tokens: 127,072
Max Output Tokens: 127,072
Max Total Tokens: 127,072

About llama-3.1-sonar-small-128k-online

llama-3.1-sonar-small-128k-online is a powerful chat AI model offered by Perplexity. This comprehensive guide provides detailed pricing information, technical specifications, and capabilities to help you understand the costs and features of using llama-3.1-sonar-small-128k-online in your applications.

Pricing Information

Input Cost: $0.20 per 1M tokens
Output Cost: $0.20 per 1M tokens

Note: Use the interactive calculator above to estimate costs for your specific usage patterns.
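
If you would rather estimate costs in code than with the calculator above, the following Python sketch applies the same per-1M-token rates listed on this page; the function and constant names are illustrative and not part of any Perplexity SDK:

    # Published rates for llama-3.1-sonar-small-128k-online (USD per 1M tokens).
    INPUT_COST_PER_MILLION = 0.20
    OUTPUT_COST_PER_MILLION = 0.20

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the estimated USD cost of a single request."""
        input_cost = input_tokens / 1_000_000 * INPUT_COST_PER_MILLION
        output_cost = output_tokens / 1_000_000 * OUTPUT_COST_PER_MILLION
        return input_cost + output_cost

    # Matches the example breakdown above: 1,000 input + 1,000 output tokens.
    print(f"${estimate_cost(1_000, 1_000):.8f}")  # $0.00040000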

Technical Specifications

Maximum Input Tokens: 127,072
Maximum Output Tokens: 127,072
Maximum Total Tokens: 127,072

Pro Tip

Use the maximum token limits shown above to plan request sizes. This model can accept up to 127,072 input tokens and generate up to 127,072 output tokens, but the maximum total per request is also 127,072 tokens, so the input and output budgets share the same context window. A quick way to check whether a request fits is shown in the sketch below.
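
A minimal sketch of such a budget check in Python, assuming you already have rough token counts for your prompt and response; the constant and helper name are illustrative, not part of any Perplexity SDK:

    MAX_TOTAL_TOKENS = 127_072  # combined input + output limit for this model

    def fits_in_context(prompt_tokens: int, reserved_output_tokens: int) -> bool:
        """Return True if the prompt plus the reserved output budget fits in the window."""
        return prompt_tokens + reserved_output_tokens <= MAX_TOTAL_TOKENS

    # Example: a 120,000-token document with 8,000 tokens reserved for the answer
    # exceeds the window (120,000 + 8,000 > 127,072).
    print(fits_in_context(120_000, 8_000))  # False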

When should you use llama-3.1-sonar-small-128k-online?

llama-3.1-sonar-small-128k-online is best suited for the following scenarios:

  • Long-context chat and document analysis
  • Agent workflows with large memory windows

When should you avoid llama-3.1-sonar-small-128k-online?

  • Complex multi-step reasoning or planning tasks
  • Applications requiring image, audio, or multimodal inputs

How does llama-3.1-sonar-small-128k-online compare to similar models?

This model supports a larger context window than many alternatives, making it suitable for long-form inputs and memory-intensive applications.

Understanding llama-3.1-sonar-small-128k-online pricing

  • llama-3.1-sonar-small-128k-online is a general-purpose AI model provided by Perplexity.
  • Input tokens are priced at $0.20 per 1M tokens.
  • Output tokens are priced at $0.20 per 1M tokens.
  • The model supports a maximum input capacity of 127,072 tokens (see the worked example after this list for what a full-window request costs).
  • Maximum output length is 127,072 tokens.
  • Input and output tokens are priced identically, so total cost scales directly with the combined number of tokens processed; trimming prompts and limiting response length both reduce costs.
  • Perplexity offers llama-3.1-sonar-small-128k-online for general-purpose AI workloads.
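
As a worked example, a single request that fills the entire 127,072-token input window would cost about 127,072 × $0.0000002 ≈ $0.025 in input tokens alone, before any output tokens are generated.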

How to Use This Calculator

Step 1: Enter the number of input tokens you expect to use. Input tokens include your prompt, system messages, and any context you provide to the model.

Step 2: Specify the number of output tokens you anticipate. Output tokens are the text generated by the model in response to your input.

Step 3: Review the cost breakdown to see the total estimated cost for your usage. The calculator automatically updates as you adjust the token counts.
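
For completeness, here is how the same three steps look in a short Python script; the token counts are placeholder values you would replace with your own estimates:

    # Step 1: expected input tokens (prompt, system messages, and context).
    input_tokens = 10_000
    # Step 2: expected output tokens (the model's generated response).
    output_tokens = 500

    # Step 3: cost breakdown at $0.20 per 1M tokens for both input and output.
    input_cost = input_tokens / 1_000_000 * 0.20
    output_cost = output_tokens / 1_000_000 * 0.20
    print(f"Input cost:  ${input_cost:.6f}")                # Input cost:  $0.002000
    print(f"Output cost: ${output_cost:.6f}")               # Output cost: $0.000100
    print(f"Total cost:  ${input_cost + output_cost:.6f}")  # Total cost:  $0.002100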