[ MODEL COMPARISON ]

Compare llama-3.1-sonar-small-128k-online with other models

Select another model to compare pricing, limits, and capabilities with llama-3.1-sonar-small-128k-online.

Models              llama-3.1-sonar-small-128k-online (Perplexity)
Context Length      127K
Max Output          127K
Input Cost          $0.20/M
Output Cost         $0.20/M
Mode                Chat
Max Input Tokens    127K
Max Tokens          127K
Provider            Perplexity
Deprecation Date    2025-02-22

Comparison Insights

Comprehensive analysis based on the latest model metadata from the comparison table above.

What should I know about llama-3.1-sonar-small-128k-online?

Overview

  • llama-3.1-sonar-small-128k-online is a chat model provided by Perplexity.
  • With a context window of 127K tokens, this model can handle substantial inputs such as detailed documents or extended conversation histories.
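
For orientation, here is a minimal sketch of calling this model with a long document in the prompt, assuming Perplexity's OpenAI-compatible chat completions API. The base URL, environment variable name, and input file are illustrative assumptions, not values taken from the table above.

```python
# Minimal sketch: send a long document to llama-3.1-sonar-small-128k-online
# through Perplexity's OpenAI-compatible chat completions API.
# Assumptions: the https://api.perplexity.ai base URL, the
# PERPLEXITY_API_KEY environment variable, and the report.txt input file.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

with open("report.txt") as f:
    document = f.read()  # can be large, given the 127K-token context window

response = client.chat.completions.create(
    model="llama-3.1-sonar-small-128k-online",
    messages=[
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": f"Summarize this report:\n\n{document}"},
    ],
)
print(response.choices[0].message.content)
```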

Pricing

  • Input processing costs $0.20 per million tokens.
  • Output generation costs $0.20 per million tokens.
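
At these rates, per-request cost is straightforward to estimate. The sketch below works through the arithmetic with hypothetical token counts; the counts are illustrative, only the per-million rates come from the table above.

```python
# Illustrative cost estimate at the listed rates ($0.20 per million tokens
# for both input and output). Token counts here are hypothetical examples.
INPUT_RATE = 0.20 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.20 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 100K-token prompt that produces a 2K-token answer
print(f"${request_cost(100_000, 2_000):.4f}")  # -> $0.0204
```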

Output Capabilities

  • The model can generate up to 127K tokens in a single response.
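
In practice you will often want responses far shorter than that ceiling. The sketch below caps output length with the max_tokens parameter of the OpenAI-compatible interface; the 1,024 limit and the endpoint details are illustrative assumptions.

```python
# Illustrative: cap the response length well below the 127K-token maximum.
# max_tokens follows the OpenAI-compatible parameter name; 1_024 is an
# arbitrary example value, and the endpoint details are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="llama-3.1-sonar-small-128k-online",
    messages=[{"role": "user", "content": "Give a one-paragraph market summary."}],
    max_tokens=1_024,
)
print(response.choices[0].message.content)
print(response.choices[0].finish_reason)  # "length" means the cap was hit
```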

Availability

  • Please note: This model is scheduled for deprecation on 2025-02-22.