
us.twelvelabs.pegasus-1-2-v1:0 Cost Calculator - AWS Bedrock

Calculate the cost of using us.twelvelabs.pegasus-1-2-v1:0 from AWS Bedrock for your AI applications

us.twelvelabs.pegasus-1-2-v1:0 Cost Calculator

Mode: Chat

Cost Breakdown

Output Cost: $0.007500
Total Cost: $0.007500

Pricing Details

Output: $0.00000750 per token

About us.twelvelabs.pegasus-1-2-v1:0

us.twelvelabs.pegasus-1-2-v1:0 is a chat model offered by AWS Bedrock. This guide provides detailed pricing information and usage guidance to help you understand the costs and capabilities of using us.twelvelabs.pegasus-1-2-v1:0 in your applications.
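
If the model is reachable through Bedrock's standard runtime client, a minimal chat call might look like the sketch below. Treat this as an assumption-heavy illustration: the use of boto3's Converse API, the region, and the prompt text are all placeholders, and this model may require a model-specific request body instead, so confirm the exact invocation format in the AWS Bedrock documentation.

```python
# Hedged sketch: invoking the model through Bedrock's Converse API with boto3.
# Assumption: the model accepts plain-text chat requests via Converse; if it
# requires a model-specific InvokeModel payload instead, adapt accordingly.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

response = client.converse(
    modelId="us.twelvelabs.pegasus-1-2-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize the main points of this transcript."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
# The usage block reports the token counts you can plug into the calculator above.
print(response["usage"]["outputTokens"])
```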

Pricing Information

Output Cost: $7.50 per 1M tokens

Note: Use the interactive calculator above to estimate costs for your specific usage patterns.
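
As a quick sanity check, here is a hedged sketch of the arithmetic behind that rate; the 1,000-token response size is an assumption chosen because it reproduces the default breakdown shown in the calculator above.

```python
# Worked example at the listed rate of $7.50 per 1M output tokens.
# The 1,000-token response size is an assumed figure for illustration.
OUTPUT_PRICE_PER_MILLION = 7.50  # USD, from the pricing table above

output_tokens = 1_000
output_cost = output_tokens * OUTPUT_PRICE_PER_MILLION / 1_000_000
print(f"${output_cost:.6f}")  # prints $0.007500, matching the cost breakdown above
```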

When should you use us.twelvelabs.pegasus-1-2-v1:0?

us.twelvelabs.pegasus-1-2-v1:0 is best suited for the following scenarios:

  • General-purpose chat and text generation workloads

When should you avoid us.twelvelabs.pegasus-1-2-v1:0?

us.twelvelabs.pegasus-1-2-v1:0 is less suitable for the following scenarios:

  • Complex multi-step reasoning or planning tasks
  • Applications requiring image, audio, or multimodal inputs
  • Very large documents or long conversational histories

How does us.twelvelabs.pegasus-1-2-v1:0 compare to similar models?

This model sits in the middle of its category in terms of pricing and capabilities, making it a balanced option for general workloads.

Understanding us.twelvelabs.pegasus-1-2-v1:0 pricing

  • us.twelvelabs.pegasus-1-2-v1:0 is a general-purpose AI model provided by AWS Bedrock.
  • Output tokens are priced at $7.50 per 1M tokens (see the worked example after this list).
  • AWS Bedrock offers us.twelvelabs.pegasus-1-2-v1:0 for general-purpose AI workloads.
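
To put the list above in concrete terms, here is a small sketch that scales the per-token rate to a monthly volume; the 10 million output tokens per month is an assumed figure, not something stated on this page.

```python
# Hypothetical monthly estimate at the listed $7.50 per 1M output tokens.
# The 10M-token monthly volume is an assumption used only for illustration.
monthly_output_tokens = 10_000_000
monthly_cost = monthly_output_tokens / 1_000_000 * 7.50
print(f"${monthly_cost:.2f} per month")  # prints $75.00 per month
```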

How to Use This Calculator

Step 1: Enter the number of input tokens you expect to use. Input tokens include your prompt, system messages, and any context you provide to the model.

Step 2: Specify the number of output tokens you anticipate. Output tokens are the text generated by the model in response to your input.

Step 3: Review the cost breakdown to see the total estimated cost for your usage. The calculator automatically updates as you adjust the token counts.
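
The three steps above can be mirrored in a few lines of code. The sketch below is an illustration, not the calculator's actual implementation: it uses the output rate listed on this page, and since no input-token rate is listed here, the input rate is a placeholder to be replaced with the official Bedrock pricing figure.

```python
# Sketch of the calculator's logic under stated assumptions.
# The output rate comes from this page; the input rate is a placeholder because
# this page does not list an input-token price for us.twelvelabs.pegasus-1-2-v1:0.
OUTPUT_PRICE_PER_MILLION = 7.50  # USD per 1M output tokens (from this page)
INPUT_PRICE_PER_MILLION = 0.0    # USD per 1M input tokens (placeholder, not listed here)

def estimate_cost(input_tokens: int, output_tokens: int) -> dict:
    """Return an input/output/total cost breakdown in USD."""
    input_cost = input_tokens * INPUT_PRICE_PER_MILLION / 1_000_000      # Step 1
    output_cost = output_tokens * OUTPUT_PRICE_PER_MILLION / 1_000_000   # Step 2
    return {                                                             # Step 3
        "input_cost": round(input_cost, 6),
        "output_cost": round(output_cost, 6),
        "total_cost": round(input_cost + output_cost, 6),
    }

print(estimate_cost(input_tokens=2_000, output_tokens=1_000))
# {'input_cost': 0.0, 'output_cost': 0.0075, 'total_cost': 0.0075}
```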