## 🎯 Supported Providers

| Provider | Models | Features | Enterprise |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4 Turbo, GPT-4, GPT-3.5 | Function calling, streaming, vision | |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus/Haiku | Tool use, vision, 200K context | |
| Azure OpenAI | Enterprise GPT deployment | Private networks, compliance | |
| Amazon Bedrock | Claude, Titan, Cohere, Meta | Multi-model platform, VPC | |
| Google Vertex | Gemini Pro, PaLM, Codey | Enterprise AI platform | |
| Cohere | Command, Embed, Rerank | Enterprise NLP, multilingual | |
| Mistral | Mistral Large, Medium, Small | European AI, cost-effective | |
| Ollama | Llama, Mistral, CodeLlama | Local deployment, privacy | |
| Groq | Mixtral, Llama, Gemma | Enterprise AI platform | |
| SGLang | Qwen | Enterprise AI platform | |

## ⚡ Basic Provider Usage

### Single Provider Setup


## 🚀 Multi-Provider Setup

Configure multiple providers for fallbacks and load distribution.
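The fallback idea can be sketched as a loop that tries each configured provider in order until one succeeds. This is an illustrative pattern, not Bifrost's internal API; the provider names and the `callProvider` stub are placeholders.

```go
package main

import (
	"errors"
	"fmt"
)

// callProvider is a stand-in for a real provider call; here only
// "anthropic" succeeds, so the fallback path is exercised.
func callProvider(name, prompt string) (string, error) {
	if name != "anthropic" {
		return "", errors.New(name + ": unavailable")
	}
	return "ok from " + name, nil
}

// completeWithFallback walks the provider list in priority order and
// returns the first successful response.
func completeWithFallback(providers []string, prompt string) (string, error) {
	var lastErr error
	for _, p := range providers {
		resp, err := callProvider(p, prompt)
		if err == nil {
			return resp, nil
		}
		lastErr = err
	}
	return "", fmt.Errorf("all providers failed: %w", lastErr)
}

func main() {
	// "openai" fails in this sketch, so the call falls through to "anthropic".
	resp, err := completeWithFallback([]string{"openai", "anthropic", "mistral"}, "hi")
	fmt.Println(resp, err) // → ok from anthropic <nil>
}
```

Load distribution follows the same shape, except the provider list is reordered (round-robin or weighted) before each request instead of being walked in a fixed priority order.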

## 🔧 Provider-Specific Configuration

### Enterprise Providers


## 📋 Provider Features Matrix

| Feature | OpenAI | Anthropic | Azure | Bedrock | Vertex | Cohere | Mistral | Ollama | Groq | SGLang |
|---|---|---|---|---|---|---|---|---|---|---|
| Chat Completion | | | | | | | | | | |
| Function Calling | | | | | | | | | | |
| Streaming | | | | | | | | | | |
| Vision | | | | | | | | | | |
| JSON Mode | | | | | | | | | | |
| Custom Base URL | | | | | | | | | | |
| Proxy Support | | | | | | | | | | |
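The Streaming row above refers to OpenAI-style Server-Sent Events: each chunk arrives as a `data: {...}` line carrying a content delta, and the stream ends with `data: [DONE]`. This sketch parses a canned stream; the chunk shape follows OpenAI's streaming format rather than a captured Bifrost response.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// chunk holds the only field we need from a streaming event: the
// incremental content delta.
type chunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// collectStream concatenates the content deltas from an SSE body.
func collectStream(body string) string {
	var out strings.Builder
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimPrefix(sc.Text(), "data: ")
		if line == sc.Text() || line == "[DONE]" {
			continue // not a data line, or end of stream
		}
		var c chunk
		if err := json.Unmarshal([]byte(line), &c); err != nil {
			continue // skip malformed chunks
		}
		for _, ch := range c.Choices {
			out.WriteString(ch.Delta.Content)
		}
	}
	return out.String()
}

func main() {
	stream := "data: {\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}\n" +
		"data: {\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}\n" +
		"data: [DONE]\n"
	fmt.Println(collectStream(stream)) // → Hello
}
```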

## 🎯 Next Steps

| Task | Documentation |
|---|---|
| 🔑 Configure multiple API keys | Key Management |
| 🌐 Set up networking & proxies | Networking |
| ⚡ Optimize performance | Memory Management |
| ❌ Handle errors gracefully | Error Handling |
| 🔧 Go Package deep dive | Go Package Usage |
| 🌐 HTTP Transport setup | HTTP Transport Usage |
💡 Tip: All responses from Bifrost follow OpenAI’s format regardless of the underlying provider, ensuring consistent integration across your application.
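Because every provider's payload is normalized to OpenAI's schema, one set of response structs can decode them all. The sample payload below is illustrative; the model name could belong to any provider behind the gateway.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chatResponse covers the fields shared by every provider once the
// response is normalized to OpenAI's schema.
type chatResponse struct {
	Model   string `json:"model"`
	Choices []struct {
		Message struct {
			Role    string `json:"role"`
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
}

// firstContent decodes a normalized response and returns the first
// choice's message content.
func firstContent(payload string) (string, error) {
	var r chatResponse
	if err := json.Unmarshal([]byte(payload), &r); err != nil {
		return "", err
	}
	if len(r.Choices) == 0 {
		return "", fmt.Errorf("no choices in response")
	}
	return r.Choices[0].Message.Content, nil
}

func main() {
	// An Anthropic-backed response decoded by the same OpenAI-shaped structs.
	payload := `{"model":"claude-3-5-sonnet","choices":[{"message":{"role":"assistant","content":"Hello!"}}]}`
	text, err := firstContent(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(text) // → Hello!
}
```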