A complete guide to using Bifrost as an HTTP API service for multi-provider AI access, drop-in integrations, and production deployment.
| Endpoint | Purpose | Documentation |
|---|---|---|
| `POST /v1/chat/completions` | Chat conversations | Endpoints Guide |
| `POST /v1/text/completions` | Text generation | Endpoints Guide |
| `POST /v1/mcp/tool/execute` | Tool execution | Endpoints Guide |
| `GET /metrics` | Prometheus metrics | Endpoints Guide |
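As a minimal sketch of calling the core endpoint, the snippet below builds a request against `POST /v1/chat/completions` using only the Python standard library. The `localhost:8080` base URL and the example model name are assumptions; adjust both to match your deployment and provider configuration.

```python
import json
import urllib.request

BIFROST_URL = "http://localhost:8080"  # assumed default; change to your deployment


def chat_request(model: str, messages: list, base: str = BIFROST_URL) -> urllib.request.Request:
    """Build a POST request for Bifrost's chat-completions endpoint."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = chat_request("gpt-4o-mini", [{"role": "user", "content": "Hello"}])
# response = urllib.request.urlopen(req)  # uncomment against a running Bifrost instance
```

Because the request body follows the familiar chat-completions shape, existing client code usually needs no changes beyond the URL.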
| Provider | Endpoint | Compatibility |
|---|---|---|
| OpenAI | `POST /openai/v1/chat/completions` | OpenAI Compatible |
| Anthropic | `POST /anthropic/v1/messages` | Anthropic Compatible |
| Google GenAI | `POST /genai/v1beta/models/{model}` | GenAI Compatible |
Migration: See the Migration Guide for step-by-step instructions on migrating from existing providers.
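For a raw-HTTP client, the drop-in migration amounts to swapping the base URL: the request and response shapes stay OpenAI-compatible, and only the host and path prefix change. The sketch below illustrates that single step; `localhost:8080` is an assumed default port.

```python
# Drop-in replacement: only the base URL changes, the payload format does not.
OPENAI_BASE = "https://api.openai.com/v1"
BIFROST_BASE = "http://localhost:8080/openai/v1"  # Bifrost's OpenAI-compatible route


def completions_url(base: str) -> str:
    """Chat-completions URL for a given API base."""
    return f"{base}/chat/completions"


before = completions_url(OPENAI_BASE)   # direct to OpenAI
after = completions_url(BIFROST_BASE)   # routed through Bifrost
```

SDK-based clients follow the same pattern: point the SDK's configurable base URL at the Bifrost route and leave the rest of the code unchanged.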
| Component | Configuration | Setup Time |
|---|---|---|
| Providers | API keys, models, fallbacks | 5 min |
| MCP Integration | Tool servers and connections | 10 min |
| Plugins | Custom middleware (coming soon) | 5 min |
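Provider configuration is typically expressed as a JSON file listing API keys and fallback providers. The fragment below is illustrative only: the field names (`providers`, `keys`, `fallbacks`) and the environment-variable convention are assumptions, not a confirmed schema; consult the Providers Config guide for the authoritative format.

```json
{
  "providers": {
    "openai": {
      "keys": [{ "value": "env.OPENAI_API_KEY", "weight": 1.0 }],
      "fallbacks": ["anthropic"]
    },
    "anthropic": {
      "keys": [{ "value": "env.ANTHROPIC_API_KEY", "weight": 1.0 }]
    }
  }
}
```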
| Goal | Integration Type | Guide |
|---|---|---|
| Replace OpenAI API | Drop-in replacement | OpenAI Compatible |
| Replace Anthropic API | Drop-in replacement | Anthropic Compatible |
| Use with existing SDKs | Change base URL only | Migration Guide |
| Add multiple providers | Provider configuration | Providers Config |
| Add external tools | MCP integration | MCP Config |
| Custom monitoring | Plugin configuration | Plugins Config |
| Production deployment | Docker + config | Deployment Guide |
Architecture: For HTTP transport design and performance details, see the Architecture Documentation.