🚀 Zero-Config Setup (15 seconds!)

1. Start Bifrost (No config needed!)

# 🐳 Docker (fastest)
docker pull maximhq/bifrost
docker run -p 8080:8080 maximhq/bifrost

# 🔧 OR Go Binary (go install places the binary in $GOPATH/bin; make sure that's in your PATH)
go install github.com/maximhq/bifrost/transports/bifrost-http@latest
bifrost-http -port 8080
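
# 💡 Optional sanity check (either method): the web UI answers at the root path,
# so any successful response on port 8080 means the gateway is up
curl -sf http://localhost:8080/ > /dev/null && echo "Bifrost is up"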

2. Open the Web Interface

# 🖥️ Beautiful web UI for zero-config setup
# macOS:
open http://localhost:8080
# Linux:
xdg-open http://localhost:8080
# Windows:
start http://localhost:8080
# Or simply open http://localhost:8080 manually in your browser
🎉 That’s it! Configure providers visually, monitor requests in real-time, and get analytics - all through the web interface!

📂 File-Based Configuration (Optional)

Want to use a config file instead? Bifrost automatically looks for config.json in your app directory:

1. Create config.json in your app directory

{
  "providers": {
    "openai": {
      "keys": [
        {
          "value": "env.OPENAI_API_KEY",
          "models": ["gpt-4o-mini"],
          "weight": 1.0
        }
      ]
    }
  }
}
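
The keys array accepts more than one entry, which is how traffic can be split across keys, with weight acting as a relative share. A hypothetical two-key sketch (the variable names and the 0.7/0.3 split are illustrative assumptions, not required values):

{
  "providers": {
    "openai": {
      "keys": [
        {
          "value": "env.OPENAI_API_KEY_PRIMARY",
          "models": ["gpt-4o-mini"],
          "weight": 0.7
        },
        {
          "value": "env.OPENAI_API_KEY_BACKUP",
          "models": ["gpt-4o-mini"],
          "weight": 0.3
        }
      ]
    }
  }
}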

2. Set environment variables and start

export OPENAI_API_KEY="your-openai-api-key"

# Docker with volume mount for persistence
docker run -p 8080:8080 \
  -v $(pwd):/app/data \
  -e OPENAI_API_KEY \
  maximhq/bifrost

# OR Go Binary with app directory
bifrost-http -app-dir . -port 8080

📁 Understanding App Directory & Docker Volumes

How the -app-dir Flag Works

The -app-dir flag tells Bifrost where to store and look for data:
# Use current directory as app directory
bifrost-http -app-dir .

# Use specific directory as app directory
bifrost-http -app-dir /path/to/bifrost-data

# Default: current directory if no flag specified
bifrost-http -port 8080
What Bifrost stores in the app directory:
  • config.json - Configuration file (if using file-based config)
  • logs/ - Database logs and request history
  • Any other persistent data
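
For example, after starting with -app-dir . and saving a configuration, the directory might look like this (illustrative; exact names can vary by version):

.
├── config.json   # configuration file (if using file-based config)
└── logs/         # database logs and request history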

How Docker Volumes Work with App Directory

Docker volumes map your host directory to Bifrost’s app directory:
# Map current host directory → /app/data inside container
docker run -p 8080:8080 -v $(pwd):/app/data maximhq/bifrost

# Map specific host directory → /app/data inside container
docker run -p 8080:8080 -v /host/path/bifrost-data:/app/data maximhq/bifrost

# No volume = ephemeral storage (lost when container stops)
docker run -p 8080:8080 maximhq/bifrost

Persistence Scenarios

| Scenario | Command | Result |
|---|---|---|
| Ephemeral (testing) | docker run -p 8080:8080 maximhq/bifrost | No persistence, configure via web UI |
| Persistent (recommended) | docker run -p 8080:8080 -v $(pwd):/app/data maximhq/bifrost | Saves config & logs to host directory |
| Pre-configured | Create config.json, then run with volume | Starts with your existing configuration |

Best Practices

  • 🔧 Development: Use -v $(pwd):/app/data to persist config between restarts
  • 🚀 Production: Mount a dedicated volume for data persistence (see the sketch after this list)
  • 🧪 Testing: Run without volume for clean ephemeral instances
  • 👥 Teams: Share config.json in version control, mount directory with volume
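
A minimal sketch of the dedicated-volume approach for production, using a Docker named volume (the name bifrost-data is an illustrative choice):

# Create a named volume once, then mount it as the app directory
docker volume create bifrost-data
docker run -p 8080:8080 \
  -v bifrost-data:/app/data \
  -e OPENAI_API_KEY \
  maximhq/bifrost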

3. Test the API

# Make your first request
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, Bifrost!"}]
  }'
🎉 Success! You should see an AI response in JSON format.
📋 Note: All Bifrost responses follow OpenAI’s response structure, regardless of the underlying provider. This ensures consistent integration across different AI providers.
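
For reference, a successful reply has the standard OpenAI chat-completion shape (abridged and illustrative; exact fields and values vary):

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 12, "completion_tokens": 9, "total_tokens": 21 }
}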

🔄 Drop-in Integrations (Zero Code Changes!)

Already using OpenAI, Anthropic, or Google GenAI? Get instant benefits with zero code changes:

🤖 OpenAI SDK Replacement

# Before
from openai import OpenAI
client = OpenAI(api_key="your-key")

# After - Just change base_url!
from openai import OpenAI
client = OpenAI(
    api_key="dummy",  # Not used
    base_url="http://localhost:8080/openai"
)

# All your existing code works unchanged! ✨
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)

🧠 Anthropic SDK Replacement

# Before
from anthropic import Anthropic
client = Anthropic(api_key="your-key")

# After - Just change base_url!
from anthropic import Anthropic
client = Anthropic(
    api_key="dummy",  # Not used
    base_url="http://localhost:8080/anthropic"
)

# All your existing code works unchanged! ✨
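
# Example call; claude-3-sonnet-20240229 is simply the model used elsewhere
# in this guide, substitute whichever Claude model you have configured
response = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=100,
    messages=[{"role": "user", "content": "Hello!"}],
)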

🔍 Google GenAI Replacement

# Before
from google import genai
client = genai.Client(api_key="your-key")

# After - Just change base_url!
from google import genai
client = genai.Client(
    api_key="dummy",  # Not used
    http_options=genai.types.HttpOptions(
        base_url="http://localhost:8080/genai"
    )
)

# All your existing code works unchanged! ✨
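
# Example call; the model name here is an assumption, substitute the
# Gemini model you actually use
response = client.models.generate_content(
    model="gemini-1.5-flash",
    contents="Hello!",
)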

🚀 Next Steps (30 seconds each)

🖥️ Add Multiple Providers via Web UI

  1. Open http://localhost:8080 in your browser
  2. Click “Add Provider”
  3. Select OpenAI, enter your API key, choose models
  4. Click “Add Provider” again
  5. Select Anthropic, enter your API key, choose models
  6. Done! Your providers are now load-balanced automatically

📡 Or Add Multiple Providers via API

# Add OpenAI
curl -X POST http://localhost:8080/api/providers \
  -H "Content-Type: application/json" \
  -d '{"provider": "openai", "keys": [{"value": "env.OPENAI_API_KEY", "models": ["gpt-4o-mini"], "weight": 1.0}]}'

# Add Anthropic
curl -X POST http://localhost:8080/api/providers \
  -H "Content-Type: application/json" \
  -d '{"provider": "anthropic", "keys": [{"value": "env.ANTHROPIC_API_KEY", "models": ["claude-3-sonnet-20240229"], "weight": 1.0}]}'

# Set the API keys in the environment where Bifrost itself runs
# (export them before starting the binary, or pass them with -e to Docker):
# Bifrost resolves env.* key references from its own process environment
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"

⚡ Test Different Providers

# Use OpenAI
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "Hello from OpenAI!"}]}'

# Use Anthropic
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-3-sonnet-20240229", "messages": [{"role": "user", "content": "Hello from Anthropic!"}], "params":{"max_tokens": 100}}'

🔄 Add Automatic Fallbacks

# Request with fallback
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "fallbacks": ["anthropic/claude-3-sonnet-20240229"],
    "params": {"max_tokens": 100}
  }'

🔗 Language Examples

Python

import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello from Python!"}]
    }
)
print(response.json())

JavaScript/Node.js

const response = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Hello from Node.js!" }],
  }),
});
console.log(await response.json());

Go

// Snippet: add "fmt", "io", "net/http", and "strings" to your imports
response, err := http.Post(
    "http://localhost:8080/v1/chat/completions",
    "application/json",
    strings.NewReader(`{
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello from Go!"}]
    }`),
)
if err != nil {
    panic(err)
}
defer response.Body.Close()
body, _ := io.ReadAll(response.Body)
fmt.Println(string(body))

🔧 Setup Methods Comparison

| Method | Pros | Use When |
|---|---|---|
| Zero Config | No files needed, visual setup, instant start | Quick testing, demos, new users |
| File-Based | Version control, automation, reproducible deployment | Production, CI/CD, team setups |
| Docker | No Go installation needed, isolated environment | Production, CI/CD, quick testing |
| Go Binary | Direct execution, easier debugging | Development, custom builds |
Note: When using file-based config, Bifrost only looks for config.json in your specified app directory.

💬 Need Help?

🔗 Join our Discord for real-time setup assistance and HTTP integration support!

📚 Learn More

| What You Want | Where to Go | Time |
|---|---|---|
| Drop-in integrations guide | Integrations | 5 min |
| Complete HTTP setup | HTTP Transport Usage | 10 min |
| Production configuration | Configuration | 15 min |
| All endpoints | API Endpoints | Reference |
| OpenAPI specification | OpenAPI Spec | Reference |

🔄 Prefer Go Package?

If you’re building a Go application and want direct integration, try the Go Package Quick Start instead.

💡 Why HTTP Transport?

  • 🖥️ Built-in Web UI - Visual configuration, monitoring, and analytics
  • 🚀 Zero configuration - Start instantly, configure dynamically
  • 🌐 Language agnostic - Use from Python, Node.js, PHP, etc.
  • 🔄 Drop-in replacement - Zero code changes for existing apps
  • 🔗 OpenAI compatible - All responses follow OpenAI structure
  • ⚙️ Microservices ready - Centralized AI gateway
  • 📊 Production features - Health checks, metrics, monitoring
🎯 Ready for production? Check out Complete HTTP Usage Guide →