Bifrost


Bifrost is an open-source middleware that serves as a unified gateway to various AI model providers, enabling seamless integration and fallback mechanisms for your AI-powered applications.

⚡ Quickstart

Prerequisites

  • Go 1.23 or higher (not needed if using Docker)
  • Access to at least one AI model provider (OpenAI, Anthropic, etc.)
  • API keys for the providers you wish to use

A. Using Bifrost as an HTTP Server

  1. Create config.json: This file should contain your provider settings and API keys.

    {
      "providers": {
        "openai": {
          "keys": [
            {
              "value": "env.OPENAI_API_KEY",
              "models": ["gpt-4o-mini"],
              "weight": 1.0
            }
          ]
        }
      }
    }
  2. Set Up Your Environment: Add your environment variables to the session.

    export OPENAI_API_KEY=your_openai_api_key
    export ANTHROPIC_API_KEY=your_anthropic_api_key

    Note: Ensure you add all variables stated in your config.json file.

  3. Start the Bifrost HTTP Server:

    You can run the server using either a Go Binary or Docker (if Go is not installed).

    i) Using Go Binary

    • Install the transport package:

      go install github.com/maximhq/bifrost/transports/bifrost-http@latest
    • Run the server (make sure your Go bin directory, e.g. $HOME/go/bin, is in your PATH):

      bifrost-http -config config.json -port 8080 -pool-size 300

    ii) OR Using Docker

    • Pull the Docker image:

      docker pull maximhq/bifrost
    • Run the Docker container:

      docker run -p 8080:8080 \
        -v $(pwd)/config.json:/app/config/config.json \
        -e OPENAI_API_KEY \
        -e ANTHROPIC_API_KEY \
        maximhq/bifrost

      Note: Ensure you mount your config file and add all environment variables referenced in your config.json file.

  4. Using the API: Once the server is running, you can send requests to the HTTP endpoints.

    curl -X POST http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "provider": "openai",
      "model": "gpt-4o-mini",
      "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about Bifrost in Norse mythology."}
      ]
    }'
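
If you are calling the gateway from Go instead of the command line, the same request can be sent with just the standard library. This is a minimal sketch that mirrors the curl call above and assumes the server is running locally on port 8080.

    package main

    import (
      "bytes"
      "fmt"
      "io"
      "net/http"
    )

    func main() {
      // Same payload as the curl example above.
      payload := []byte(`{
        "provider": "openai",
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Tell me about Bifrost in Norse mythology."}
        ]
      }`)

      // POST to the chat completions endpoint exposed by bifrost-http.
      resp, err := http.Post("http://localhost:8080/v1/chat/completions", "application/json", bytes.NewReader(payload))
      if err != nil {
        panic(err)
      }
      defer resp.Body.Close()

      // Print the raw JSON response from the gateway.
      body, err := io.ReadAll(resp.Body)
      if err != nil {
        panic(err)
      }
      fmt.Println(string(body))
    }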

For additional HTTP server configuration options, read this.
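
As one example, the same config.json structure extends to multiple keys and providers, which is how Bifrost's weighted key distribution is driven. The sketch below only reuses the fields already shown above (value, models, weight); the secondary OpenAI key variable, the anthropic block, and the Claude model name are illustrative assumptions, so substitute the providers, keys, and models you actually use.

    {
      "providers": {
        "openai": {
          "keys": [
            {
              "value": "env.OPENAI_API_KEY",
              "models": ["gpt-4o-mini"],
              "weight": 0.7
            },
            {
              "value": "env.OPENAI_API_KEY_SECONDARY",
              "models": ["gpt-4o-mini"],
              "weight": 0.3
            }
          ]
        },
        "anthropic": {
          "keys": [
            {
              "value": "env.ANTHROPIC_API_KEY",
              "models": ["claude-3-5-sonnet-20240620"],
              "weight": 1.0
            }
          ]
        }
      }
    }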

B. Using Bifrost as a Go Package

  1. Implement Your Account Interface: First, create a type that implements Bifrost's account interface.

    import (
      "os"

      "github.com/maximhq/bifrost/core/schemas"
    )
    
    // BaseAccount implements Bifrost's account interface with a single OpenAI key.
    type BaseAccount struct{}
    
    func (baseAccount *BaseAccount) GetConfiguredProviders() ([]schemas.ModelProvider, error) {
      return []schemas.ModelProvider{schemas.OpenAI}, nil
    }
    
    func (baseAccount *BaseAccount) GetKeysForProvider(providerKey schemas.ModelProvider) ([]schemas.Key, error) {
      return []schemas.Key{
        {
          Value:  os.Getenv("OPENAI_API_KEY"),
          Models: []string{"gpt-4o-mini"},
          Weight: 1.0,
        },
      }, nil
    }
    
    func (baseAccount *BaseAccount) GetConfigForProvider(providerKey schemas.ModelProvider) (*schemas.ProviderConfig, error) {
      return &schemas.ProviderConfig{
        NetworkConfig:            schemas.DefaultNetworkConfig,
        ConcurrencyAndBufferSize: schemas.DefaultConcurrencyAndBufferSize,
      }, nil
    }

    Bifrost uses these methods to get all the keys and configurations it needs to call the providers. See the Additional Configurations section for further customization options.

  2. Initialize Bifrost: Set up the Bifrost instance by providing your account implementation.

    account := BaseAccount{}
    
    client, err := bifrost.Init(schemas.BifrostConfig{
      Account: &account,
    })
    if err != nil {
      panic(err) // handle initialization errors appropriately in real code
    }
  3. Use Bifrost: Make your First LLM Call!

    bifrostResult, bifrostErr := client.ChatCompletionRequest(
      context.Background(),
      &schemas.BifrostRequest{
        Provider: schemas.OpenAI,
        Model:    "gpt-4o-mini", // make sure you have configured gpt-4o-mini in your account interface
        Input: schemas.RequestInput{
          ChatCompletionInput: bifrost.Ptr([]schemas.BifrostMessage{{
            Role: schemas.ModelChatMessageRoleUser,
            Content: schemas.MessageContent{
              ContentStr: bifrost.Ptr("What is an LLM gateway?"),
            },
          }}),
        },
      },
    )

    You can add model parameters by including Params: &schemas.ModelParameters{...} in the BifrostRequest you pass to ChatCompletionRequest, as sketched below.
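
As a quick illustration, the request below sets a couple of parameters. The Temperature and MaxTokens field names on schemas.ModelParameters are assumptions made for this sketch (optional parameters are shown as pointers via bifrost.Ptr); check core/schemas for the actual field set.

    bifrostResult, bifrostErr := client.ChatCompletionRequest(
      context.Background(),
      &schemas.BifrostRequest{
        Provider: schemas.OpenAI,
        Model:    "gpt-4o-mini",
        // Field names below are illustrative; verify them against schemas.ModelParameters.
        Params: &schemas.ModelParameters{
          Temperature: bifrost.Ptr(0.2),
          MaxTokens:   bifrost.Ptr(500),
        },
        Input: schemas.RequestInput{
          ChatCompletionInput: bifrost.Ptr([]schemas.BifrostMessage{{
            Role: schemas.ModelChatMessageRoleUser,
            Content: schemas.MessageContent{
              ContentStr: bifrost.Ptr("Give me a two-line summary of Bifrost."),
            },
          }}),
        },
      },
    )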

📑 Table of Contents

  • Quickstart
  • Overview
  • Features
  • Repository Structure
  • Getting Started
  • Benchmarks
  • Contributing
  • License


πŸ” Overview

Bifrost acts as a bridge between your applications and multiple AI providers (OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, etc.). It provides a consistent API while handling:

  • Authentication and key management
  • Request routing and load balancing
  • Fallback mechanisms for reliability
  • Unified request and response formatting
  • Connection pooling and concurrency control

With Bifrost, you can focus on building your AI-powered applications without worrying about the underlying provider-specific implementations. It handles all the complexities of key and provider management, providing a fixed input and output format so you don't need to modify your codebase for different providers.


✨ Features

  • Multi-Provider Support: Integrate with OpenAI, Anthropic, Amazon Bedrock, Mistral, Ollama, and more through a single API
  • Fallback Mechanisms: Automatically retry failed requests with alternative models or providers (a request sketch follows this list)
  • Dynamic Key Management: Rotate and manage API keys efficiently with weighted distribution
  • Connection Pooling: Optimize network resources for better performance
  • Concurrency Control: Manage rate limits and parallel requests effectively
  • Flexible Transports: Multiple transports for easy integration into your infra
  • Plugin First Architecture: No callback hell, simple addition/creation of custom plugins
  • MCP Integration: Built-in Model Context Protocol (MCP) support for external tool integration and execution
  • Custom Configuration: Offers granular control over pool sizes, network retry settings, fallback providers, and network proxy configurations
  • Built-in Observability: Native Prometheus metrics out of the box, no wrappers, no sidecars, just drop it in and scrape
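
To make the fallback behaviour concrete, here is a rough sketch of a Go request that declares fallbacks. The Fallbacks field, the schemas.Fallback struct, and the schemas.Anthropic constant are assumptions based on the feature description above rather than verified signatures, and the Claude model name is only an example.

    // Rough sketch: fallback-related names below are assumed, not verified.
    bifrostResult, bifrostErr := client.ChatCompletionRequest(
      context.Background(),
      &schemas.BifrostRequest{
        Provider: schemas.OpenAI,
        Model:    "gpt-4o-mini",
        Input: schemas.RequestInput{
          ChatCompletionInput: bifrost.Ptr([]schemas.BifrostMessage{{
            Role: schemas.ModelChatMessageRoleUser,
            Content: schemas.MessageContent{
              ContentStr: bifrost.Ptr("Hello!"),
            },
          }}),
        },
        // If the primary provider call fails, Bifrost retries the listed alternatives in order.
        Fallbacks: []schemas.Fallback{
          {Provider: schemas.Anthropic, Model: "claude-3-5-sonnet-20240620"},
        },
      },
    )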

πŸ—οΈ Repository Structure

Bifrost is built with a modular architecture:

bifrost/
├── core/                 # Core functionality and shared components
│   ├── providers/        # Provider-specific implementations
│   ├── schemas/          # Interfaces and structs used in Bifrost
│   └── bifrost.go        # Main Bifrost implementation
│
├── docs/                 # Documentation for Bifrost's configuration and contribution guides
│   └── ...
│
├── tests/                # All test setups related to /core and /transports
│   └── ...
│
├── transports/           # Interface layers (HTTP, gRPC, etc.)
│   ├── bifrost-http/     # HTTP transport implementation
│   └── ...
│
└── plugins/              # Plugin implementations
    ├── maxim/
    └── ...

Bifrost uses a provider-agnostic approach with well-defined interfaces, making it easy to extend to new AI providers. All interfaces are defined in core/schemas/ and can be used as a reference for contributions.


🚀 Getting Started

If you want to set up the Bifrost API quickly, check the transports documentation.

Package Structure

Bifrost is divided into three Go packages: core, plugins, and transports.

  1. core: This package contains the core implementation of Bifrost as a Go package.
  2. plugins: This package extends core. You can install individual plugins using go get github.com/maximhq/bifrost/plugins/{plugin-name} and pass them in while initializing Bifrost.

    // go get github.com/maximhq/bifrost/plugins/maxim

    maximPlugin, err := maxim.NewMaximLoggerPlugin(os.Getenv("MAXIM_API_KEY"), os.Getenv("MAXIM_LOGGER_ID"))
    if err != nil {
      return nil, err
    }

    // Initialize Bifrost with the plugin attached
    client, err := bifrost.Init(schemas.BifrostConfig{
      Account: &account,
      Plugins: []schemas.Plugin{maximPlugin},
    })
  3. transports: This package contains transport clients like HTTP to expose your Bifrost client. You can either go get this package or directly use the independent Dockerfile to quickly spin up your Bifrost API (read more on this).

Additional Configurations


📊 Benchmarks

Bifrost has been tested under high load conditions to ensure optimal performance. The following results were obtained from benchmark tests running at 5000 requests per second (RPS) on different AWS EC2 instances.

Test Environment

1. t3.medium (2 vCPUs, 4 GB RAM)

  • Buffer Size: 15,000
  • Initial Pool Size: 10,000

2. t3.xlarge (4 vCPUs, 16 GB RAM)

  • Buffer Size: 20,000
  • Initial Pool Size: 15,000

Performance Metrics

Metric                      t3.medium     t3.xlarge
Success Rate                100.00%       100.00%
Average Request Size        0.13 KB       0.13 KB
Average Response Size       1.37 KB       10.32 KB
Average Latency             2.12 s        1.61 s
Peak Memory Usage           1312.79 MB    3340.44 MB
Queue Wait Time             47.13 µs      1.67 µs
Key Selection Time          16 ns         10 ns
Message Formatting          2.19 µs       2.11 µs
Params Preparation          436 ns        417 ns
Request Body Preparation    2.65 µs       2.36 µs
JSON Marshaling             63.47 µs      26.80 µs
Request Setup               6.59 µs       7.17 µs
HTTP Request                1.56 s        1.50 s
Error Handling              189 ns        162 ns
Response Parsing            11.30 ms      2.11 ms
Bifrost's Overhead          59 µs*        11 µs*

*Bifrost's overhead is measured at 59 µs on t3.medium and 11 µs on t3.xlarge, excluding the time taken for JSON marshalling and the HTTP call to the LLM, both of which are required in any custom implementation.

Note: On the t3.xlarge, we tested with significantly larger response payloads (~10 KB average vs ~1 KB on t3.medium). Even so, response parsing time dropped dramatically thanks to better CPU throughput and Bifrost's optimized memory reuse.

Key Performance Highlights

  • Perfect Success Rate: 100% request success rate under high load on both instances
  • Total Overhead: Less than 15 µs added per request on average
  • Efficient Queue Management: Minimal queue wait time (1.67 µs on t3.xlarge)
  • Fast Key Selection: Near-instantaneous key selection (10 ns on t3.xlarge)
  • Improved Performance on t3.xlarge:
    • 24% faster average latency
    • 81% faster response parsing
    • 58% faster JSON marshaling
    • Significantly reduced queue wait times

One of Bifrost's key strengths is its flexibility in configuration. You can freely decide the tradeoff between memory usage and processing speed by adjusting Bifrost's configurations. This flexibility allows you to optimize Bifrost for your specific use case, whether you prioritize speed, memory efficiency, or a balance between the two.

  • Higher buffer and pool sizes (like in t3.xlarge) improve speed but use more memory

  • Lower configurations (like in t3.medium) use less memory but may have slightly higher latencies

  • You can fine-tune these parameters based on your specific needs and available resources; a configuration sketch follows this list

    • Initial Pool Size: Determines the initial allocation of resources
    • Buffer and Concurrency Settings: Control the queue size and the maximum number of concurrent requests (adjustable per provider)
    • Retry and Timeout Configurations: Customizable based on your requirements for each provider
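
For the Go package, a tuning sketch might look like the snippet below. The explicit Concurrency and BufferSize fields on schemas.ConcurrencyAndBufferSize (and the sample values) are assumptions inferred from the benchmark settings above rather than verified against the schema definitions, so confirm them in core/schemas before relying on them. For the HTTP transport, the initial pool size is set with the -pool-size flag shown in the quickstart.

    // A sketch only: the field names below are assumptions inferred from the
    // benchmark settings above; confirm them against core/schemas before use.
    func (baseAccount *BaseAccount) GetConfigForProvider(providerKey schemas.ModelProvider) (*schemas.ProviderConfig, error) {
      return &schemas.ProviderConfig{
        NetworkConfig: schemas.DefaultNetworkConfig,
        ConcurrencyAndBufferSize: schemas.ConcurrencyAndBufferSize{
          Concurrency: 100,   // maximum parallel requests sent to this provider
          BufferSize:  20000, // requests queued before callers start to block
        },
      }, nil
    }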

Curious? Run your own benchmarks. The Bifrost Benchmarking repo has everything you need to test it in your own environment.


🤝 Contributing

We welcome contributions of all kinds: bug fixes, features, documentation improvements, or new ideas. Feel free to open an issue, and once it's assigned, submit a Pull Request.

Here's how to get started (after picking up an issue):

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request and describe your changes

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Built with ❤️ by Maxim
