
Claude Code Proxy

Connect Claude Code to alternative models from OpenAI or Gemini.

Forked from: 1rgs/claude-code-proxy

Quick start

uvx claude-code-proxy


Configuration

[!WARNING]
A rewrite of this README is in progress for everything below this point.

  1. Configure Environment Variables: Copy the example environment file:

    cp .env.example .env
    

    Edit .env and fill in your API keys and model configurations:

    • ANTHROPIC_API_KEY: (Optional) Needed only if proxying requests to Anthropic models.
    • OPENAI_API_KEY: Your OpenAI API key. Required with the default OpenAI preference and for fallback.
    • GEMINI_API_KEY: Your Google AI Studio (Gemini) API key. Required if PREFERRED_PROVIDER=google.
    • PREFERRED_PROVIDER: (Optional) Set to openai (default) or google. Determines the primary backend for mapping haiku/sonnet.
    • BIG_MODEL: (Optional) The model sonnet requests map to. Defaults to gpt-4.1 when PREFERRED_PROVIDER=openai, or gemini-2.5-pro-preview-03-25 when google.
    • SMALL_MODEL: (Optional) The model haiku requests map to. Defaults to gpt-4.1-mini when PREFERRED_PROVIDER=openai, or gemini-2.0-flash when google.

    Mapping Logic:

    • If PREFERRED_PROVIDER=openai (the default), haiku/sonnet map to SMALL_MODEL/BIG_MODEL with the openai/ prefix.
    • If PREFERRED_PROVIDER=google, haiku/sonnet map to SMALL_MODEL/BIG_MODEL with the gemini/ prefix, provided those models appear in the server's known GEMINI_MODELS list; otherwise the mapping falls back to OpenAI. (A Python sketch of this logic follows these steps.)
  2. Run the server:

    uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
    

    (--reload is optional, for development)
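
To make the mapping concrete, here is a minimal Python sketch of the rules above. The names (map_model, the exact GEMINI_MODELS contents) are illustrative assumptions, not the proxy's actual internals:

    # Illustrative sketch only, not the proxy's real code.
    GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

    def map_model(requested: str, preferred: str, big: str, small: str) -> str:
        """Map an incoming Claude model name to a prefixed backend model."""
        if "haiku" in requested:
            target = small
        elif "sonnet" in requested:
            target = big
        else:
            return requested  # pass unrecognized models through unchanged
        if preferred == "google" and target in GEMINI_MODELS:
            return f"gemini/{target}"
        return f"openai/{target}"  # OpenAI is both the default and the fallback

    # map_model("claude-3-5-sonnet-20241022", "openai", "gpt-4.1", "gpt-4.1-mini")
    # -> "openai/gpt-4.1"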

Using with Claude Code 🎮

  1. Install Claude Code (if you haven't already):

    npm install -g @anthropic-ai/claude-code
    
  2. Connect to your proxy:

    ANTHROPIC_BASE_URL=http://localhost:8082 claude
    
  3. That's it! Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy. 🎯

Model Mapping 🗺️

The proxy automatically maps Claude model aliases to OpenAI or Gemini models based on the configured provider and BIG_MODEL/SMALL_MODEL settings:

Claude Model   Default Mapping        When BIG_MODEL/SMALL_MODEL is a Gemini model
haiku          openai/gpt-4.1-mini    gemini/[model-name]
sonnet         openai/gpt-4.1         gemini/[model-name]

Supported Models

OpenAI Models

The following OpenAI models are supported with automatic openai/ prefix handling:

  • o3-mini
  • o1
  • o1-mini
  • o1-pro
  • gpt-4.5-preview
  • gpt-4o
  • gpt-4o-audio-preview
  • chatgpt-4o-latest
  • gpt-4o-mini
  • gpt-4o-mini-audio-preview
  • gpt-4.1
  • gpt-4.1-mini

Gemini Models

The following Gemini models are supported with automatic gemini/ prefix handling:

  • gemini-2.5-pro-preview-03-25
  • gemini-2.0-flash

Model Prefix Handling

The proxy automatically adds the appropriate prefix to model names:

  • OpenAI models get the openai/ prefix
  • Gemini models get the gemini/ prefix
  • The BIG_MODEL and SMALL_MODEL will get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists

For example:

  • gpt-4o becomes openai/gpt-4o
  • gemini-2.5-pro-preview-03-25 becomes gemini/gemini-2.5-pro-preview-03-25
  • When BIG_MODEL is set to a Gemini model, Claude Sonnet will map to gemini/[model-name]
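
Since the prefix decision is just a membership check against the lists above, it can be sketched in a few lines of Python (the function name and abridged lists are illustrative, not the server's actual code):

    OPENAI_MODELS = {"gpt-4.1", "gpt-4.1-mini", "gpt-4o", "gpt-4o-mini", "o1", "o3-mini"}  # abridged
    GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

    def add_prefix(model: str) -> str:
        if model.startswith(("openai/", "gemini/")):
            return model  # already prefixed, leave as-is
        if model in GEMINI_MODELS:
            return f"gemini/{model}"
        if model in OPENAI_MODELS:
            return f"openai/{model}"
        return model  # unknown models pass through unchanged

    assert add_prefix("gpt-4o") == "openai/gpt-4o"
    assert add_prefix("gemini-2.0-flash") == "gemini/gemini-2.0-flash"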

Customizing Model Mapping

Control the mapping using environment variables, either in your .env file or set directly in your environment:

Example 1: Default (use OpenAI). No changes are needed in .env beyond the API keys; the commented lines simply make the defaults explicit:

OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default

Example 2: Prefer Google

GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref

Example 3: Use Specific OpenAI Models

OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
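
Feeding Example 3 through the earlier map_model sketch shows the effect of overriding the defaults (again, illustrative names only):

    # With PREFERRED_PROVIDER="openai", BIG_MODEL="gpt-4o", SMALL_MODEL="gpt-4o-mini":
    # map_model("claude-3-5-sonnet-20241022", "openai", "gpt-4o", "gpt-4o-mini")
    #   -> "openai/gpt-4o"
    # map_model("claude-3-5-haiku-20241022", "openai", "gpt-4o", "gpt-4o-mini")
    #   -> "openai/gpt-4o-mini"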

How It Works 🧩

This proxy works by:

  1. Receiving requests in Anthropic's API format 📥
  2. Translating the requests to OpenAI format via LiteLLM 🔄
  3. Sending the translated request to the configured backend (OpenAI or Gemini) 📤
  4. Converting the response back to Anthropic format 🔄
  5. Returning the formatted response to the client ✅

The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
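
For the non-streaming path, this flow can be sketched with LiteLLM's completion API as below. This is a simplified assumption of the flow, not the server's actual code (map_model_name is a hypothetical helper; the real server also translates tool calls and streaming chunks):

    import litellm

    def handle_anthropic_request(body: dict) -> dict:
        # 1-2. Receive an Anthropic /v1/messages payload and translate it
        #      into OpenAI-style chat messages.
        messages = [{"role": m["role"], "content": m["content"]}
                    for m in body["messages"]]
        if body.get("system"):
            messages.insert(0, {"role": "system", "content": body["system"]})

        # 3. Send the request to the mapped backend model via LiteLLM.
        response = litellm.completion(
            model=map_model_name(body["model"]),  # hypothetical helper, e.g. -> "openai/gpt-4.1"
            messages=messages,
            max_tokens=body.get("max_tokens", 1024),
        )

        # 4-5. Convert the OpenAI-style response back into Anthropic's shape
        #      and return it to the client.
        return {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "text", "text": response.choices[0].message.content}],
            "stop_reason": "end_turn",
        }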

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request. 🎁

Download files

Download the file for your platform.

Source Distribution

claude_code_proxy-0.1.5.tar.gz (16.8 kB)

Built Distribution

claude_code_proxy-0.1.5-py3-none-any.whl (18.2 kB)

File details

Details for the file claude_code_proxy-0.1.5.tar.gz.

File metadata

  • Download URL: claude_code_proxy-0.1.5.tar.gz
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.7.8

File hashes

Hashes for claude_code_proxy-0.1.5.tar.gz
Algorithm    Hash digest
SHA256       4234b19ed9e0dadd7ecd430adbc2728aa374771f32bb0f1530b5d4e6e63627e2
MD5          8751d6f589d640128a7dcad9dc5060d9
BLAKE2b-256  ec3bfa61383655deab9f2146a68dbf0007b8cdbf5401abf26f354f219f38f1fb


File details

Details for the file claude_code_proxy-0.1.5-py3-none-any.whl.

File hashes

Hashes for claude_code_proxy-0.1.5-py3-none-any.whl
Algorithm    Hash digest
SHA256       0e410a4ce6332dfc1fcc8d63c6ab6304a032c0d3744678dd5628c67eefbad897
MD5          e79060cd9ff330073f289939879966df
BLAKE2b-256  1c97bcd53d9c45c6866c8b076beb2d133eb2fad8b4317aa0bd98011a78976e82

