Commit e9c8cf8

Author: Rahul Sengottuvelu (committed)

refactor: set default provider to openai and update default models to gpt-4.1/mini

1 parent e07c97b

3 files changed: 51 additions, 46 deletions

.env.example

Lines changed: 9 additions & 9 deletions
```diff
@@ -5,17 +5,17 @@ GEMINI_API_KEY="your-google-ai-studio-key"
 
 # Optional: Provider Preference and Model Mapping
 # Controls which provider (google or openai) is preferred for mapping haiku/sonnet.
-# Defaults to google if not set.
-PREFERRED_PROVIDER="google"
+# Defaults to openai if not set.
+PREFERRED_PROVIDER="openai"
 
 # Optional: Specify the exact models to map haiku/sonnet to.
 # If PREFERRED_PROVIDER=google, these MUST be valid Gemini model names known to the server.
 # Defaults to gemini-2.5-pro-preview-03-25 and gemini-2.0-flash if PREFERRED_PROVIDER=google.
-# Defaults to gpt-4o and gpt-4o-mini if PREFERRED_PROVIDER=openai.
-# BIG_MODEL="gemini-2.5-pro-preview-03-25"
-# SMALL_MODEL="gemini-2.0-flash"
+# Defaults to gpt-4.1 and gpt-4.1-mini if PREFERRED_PROVIDER=openai.
+# BIG_MODEL="gpt-4.1"
+# SMALL_MODEL="gpt-4.1-mini"
 
-# Example OpenAI mapping:
-# PREFERRED_PROVIDER="openai"
-# BIG_MODEL="gpt-4o"
-# SMALL_MODEL="gpt-4o-mini"
+# Example Google mapping:
+# PREFERRED_PROVIDER="google"
+# BIG_MODEL="gemini-2.5-pro-preview-03-25"
+# SMALL_MODEL="gemini-2.0-flash"
```

README.md

Lines changed: 34 additions & 31 deletions
````diff
@@ -12,7 +12,7 @@ A proxy server that lets you use Anthropic clients with Gemini or OpenAI models
 ### Prerequisites
 
 - OpenAI API key 🔑
-- Google AI Studio (Gemini) API key (if using default provider) 🔑
+- Google AI Studio (Gemini) API key (if using Google provider) 🔑
 - [uv](https://github.com/astral-sh/uv) installed.
 
 ### Setup 🛠️
@@ -37,15 +37,15 @@ A proxy server that lets you use Anthropic clients with Gemini or OpenAI models
 Edit `.env` and fill in your API keys and model configurations:
 
 * `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying *to* Anthropic models.
-* `OPENAI_API_KEY`: Your OpenAI API key (Required if using OpenAI models as fallback or primary).
-* `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (Required if using the default Gemini preference).
-* `PREFERRED_PROVIDER` (Optional): Set to `google` (default) or `openai`. This determines the primary backend for mapping `haiku`/`sonnet`.
-* `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gemini-2.5-pro-preview-03-25` (if `PREFERRED_PROVIDER=google` and model is known) or `gpt-4o`.
-* `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gemini-2.0-flash` (if `PREFERRED_PROVIDER=google` and model is known) or `gpt-4o-mini`.
+* `OPENAI_API_KEY`: Your OpenAI API key (Required if using the default OpenAI preference or as fallback).
+* `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (Required if PREFERRED_PROVIDER=google).
+* `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend for mapping `haiku`/`sonnet`.
+* `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
+* `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.
 
 **Mapping Logic:**
-- If `PREFERRED_PROVIDER=google` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` *if* those models are in the server's known `GEMINI_MODELS` list.
-- Otherwise (if `PREFERRED_PROVIDER=openai` or the specified Google model isn't known), they map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
+- If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
+- If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` *if* those models are in the server's known `GEMINI_MODELS` list (otherwise falls back to OpenAI mapping).
 
 4. **Run the server**:
 ```bash
@@ -90,6 +90,8 @@ The following OpenAI models are supported with automatic `openai/` prefix handli
 - chatgpt-4o-latest
 - gpt-4o-mini
 - gpt-4o-mini-audio-preview
+- gpt-4.1
+- gpt-4.1-mini
 
 #### Gemini Models
 The following Gemini models are supported with automatic `gemini/` prefix handling:
@@ -109,33 +111,34 @@ For example:
 
 ### Customizing Model Mapping
 
-You can customize which models are used via environment variables:
+Control the mapping using environment variables in your `.env` file or directly:
 
-- `BIG_MODEL`: The model to use for Claude Sonnet models (default: "gpt-4o")
-- `SMALL_MODEL`: The model to use for Claude Haiku models (default: "gpt-4o-mini")
-
-Add these to your `.env` file to customize:
-```
-OPENAI_API_KEY=your-openai-key
-# For OpenAI models (default)
-BIG_MODEL=gpt-4o
-SMALL_MODEL=gpt-4o-mini
-
-# For Gemini models
-# BIG_MODEL=gemini-2.5-pro-preview-03-25
-# SMALL_MODEL=gemini-2.0-flash
+**Example 1: Default (Use OpenAI)**
+No changes needed in `.env` beyond API keys, or ensure:
+```dotenv
+OPENAI_API_KEY="your-openai-key"
+GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
+# PREFERRED_PROVIDER="openai" # Optional, it's the default
+# BIG_MODEL="gpt-4.1" # Optional, it's the default
+# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default
 ```
 
-Or set them directly when running the server:
-```bash
-# Using OpenAI models (with uv)
-BIG_MODEL=gpt-4o SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
-
-# Using Gemini models (with uv)
-BIG_MODEL=gemini-2.5-pro-preview-03-25 SMALL_MODEL=gemini-2.0-flash uv run uvicorn server:app --host 0.0.0.0 --port 8082
+**Example 2: Prefer Google**
+```dotenv
+GEMINI_API_KEY="your-google-key"
+OPENAI_API_KEY="your-openai-key" # Needed for fallback
+PREFERRED_PROVIDER="google"
+# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
+# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref
+```
 
-# Mix and match (with uv)
-BIG_MODEL=gemini-2.5-pro-preview-03-25 SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
+**Example 3: Use Specific OpenAI Models**
+```dotenv
+OPENAI_API_KEY="your-openai-key"
+GEMINI_API_KEY="your-google-key"
+PREFERRED_PROVIDER="openai"
+BIG_MODEL="gpt-4o" # Example specific model
+SMALL_MODEL="gpt-4o-mini" # Example specific model
 ```
 
 ## How It Works 🧩
````
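A minimal sketch of the mapping logic the README now describes, using illustrative names (`map_claude_alias`, abridged `GEMINI_MODELS`) rather than the repository's actual identifiers:

```python
import os

# Defaults after this commit.
PREFERRED_PROVIDER = os.environ.get("PREFERRED_PROVIDER", "openai").lower()
BIG_MODEL = os.environ.get("BIG_MODEL", "gpt-4.1")
SMALL_MODEL = os.environ.get("SMALL_MODEL", "gpt-4.1-mini")
GEMINI_MODELS = ["gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"]  # abridged

def map_claude_alias(requested_model: str) -> str:
    """Map an Anthropic 'haiku'/'sonnet' alias to a prefixed backend model."""
    target = SMALL_MODEL if "haiku" in requested_model else BIG_MODEL
    if PREFERRED_PROVIDER == "google" and target in GEMINI_MODELS:
        return f"gemini/{target}"
    # openai preference (the default), or a Google model the server does not know:
    return f"openai/{target}"

# With no overrides set:
#   "claude-3-sonnet-20240229" -> "openai/gpt-4.1"
#   "claude-3-haiku-20240307"  -> "openai/gpt-4.1-mini"
```

Under the new defaults, both aliases resolve to OpenAI models unless `PREFERRED_PROVIDER=google` is set and the chosen Gemini model is known to the server.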

server.py

Lines changed: 8 additions & 6 deletions
```diff
@@ -82,13 +82,13 @@ def format(self, record):
 OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
 GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY")
 
-# Get preferred provider (default to google)
-PREFERRED_PROVIDER = os.environ.get("PREFERRED_PROVIDER", "google").lower()
+# Get preferred provider (default to openai)
+PREFERRED_PROVIDER = os.environ.get("PREFERRED_PROVIDER", "openai").lower()
 
 # Get model mapping configuration from environment
-# Default to latest Gemini models if not set
-BIG_MODEL = os.environ.get("BIG_MODEL", "gemini-2.5-pro-preview-03-25")
-SMALL_MODEL = os.environ.get("SMALL_MODEL", "gemini-2.0-flash")
+# Default to latest OpenAI models if not set
+BIG_MODEL = os.environ.get("BIG_MODEL", "gpt-4.1")
+SMALL_MODEL = os.environ.get("SMALL_MODEL", "gpt-4.1-mini")
 
 # List of OpenAI models
 OPENAI_MODELS = [
@@ -101,7 +101,9 @@ def format(self, record):
     "gpt-4o-audio-preview",
     "chatgpt-4o-latest",
     "gpt-4o-mini",
-    "gpt-4o-mini-audio-preview"
+    "gpt-4o-mini-audio-preview",
+    "gpt-4.1",  # Added default big model
+    "gpt-4.1-mini"  # Added default small model
 ]
 
 # List of Gemini models
```
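The two new list entries keep the defaults consistent with the README's note about automatic `openai/` prefix handling. A rough sketch of how such a membership check could work, with an illustrative `prefix_model` helper and abridged lists (not the repository's actual code):

```python
OPENAI_MODELS = ["gpt-4o", "gpt-4o-mini", "gpt-4.1", "gpt-4.1-mini"]  # abridged
GEMINI_MODELS = ["gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"]  # abridged

def prefix_model(model: str) -> str:
    """Attach a provider prefix to a bare model name when the model is known."""
    if "/" in model:
        return model  # already prefixed, e.g. "openai/gpt-4.1"
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    return model  # unknown names pass through unchanged

print(prefix_model("gpt-4.1"))           # -> openai/gpt-4.1
print(prefix_model("gemini-2.0-flash"))  # -> gemini/gemini-2.0-flash
```

Without the list additions, a sketch like this would leave the new default `gpt-4.1`/`gpt-4.1-mini` names unprefixed.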
