README.md

A proxy server that lets you use Anthropic clients with Gemini or OpenAI models.
### Prerequisites

- OpenAI API key 🔑
- Google AI Studio (Gemini) API key (if using Google provider) 🔑
- [uv](https://github.com/astral-sh/uv) installed.

### Setup 🛠️
Edit `.env` and fill in your API keys and model configurations:

* `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying *to* Anthropic models.
* `OPENAI_API_KEY`: Your OpenAI API key (Required if using the default OpenAI preference or as a fallback).
* `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (Required if `PREFERRED_PROVIDER=google`).
* `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend used when mapping `haiku`/`sonnet`.
* `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
* `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.

**Mapping Logic:**

- If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
- If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` *if* those models are in the server's known `GEMINI_MODELS` list (otherwise they fall back to the OpenAI mapping).
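The mapping rules above can be sketched in Python. This is an illustrative assumption, not the server's actual code: `map_model_name`, its argument defaults, and the trimmed `GEMINI_MODELS` set are hypothetical names chosen for this sketch.

```python
# Hypothetical sketch of the mapping logic described above; map_model_name and
# the abbreviated GEMINI_MODELS set are illustrative, not the server's real code.
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}  # abbreviated

def map_model_name(requested: str,
                   preferred: str = "openai",
                   big: str = "gpt-4.1",
                   small: str = "gpt-4.1-mini") -> str:
    """Map Claude aliases (sonnet/haiku) to a provider-prefixed backend model."""
    if "sonnet" in requested:
        target = big
    elif "haiku" in requested:
        target = small
    else:
        return requested  # anything else passes through unchanged
    if preferred == "google" and target in GEMINI_MODELS:
        return f"gemini/{target}"
    # OpenAI preference, or an unknown Gemini model falling back to OpenAI
    return f"openai/{target}"
```

Note the fallback branch: even with `PREFERRED_PROVIDER=google`, a `BIG_MODEL` that is not in the known Gemini list gets the `openai/` prefix.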
4. **Run the server**:

```bash
uv run uvicorn server:app --host 0.0.0.0 --port 8082
```
The following OpenAI models are supported with automatic `openai/` prefix handling:

- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini
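The "automatic prefix handling" can be illustrated with a small sketch. The set names and `add_provider_prefix` are assumptions for illustration, not the server's actual identifiers, and the model sets are abbreviated.

```python
# Illustrative sketch (not the server's actual code): a bare model name from a
# known list gets its provider prefix added; already-prefixed names pass through.
OPENAI_MODELS = {"chatgpt-4o-latest", "gpt-4o-mini", "gpt-4.1", "gpt-4.1-mini"}  # abbreviated
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}             # abbreviated

def add_provider_prefix(model: str) -> str:
    """Prefix a known bare model name with its provider."""
    if model.startswith(("openai/", "gemini/", "anthropic/")):
        return model  # already prefixed: pass through unchanged
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    if model in GEMINI_MODELS:
        return f"gemini/{model}"
    return model  # unknown models are left untouched
```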
#### Gemini Models

The following Gemini models are supported with automatic `gemini/` prefix handling:
### Customizing Model Mapping

Control the mapping using environment variables in your `.env` file or directly:

**Example 1: Default (Use OpenAI)**

No changes needed in `.env` beyond API keys, or ensure:

```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"   # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai"      # Optional, it's the default
# BIG_MODEL="gpt-4.1"              # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini"       # Optional, it's the default
```

**Example 2: Prefer Google**

```dotenv
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key"   # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25"   # Optional, it's the default for the Google preference
# SMALL_MODEL="gemini-2.0-flash"             # Optional, it's the default for the Google preference
```

**Example 3: Use Specific OpenAI Models**

```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o"          # Example specific model
SMALL_MODEL="gpt-4o-mini"   # Example specific model
```
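Putting the three examples together, the default resolution described above could be sketched as follows. `resolve_models` is a hypothetical helper for illustration, not the server's actual function; the variable names follow the README.

```python
# Hypothetical sketch of how the settings above resolve to BIG_MODEL/SMALL_MODEL
# defaults; the env var names follow the README, the helper itself is illustrative.
def resolve_models(env: dict) -> tuple[str, str]:
    """Return (big_model, small_model) given an environment mapping."""
    preferred = env.get("PREFERRED_PROVIDER", "openai")
    if preferred == "google":
        big = env.get("BIG_MODEL", "gemini-2.5-pro-preview-03-25")
        small = env.get("SMALL_MODEL", "gemini-2.0-flash")
    else:  # "openai" is the default preference
        big = env.get("BIG_MODEL", "gpt-4.1")
        small = env.get("SMALL_MODEL", "gpt-4.1-mini")
    return big, small
```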