
Commit 2536d51

Boris Dev authored and committed
fixed big and small model in .env
1 parent 9daaf07 commit 2536d51

File tree

2 files changed: +87, -64 lines


.env.example

Lines changed: 9 additions & 3 deletions
````diff
@@ -4,12 +4,13 @@ OPENAI_API_KEY="sk-..."
 GEMINI_API_KEY="your-google-ai-studio-key"
 
 # Optional: Provider Preference and Model Mapping
-# Controls which provider (google or openai) is preferred for mapping haiku/sonnet.
+# Controls which provider (google, openai, or azure) is preferred for mapping haiku/sonnet.
 # Defaults to openai if not set.
 PREFERRED_PROVIDER="openai"
 
 # Optional: Specify the exact models to map haiku/sonnet to.
 # If PREFERRED_PROVIDER=google, these MUST be valid Gemini model names known to the server.
+# If PREFERRED_PROVIDER=azure, these should match your Azure deployment names.
 # Defaults to gemini-2.5-pro-preview-03-25 and gemini-2.0-flash if PREFERRED_PROVIDER=google.
 # Defaults to gpt-4.1 and gpt-4.1-mini if PREFERRED_PROVIDER=openai.
 # BIG_MODEL="gpt-4.1"
@@ -20,10 +21,15 @@ PREFERRED_PROVIDER="openai"
 # BIG_MODEL="gemini-2.5-pro-preview-03-25"
 # SMALL_MODEL="gemini-2.0-flash"
 
+# Example Azure mapping:
+# PREFERRED_PROVIDER="azure"
+# BIG_MODEL="your-deployment-name"
+# SMALL_MODEL="your-deployment-name"
+
 # Azure OpenAI Configuration (optional)
 # Uncomment and set these if you want to use Azure OpenAI
 # Use model format: azure/your-deployment-name in requests
 # AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
 # AZURE_OPENAI_API_KEY="your-azure-openai-api-key"
-# AZURE_API_VERSION="2025-01-01-preview"
-# AZURE_DEPLOYMENT_NAME="gpt-4"
+# AZURE_API_VERSION="your-api-version"
+# AZURE_DEPLOYMENT_NAME="your-deployment-name"
````
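Putting the two hunks together, a minimal Azure-backed `.env` would combine the provider preference with the uncommented Azure settings. This is an illustrative sketch using the example file's own placeholder values; replace them with your real resource, deployment, and API-version strings:

```dotenv
# Illustrative net result of this commit's .env.example changes, with Azure selected.
PREFERRED_PROVIDER="azure"
BIG_MODEL="your-deployment-name"    # Azure deployment mapped to sonnet
SMALL_MODEL="your-deployment-name"  # Azure deployment mapped to haiku
AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
AZURE_OPENAI_API_KEY="your-azure-openai-api-key"
AZURE_API_VERSION="your-api-version"
AZURE_DEPLOYMENT_NAME="your-deployment-name"
```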

README.md

Lines changed: 78 additions & 61 deletions
````diff
@@ -4,117 +4,132 @@
 
 A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉
 
-
 ![Anthropic API Proxy](pic.png)
 
 ## Quick Start ⚡
 
 ### Prerequisites
 
-- OpenAI API key 🔑
-- Google AI Studio (Gemini) API key (if using Google provider) 🔑
-- [uv](https://github.com/astral-sh/uv) installed.
+- OpenAI API key 🔑
+- Google AI Studio (Gemini) API key (if using Google provider) 🔑
+- [uv](https://github.com/astral-sh/uv) installed.
 
 ### Setup 🛠️
 
 1. **Clone this repository**:
-   ```bash
-   git clone https://github.com/1rgs/claude-code-openai.git
-   cd claude-code-openai
-   ```
+
+   ```bash
+   git clone https://github.com/1rgs/claude-code-openai.git
+   cd claude-code-openai
+   ```
 
 2. **Install uv** (if you haven't already):
-   ```bash
-   curl -LsSf https://astral.sh/uv/install.sh | sh
-   ```
-   *(`uv` will handle dependencies based on `pyproject.toml` when you run the server)*
+
+   ```bash
+   curl -LsSf https://astral.sh/uv/install.sh | sh
+   ```
+
+   _(`uv` will handle dependencies based on `pyproject.toml` when you run the server)_
 
 3. **Configure Environment Variables**:
    Copy the example environment file:
-   ```bash
-   cp .env.example .env
-   ```
-   Edit `.env` and fill in your API keys and model configurations:
-
-   * `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying *to* Anthropic models.
-   * `OPENAI_API_KEY`: Your OpenAI API key (Required if using the default OpenAI preference or as fallback).
-   * `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (Required if PREFERRED_PROVIDER=google).
-   * `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend for mapping `haiku`/`sonnet`.
-   * `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
-   * `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.
-
-   **Mapping Logic:**
-   - If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
-   - If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` *if* those models are in the server's known `GEMINI_MODELS` list (otherwise falls back to OpenAI mapping).
+
+   ```bash
+   cp .env.example .env
+   ```
+
+   Edit `.env` and fill in your API keys and model configurations:
+
+   - `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying _to_ Anthropic models.
+   - `OPENAI_API_KEY`: Your OpenAI API key (Required if using the default OpenAI preference or as fallback).
+   - `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (Required if PREFERRED_PROVIDER=google).
+   - `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend for mapping `haiku`/`sonnet`.
+   - `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
+   - `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.
+
+   **Mapping Logic:**
+
+   - If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
+   - If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` _if_ those models are in the server's known `GEMINI_MODELS` list (otherwise falls back to OpenAI mapping).
 
 4. **Run the server**:
-   ```bash
-   uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
-   ```
-   *(`--reload` is optional, for development)*
+   ```bash
+   uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
+   ```
+   _(`--reload` is optional, for development)_
 
 ### Using with Claude Code 🎮
 
 1. **Install Claude Code** (if you haven't already):
-   ```bash
-   npm install -g @anthropic-ai/claude-code
-   ```
+
+   ```bash
+   npm install -g @anthropic-ai/claude-code
+   ```
 
 2. **Connect to your proxy**:
-   ```bash
-   ANTHROPIC_BASE_URL=http://localhost:8082 claude
-   ```
+
+   ```bash
+   ANTHROPIC_BASE_URL=http://localhost:8082 claude
+   ```
 
 3. **That's it!** Your Claude Code client will now use the configured backend models (defaulting to Gemini) through the proxy. 🎯
 
 ## Model Mapping 🗺️
 
 The proxy automatically maps Claude models to either OpenAI or Gemini models based on the configured model:
 
-| Claude Model | Default Mapping | When BIG_MODEL/SMALL_MODEL is a Gemini model |
-|--------------|--------------|---------------------------|
-| haiku | openai/gpt-4o-mini | gemini/[model-name] |
-| sonnet | openai/gpt-4o | gemini/[model-name] |
+| Claude Model | Default Mapping    | When BIG_MODEL/SMALL_MODEL is a Gemini model |
+| ------------ | ------------------ | -------------------------------------------- |
+| haiku        | openai/gpt-4o-mini | gemini/[model-name]                          |
+| sonnet       | openai/gpt-4o      | gemini/[model-name]                          |
 
 ### Supported Models
 
 #### OpenAI Models
+
 The following OpenAI models are supported with automatic `openai/` prefix handling:
-- o3-mini
-- o1
-- o1-mini
-- o1-pro
-- gpt-4.5-preview
-- gpt-4o
-- gpt-4o-audio-preview
-- chatgpt-4o-latest
-- gpt-4o-mini
-- gpt-4o-mini-audio-preview
-- gpt-4.1
-- gpt-4.1-mini
+
+- o3-mini
+- o1
+- o1-mini
+- o1-pro
+- gpt-4.5-preview
+- gpt-4o
+- gpt-4o-audio-preview
+- chatgpt-4o-latest
+- gpt-4o-mini
+- gpt-4o-mini-audio-preview
+- gpt-4.1
+- gpt-4.1-mini
 
 #### Gemini Models
+
 The following Gemini models are supported with automatic `gemini/` prefix handling:
-- gemini-2.5-pro-preview-03-25
-- gemini-2.0-flash
+
+- gemini-2.5-pro-preview-03-25
+- gemini-2.0-flash
 
 ### Model Prefix Handling
+
 The proxy automatically adds the appropriate prefix to model names:
-- OpenAI models get the `openai/` prefix
-- Gemini models get the `gemini/` prefix
-- The BIG_MODEL and SMALL_MODEL will get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists
+
+- OpenAI models get the `openai/` prefix
+- Gemini models get the `gemini/` prefix
+- The BIG_MODEL and SMALL_MODEL will get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists
 
 For example:
-- `gpt-4o` becomes `openai/gpt-4o`
-- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
-- When BIG_MODEL is set to a Gemini model, Claude Sonnet will map to `gemini/[model-name]`
+
+- `gpt-4o` becomes `openai/gpt-4o`
+- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
+- When BIG_MODEL is set to a Gemini model, Claude Sonnet will map to `gemini/[model-name]`
 
 ### Customizing Model Mapping
 
 Control the mapping using environment variables in your `.env` file or directly:
 
 **Example 1: Default (Use OpenAI)**
 No changes needed in `.env` beyond API keys, or ensure:
+
 ```dotenv
 OPENAI_API_KEY="your-openai-key"
 GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
@@ -124,6 +139,7 @@ GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
 ```
 
 **Example 2: Prefer Google**
+
 ```dotenv
 GEMINI_API_KEY="your-google-key"
 OPENAI_API_KEY="your-openai-key" # Needed for fallback
@@ -133,6 +149,7 @@ PREFERRED_PROVIDER="google"
 ```
 
 **Example 3: Use Specific OpenAI Models**
+
 ```dotenv
 OPENAI_API_KEY="your-openai-key"
 GEMINI_API_KEY="your-google-key"
````
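The mapping logic described in the README's "Mapping Logic" section can be sketched in Python. This is a minimal illustration, not the server's actual code: `map_model` and its parameters are hypothetical names, and only the documented behavior (provider preference, Gemini allow-list, OpenAI fallback) is modeled.

```python
# Known Gemini models, per the README's supported-models list.
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}


def map_model(alias: str,
              preferred_provider: str = "openai",
              big_model: str = "gpt-4.1",
              small_model: str = "gpt-4.1-mini") -> str:
    """Map a Claude 'haiku'/'sonnet' alias to a prefixed backend model."""
    target = big_model if alias == "sonnet" else small_model
    # Google is used only when the target is a known Gemini model;
    # otherwise the proxy falls back to the OpenAI mapping.
    if preferred_provider == "google" and target in GEMINI_MODELS:
        return f"gemini/{target}"
    return f"openai/{target}"


print(map_model("sonnet"))  # openai/gpt-4.1
print(map_model("haiku", "google",
                small_model="gemini-2.0-flash"))  # gemini/gemini-2.0-flash
```

Note the fallback branch: with `PREFERRED_PROVIDER=google` but a `BIG_MODEL` that is not in the Gemini list, the result is still prefixed `openai/`, matching the README's "otherwise falls back to OpenAI mapping" rule.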
