
Commit 3185e9f

docs: correct setup instructions to use uv
Author: Rahul Sengottuvelu
Parent: 15d849c

File tree: 1 file changed

README.md

Lines changed: 19 additions & 7 deletions
@@ -12,6 +12,8 @@ A proxy server that lets you use Anthropic clients with Gemini or OpenAI models
 ### Prerequisites
 
 - OpenAI API key 🔑
+- Google AI Studio (Gemini) API key (if using default provider) 🔑
+- [uv](https://github.com/astral-sh/uv) (recommended) or pip installed.
 
 ### Setup 🛠️
 
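Since the new prerequisite line only links to uv, here is a minimal sketch of installing it with its standalone installer script (or via pip); this is supplementary to the diff above, not part of the commit:

```bash
# Install uv via its standalone installer (alternative: pip install uv)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Confirm the binary is on PATH
uv --version
```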
@@ -22,10 +24,17 @@ A proxy server that lets you use Anthropic clients with Gemini or OpenAI models
 ```
 
 2. **Install dependencies**:
+Using uv (recommended):
 ```bash
-pip install -r requirements.txt
+uv venv # Create virtual environment (optional but recommended)
+uv pip install fastapi uvicorn litellm python-dotenv httpx # Install main packages
+```
+Or using pip:
+```bash
+python -m venv .venv
+source .venv/bin/activate # Or .venv\Scripts\activate on Windows
+pip install fastapi uvicorn litellm python-dotenv httpx
 ```
-*(Ensure `requirements.txt` includes FastAPI, Uvicorn, LiteLLM, python-dotenv, httpx)*
 
 3. **Configure Environment Variables**:
 Copy the example environment file:
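Whichever install path is used, a quick sanity check is to confirm the five listed packages import cleanly; a minimal sketch (note that python-dotenv imports as `dotenv`):

```bash
# Verify the dependencies resolve inside the uv-managed environment
uv run python -c "import fastapi, uvicorn, litellm, dotenv, httpx; print('dependencies OK')"
```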
@@ -46,6 +55,11 @@ A proxy server that lets you use Anthropic clients with Gemini or OpenAI models
 - Otherwise (if `PREFERRED_PROVIDER=openai` or the specified Google model isn't known), they map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
 
 4. **Run the server**:
+Using uv:
+```bash
+uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
+```
+Or directly with uvicorn (if installed globally or in activated venv):
 ```bash
 uvicorn server:app --host 0.0.0.0 --port 8082 --reload
 ```
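The mapping rules mentioned in this hunk are driven by environment variables; a hypothetical `.env` sketch for the OpenAI route might look like the following. Only `PREFERRED_PROVIDER`, `BIG_MODEL`, and `SMALL_MODEL` appear in the diff; the credential variable names are assumptions:

```bash
# Hypothetical .env sketch -- credential key names are assumed, not from this diff
OPENAI_API_KEY=your-openai-key   # placeholder
GEMINI_API_KEY=your-gemini-key   # placeholder (needed for the default Gemini provider)
PREFERRED_PROVIDER=openai        # route requests to OpenAI-backed models
BIG_MODEL=gpt-4o                 # used for "big" requests
SMALL_MODEL=gpt-4o-mini          # used for "small" requests
```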
@@ -125,15 +139,13 @@ SMALL_MODEL=gpt-4o-mini
 
 Or set them directly when running the server:
 ```bash
-# Using OpenAI models
+# Using OpenAI models (with uv)
 BIG_MODEL=gpt-4o SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
 
-# Using Gemini models
+# Using Gemini models (with uv)
 BIG_MODEL=gemini-2.5-pro-preview-03-25 SMALL_MODEL=gemini-2.0-flash uv run uvicorn server:app --host 0.0.0.0 --port 8082
-```
 
-To use a mix of OpenAI and Gemini models:
-```bash
+# Mix and match (with uv)
 BIG_MODEL=gemini-2.5-pro-preview-03-25 SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
 ```
 
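With the server listening on port 8082, a request in Anthropic's Messages format can be aimed at it. The sketch below assumes the proxy exposes an Anthropic-compatible `/v1/messages` route and accepts a dummy API key; neither detail is stated in this diff:

```bash
# Hypothetical smoke test against the running proxy (endpoint shape assumed)
curl http://localhost:8082/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: dummy" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-haiku-20240307",
    "max_tokens": 128,
    "messages": [{"role": "user", "content": "Hello through the proxy"}]
  }'
```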