
Commit 3a43042

docs - add sap gen ai provider on LiteLLM (#17667)
1 parent ee0812a commit 3a43042

File tree: 4 files changed (+157 −0 lines changed)
Lines changed: 121 additions & 0 deletions
@@ -0,0 +1,121 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# SAP Generative AI Hub

LiteLLM supports SAP Generative AI Hub's Orchestration Service.

| Property | Details |
|-------|-------|
| Description | SAP's Generative AI Hub provides access to foundation models through the AI Core orchestration service. |
| Provider Route on LiteLLM | `sap/` |
| Supported Endpoints | `/chat/completions` |
| API Reference | [SAP AI Core Documentation](https://help.sap.com/docs/sap-ai-core) |
## Authentication

SAP Generative AI Hub uses service key authentication. You can provide credentials via:

1. **Environment variable** - Set `AICORE_SERVICE_KEY` with your service key JSON
2. **Direct parameter** - Pass `api_key` with the service key JSON string

```python showLineNumbers title="Environment Variable"
import os
os.environ["AICORE_SERVICE_KEY"] = '{"clientid": "...", "clientsecret": "...", ...}'
```
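Method 2 — passing the service key directly via `api_key` — can be sketched as follows. The placeholder key fields and the credential guard are illustrative, not part of the provider docs; a real service key carries additional URL fields alongside `clientid` and `clientsecret`.

```python
import json
import os

# Service key JSON from your SAP BTP service binding (placeholder values shown)
service_key = os.environ.get(
    "AICORE_SERVICE_KEY",
    json.dumps({"clientid": "...", "clientsecret": "..."}),
)
key_fields = json.loads(service_key)  # the key must parse as JSON

# Only send a request when real credentials are configured
if "AICORE_SERVICE_KEY" in os.environ:
    from litellm import completion

    response = completion(
        model="sap/gpt-4",
        messages=[{"role": "user", "content": "Hello from LiteLLM"}],
        api_key=service_key,  # key passed directly instead of via the environment
    )
    print(response)
```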
27+
## Usage - LiteLLM Python SDK
28+
29+
```python showLineNumbers title="SAP Chat Completion"
30+
from litellm import completion
31+
import os
32+
33+
os.environ["AICORE_SERVICE_KEY"] = '{"clientid": "...", "clientsecret": "...", ...}'
34+
35+
response = completion(
36+
model="sap/gpt-4",
37+
messages=[{"role": "user", "content": "Hello from LiteLLM"}]
38+
)
39+
print(response)
40+
```
41+
42+
```python showLineNumbers title="SAP Chat Completion - Streaming"
43+
from litellm import completion
44+
import os
45+
46+
os.environ["AICORE_SERVICE_KEY"] = '{"clientid": "...", "clientsecret": "...", ...}'
47+
48+
response = completion(
49+
model="sap/gpt-4",
50+
messages=[{"role": "user", "content": "Hello from LiteLLM"}],
51+
stream=True
52+
)
53+
54+
for chunk in response:
55+
print(chunk.choices[0].delta.content or "", end="")
56+
```
57+
58+
## Usage - LiteLLM Proxy

Add to your LiteLLM Proxy config:

```yaml showLineNumbers title="config.yaml"
model_list:
  - model_name: sap-gpt4
    litellm_params:
      model: sap/gpt-4
      api_key: os.environ/AICORE_SERVICE_KEY
```

Start the proxy:

```bash showLineNumbers title="Start Proxy"
litellm --config config.yaml
```
76+
<Tabs>
77+
<TabItem value="curl" label="cURL">
78+
79+
```bash showLineNumbers title="Test Request"
80+
curl http://localhost:4000/v1/chat/completions \
81+
-H "Content-Type: application/json" \
82+
-H "Authorization: Bearer your-proxy-api-key" \
83+
-d '{
84+
"model": "sap-gpt4",
85+
"messages": [{"role": "user", "content": "Hello"}]
86+
}'
87+
```
88+
89+
</TabItem>
90+
<TabItem value="openai-sdk" label="OpenAI SDK">
91+
92+
```python showLineNumbers title="OpenAI SDK"
93+
from openai import OpenAI
94+
95+
client = OpenAI(
96+
base_url="http://localhost:4000",
97+
api_key="your-proxy-api-key"
98+
)
99+
100+
response = client.chat.completions.create(
101+
model="sap-gpt4",
102+
messages=[{"role": "user", "content": "Hello"}]
103+
)
104+
print(response.choices[0].message.content)
105+
```
106+
107+
</TabItem>
108+
</Tabs>
109+
110+
## Supported Parameters
111+
112+
| Parameter | Description |
113+
|-----------|-------------|
114+
| `temperature` | Controls randomness |
115+
| `max_tokens` | Maximum tokens in response |
116+
| `top_p` | Nucleus sampling |
117+
| `tools` | Function calling tools |
118+
| `tool_choice` | Tool selection behavior |
119+
| `response_format` | Output format (json_object, json_schema) |
120+
| `stream` | Enable streaming |
121+
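As a sketch of how several of these parameters combine in a single request — the `get_weather` tool schema and the guard on `AICORE_SERVICE_KEY` are illustrative assumptions, not part of the provider docs:

```python
import os

# One request combining several of the supported parameters above
params = {
    "model": "sap/gpt-4",
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "temperature": 0.2,  # lower values make output more deterministic
    "max_tokens": 256,   # cap on tokens generated in the response
    "top_p": 0.9,        # nucleus sampling threshold
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration only
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# Only send the request when credentials are configured
if os.environ.get("AICORE_SERVICE_KEY"):
    from litellm import completion

    response = completion(**params)
    print(response.choices[0].message)
```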

docs/my-website/sidebars.js

Lines changed: 1 addition & 0 deletions
@@ -668,6 +668,7 @@ const sidebars = {
       ]
     },
     "providers/sambanova",
+    "providers/sap",
     "providers/snowflake",
     "providers/togetherai",
     "providers/topaz",

litellm/proxy/public_endpoints/provider_create_fields.json

Lines changed: 18 additions & 0 deletions
@@ -2446,6 +2446,24 @@
     ],
     "default_model_placeholder": "gpt-3.5-turbo"
   },
+  {
+    "provider": "SAP",
+    "provider_display_name": "SAP Generative AI Hub",
+    "litellm_provider": "sap",
+    "credential_fields": [
+      {
+        "key": "api_key",
+        "label": "SAP AI Core Service Key (JSON)",
+        "placeholder": null,
+        "tooltip": "Paste your SAP AI Core service key JSON. Contains clientid, clientsecret, and service URLs.",
+        "required": true,
+        "field_type": "textarea",
+        "options": null,
+        "default_value": null
+      }
+    ],
+    "default_model_placeholder": "sap/gpt-4"
+  },
   {
     "provider": "Snowflake",
     "provider_display_name": "Snowflake",

provider_endpoints_support.json

Lines changed: 17 additions & 0 deletions
@@ -1554,6 +1554,23 @@
       "a2a": true
     }
   },
+  "sap": {
+    "display_name": "SAP Generative AI Hub (`sap`)",
+    "url": "https://docs.litellm.ai/docs/providers/sap",
+    "endpoints": {
+      "chat_completions": true,
+      "messages": true,
+      "responses": true,
+      "embeddings": false,
+      "image_generations": false,
+      "audio_transcriptions": false,
+      "audio_speech": false,
+      "moderations": false,
+      "batches": false,
+      "rerank": false,
+      "a2a": true
+    }
+  },
   "snowflake": {
     "display_name": "Snowflake (`snowflake`)",
     "url": "https://docs.litellm.ai/docs/providers/snowflake",

0 commit comments
