
Commit 669566a

Contributor PR - Support OPENAI_BASE_URL in addition to OPENAI_API_BASE (#9995) (#10423)
* Support OPENAI_BASE_URL in addition to OPENAI_API_BASE (#9995)
* Support OPENAI_BASE_URL in addition to OPENAI_API_BASE
  Signed-off-by: Adrian Cole <[email protected]>
* exact
  Signed-off-by: Adrian Cole <[email protected]>
* feedback
* less change
  Signed-off-by: Adrian Cole <[email protected]>

---------

Signed-off-by: Adrian Cole <[email protected]>

* doc fix OPENAI_API_BASE

---------

Signed-off-by: Adrian Cole <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
1 parent 9e35ca2 commit 669566a
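
Every call site touched in this commit resolves the OpenAI API base with the same fallback chain. Below is a minimal sketch of that precedence, assuming litellm is importable; `resolve_api_base` is an illustrative helper, not a litellm function:

```python
import os
from typing import Optional

import litellm  # litellm.api_base is the programmatic override shown in the diffs below


def resolve_api_base(api_base: Optional[str] = None) -> str:
    """Illustrative only: mirrors the fallback order used at the call sites in
    this commit (explicit argument > litellm.api_base > OPENAI_BASE_URL >
    legacy OPENAI_API_BASE > public default)."""
    return (
        api_base
        or litellm.api_base
        or os.getenv("OPENAI_BASE_URL")
        or os.getenv("OPENAI_API_BASE")
        or "https://api.openai.com/v1"
    )
```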

File tree

14 files changed: +86 −15 lines changed


.env.example

+1 −1

@@ -1,6 +1,6 @@
 # OpenAI
 OPENAI_API_KEY = ""
-OPENAI_API_BASE = ""
+OPENAI_BASE_URL = ""
 # Cohere
 COHERE_API_KEY = ""
 # OpenRouter

docs/my-website/docs/providers/openai.md

+4 −4

@@ -156,7 +156,7 @@ print(response)
 ```python
 import os
 os.environ["OPENAI_ORGANIZATION"] = "your-org-id" # OPTIONAL
-os.environ["OPENAI_API_BASE"] = "openaiai-api-base" # OPTIONAL
+os.environ["OPENAI_BASE_URL"] = "https://your_host/v1" # OPTIONAL
 ```

 ### OpenAI Chat Completion Models
@@ -194,7 +194,7 @@ os.environ["OPENAI_API_BASE"] = "openaiai-api-base" # OPTIONAL
 | gpt-4-32k-0613 | `response = completion(model="gpt-4-32k-0613", messages=messages)` |


-These also support the `OPENAI_API_BASE` environment variable, which can be used to specify a custom API endpoint.
+These also support the `OPENAI_BASE_URL` environment variable, which can be used to specify a custom API endpoint.

 ## OpenAI Vision Models
 | Model Name | Function Call |
@@ -620,8 +620,8 @@ os.environ["OPENAI_API_KEY"] = ""

 # set custom api base to your proxy
 # either set .env or litellm.api_base
-# os.environ["OPENAI_API_BASE"] = ""
-litellm.api_base = "your-openai-proxy-url"
+# os.environ["OPENAI_BASE_URL"] = "https://your_host/v1"
+litellm.api_base = "https://your_host/v1"


 messages = [{ "content": "Hello, how are you?","role": "user"}]
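
As a usage sketch of the documented change above (the endpoint is a placeholder for any OpenAI-compatible host, and the model name is just an example):

```python
import os

from litellm import completion

# Hypothetical values: an OpenAI-compatible endpoint and the key it expects.
os.environ["OPENAI_BASE_URL"] = "https://your_host/v1"
os.environ["OPENAI_API_KEY"] = "sk-your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# completion() now resolves the base URL from OPENAI_BASE_URL; the legacy
# OPENAI_API_BASE is still honoured if only that variable is set.
response = completion(model="gpt-3.5-turbo", messages=messages)
print(response)
```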

docs/my-website/docs/proxy/config_settings.md

+1
@@ -462,6 +462,7 @@ router_settings:
 | NO_DOCS | Flag to disable documentation generation
 | NO_PROXY | List of addresses to bypass proxy
 | OAUTH_TOKEN_INFO_ENDPOINT | Endpoint for OAuth token info retrieval
+| OPENAI_BASE_URL | Base URL for OpenAI API
 | OPENAI_API_BASE | Base URL for OpenAI API
 | OPENAI_API_KEY | API key for OpenAI services
 | OPENAI_ORGANIZATION | Organization identifier for OpenAI

docs/my-website/docs/proxy_server.md

+2 −2

@@ -337,7 +337,7 @@ export OPENAI_API_KEY="sk-1234"
 ```

 ```shell
-export OPENAI_API_BASE="http://0.0.0.0:8000"
+export OPENAI_BASE_URL="http://0.0.0.0:8000"
 ```
 ```shell
 python3 run.py --task "a script that says hello world" --name "hello world"
@@ -572,7 +572,7 @@ export OPENAI_API_KEY="sk-1234"
 ```

 ```shell
-export OPENAI_API_BASE="http://0.0.0.0:8000"
+export OPENAI_BASE_URL="http://0.0.0.0:8000"
 ```
 ```shell
 python3 run.py --task "a script that says hello world" --name "hello world"

docs/my-website/docs/set_keys.md

+1 −1

@@ -44,7 +44,7 @@ os.environ['AZURE_API_VERSION'] = "2023-05-15" # [OPTIONAL]
 os.environ['AZURE_API_TYPE'] = "azure" # [OPTIONAL]

 # for openai
-os.environ['OPENAI_API_BASE'] = "https://openai-gpt-4-test2-v-12.openai.azure.com/"
+os.environ['OPENAI_BASE_URL'] = "https://your_host/v1"
 ```

 ### Setting Project, Location, Token

docs/my-website/docs/tutorials/lm_evaluation_harness.md

+5 −5

@@ -39,7 +39,7 @@ pip install openai==0.28.01

 **Step 3: Set OpenAI API Base & Key**
 ```shell
-$ export OPENAI_API_BASE=http://0.0.0.0:8000
+$ export OPENAI_BASE_URL=http://0.0.0.0:8000
 ```

 LM Harness requires you to set an OpenAI API key `OPENAI_API_SECRET_KEY` for running benchmarks
@@ -74,7 +74,7 @@ $ litellm --model huggingface/bigcode/starcoder

 **Step 2: Set OpenAI API Base & Key**
 ```shell
-$ export OPENAI_API_BASE=http://0.0.0.0:8000
+$ export OPENAI_BASE_URL=http://0.0.0.0:8000
 ```

 Set this to anything since the proxy has the credentials
@@ -93,12 +93,12 @@ cd FastEval

 **Set API Base on FastEval**

-On FastEval make the following **2 line code change** to set `OPENAI_API_BASE`
+On FastEval make the following **2 line code change** to set `OPENAI_BASE_URL`

 https://github.com/FastEval/FastEval/pull/90/files
 ```python
 try:
-    api_base = os.environ["OPENAI_API_BASE"] #changed: read api base from .env
+    api_base = os.environ["OPENAI_BASE_URL"] #changed: read api base from .env
     if api_base == None:
         api_base = "https://api.openai.com/v1"
     response = await self.reply_two_attempts_with_different_max_new_tokens(
@@ -130,7 +130,7 @@ $ litellm --model huggingface/bigcode/starcoder

 **Step 2: Set OpenAI API Base & Key**
 ```shell
-$ export OPENAI_API_BASE=http://0.0.0.0:8000
+$ export OPENAI_BASE_URL=http://0.0.0.0:8000
 ```

 **Step 3 Run with FLASK**

litellm/assistants/main.py

+8
@@ -110,6 +110,7 @@ def get_assistants(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -314,6 +315,7 @@ def create_assistants(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -490,6 +492,7 @@ def delete_assistant(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -678,6 +681,7 @@ def create_thread(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -833,6 +837,7 @@ def get_thread(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -1021,6 +1026,7 @@ def add_message(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -1182,6 +1188,7 @@ def get_messages(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -1380,6 +1387,7 @@ def run_thread(
     api_base = (
         optional_params.api_base  # for deepinfra/perplexity/anyscale/groq we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )

litellm/batches/main.py

+4
@@ -157,6 +157,7 @@ def create_batch(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -361,6 +362,7 @@ def retrieve_batch(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -556,6 +558,7 @@ def list_batches(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -713,6 +716,7 @@ def cancel_batch(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )

litellm/files/main.py

+5
@@ -164,6 +164,7 @@ def create_file(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -343,6 +344,7 @@ def file_retrieve(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -496,6 +498,7 @@ def file_delete(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -649,6 +652,7 @@ def file_list(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -809,6 +813,7 @@ def file_content(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )

litellm/fine_tuning/main.py

+4
@@ -142,6 +142,7 @@ def create_fine_tuning_job(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -363,6 +364,7 @@ def cancel_fine_tuning_job(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -524,6 +526,7 @@ def list_fine_tuning_jobs(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -678,6 +681,7 @@ def retrieve_fine_tuning_job(
     api_base = (
         optional_params.api_base
         or litellm.api_base
+        or os.getenv("OPENAI_BASE_URL")
         or os.getenv("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )

litellm/llms/openai/chat/gpt_transformation.py

+1
@@ -384,6 +384,7 @@ def get_api_base(api_base: Optional[str] = None) -> Optional[str]:
     return (
         api_base
         or litellm.api_base
+        or get_secret_str("OPENAI_BASE_URL")
         or get_secret_str("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )

litellm/llms/openai/responses/transformation.py

+1
@@ -119,6 +119,7 @@ def get_complete_url(
     api_base = (
         api_base
         or litellm.api_base
+        or get_secret_str("OPENAI_BASE_URL")
         or get_secret_str("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )

litellm/main.py

+6
@@ -1548,6 +1548,7 @@ def completion( # type: ignore # noqa: PLR0915
     api_base = (
         api_base
         or litellm.api_base
+        or get_secret("OPENAI_BASE_URL")
         or get_secret("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -1704,6 +1705,7 @@ def completion( # type: ignore # noqa: PLR0915
     api_base = (
         api_base  # for deepinfra/perplexity/anyscale/groq/friendliai we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or get_secret("OPENAI_BASE_URL")
         or get_secret("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -1757,6 +1759,7 @@ def completion( # type: ignore # noqa: PLR0915
     api_base = (
         api_base  # for deepinfra/perplexity/anyscale/groq/friendliai we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or get_secret("OPENAI_BASE_URL")
         or get_secret("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -3543,6 +3546,7 @@ def embedding( # noqa: PLR0915
     api_base = (
         api_base
         or litellm.api_base
+        or get_secret_str("OPENAI_BASE_URL")
         or get_secret_str("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )
@@ -5251,6 +5255,7 @@ def transcription(
     api_base = (
         api_base
         or litellm.api_base
+        or get_secret("OPENAI_BASE_URL")
         or get_secret("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )  # type: ignore
@@ -5421,6 +5426,7 @@ def speech( # noqa: PLR0915
     api_base = (
         api_base  # for deepinfra/perplexity/anyscale/groq/friendliai we check in get_llm_provider and pass in the api base from there
         or litellm.api_base
+        or get_secret("OPENAI_BASE_URL")
         or get_secret("OPENAI_API_BASE")
         or "https://api.openai.com/v1"
     )  # type: ignore
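
Because every site above checks OPENAI_BASE_URL before the legacy OPENAI_API_BASE, existing setups keep working and the new variable takes priority when both are present. A small self-contained sketch of that expected behaviour (the helper and proxy URLs are illustrative, not litellm code):

```python
import os


def resolve_api_base() -> str:
    # Same env-var order as the call sites above, with the explicit-argument
    # and litellm.api_base steps omitted for brevity.
    return (
        os.getenv("OPENAI_BASE_URL")
        or os.getenv("OPENAI_API_BASE")
        or "https://api.openai.com/v1"
    )


# Only the legacy variable set: behaviour is unchanged.
os.environ.pop("OPENAI_BASE_URL", None)
os.environ["OPENAI_API_BASE"] = "http://legacy-proxy:8000"
assert resolve_api_base() == "http://legacy-proxy:8000"

# Both set: the new OPENAI_BASE_URL wins because it is checked first.
os.environ["OPENAI_BASE_URL"] = "http://new-proxy:8000"
assert resolve_api_base() == "http://new-proxy:8000"
```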
