Commit 9c5c81b

[Misc][Doc] Add note regarding loading generation_config by default (#15281)
Signed-off-by: Roger Wang <[email protected]>
1 parent d6cd59f commit 9c5c81b

4 files changed: +27 -1 lines changed


docs/source/getting_started/quickstart.md

Lines changed: 11 additions & 1 deletion
@@ -58,6 +58,11 @@ from vllm import LLM, SamplingParams
 ```

 The next section defines a list of input prompts and sampling parameters for text generation. The [sampling temperature](https://arxiv.org/html/2402.05201v1) is set to `0.8` and the [nucleus sampling probability](https://en.wikipedia.org/wiki/Top-p_sampling) is set to `0.95`. You can find more information about the sampling parameters [here](#sampling-params).
+:::{important}
+By default, vLLM will use the sampling parameters recommended by the model creator, applying the `generation_config.json` from the Hugging Face model repository if it exists. In most cases, this will give you the best results when {class}`~vllm.SamplingParams` is not specified.
+
+However, if vLLM's default sampling parameters are preferred, please set `generation_config="vllm"` when creating the {class}`~vllm.LLM` instance.
+:::

 ```python
 prompts = [
@@ -76,7 +81,7 @@ llm = LLM(model="facebook/opt-125m")
 ```

 :::{note}
-By default, vLLM downloads models from [HuggingFace](https://huggingface.co/). If you would like to use models from [ModelScope](https://www.modelscope.cn), set the environment variable `VLLM_USE_MODELSCOPE` before initializing the engine.
+By default, vLLM downloads models from [Hugging Face](https://huggingface.co/). If you would like to use models from [ModelScope](https://www.modelscope.cn), set the environment variable `VLLM_USE_MODELSCOPE` before initializing the engine.
 :::

 Now, the fun part! The outputs are generated using `llm.generate`. It adds the input prompts to the vLLM engine's waiting queue and executes the vLLM engine to generate the outputs with high throughput. The outputs are returned as a list of `RequestOutput` objects, which include all of the output tokens.
@@ -107,6 +112,11 @@ vllm serve Qwen/Qwen2.5-1.5B-Instruct
 By default, the server uses a predefined chat template stored in the tokenizer.
 You can learn about overriding it [here](#chat-template).
 :::
+:::{important}
+By default, the server applies `generation_config.json` from the Hugging Face model repository if it exists. This means the default values of certain sampling parameters can be overridden by those recommended by the model creator.
+
+To disable this behavior, please pass `--generation-config vllm` when launching the server.
+:::

 This server can be queried in the same format as OpenAI API. For example, to list the models:
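
As a quick sketch of the behavior these notes describe (not part of this commit; the prompt and sampling values are illustrative), opting out of the Hugging Face generation config in offline inference looks like this:

```python
from vllm import LLM, SamplingParams

# Default behavior: if the model repository ships a generation_config.json,
# its recommended values become the default sampling parameters.
# llm = LLM(model="facebook/opt-125m")

# Opt out and use vLLM's own default sampling parameters instead.
llm = LLM(model="facebook/opt-125m", generation_config="vllm")

# Explicitly passed SamplingParams take precedence over either set of defaults.
outputs = llm.generate(
    ["Hello, my name is"],  # illustrative prompt
    SamplingParams(temperature=0.8, top_p=0.95),
)
print(outputs[0].outputs[0].text)
```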

docs/source/models/generative_models.md

Lines changed: 5 additions & 0 deletions
@@ -46,6 +46,11 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```

+:::{important}
+By default, vLLM will use the sampling parameters recommended by the model creator, applying the `generation_config.json` from the Hugging Face model repository if it exists. In most cases, this will give you the best results when {class}`~vllm.SamplingParams` is not specified.
+
+However, if vLLM's default sampling parameters are preferred, please pass `generation_config="vllm"` when creating the {class}`~vllm.LLM` instance.
+:::
 A code example can be found here: <gh-file:examples/offline_inference/basic/basic.py>

 ### `LLM.beam_search`
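
The note above applies only when {class}`~vllm.SamplingParams` is not passed explicitly. A minimal sketch of the distinction (not part of this commit; model and prompt are illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # illustrative model

# No SamplingParams: defaults may come from the model's generation_config.json,
# if the repository provides one.
outputs_default = llm.generate(["The future of AI is"])

# Explicit SamplingParams: the values set here take precedence over anything
# recommended in generation_config.json.
outputs_explicit = llm.generate(
    ["The future of AI is"],
    SamplingParams(temperature=0.2, top_p=0.9, max_tokens=32),
)

print(outputs_default[0].outputs[0].text)
print(outputs_explicit[0].outputs[0].text)
```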

docs/source/serving/openai_compatible_server.md

Lines changed: 4 additions & 0 deletions
@@ -33,7 +33,11 @@ print(completion.choices[0].message)
 vLLM supports some parameters that are not supported by OpenAI, `top_k` for example.
 You can pass these parameters to vLLM using the OpenAI client in the `extra_body` parameter of your requests, i.e. `extra_body={"top_k": 50}` for `top_k`.
 :::
+:::{important}
+By default, the server applies `generation_config.json` from the Hugging Face model repository if it exists. This means the default values of certain sampling parameters can be overridden by those recommended by the model creator.

+To disable this behavior, please pass `--generation-config vllm` when launching the server.
+:::
 ## Supported APIs

 We currently support the following OpenAI APIs:
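
From the client side, the effective defaults for a request depend on how the server was launched (e.g. `vllm serve Qwen/Qwen2.5-1.5B-Instruct`, optionally with `--generation-config vllm`). A hedged sketch with the OpenAI client, assuming a local server on the default port and a placeholder API key:

```python
from openai import OpenAI

# Assumes a local vLLM server on the default address; adjust as needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    # Explicit per-request values override whatever defaults the server picked
    # up, whether from generation_config.json or from vLLM's built-ins.
    temperature=0.7,
    # vLLM-specific parameters go through extra_body, as noted above.
    extra_body={"top_k": 50},
)
print(completion.choices[0].message)
```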

vllm/config.py

Lines changed: 7 additions & 0 deletions
@@ -1023,6 +1023,13 @@ def get_diff_sampling_param(self) -> dict[str, Any]:
                     "max_new_tokens")
         else:
             diff_sampling_param = {}
+
+        if diff_sampling_param:
+            logger.warning_once(
+                "Default sampling parameters have been overridden by the "
+                "model's Hugging Face generation config recommended from the "
+                "model creator. If this is not intended, please relaunch "
+                "vLLM instance with `--generation-config vllm`.")
         return diff_sampling_param

     @property
