
Commit 9487765

Add Qwen2 MoE model card (#38649)

* Add Qwen2 MoE model card
* Revisions to qwen2 moe model card
* Add Qwen2 MoE model card

1 parent 32dbf4b commit 9487765

1 file changed: docs/source/en/model_doc/qwen2_moe.md (99 additions, 28 deletions)

@@ -14,53 +14,124 @@ rendered properly in your Markdown viewer.
-->

<div style="float: right;">
    <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
    <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
    <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

# Qwen2MoE

[Qwen2MoE](https://huggingface.co/papers/2407.10671) is a Mixture-of-Experts (MoE) variant of [Qwen2](./qwen2), available as a base model and an aligned chat model. It uses SwiGLU activation, grouped query attention, and a mixture of sliding window attention and full attention. Its tokenizer is also adaptive to multiple natural languages and code.

The MoE models are upcycled from dense language models. For example, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B. It has 14.3B parameters in total, but only 2.7B of them are activated at runtime.
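
This sparsity is visible directly in the model configuration. The snippet below is a minimal sketch (not part of the original card) that inspects the MoE hyperparameters with [`AutoConfig`]; the attribute names `num_experts`, `num_experts_per_tok`, and `shared_expert_intermediate_size` come from [`Qwen2MoeConfig`], and the exact values depend on the checkpoint.

```py
from transformers import AutoConfig

# Download only the configuration, not the model weights.
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")

print(config.num_experts)                      # routed experts per MoE layer
print(config.num_experts_per_tok)              # experts activated for each token
print(config.shared_expert_intermediate_size)  # size of the always-active shared expert
```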

You can find all the original checkpoints in the [Qwen1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524) collection.

> [!TIP]
> Click on the Qwen2MoE models in the right sidebar for more examples of how to apply Qwen2MoE to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="Qwen/Qwen1.5-MoE-A2.7B",
    torch_dtype=torch.bfloat16,
    device_map=0
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the Qwen2 model family."},
]
outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"][-1]["content"])
```
</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

generated_ids = model.generate(
    model_inputs.input_ids,
    cache_implementation="static",
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
</hfoption>
<hfoption id="transformers CLI">

```bash
transformers chat Qwen/Qwen1.5-MoE-A2.7B-Chat --torch_dtype auto --attn_implementation flash_attention_2
```
</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize the weights to 8-bits.

```python
# pip install -U flash-attn --no-build-isolation
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B-Chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
    attn_implementation="flash_attention_2"
)

inputs = tokenizer("The Qwen2 model family is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

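To check how much memory 8-bit loading saves, one option (an illustrative follow-up reusing `model` from the example above, not part of the original card) is `get_memory_footprint`, which reports the model's memory footprint in bytes.

```python
# Rough check of the memory used by the quantized model.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```
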
## Qwen2MoeConfig
