update: llama3 #556
base: main

Conversation

No description provided.
# data file
alpaca_data/
libai/version.py
sft_result

This file should not be modified.

@@ -114,7 +114,8 @@ def prepare_sample(example: dict, tokenizer, max_length: int) -> dict:

     prompt = tokenizer.tokenize(full_prompt, add_bos=True, add_eos=False, device="cpu")[0]
     example = tokenizer.tokenize(
-        full_prompt_and_response, add_bos=True, add_eos=True, device="cpu"
+        full_prompt_and_response, add_bos=True, add_eos=True, device=None,
+        # device="cpu"

The commented-out line can be deleted.
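With that suggestion applied, the call would read (same arguments as in the diff above, just without the stale comment):

```python
example = tokenizer.tokenize(
    full_prompt_and_response, add_bos=True, add_eos=True, device=None
)
```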
        self.scale_mask_softmax_fusion = scale_mask_softmax_fusion

        self.query_key_value = Linear(

This implementation is wrong, which makes the downstream k/v computations misalign: in GQA the number of k/v heads differs from the number of q heads. Implement it the way ChatGLM does:
libai/projects/ChatGLM/chatglm.py
Lines 245 to 259 in 9dcbe3b
self.qkv_hidden_size = 3 * self.projection_size
if self.multi_query_attention:
    self.num_multi_query_groups_per_partition = cfg.multi_query_group_num
    self.qkv_hidden_size = (
        self.projection_size
        + 2 * self.hidden_size_per_attention_head * cfg.multi_query_group_num
    )
self.query_key_value = Linear(
    cfg.hidden_size,
    self.qkv_hidden_size,
    bias=cfg.add_bias_linear or cfg.add_qkv_bias,
    parallel="col",
    layer_idx=self.layer_number - 1,
)
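As a quick sanity check on this sizing, here is a minimal sketch (plain Python; the Llama-3-8B-style numbers are illustrative, not taken from this PR):

```python
# Hypothetical GQA configuration (illustrative numbers only).
hidden_size = 4096
num_q_heads = 32            # query heads
num_kv_heads = 8            # key/value head groups in GQA
head_dim = hidden_size // num_q_heads  # 128

# Fused projection width: full-width Q plus one K and one V per kv group,
# mirroring ChatGLM's qkv_hidden_size computation above.
qkv_hidden_size = hidden_size + 2 * head_dim * num_kv_heads
print(qkv_hidden_size)  # 6144

# A naive 3 * hidden_size projection would allocate 12288 columns and
# leave every later split over the k/v heads misaligned.
```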
query_key_value = query_key_value.permute(
    0, 2, 1, 3
)  # [bsz, num_heads, src_len, 3 * head_size]
query, key, value = flow.chunk(query_key_value, chunks=3, dim=-1)

The head ordering of k and v is wrong here. In GQA, several q heads share a single k head and v head. For example:

# k's h1 serves q's h1 and h2
q: [h1, h2, h3, h4]
k: [    h1,     h2]
v: [    h1,     h2]

After repeating the k/v heads, the layout should be:

# k's h1 serves q's h1 and h2
q: [h1, h2, h3, h4]
k: [h1, h1, h2, h2]
v: [h1, h1, h2, h2]

But this implementation produces the k/v heads in the wrong order, so every computation after this point is wrong:

# k's h1 serves q's h1 and h2
q: [h1, h2, h3, h4]
k: [h1, h2, h1, h2]
v: [h1, h2, h1, h2]
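To see the two orderings side by side, here is a standalone sketch (my own illustration, written with oneflow since this repo uses it; torch behaves identically):

```python
import oneflow as flow

num_q_heads, num_kv_heads = 4, 2
group = num_q_heads // num_kv_heads  # q heads per kv head

# Label each kv head by its index along the head dim: k = [h1, h2].
kv_heads = flow.tensor([1, 2])

# Wrong: tiling the whole head axis yields [h1, h2, h1, h2],
# so q's h2 gets paired with k's h2 instead of k's h1.
print(kv_heads.repeat(group).tolist())             # [1, 2, 1, 2]

# Right: repeating each head in place yields [h1, h1, h2, h2],
# the grouping GQA expects.
print(kv_heads.repeat_interleave(group).tolist())  # [1, 1, 2, 2]
```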
See ChatGLM's implementation:
libai/projects/ChatGLM/chatglm.py
Lines 332 to 356 in 9dcbe3b
if self.multi_query_attention:
    key_layer = key_layer.unsqueeze(-2)
    key_layer = key_layer.expand(
        -1,
        -1,
        -1,
        self.num_attention_heads_per_partition // self.num_multi_query_groups_per_partition,
        -1,
    )
    key_layer = key_layer.contiguous().view(
        key_layer.size()[:2]
        + (self.num_attention_heads_per_partition, self.hidden_size_per_attention_head)
    )
    value_layer = value_layer.unsqueeze(-2)
    value_layer = value_layer.expand(
        -1,
        -1,
        -1,
        self.num_attention_heads_per_partition // self.num_multi_query_groups_per_partition,
        -1,
    )
    value_layer = value_layer.contiguous().view(
        value_layer.size()[:2]
        + (self.num_attention_heads_per_partition, self.hidden_size_per_attention_head)
    )
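The unsqueeze/expand/view sequence above amounts to a repeat_interleave along the head axis. A toy shape check (my own sketch with made-up dimensions, again assuming oneflow):

```python
import oneflow as flow

seq, bsz, kv_heads, head_dim, group = 5, 2, 2, 8, 2  # toy sizes

k = flow.randn(seq, bsz, kv_heads, head_dim)

# ChatGLM style: add a group axis, broadcast it, then fold it back into
# the head axis so neighbouring heads share the same kv head.
expanded = (
    k.unsqueeze(-2)
    .expand(-1, -1, -1, group, -1)
    .reshape(seq, bsz, kv_heads * group, head_dim)
)

# Equivalent one-liner.
interleaved = k.repeat_interleave(group, dim=2)

assert flow.allclose(expanded, interleaved)
```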