Hi, according to the `preprocess.py` file, you choose the special tokens as follows:
```python
tgt_bos = '<|endoftext|>'
tgt_eos = '\u0120GDDR'
tgt_pad = '\u0120SHALL'
tgt_unk = '\u0120RELE'
src_pad = '\u0120SHALL'
src_unk = '\u0120RELE'
```
In the HuggingFace tokenizer implementation, they use `'<|endoftext|>'` for all of these special tokens. Is there a reason to use other tokens from the vocab as special tokens? What happens if these tokens appear in the dataset after BPE encoding?
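For context, here is a minimal sketch of the collision I am worried about. It assumes the standard `gpt2` checkpoint of HuggingFace `transformers` (the repo's actual vocab may differ); note that `'\u0120'` is the `Ġ` byte-level marker GPT-2 BPE uses for a leading space, so `'\u0120GDDR'` is just the token for `" GDDR"`:

```python
# Sketch assuming the stock GPT-2 byte-level BPE tokenizer from
# HuggingFace transformers; the repo's vocab may differ.
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")

# '\u0120' ('Ġ') marks a leading space, so these are the ordinary
# vocabulary entries for " GDDR", " SHALL", " RELE".
for t in ["\u0120GDDR", "\u0120SHALL", "\u0120RELE"]:
    print(repr(t), "-> id", tok.convert_tokens_to_ids(t))

# Encoding ordinary text can produce these tokens, which is the
# collision in question: the EOS token shows up mid-sentence.
ids = tok.encode("The card uses GDDR memory.")
print(tok.convert_ids_to_tokens(ids))  # likely contains 'ĠGDDR'
```

If the last line prints `'ĠGDDR'` as a single token, then any sentence mentioning GDDR would be indistinguishable from an end-of-sequence marker under the choices above.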
Thanks