Fixed Length Pre-Tokenizer #1713
Conversation
Thanks for this. The code looks like it works, but I think it could be simplified quite a lot. Is there any source/paper for fixed-size chunking? Before adding anything to the library, we usually try to make sure it's used in the wild and would benefit actual users of models (not necessarily researchers exploring new ideas; for that, they can try out your branch or create their own pre_tokenizer directly in Python).
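To illustrate the pure-Python route mentioned above, here is a minimal sketch of the fixed-length chunking logic such a custom pre_tokenizer would wrap. This is standalone illustrative code, not the PR's implementation: the function name `fixed_length_chunks` and the default length are assumptions.

```python
def fixed_length_chunks(text: str, length: int = 5) -> list[str]:
    """Split `text` into consecutive chunks of `length` characters.

    The last chunk may be shorter when len(text) is not a multiple of
    `length`. This mirrors the behavior a fixed-length pre-tokenizer
    would expose, without depending on the library itself.
    """
    if length <= 0:
        raise ValueError("length must be positive")
    return [text[i:i + length] for i in range(0, len(text), length)]
```

For example, `fixed_length_chunks("ACGTACGTAC", 5)` yields `["ACGTA", "CGTAC"]`; a DNA-style sequence is a natural fit given the use case cited later in the thread.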
221d55e to e42ac9e
You're right, I simplified it along the lines of my initial comment. I also asked the author of the issue whether this is a common approach in the literature (I'm not aware of it either). I should probably have clarified this before jumping on it ;)
According to the author, it's used in DNA Transformer models.
ArthurZucker
left a comment
Same as my colleague! It would be nice if we could get a reference to the paper in the documentation of the class (like an arXiv link)!
Otherwise we can also keep this issue open and let the community upvote! If it gets traction, we merge 🤗
```python
pretok.length = 10
assert pretok.length == 10
```
We'd also want to make sure that it does its job as a pre-tokenizer! So test, with the same string, that it splits into chunks of 5 and then 10!
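A sketch of what such a test could look like, using an illustrative `chunks` helper standing in for the pre-tokenizer's splitting step (hypothetical, not the PR's actual API), on the same string at lengths 5 and then 10:

```python
def chunks(text: str, n: int) -> list[str]:
    # stand-in for the fixed-length pre-tokenizer's splitting step
    return [text[i:i + n] for i in range(0, len(text), n)]

sample = "hello world!"
# same string, split at length 5 ...
assert chunks(sample, 5) == ["hello", " worl", "d!"]
# ... then at length 10
assert chunks(sample, 10) == ["hello worl", "d!"]
```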
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
I'll fix the CI and merge this, sorry for being slow!
Introduces a pre-tokenizer to split text into fixed-length chunks (closes #1697).
The method `pre_tokenize` could be made more concise by creating a vector with the split indices first, but that would take a bit more memory, so I went with my approach instead.
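To make that trade-off concrete, here is a hedged Python sketch (the library itself is Rust; function names are illustrative) contrasting the index-vector variant with a direct one-pass split:

```python
def split_with_index_vector(text: str, n: int) -> list[str]:
    # variant 1: materialize all chunk start positions first,
    # costing O(len(text) / n) extra memory for the index vector
    starts = list(range(0, len(text), n))
    return [text[s:s + n] for s in starts]

def split_directly(text: str, n: int) -> list[str]:
    # variant 2: slice in a single pass without storing the indices
    return [text[i:i + n] for i in range(0, len(text), n)]
```

Both variants produce identical chunks; the first just allocates the intermediate `starts` list, which is the extra memory the description refers to.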