TensorFlow supports two (or three) different types of WordPiece tokenizers.
It could be worth testing the FastWordPiece tokenizer, since it can build the model directly from a vocab and is claimed to be faster, as mentioned here:
But it will likely also require a bit more setup (https://www.tensorflow.org/text/guide/subwords_tokenizer#overview), as WordPiece only seems to split individual words, while the BertTokenizer splits full sentences.
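For reference, the core algorithm both tokenizers implement is greedy longest-match-first WordPiece. A minimal dependency-free sketch (the vocab and function name below are illustrative, not part of the tensorflow_text API):

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]", suffix="##"):
    """Greedy longest-match-first WordPiece split of a single word."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        # Try the longest remaining substring first, shrinking until a
        # vocab entry is found; non-initial pieces get the "##" prefix.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = suffix + piece
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            # No piece matched: the whole word maps to the unknown token.
            return [unk]
        tokens.append(match)
        start = end
    return tokens

# Illustrative vocab only
vocab = {"un", "aff", "##aff", "##able"}
print(wordpiece_tokenize("unaffable", vocab))  # ['un', '##aff', '##able']
```

Note that this operates on a single pre-split word, which matches the point above: WordPiece-level tokenizers expect word input, whereas the BertTokenizer bundles the sentence-to-word splitting step as well.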
Goal