* Updated BERTweet model card.
* Update docs/source/en/model_doc/bertweet.md
Co-authored-by: Steven Liu <[email protected]>
* updated toctree (EN).
* Commit for new_gpt_model_card.
* Update docs/source/en/model_doc/gpt_neo.md
Co-authored-by: Steven Liu <[email protected]>
* commit for new canine model card.
* Update docs/source/en/model_doc/canine.md
Co-authored-by: Steven Liu <[email protected]>
* implemented suggestion by @stevhliu.
* Update canine.md
---------
Co-authored-by: Steven Liu <[email protected]>
# CANINE

The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. It is among the first papers to train a Transformer without an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece, or SentencePiece); instead, the model is trained directly at the Unicode character level. Training at the character level inevitably produces longer sequences, which CANINE addresses with an efficient downsampling strategy before applying a deep Transformer encoder.

The abstract from the paper is the following:
*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*
[CANINE](https://huggingface.co/papers/2103.06874) is a tokenization-free Transformer. It skips the usual step of splitting text into subwords or wordpieces and processes text character by character, which means it works directly with raw Unicode. This makes it especially useful for languages with complex or inconsistent tokenization rules, and even for noisy inputs like typos. Since working with characters means handling longer sequences, CANINE compresses the input early on (downsampling) so the Transformer does not have to process every character individually, which keeps it fast and efficient.
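Because the model consumes raw Unicode, its inputs can be built without any vocabulary at all. The snippet below is a minimal sketch of that character-level interface, assuming the google/canine-c checkpoint referenced later in this page; it is illustrative rather than the card's own example.

```python
import torch
from transformers import CanineModel

model = CanineModel.from_pretrained("google/canine-c")

text = "hello world"
# Without a tokenizer, input ids are simply the Unicode code points of each character.
input_ids = torch.tensor([[ord(char) for char in text]])

with torch.no_grad():
    outputs = model(input_ids)
print(outputs.last_hidden_state.shape)  # (1, number_of_characters, hidden_size)
```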
This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/language/tree/master/language/canine).

You can find all the original CANINE checkpoints under the [Google](https://huggingface.co/google?search_models=canine) organization.
## Usage tips

> [!TIP]
> Click on the CANINE models in the right sidebar for more examples of how to apply CANINE to different language tasks.
- CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally, after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper.
- CANINE uses a max sequence length of 2048 characters by default. One can use [`CanineTokenizer`] to prepare text for the model.
- Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point); a sketch of this setup follows these tips. For token classification tasks, however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper.
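A minimal sketch of the classification setup described in the tips above, assuming the google/canine-c checkpoint; note that the classification head here is freshly initialized and would normally be fine-tuned on a downstream dataset.

```python
import torch
from transformers import CanineTokenizer, CanineForSequenceClassification

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
model = CanineForSequenceClassification.from_pretrained("google/canine-c", num_labels=2)

# The tokenizer adds the special [CLS]/[SEP] code points and handles padding/truncation.
inputs = tokenizer("This movie was great!", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # linear layer on top of the pooled [CLS] state
print(logits.shape)  # (1, 2)
```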
The example below demonstrates how to generate embeddings with [`Pipeline`], [`AutoModel`], and from the command line.
Model checkpoints:

- [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss,

<hfoptions id="usage">
<hfoption id="Pipeline">
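The [`Pipeline`] snippet itself is not captured in this diff; the following is a minimal sketch of what a feature-extraction pipeline for google/canine-c could look like, not the card's own example.

```python
from transformers import pipeline

# Feature extraction returns one embedding per character position.
extractor = pipeline(task="feature-extraction", model="google/canine-c")
features = extractor("Plants create energy through a process known as photosynthesis.")

print(len(features[0]), len(features[0][0]))  # sequence length x hidden size
```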
```bash
echo -e "Plants create energy through a process known as photosynthesis." | transformers-cli run --task feature-extraction --model google/canine-c --device 0
```
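The [`AutoModel`] snippet mentioned earlier is likewise not captured in this diff; a minimal sketch, assuming the same google/canine-c checkpoint:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/canine-c")
model = AutoModel.from_pretrained("google/canine-c")

inputs = tokenizer("Plants create energy through a process known as photosynthesis.", return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, characters + special tokens, hidden_size)

print(embeddings.shape)
```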
>>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
- CANINE skips tokenization entirely; it works directly on raw characters, not subwords. You can use it with or without a tokenizer. For batched inference and training, it is recommended to use the tokenizer to pad and truncate all sequences to the same length (see the sketch after these notes).
- CANINE is primarily designed to be fine-tuned on a downstream task. The pretrained model can be used for either masked language modeling or next sentence prediction.
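A minimal sketch of the batching note above, reusing the example sentences from the removed snippet earlier and assuming the google/canine-c checkpoint: the tokenizer pads and truncates both inputs to a common length before the forward pass.

```python
import torch
from transformers import CanineTokenizer, CanineModel

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
model = CanineModel.from_pretrained("google/canine-c")

inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (2, padded_length, hidden_size)
```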