
Commit aa798b7

New canine model card (#38631)
* Updated BERTweet model card and the EN toctree; applied review suggestions from Steven Liu to docs/source/en/model_doc/bertweet.md.
* Commit for new_gpt_model_card; applied review suggestions from Steven Liu to docs/source/en/model_doc/gpt_neo.md.
* Commit for new canine model card; applied review suggestions from Steven Liu to docs/source/en/model_doc/canine.md; implemented suggestion by @stevhliu.
* Update canine.md

---------

Co-authored-by: Steven Liu <[email protected]>
1 parent e28fb26 commit aa798b7

File tree

1 file changed: +50 -71 lines changed

docs/source/en/model_doc/canine.md

Lines changed: 50 additions & 71 deletions
@@ -14,99 +14,78 @@ rendered properly in your Markdown viewer.

-->

-# CANINE
-
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+    </div>
</div>

-## Overview
-
-The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
-Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's
-among the first papers that trains a Transformer without using an explicit tokenization step (such as Byte Pair
-Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character-level.
-Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient
-downsampling strategy, before applying a deep Transformer encoder.
-
-The abstract from the paper is the following:
+# CANINE

-*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models
-still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword
-lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all
-languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE,
-a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a
-pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias.
-To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input
-sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by
-2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*
+[CANINE](https://huggingface.co/papers/2103.06874) is a tokenization-free Transformer. It skips the usual step of splitting text into subwords or wordpieces and processes text character by character. That means it works directly with raw Unicode, making it especially useful for languages with complex or inconsistent tokenization rules and even noisy inputs like typos. Since working with characters means handling longer sequences, CANINE uses a smart trick. The model compresses the input early on (called downsampling) so the transformer doesn’t have to process every character individually. This keeps things fast and efficient.

-This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/language/tree/master/language/canine).
+You can find all the original CANINE checkpoints under the [Google](https://huggingface.co/google?search_models=canine) organization.

-## Usage tips
+> [!TIP]
+> Click on the CANINE models in the right sidebar for more examples of how to apply CANINE to different language tasks.

-- CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single
-  layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize
-  the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally,
-  after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and
-  downsampling can be found in the paper.
-- CANINE uses a max sequence length of 2048 characters by default. One can use [`CanineTokenizer`]
-  to prepare text for the model.
-- Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token
-  (which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of
-  tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The
-  details for this can be found in the paper.
+The example below demonstrates how to generate embeddings with [`Pipeline`], [`AutoModel`], and from the command line.

-Model checkpoints:
+<hfoptions id="usage">
+<hfoption id="Pipeline">

-- [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss,
-  12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
-- [google/canine-s](https://huggingface.co/google/canine-s): Pre-trained with subword loss, 12-layer,
-  768-hidden, 12-heads, 121M parameters (size ~500 MB).
+```py
+import torch
+from transformers import pipeline

+pipeline = pipeline(
+    task="feature-extraction",
+    model="google/canine-c",
+    device=0,
+)

-## Usage example
+pipeline("Plants create energy through a process known as photosynthesis.")
+```

-CANINE works on raw characters, so it can be used **without a tokenizer**:
+</hfoption>
+<hfoption id="AutoModel">

-```python
->>> from transformers import CanineModel
->>> import torch
+```py
+import torch
+from transformers import AutoModel

->>> model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss
+model = AutoModel.from_pretrained("google/canine-c")

->>> text = "hello world"
->>> # use Python's built-in ord() function to turn each character into its unicode code point id
->>> input_ids = torch.tensor([[ord(char) for char in text]])
+text = "Plants create energy through a process known as photosynthesis."
+input_ids = torch.tensor([[ord(char) for char in text]])

->>> outputs = model(input_ids) # forward pass
->>> pooled_output = outputs.pooler_output
->>> sequence_output = outputs.last_hidden_state
+outputs = model(input_ids)
+pooled_output = outputs.pooler_output
+sequence_output = outputs.last_hidden_state
```

-For batched inference and training, it is however recommended to make use of the tokenizer (to pad/truncate all
-sequences to the same length):
-
-```python
->>> from transformers import CanineTokenizer, CanineModel
+</hfoption>
+<hfoption id="transformers CLI">

->>> model = CanineModel.from_pretrained("google/canine-c")
->>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
+```bash
+echo -e "Plants create energy through a process known as photosynthesis." | transformers-cli run --task feature-extraction --model google/canine-c --device 0
+```

->>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
->>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
+</hfoption>
+</hfoptions>

->>> outputs = model(**encoding) # forward pass
->>> pooled_output = outputs.pooler_output
->>> sequence_output = outputs.last_hidden_state
-```
+## Notes

-## Resources
+- CANINE skips tokenization entirely — it works directly on raw characters, not subwords. You can use it with or without a tokenizer. For batched inference and training, it is recommended to use the tokenizer to pad and truncate all sequences to the same length.

-- [Text classification task guide](../tasks/sequence_classification)
-- [Token classification task guide](../tasks/token_classification)
-- [Question answering task guide](../tasks/question_answering)
-- [Multiple choice task guide](../tasks/multiple_choice)
+    ```py
+    from transformers import AutoTokenizer, AutoModel
+
+    tokenizer = AutoTokenizer.from_pretrained("google/canine-c")
+    model = AutoModel.from_pretrained("google/canine-c")
+
+    inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
+    encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
+    outputs = model(**encoding)
+    ```
+- CANINE is primarily designed to be fine-tuned on a downstream task. The pretrained model can be used for either masked language modeling or next sentence prediction.

## CanineConfig
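The card states that CANINE operates directly on Unicode code points and can be used with or without a tokenizer. As a minimal sketch (an illustration, not taken from the card), the snippet below compares Python's `ord()` conversion with the IDs produced by [`CanineTokenizer`], which wraps the same code points in its special tokens:

```py
# Minimal sketch comparing manual code point conversion with CanineTokenizer output;
# the exact special code point values are implementation details, so they are only
# printed here rather than hard-coded.
from transformers import CanineTokenizer

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
text = "hello world"

manual_ids = [ord(char) for char in text]   # raw Unicode code points, one per character
encoded_ids = tokenizer(text)["input_ids"]  # same code points, wrapped in special tokens

print(manual_ids)
print(encoded_ids)
```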

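The card notes that CANINE is primarily designed to be fine-tuned on a downstream task, and the previous version of the card mentions that classification uses a linear layer on the final hidden state of the [CLS] code point. The sketch below illustrates this with `CanineForSequenceClassification` from Transformers; the texts, labels, and `num_labels` are made up for illustration and not part of the card.

```py
# Illustrative fine-tuning sketch with CanineForSequenceClassification;
# the texts, labels, and num_labels below are hypothetical.
import torch
from transformers import CanineTokenizer, CanineForSequenceClassification

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
model = CanineForSequenceClassification.from_pretrained("google/canine-c", num_labels=2)

texts = ["Life is like a box of chocolates.", "You never know what you gonna get."]
labels = torch.tensor([0, 1])  # hypothetical binary labels

encoding = tokenizer(texts, padding="longest", truncation=True, return_tensors="pt")
outputs = model(**encoding, labels=labels)  # classification head on the [CLS] position

outputs.loss.backward()  # a real training loop would follow, with an optimizer and more data
```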
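The previous version of the card also explains that for token classification the downsampled sequence is upsampled again to match the original character length. As a rough sanity check (assuming `CanineForTokenClassification`, which exists in Transformers, with untrained classification weights and a made-up label count), the model returns one row of logits per input character:

```py
# Sketch showing that CANINE's outputs are upsampled back to character length;
# num_labels is hypothetical and the classification head is untrained.
import torch
from transformers import CanineTokenizer, CanineForTokenClassification

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
model = CanineForTokenClassification.from_pretrained("google/canine-c", num_labels=5)

encoding = tokenizer(["hello world"], return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits

# One row of logits per input position (characters plus special code points),
# because the deep encoder's downsampled sequence is upsampled again internally.
print(encoding["input_ids"].shape, logits.shape)
```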