* Moved the sources to the right
* Small changes
* Some changes to Moonshine
* Added the install to the pipeline
* Updated the Moonshine model card
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Updated documentation according to changes
* Fixed the model with the commits
* Changes to the roc_bert
* Final Update to the branch
* Adds quantization to the model
* Finished fixing the roc_bert docs
* Fixed Moshi
* Fixed Problems
* Fixed Problems
* Fixed Problems
* Fixed Problems
* Fixed Problems
* Fixed Problems
* Added the install to the pipeline
* Updated the Moonshine model card
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/moonshine.md
Co-authored-by: Steven Liu <[email protected]>
* Updated documentation according to changes
* Fixed the model with the commits
* Fixed the problems
* Final Fix
* Final Fix
* Final Fix
* Update roc_bert.md
---------
Co-authored-by: Your Name <[email protected]>
Co-authored-by: Your Name <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
[RoCBert](https://aclanthology.org/2022.acl-long.65.pdf) is a pretrained Chinese [BERT](./bert) model designed to be robust against adversarial attacks such as typo and synonym substitutions. It is pretrained with a contrastive learning objective that aligns normal and adversarial text examples across the semantic, phonetic, and visual features of Chinese characters, which makes RoCBert more robust to input manipulation.
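The contrastive objective pulls a sentence's representation toward those of its adversarial variants and away from other examples in the batch. Schematically, in a generic InfoNCE-style form (an illustration of the idea, not the paper's exact notation):

$$\mathcal{L}_{\text{ctr}} = -\sum_{i} \log \frac{\exp\big(\mathrm{sim}(h_i, h_i^{\text{adv}})/\tau\big)}{\sum_{j} \exp\big(\mathrm{sim}(h_i, h_j^{\text{adv}})/\tau\big)}$$

where $h_i$ and $h_i^{\text{adv}}$ are the representations of an example and its adversarial version, $\mathrm{sim}$ is a similarity function, and $\tau$ is a temperature.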

You can find all the original RoCBert checkpoints under the [weiweishi](https://huggingface.co/weiweishi) profile.

> [!TIP]
> This model was contributed by [weiweishi](https://huggingface.co/weiweishi).
>
> Click on the RoCBert models in the right sidebar for more examples of how to apply RoCBert to different Chinese language tasks.

The example below demonstrates how to predict the [MASK] token with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="weiweishi/roc-bert-base-zh",
    torch_dtype=torch.float16,
    device=0
)
pipeline("這家餐廳的拉麵是我[MASK]過的最好的拉麵之一")
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
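
# A minimal sketch of the usual fill-mask flow with AutoModelForMaskedLM;
# the steps below are illustrative, assuming the standard pattern
# (checkpoint reused from the Pipeline example above).
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = AutoModelForMaskedLM.from_pretrained(
    "weiweishi/roc-bert-base-zh",
    torch_dtype=torch.float16,
    device_map="auto",  # assumes a CUDA device for float16
)

# RoCBert's tokenizer also returns shape and pronunciation ids,
# which model(**inputs) forwards automatically
inputs = tokenizer("這家餐廳的拉麵是我[MASK]過的最好的拉麵之一", return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

# decode the highest-scoring token at the [MASK] position
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))
```

</hfoption>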