roberta-base-openai-detector
RoBERTa Base OpenAI Detector is a GPT-2 output detector: a RoBERTa base model fine-tuned on the outputs of the 1.5B-parameter GPT-2 model, used to predict whether a piece of text was generated by GPT-2. OpenAI released the detector alongside the weights of the largest GPT-2 model, the 1.5B-parameter version.
The model is a sequence classifier based on RoBERTa base: it starts from the pretrained RoBERTa base weights and is then fine-tuned on the outputs of the 1.5B GPT-2 model.
According to the model developers, they built a sequence classifier based on RoBERTa base (125 million parameters) and fine-tuned it to distinguish outputs of the 1.5B GPT-2 model from WebText, the dataset used to train GPT-2. To make the detector robust to generated text produced with different sampling methods, they also analyzed the model's transfer performance. Further details on the training procedure are available in the associated paper.
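As a minimal usage sketch (assuming the huggingface_model_id listed below, roberta-base-openai-detector, and the Hugging Face transformers library), the detector can be run as a standard text-classification pipeline; the input sentence is a placeholder, and the label names follow the sample output shown later in this card.

```python
from transformers import pipeline

# Load the detector from the Hugging Face Hub (model id as listed in this card).
detector = pipeline("text-classification", model="roberta-base-openai-detector")

# Classify a placeholder sentence; return_all_scores=True returns a score per label,
# mirroring the "params" used in the sample request later in this card.
result = detector("The quick brown fox jumps over the lazy dog.", return_all_scores=True)
print(result)
# Expected shape: one list of {label, score} dicts per input, e.g.
# [[{'label': 'Fake', 'score': ...}, {'label': 'Real', 'score': ...}]]
```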
Evaluation details extracted from the associated paper are as follows:
The model's primary purpose is to detect text generated by GPT-2 models. To assess its performance, the model developers measure accuracy on held-out text:
510-token test examples, comprising 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, none of which were used during training.
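As a rough illustration only (not the developers' evaluation code), accuracy on such a balanced set could be computed by running the detector over labeled texts and comparing the predicted label with the ground truth; the texts below are placeholders, and "Real" is assumed to be the label for human-written text.

```python
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

# Placeholder examples: "Fake" marks GPT-2-generated text (as in the sample output
# below); "Real" is assumed to be the label for human-written WebText samples.
labeled_texts = [
    ("A human-written passage sampled from WebText ...", "Real"),
    ("A passage sampled from the 1.5B GPT-2 model ...", "Fake"),
]

correct = 0
for text, gold in labeled_texts:
    # Truncate long inputs to the model's maximum sequence length.
    pred = detector(text, truncation=True)[0]["label"]
    correct += pred == gold

print(f"accuracy = {correct / len(labeled_texts):.3f}")
```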
In the associated paper, the model developers acknowledge the concern that the model could be used by malicious actors to develop methods for evading detection; however, one of the primary reasons for releasing it is to advance detection research.
In a related blog post, the model developers discuss the limitations of automated methods for detecting synthetic text and emphasize the need to pair automated detection tools with non-automated approaches. They state:
“Our in-house detection research led to the development of a detection model with approximately 95% accuracy in detecting 1.5B GPT-2-generated text. While this accuracy rate is commendable, it is not sufficient for standalone detection. To enhance effectiveness, it should be complemented with metadata-based approaches, human judgment, and public education.”
The model developers also found that content from larger models is harder to classify, suggesting that detection with automated tools like this one will become more difficult as model sizes grow. The authors propose that training detector models on the outputs of larger models can improve detection accuracy and robustness.
Significant research has explored bias and fairness issues with language models; see, for example, Sheng et al. (2021) and Bender et al. (2021). Predictions generated by RoBERTa base and GPT-2 1.5B, the models on which this detector is built and fine-tuned, may perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive social and occupational groups. For more information, see the RoBERTa base and GPT-2 XL model cards; the developers of this model discuss these issues further in their research paper.
| Task | Use case | Dataset | Python sample | CLI with YAML |
|---|---|---|---|---|
| Text Classification | Detecting GPT2 Output | GPT2-Outputs | evaluate-model-text-classification.ipynb | evaluate-model-text-classification.yml |
| Inference type | Python sample |
|---|---|
| Real time | text-classification-online-endpoint.ipynb |
| Batch | entailment-contradiction-batch.ipynb |
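As a sketch of real-time deployment (assuming the azure-ai-ml v2 Python SDK; the endpoint name, deployment name, and workspace identifiers are placeholders, and the instance type is one of the SKUs from the inference list below), the registry model can be deployed to a managed online endpoint like this:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

# Placeholder workspace identifiers.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Create a managed online endpoint (name is a placeholder).
endpoint = ManagedOnlineEndpoint(name="gpt2-detector-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy version 14 of the registry model on a SKU from the inference allow list.
deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name="gpt2-detector-endpoint",
    model="azureml://registries/azureml/models/roberta-base-openai-detector/versions/14",
    instance_type="Standard_DS4_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```

Once deployed, the endpoint accepts a JSON payload like the sample input below and returns per-input label/score pairs like the sample output that follows.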
{
  "input_data": ["I like you. I love you", "Today was a horrible day"],
  "params": {
    "return_all_scores": true
  }
}
[
  {
    "label": "Fake",
    "score": 0.881293773651123
  },
  {
    "label": "Fake",
    "score": 0.9996414184570312
  }
]
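The sample request above could be sent to the deployed endpoint with the same SDK's invoke method, a sketch of which follows; the file path and the endpoint and deployment names are the placeholders used in the deployment sketch.

```python
import json

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder workspace identifiers, as in the deployment sketch above.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Write the sample request payload shown above to a local file.
payload = {
    "input_data": ["I like you. I love you", "Today was a horrible day"],
    "params": {"return_all_scores": True},
}
with open("sample_input.json", "w") as f:
    json.dump(payload, f)

# Score the payload against the endpoint and deployment created earlier.
response = ml_client.online_endpoints.invoke(
    endpoint_name="gpt2-detector-endpoint",
    deployment_name="default",
    request_file="sample_input.json",
)
print(response)
```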
Version: 14
license : mit
task : text-classification
SharedComputeCapacityEnabled
hiddenlayerscanned
training_datasets : bookcorpus, wikipedia
huggingface_model_id : roberta-base-openai-detector
evaluation_compute_allow_list : ['Standard_DS4_v2', 'Standard_D8a_v4', 'Standard_D8as_v4', 'Standard_DS5_v2', 'Standard_DS12_v2', 'Standard_D16a_v4', 'Standard_D16as_v4', 'Standard_D32a_v4', 'Standard_D32as_v4', 'Standard_D48a_v4', 'Standard_D48as_v4', 'Standard_D64a_v4', 'Standard_D64as_v4', 'Standard_D96a_v4', 'Standard_D96as_v4', 'Standard_FX4mds', 'Standard_FX12mds', 'Standard_F16s_v2', 'Standard_F32s_v2', 'Standard_F48s_v2', 'Standard_F64s_v2', 'Standard_F72s_v2', 'Standard_FX24mds', 'Standard_FX36mds', 'Standard_FX48mds', 'Standard_E4s_v3', 'Standard_E8s_v3', 'Standard_E16s_v3', 'Standard_E32s_v3', 'Standard_E48s_v3', 'Standard_E64s_v3', 'Standard_NC4as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC8as_T4_v3', 'Standard_NC12s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC24s_v3', 'Standard_NC64as_T4_v3', 'Standard_NC24ads_A100_v4', 'Standard_NC48ads_A100_v4', 'Standard_NC96ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_ND96amsr_A100_v4', 'Standard_ND40rs_v2']
inference_compute_allow_list : ['Standard_DS4_v2', 'Standard_D8a_v4', 'Standard_D8as_v4', 'Standard_DS5_v2', 'Standard_D16a_v4', 'Standard_D16as_v4', 'Standard_D32a_v4', 'Standard_D32as_v4', 'Standard_D48a_v4', 'Standard_D48as_v4', 'Standard_D64a_v4', 'Standard_D64as_v4', 'Standard_D96a_v4', 'Standard_D96as_v4', 'Standard_F4s_v2', 'Standard_FX4mds', 'Standard_F8s_v2', 'Standard_FX12mds', 'Standard_F16s_v2', 'Standard_F32s_v2', 'Standard_F48s_v2', 'Standard_F64s_v2', 'Standard_F72s_v2', 'Standard_FX24mds', 'Standard_FX36mds', 'Standard_FX48mds', 'Standard_E2s_v3', 'Standard_E4s_v3', 'Standard_E8s_v3', 'Standard_E16s_v3', 'Standard_E32s_v3', 'Standard_E48s_v3', 'Standard_E64s_v3', 'Standard_NC4as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC8as_T4_v3', 'Standard_NC12s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC24s_v3', 'Standard_NC64as_T4_v3', 'Standard_NC24ads_A100_v4', 'Standard_NC48ads_A100_v4', 'Standard_NC96ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_ND96amsr_A100_v4', 'Standard_ND40rs_v2']
View in Studio: https://ml.azure.com/registries/azureml/models/roberta-base-openai-detector/version/14
License: mit
SharedComputeCapacityEnabled: True
SHA: f5444000d615d1366ab9432a981035c58c57d55f
evaluation-min-sku-spec: 4|0|28|56
inference-min-sku-spec: 2|0|8|32
evaluation-recommended-sku: Standard_DS4_v2, Standard_D8a_v4, Standard_D8as_v4, Standard_DS5_v2, Standard_DS12_v2, Standard_D16a_v4, Standard_D16as_v4, Standard_D32a_v4, Standard_D32as_v4, Standard_D48a_v4, Standard_D48as_v4, Standard_D64a_v4, Standard_D64as_v4, Standard_D96a_v4, Standard_D96as_v4, Standard_FX4mds, Standard_FX12mds, Standard_F16s_v2, Standard_F32s_v2, Standard_F48s_v2, Standard_F64s_v2, Standard_F72s_v2, Standard_FX24mds, Standard_FX36mds, Standard_FX48mds, Standard_E4s_v3, Standard_E8s_v3, Standard_E16s_v3, Standard_E32s_v3, Standard_E48s_v3, Standard_E64s_v3, Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC8as_T4_v3, Standard_NC12s_v3, Standard_NC16as_T4_v3, Standard_NC24s_v3, Standard_NC64as_T4_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2
inference-recommended-sku: Standard_DS4_v2, Standard_D8a_v4, Standard_D8as_v4, Standard_DS5_v2, Standard_D16a_v4, Standard_D16as_v4, Standard_D32a_v4, Standard_D32as_v4, Standard_D48a_v4, Standard_D48as_v4, Standard_D64a_v4, Standard_D64as_v4, Standard_D96a_v4, Standard_D96as_v4, Standard_F4s_v2, Standard_FX4mds, Standard_F8s_v2, Standard_FX12mds, Standard_F16s_v2, Standard_F32s_v2, Standard_F48s_v2, Standard_F64s_v2, Standard_F72s_v2, Standard_FX24mds, Standard_FX36mds, Standard_FX48mds, Standard_E2s_v3, Standard_E4s_v3, Standard_E8s_v3, Standard_E16s_v3, Standard_E32s_v3, Standard_E48s_v3, Standard_E64s_v3, Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC8as_T4_v3, Standard_NC12s_v3, Standard_NC16as_T4_v3, Standard_NC24s_v3, Standard_NC64as_T4_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2
languages: en