

github-actions[bot] edited this page Jun 16, 2023 · 20 revisions

distilbert-base-uncased-finetuned-sst-2-english

Overview

Description: This is a fine-tuned version of DistilBERT-base-uncased, trained on SST-2, which reaches 91.3% accuracy on the dev set. Developed by Hugging Face, it is mainly intended for topic classification and can be fine-tuned on downstream tasks. Keep in mind that it carries certain biases, such as biased predictions for underrepresented populations, and that it should not be used to create hostile or alienating environments for people. The authors used the Stanford Sentiment Treebank (SST-2) corpus to train the model. It is recommended to evaluate the risks of this model by probing it thoroughly with bias evaluation datasets such as WinoBias, WinoGender, and StereoSet.

> The above summary was generated using ChatGPT. Review the original model card to understand the data used to train the model, evaluation metrics, license, intended uses, limitations and bias before using the model.

### Inference samples

Inference type|Python sample (Notebook)|CLI with YAML
--|--|--
Real time|text-classification-online-endpoint.ipynb|text-classification-online-endpoint.sh
Batch|entailment-contradiction-batch.ipynb|coming soon

### Finetuning samples

Task|Use case|Dataset|Python sample (Notebook)|CLI with YAML
--|--|--|--|--
Text Classification|Emotion Detection|Emotion|emotion-detection.ipynb|emotion-detection.sh
Token Classification|Named Entity Recognition|Conll2003|named-entity-recognition.ipynb|named-entity-recognition.sh
Question Answering|Extractive Q&A|SQUAD (Wikipedia)|extractive-qa.ipynb|extractive-qa.sh

### Model Evaluation

Task|Use case|Dataset|Python sample (Notebook)|CLI with YAML
--|--|--|--|--
Text Classification|Sentiment Classification|SST2|evaluate-model-sentiment-analysis.ipynb|evaluate-model-sentiment-analysis.yml

### Sample inputs and outputs (for real-time inference)

#### Sample input

```json
{
    "inputs": {
        "input_string": ["Today was an amazing day!", "It was an unfortunate series of events."]
    }
}
```

#### Sample output

```json
[
    { "0": "POSITIVE" },
    { "0": "NEGATIVE" }
]
```
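A minimal sketch of building the request body and flattening the response shape shown above. This is an illustration only, not the Azure ML SDK: `build_request` and `parse_response` are hypothetical helper names, and the endpoint call itself (scoring URI, auth key) is deployment-specific and omitted.

```python
import json


def build_request(texts):
    """Build the JSON body in the schema shown in the sample input above."""
    return json.dumps({"inputs": {"input_string": list(texts)}})


def parse_response(body):
    """Flatten the sample-output shape [{"0": "POSITIVE"}, ...] into a
    plain list of labels, one per input string."""
    return [label for item in json.loads(body) for label in item.values()]


body = build_request([
    "Today was an amazing day!",
    "It was an unfortunate series of events.",
])
labels = parse_response('[{"0": "POSITIVE"}, {"0": "NEGATIVE"}]')
print(labels)  # ['POSITIVE', 'NEGATIVE']
```

You would POST `body` to your deployed online endpoint's scoring URI (see text-classification-online-endpoint.ipynb for the end-to-end flow).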

Version: 6

Tags

Preview · license: apache-2.0 · task: text-classification

View in Studio: https://ml.azure.com/registries/azureml/models/distilbert-base-uncased-finetuned-sst-2-english/version/6

License: apache-2.0

Properties

SHA: 3d65bad49c7ba6f71920504507a8927f4b9db6c0

datasets: sst2, glue

evaluation-min-sku-spec: 2|0|7|14

evaluation-recommended-sku: Standard_DS2_v2

finetune-min-sku-spec: 4|1|28|176

finetune-recommended-sku: Standard_NC24rs_v3

finetuning-tasks: text-classification, token-classification, question-answering

inference-min-sku-spec: 2|0|7|14

inference-recommended-sku: Standard_DS2_v2

languages: en
