---
language: en
license: apache-2.0
tags:
- text-classification
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- sst2
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-int8-static
  results:
  - task:
      type: sentiment-classification
      name: Sentiment Classification
    dataset:
      type: sst2
      name: Stanford Sentiment Treebank
    metrics:
    - type: accuracy
      value: 90.37
      name: accuracy
      config: accuracy
      verified: false
---
# Model Details: INT8 DistilBERT base uncased finetuned SST-2

This model was produced by applying post-training static quantization (INT8) to the fine-tuned FP32 model [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). The same model is provided in two formats: PyTorch and ONNX.
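For reference, below is a minimal sketch of how a comparable INT8 model can be produced with Optimum Intel and Intel® Neural Compressor. It uses the `INCQuantizer` API from recent `optimum-intel` releases; API names have changed across versions, so treat this as illustrative rather than the exact recipe used for this repository:

```python
from functools import partial

from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess(examples, tokenizer):
    return tokenizer(
        examples["sentence"], padding="max_length", max_length=128, truncation=True
    )

# Static quantization calibrates activation ranges on a small data sample
quantizer = INCQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess, tokenizer=tokenizer),
    num_samples=100,  # the default sampling size noted under Preprocessing below
    dataset_split="train",
)
quantizer.quantize(
    quantization_config=PostTrainingQuantConfig(approach="static"),
    calibration_dataset=calibration_dataset,
    save_directory="distilbert-sst2-int8-static",
)
```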
| Model Detail | Description |
| --- | --- |
| Model Authors - Company | Intel |
| Date | March 29, 2022 for the PyTorch model; February 3, 2023 for the ONNX model |
| Version | 1 |
| Type | NLP DistilBERT (INT8) - Sentiment Classification (+/-) |
| Paper or Other Resources | https://github.com/huggingface/optimum-intel |
| License | Apache 2.0 |
| Questions or Comments | Community Tab and Intel Developers Discord |
| Intended Use | Description |
| --- | --- |
| Primary intended uses | Inference for sentiment classification (classifying whether a statement is positive or negative) |
| Primary intended users | Anyone |
| Out-of-scope uses | This model is already fine-tuned and quantized to INT8. It is not suitable for further fine-tuning in this form. To fine-tune your own model, you can start with [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). |
## Load PyTorch model with Optimum

```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

# Load the INT8 PyTorch model quantized with Intel® Neural Compressor
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
)
```
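Once loaded, the quantized model behaves like any other `transformers` sequence-classification model. A minimal inference sketch, assuming the tokenizer is available from the same repository (otherwise the tokenizer of the original FP32 model works):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
)

inputs = tokenizer("A gorgeous, witty, seductive movie.", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits

# Map the highest logit back to POSITIVE/NEGATIVE via the model config
print(int8_model.config.id2label[int(logits.argmax(dim=-1))])
```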
## Load ONNX model

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the INT8 ONNX model and run it through ONNX Runtime
model = ORTModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
)
```
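The ONNX model also plugs into the standard `transformers` pipeline API. A minimal sketch, assuming `optimum[onnxruntime]` is installed and the repository ships a tokenizer:

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
)
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("He's a disaster as a director."))
# e.g. [{'label': 'NEGATIVE', 'score': ...}]
```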
| Factors | Description |
| --- | --- |
| Groups | Movie reviewers from the internet |
| Instrumentation | Single-sentence movie reviews taken from 4 authors. More information can be found in the original paper by Pang and Lee (2005) |
| Environment | - |
| Card Prompts | Model deployment on alternate hardware and software can change model performance |
| Metrics | Description |
| --- | --- |
| Model performance measures | Accuracy |
| Decision thresholds | - |
| Approaches to uncertainty and variability | - |
| | PyTorch INT8 | ONNX INT8 | FP32 |
| --- | --- | --- | --- |
| Accuracy (eval-accuracy) | 0.9037 | 0.9060 | 0.9106 |
| Model Size (MB) | 65 | 80 | 255 |
| Training and Evaluation Data | Description |
| --- | --- |
| Datasets | The dataset can be found here: [datasets/sst2](https://huggingface.co/datasets/sst2). The dataset has a total of 215,154 unique phrases, annotated by 3 human judges. |
| Motivation | Dataset was chosen to showcase the benefits of quantization on an NLP classification task with Optimum Intel and Intel® Neural Compressor. |
| Preprocessing | The calibration dataloader is the train dataloader. The default calibration sampling size of 100 isn't exactly divisible by the batch size of 8, so the real sampling size is 104 (see the sketch below). |
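The effective calibration size follows from rounding the requested sampling size up to a whole number of batches. A small illustration of that arithmetic (the helper below is hypothetical, not a library function):

```python
import math

def effective_calibration_size(requested: int, batch_size: int) -> int:
    """Round the requested sampling size up to a whole number of batches."""
    return math.ceil(requested / batch_size) * batch_size

# 100 requested samples at batch size 8 -> 13 batches -> 104 samples
assert effective_calibration_size(100, 8) == 104
```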
| Quantitative Analyses | Description |
| --- | --- |
| Unitary results | The model was only evaluated on accuracy. There is no available comparison between evaluation factors. |
| Intersectional results | There is no available comparison between the intersection of evaluated factors. |
| Ethical Considerations | Description |
| --- | --- |
| Data | The data used to build the model are movie reviews from authors on the internet. |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of movie reviews from the internet. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | The data are biased toward the opinions of the particular reviewers and of the judges who labeled the data. The extent of the risks involved in using the model was considered but remains unknown. |
| Use cases | - |
| Caveats and Recommendations |
| --- |
| There are no additional caveats or recommendations for this model. |
## BibTeX Entry and Citation Info

```bibtex
@misc{distilbert-base-uncased-finetuned-sst-2-english-int8-static,
  author = {Xin He and Yu Wenz},
  title = {distilbert-base-uncased-finetuned-sst-2-english-int8-static},
  year = {2022},
  url = {https://huggingface.co/Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static},
}
```