---
language: en
license: apache-2.0
datasets:
- sst2
- glue
metrics:
- accuracy
tags:
- text-classification
- neural-compressor
- int8
---

# Dynamically quantized and pruned DistilBERT base uncased finetuned SST-2

## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details

**Model Description:** This model is a [DistilBERT base uncased model fine-tuned on SST-2](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) that was dynamically quantized and pruned with a magnitude pruning strategy, reaching 10% sparsity, using [optimum-intel](https://github.com/huggingface/optimum-intel) and [Intel® Neural Compressor](https://github.com/intel/neural-compressor). An illustrative sketch of this workflow is given after the list below.

- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out [this](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model card.
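
The dynamic quantization and magnitude pruning were applied through the [Intel® Neural Compressor](https://github.com/intel/neural-compressor) integration in [optimum-intel](https://github.com/huggingface/optimum-intel). As a minimal illustration only, and not the exact recipe used to produce this checkpoint, the sketch below shows how a fine-tuned model could be dynamically quantized post-training with the `INCQuantizer` API; the magnitude pruning step is applied separately during training through a pruning configuration and is omitted here.

```python
# Illustrative sketch only: dynamic post-training quantization with optimum-intel.
# This is not necessarily the configuration used to produce this checkpoint.
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

# Dynamic quantization does not require a calibration dataset
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=quantization_config,
    save_directory="distilbert-sst2-int8-dynamic",
)
```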

## How to Get Started With the Model

To load the quantized model and run inference using the Transformers [pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines), you can do the following:

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification

model_id = "echarlaix/distilbert-sst2-inc-dynamic-quantization-magnitude-pruning-0.1"

# Load the quantized model and its tokenizer from the Hugging Face Hub
model = IncQuantizedModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a text-classification pipeline and run inference
cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = cls_pipe(text)
print(outputs)
```
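
The pipeline returns a list with one dictionary per input, each containing a predicted `label` (`POSITIVE` or `NEGATIVE` for SST-2) and a confidence `score`.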