AlexKoff88 committed on
Commit aa81b56
1 Parent(s): 994a4c4

Update README.md

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -1,5 +1,12 @@
 ---
 license: apache-2.0
+datasets:
+- mnli
+metrics:
+- accuracy
+tags:
+- sequence-classification
+- int8
 ---
 # Quantized BERT-base MNLI model with 90% unstructured sparsity
 The pruned and quantized model in the OpenVINO IR. The pruned model was taken from this [source](https://huggingface.co/neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli) and quantized with the code below using HF Optimum for OpenVINO:
@@ -20,7 +27,7 @@ def preprocess_function(examples, tokenizer):
 # Load the default quantization configuration detailing the quantization we wish to apply
 quantization_config = OVConfig()
 # Instantiate our OVQuantizer using the desired configuration
-quantizer = OVQuantizer.from_pretrained(model)
+quantizer = OVQuantizer.from_pretrained(model, feature="sequence-classification")
 # Create the calibration dataset used to perform static quantization
 
 calibration_dataset = quantizer.get_calibration_dataset(
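
For context, the hunks above only show fragments of the README's quantization walkthrough. Below is a minimal, self-contained sketch of that flow, assuming the optimum-intel OpenVINO API the snippet uses (`OVConfig`, `OVQuantizer.from_pretrained`, `get_calibration_dataset`, `quantize`); the preprocessing settings, sample count, and save directory are illustrative assumptions, and exact argument names may differ between optimum-intel versions.

```python
# Hedged sketch of the quantization flow the README describes; not the exact README code.
from functools import partial

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.intel.openvino import OVConfig, OVQuantizer  # import path may vary by optimum-intel version

model_id = "neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_function(examples, tokenizer):
    # MNLI examples are premise/hypothesis pairs; tokenization settings here are illustrative.
    return tokenizer(examples["premise"], examples["hypothesis"],
                     padding="max_length", max_length=128, truncation=True)

# Load the default quantization configuration (8-bit static quantization).
quantization_config = OVConfig()
# The feature argument added in this commit tells the quantizer which task head to export.
quantizer = OVQuantizer.from_pretrained(model, feature="sequence-classification")

# Create a small MNLI calibration subset used to collect activation statistics.
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="mnli",
    preprocess_function=partial(preprocess_function, tokenizer=tokenizer),
    num_samples=100,
    dataset_split="train",
)

# Apply static quantization and save the OpenVINO IR. Older optimum-intel releases
# accept quantization_config here; newer ones may configure it differently.
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,
    save_directory="quantized_model",
)
```

The exported IR in `quantized_model` should then be loadable for inference with optimum-intel's `OVModelForSequenceClassification.from_pretrained("quantized_model")`.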