violetch24 committed
Commit f7c4dfd · 1 Parent(s): 0a9ad76

Update README.md

Files changed (1): README.md (+6 −6)
README.md CHANGED
@@ -3,7 +3,7 @@ license: apache-2.0
 tags:
 - int8
 - Intel® Neural Compressor
-- PostTrainingStatic
+- PostTrainingDynamic
 datasets:
 - mnli
 metrics:
@@ -14,7 +14,7 @@ metrics:
 
 ### Post-training dynamic quantization
 
-This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
+This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
 
 The original fp32 model comes from the fine-tuned model [adasnew/t5-small-xsum](https://huggingface.co/adasnew/t5-small-xsum).
 
@@ -29,11 +29,11 @@ The linear modules **lm.head**, fall back to fp32 for less than 1% relative accu
 | **Accuracy (eval-rouge1)** | 29.9008 |29.9592|
 | **Model size** |154M|242M|
 
-### Load with Intel® Neural Compressor:
+### Load with optimum:
 
 ```python
-from neural_compressor.utils.load_huggingface import OptimizedModel
-int8_model = OptimizedModel.from_pretrained(
-    'Intel/roberta-base-squad2-int8-static',
+from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSeq2SeqLM
+int8_model = IncQuantizedModelForSeq2SeqLM.from_pretrained(
+    'Intel/t5-small-xsum-int8-dynamic',
 )
 ```
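For context, a minimal usage sketch of the new loading path. The model ID and the `IncQuantizedModelForSeq2SeqLM` import come from the diff above; the `summarize:` task prefix, the `build_input` helper, and the generation settings are assumptions based on common T5 conventions, not part of this commit.

```python
def build_input(article: str) -> str:
    # T5-family models expect a task prefix on the input text;
    # "summarize: " is the usual convention for summarization
    # (an assumption here, not stated in the diff).
    return "summarize: " + article.strip()

if __name__ == "__main__":
    # These imports require `transformers` and `optimum` with the
    # neural-compressor extra installed; the model downloads on first use.
    from transformers import AutoTokenizer
    from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSeq2SeqLM

    model_id = "Intel/t5-small-xsum-int8-dynamic"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    int8_model = IncQuantizedModelForSeq2SeqLM.from_pretrained(model_id)

    # Tokenize the prefixed article and generate a short summary.
    inputs = tokenizer(build_input("Long news article text ..."),
                       return_tensors="pt", truncation=True)
    summary_ids = int8_model.generate(**inputs, max_length=60)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```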