---
language:
- en
license: mit
tags:
- text-classification
- int8
- PostTrainingDynamic
datasets:
- glue
metrics:
- f1
model-index:
- name: camembert-base-mrpc-int8-dynamic
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MRPC
      type: glue
      args: mrpc
    metrics:
    - name: F1
      type: f1
      value: 0.8842832469775476
---
# INT8 camembert-base-mrpc

### Post-training dynamic quantization

This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).

The original FP32 model comes from the fine-tuned model [camembert-base-mrpc](https://huggingface.co/Intel/camembert-base-mrpc).

The linear module **roberta.encoder.layer.6.attention.self.query** falls back to FP32 to keep the relative accuracy loss within 1%.
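
For illustration only, here is a minimal sketch of post-training dynamic quantization using PyTorch's generic `torch.quantization.quantize_dynamic` API. This is not the Intel® Neural Compressor accuracy-aware tuning flow that produced this model (which also selected the per-layer FP32 fallback automatically):

```python
import torch
from transformers import AutoModelForSequenceClassification

# Hypothetical illustration using PyTorch's built-in API, not the
# Intel Neural Compressor recipe behind this checkpoint.
fp32_model = AutoModelForSequenceClassification.from_pretrained(
    "Intel/camembert-base-mrpc"
)

# Post-training dynamic quantization: Linear weights are converted to
# INT8 ahead of time; activations are quantized on the fly at inference.
quantized_model = torch.quantization.quantize_dynamic(
    fp32_model, {torch.nn.Linear}, dtype=torch.qint8
)
```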

### Test result

- Batch size = 8
- [Amazon Web Services](https://aws.amazon.com/) c6i.xlarge instance (Intel Ice Lake: 4 vCPUs, 8 GB memory)

|   | INT8 | FP32 |
|---|:---:|:---:|
| **Throughput (samples/sec)** | 24.745 | 13.078 |
| **Accuracy (eval-f1)** | 0.8843 | 0.8928 |
| **Model size (MB)** | 180 | 422 |

### Load with Intel® Neural Compressor (built from source):

```python
from neural_compressor.utils.load_huggingface import OptimizedModel

# Load the INT8 model directly from the Hugging Face Hub
int8_model = OptimizedModel.from_pretrained(
    'Intel/camembert-base-mrpc-int8-dynamic',
)
```
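
As a quick usage sketch (hypothetical; it assumes the tokenizer of the original FP32 model is compatible, which is the usual case for Neural-Compressor-quantized checkpoints):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/camembert-base-mrpc')

# MRPC is a paraphrase task: classify whether two sentences are equivalent.
inputs = tokenizer(
    "The company's shares rose 3% on Monday.",
    "Shares of the company gained 3% at the start of the week.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 -> paraphrase, 0 -> not a paraphrase
```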

Notes:
- The INT8 model outperforms the FP32 model only when the CPU is fully utilized; under light load, benchmarks can give the illusion that INT8 is inferior to FP32.
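
To get a rough throughput number comparable to the table above, a hypothetical benchmarking sketch (reusing `tokenizer` and `int8_model` from the snippets above; the actual measurement harness may differ):

```python
import time
import torch

# Rough throughput measurement at batch size 8; run with the CPU
# otherwise busy to reproduce the conditions noted above.
batch = tokenizer(
    ["The company's shares rose 3% on Monday."] * 8,
    ["Shares of the company gained 3% at the start of the week."] * 8,
    padding=True,
    return_tensors="pt",
)

with torch.no_grad():
    for _ in range(10):  # warm-up iterations
        int8_model(**batch)
    n_iters = 100
    start = time.perf_counter()
    for _ in range(n_iters):
        int8_model(**batch)
    elapsed = time.perf_counter() - start

print(f"Throughput: {n_iters * 8 / elapsed:.3f} samples/sec")
```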