---
language:
  - en
license: mit
tags:
  - text-classification
  - int8
  - PostTrainingStatic
datasets:
  - glue
metrics:
  - f1
model-index:
  - name: roberta-base-mrpc-int8-static
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE MRPC
          type: glue
          args: mrpc
        metrics:
          - name: F1
            type: f1
            value: 0.924693520140105
---

# INT8 roberta-base-mrpc

## Post-training static quantization

This is an INT8 PyTorch model quantized with Intel® Neural Compressor.

The original FP32 model comes from the fine-tuned model roberta-base-mrpc.

The calibration dataloader is the train dataloader. Since the default calibration sampling size of 300 is not exactly divisible by the batch size of 8, the real sampling size is rounded up to 304 (38 batches × 8).
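
For reference, a minimal sketch of how such a run could look with the Intel® Neural Compressor 2.x post-training quantization API; `model`, `train_dataloader`, and `eval_func` are placeholder names, not taken from this card:

```python
# Minimal sketch (assumed INC 2.x API): post-training static quantization
# calibrated on the train dataloader. `model`, `train_dataloader`, and
# `eval_func` are placeholders, not artifacts of this repository.
from neural_compressor import PostTrainingQuantConfig, quantization

conf = PostTrainingQuantConfig(
    approach="static",                  # post-training static quantization
    calibration_sampling_size=[300],    # rounded up to 304 = 38 batches x 8
)
q_model = quantization.fit(
    model,                              # fine-tuned FP32 roberta-base-mrpc
    conf,
    calib_dataloader=train_dataloader,  # calibration uses the train dataloader
    eval_func=eval_func,                # optional accuracy-driven tuning
)
q_model.save("./saved_results")
```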

The embedding module roberta.embeddings.token_type_embeddings falls back to FP32 because of the unexpected exception RuntimeError('Expect weight, indices, and offsets to be contiguous.').
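
INC applies such fallbacks automatically when an op fails to quantize, but the same effect could be pinned down explicitly; a hedged sketch, assuming the INC 2.x `op_name_dict` config format:

```python
# Hedged sketch: force roberta.embeddings.token_type_embeddings to stay FP32
# via op_name_dict (assumed INC 2.x config format).
from neural_compressor import PostTrainingQuantConfig

conf = PostTrainingQuantConfig(
    approach="static",
    op_name_dict={
        "roberta.embeddings.token_type_embeddings": {
            "weight": {"dtype": ["fp32"]},
            "activation": {"dtype": ["fp32"]},
        }
    },
)
```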

## Test result

|                          | INT8   | FP32   |
|--------------------------|--------|--------|
| Throughput (samples/sec) | 25.737 | 13.171 |
| Accuracy (eval-f1)       | 0.9247 | 0.9138 |
| Model size (MB)          | 121    | 476    |

Load with Intel® Neural Compressor (built from source):

```python
from neural_compressor.utils.load_huggingface import OptimizedModel

int8_model = OptimizedModel.from_pretrained(
    'Intel/roberta-base-mrpc-int8-static',
)
```
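
Once loaded, the model should behave like a regular transformers sequence-classification model; a usage sketch, assuming the repository ships its tokenizer files (otherwise load the tokenizer from the original roberta-base-mrpc):

```python
# Usage sketch: MRPC paraphrase classification with the loaded INT8 model.
# Assumes tokenizer files are in the repo; the sentence pair is illustrative.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/roberta-base-mrpc-int8-static')
inputs = tokenizer(
    "The company reported record profits.",
    "Record profits were reported by the company.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 = paraphrase, 0 = not, per MRPC labels
```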

Notes:

  • The INT8 model has better performance than the FP32 model only when the CPU is fully occupied; otherwise the measurement gives the illusion that INT8 is inferior to FP32. A throughput-measurement sketch follows below.
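
One way to measure throughput under sustained CPU load; this is an illustrative timing loop, not the benchmark script behind the table above:

```python
# Illustrative throughput measurement (not the card's benchmark script):
# time repeated forward passes of one batch and report samples/sec.
import time
import torch

def measure_throughput(model, batch, iters=100):
    model.eval()
    with torch.no_grad():
        for _ in range(10):                 # warm-up iterations
            model(**batch)
        start = time.perf_counter()
        for _ in range(iters):
            model(**batch)
        elapsed = time.perf_counter() - start
    batch_size = next(iter(batch.values())).shape[0]
    return iters * batch_size / elapsed     # samples per second
```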