---
language:
  - en
license: mit
datasets:
  - glue
metrics:
  - f1
model-index:
  - name: electra-small-discriminator-mrpc-int8-static
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE MRPC
          type: glue
          args: mrpc
        metrics:
          - name: F1
            type: f1
            value: 0.900709219858156
---

# INT8 electra-small-discriminator-mrpc

## Post-training static quantization

This is an INT8 PyTorch model quantized with Intel® Neural Compressor.

The original FP32 model comes from the fine-tuned model electra-small-discriminator-mrpc.

The calibration dataloader is the train dataloader. The default calibration sampling size of 300 isn't exactly divisible by the batch size of 8, so the real sampling size is rounded up to 304 (38 batches of 8 samples).
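
The exact quantization recipe isn't included in this card. As a minimal sketch only, post-training static quantization with Intel® Neural Compressor's 2.x API could look like the following; the FP32 model id, the collate function, and the calibration dataloader setup are illustrative assumptions, not the script used to produce this model.

```python
# Sketch of post-training static quantization with Intel Neural Compressor's 2.x API.
# This is NOT the published recipe for this card; the model id, collate function, and
# dataloader details below are illustrative assumptions.
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from neural_compressor import PostTrainingQuantConfig, quantization

fp32_id = "Intel/electra-small-discriminator-mrpc"  # assumed FP32 starting checkpoint
model = AutoModelForSequenceClassification.from_pretrained(fp32_id)
tokenizer = AutoTokenizer.from_pretrained(fp32_id)

# Calibration data: GLUE MRPC train split, batch size 8 (as described above)
train = load_dataset("glue", "mrpc", split="train")

def collate(examples):
    enc = tokenizer(
        [ex["sentence1"] for ex in examples],
        [ex["sentence2"] for ex in examples],
        padding=True, truncation=True, return_tensors="pt",
    )
    labels = torch.tensor([ex["label"] for ex in examples])
    return dict(enc), labels

calib_dataloader = DataLoader(train, batch_size=8, collate_fn=collate)

# Static post-training quantization; the default 300-sample calibration size is
# rounded up to a whole number of batches, i.e. 38 * 8 = 304 samples.
conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(model=model, conf=conf, calib_dataloader=calib_dataloader)
q_model.save("./electra-small-discriminator-mrpc-int8-static")
```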

## Test result

|                          | INT8   | FP32   |
|--------------------------|--------|--------|
| Throughput (samples/sec) | 102.15 | 75.056 |
| Accuracy (eval-f1)       | 0.9007 | 0.8983 |
| Model size (MB)          | 14     | 51.8   |

## Load with Intel® Neural Compressor (build from source)

```python
# Load the INT8 quantized model from the Hugging Face Hub
from neural_compressor.utils.load_huggingface import OptimizedModel

int8_model = OptimizedModel.from_pretrained(
    'Intel/electra-small-discriminator-mrpc-int8-static',
)
```
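
Once loaded, `int8_model` should behave like a standard `transformers` sequence-classification model. A minimal inference sketch follows; it assumes the tokenizer files are available in this repository (otherwise, load the tokenizer from the original FP32 checkpoint).

```python
import torch
from transformers import AutoTokenizer

# Assumption: the tokenizer is shipped alongside the INT8 weights
tokenizer = AutoTokenizer.from_pretrained('Intel/electra-small-discriminator-mrpc-int8-static')

inputs = tokenizer(
    "The company reported record profits this quarter.",
    "Quarterly earnings reached an all-time high for the firm.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1).item())  # MRPC: 1 = paraphrase, 0 = not a paraphrase
```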

Notes:

- The INT8 model outperforms the FP32 model when the CPU is fully loaded. Under lighter load, measurements can give the misleading impression that INT8 is slower than FP32.