80% 1x4 Block Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1

This model is the result of fine-tuning an 80% 1x4 block sparse Prune Once for All (Prune OFA) pre-trained BERT-Large, combined with knowledge distillation. It achieves the following results on the SQuADv1.1 development set:
{"exact_match": 84.673, "f1": 91.174}

For further details, see our paper, Prune Once for All: Sparse Pre-Trained Language Models, and our open-source implementation.
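As a usage sketch, the checkpoint can be loaded with the Hugging Face `transformers` question-answering pipeline. The model id below is taken from this card; the question and context are illustrative, and inference settings (device, batch size) are left at their defaults.

```python
# Minimal sketch: extractive question answering with this sparse checkpoint.
# Assumes the `transformers` library is installed and the model id from the
# card is available on the Hugging Face Hub.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/bert-large-uncased-squadv1.1-sparse-80-1x4-block-pruneofa",
)

# Illustrative example input; any SQuAD-style (question, context) pair works.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The sparse BERT-Large model was fine-tuned on the SQuADv1.1 dataset.",
)
print(result["answer"])
```

The pipeline returns a dict with the extracted `answer` span, its character offsets, and a confidence `score`.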

