
DistilBERT base cased distilled SQuAD

Note: This model is a clone of distilbert-base-cased-distilled-squad for internal testing.

This model is a fine-tuned checkpoint of DistilBERT-base-cased, trained using (a second step of) knowledge distillation on SQuAD v1.1. It reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
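As a quick usage sketch (assuming the checkpoint is published under the autoevaluate/distilbert-base-cased-distilled-squad name referenced at the end of this card), the model can be loaded with the transformers question-answering pipeline:

```python
from transformers import pipeline

# Repository name assumed from the dataset note at the end of this card;
# swap in your own checkpoint path if testing a local clone.
question_answerer = pipeline(
    "question-answering",
    model="autoevaluate/distilbert-base-cased-distilled-squad",
)

result = question_answerer(
    question="Which dataset was the model distilled on?",
    context="The model was distilled on SQuAD v1.1 using DistilBERT-base-cased.",
)
print(result["answer"], result["score"])
```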

Using the question-answering Evaluator from the evaluate library gives:

```
{'exact_match': 79.54588457899716,
 'f1': 86.81181300991533,
 'latency_in_seconds': 0.008683730778997168,
 'samples_per_second': 115.15787689073015,
 'total_time_in_seconds': 91.78703433400005}
```

which is roughly consistent with the officially reported F1 score of 87.1.
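For reference, here is a minimal sketch of how such numbers can be reproduced with the evaluate library's question-answering evaluator (assuming the squad validation split and the default squad metric; latency and throughput figures will vary with hardware):

```python
from datasets import load_dataset
from evaluate import evaluator

# A reproduction sketch, not the exact evaluation script used above.
task_evaluator = evaluator("question-answering")
squad_validation = load_dataset("squad", split="validation")

results = task_evaluator.compute(
    model_or_pipeline="autoevaluate/distilbert-base-cased-distilled-squad",
    data=squad_validation,
    metric="squad",  # reports exact_match and f1
)
print(results)
```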


Dataset used to train autoevaluate/distilbert-base-cased-distilled-squad: SQuAD v1.1.