ONNX Conversion of distilbert-base-cased-distilled-squad

DistilBERT base cased distilled SQuAD

This model is a fine-tuned checkpoint of DistilBERT-base-cased, trained using (a second step of) knowledge distillation on SQuAD v1.1. It reaches an F1 score of 87.1 on the SQuAD v1.1 dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
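
Because this repository holds an ONNX export rather than a standard PyTorch checkpoint, inference goes through an ONNX runtime instead of `transformers` directly. Below is a minimal extractive question-answering sketch using `onnxruntime`; the local file name `model.onnx`, the graph's input names, and the assumption that the graph emits exactly two outputs (start and end logits) are not confirmed details of this repository.

```python
# Minimal extractive QA with the ONNX export (a sketch under stated
# assumptions; file name and input/output layout are not confirmed).
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# The ONNX graph holds only the network; tokenization still comes from
# the original checkpoint's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
session = ort.InferenceSession("model.onnx")  # hypothetical local path to the export

question = "What dataset was the model fine-tuned on?"
context = (
    "This model is a fine-tuned checkpoint of DistilBERT-base-cased, "
    "distilled on SQuAD v1.1."
)
encoded = tokenizer(question, context, return_tensors="np")

# Feed only the inputs the graph actually declares.
feed = {inp.name: encoded[inp.name] for inp in session.get_inputs()}
# Assumes the export's two outputs are start and end logits, as in a
# typical question-answering export.
start_logits, end_logits = session.run(None, feed)

# Pick the most likely answer span and decode it back to text.
start = int(np.argmax(start_logits))
end = int(np.argmax(end_logits)) + 1
print(tokenizer.decode(encoded["input_ids"][0][start:end]))
```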
