sgugger and Marissa committed on
Commit 1b9d42b
1 Parent(s): 626af31

Add model card (#1)

- Add model card (0dda5de40a08c373132fd4e025f2dc0a6fd64cee)

Co-authored-by: Marissa Gerchick <Marissa@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +178 -3
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- language: "en"
  datasets:
  - squad
  metrics:
@@ -9,5 +9,180 @@ license: apache-2.0
 
  # DistilBERT base cased distilled SQuAD
 
- This model is a fine-tune checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
- This model reaches a F1 score of 87.1 on the dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).
  ---
+ language: en
  datasets:
  - squad
  metrics:
 
  # DistilBERT base cased distilled SQuAD
 
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [How To Get Started With the Model](#how-to-get-started-with-the-model)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Environmental Impact](#environmental-impact)
+ - [Technical Specifications](#technical-specifications)
+ - [Citation Information](#citation-information)
+ - [Model Card Authors](#model-card-authors)
+
+ ## Model Details
+
+ **Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
+
+ This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).
+
+ - **Developed by:** Hugging Face
+ - **Model Type:** Transformer-based language model
+ - **Language(s):** English
+ - **License:** Apache 2.0
+ - **Related Models:** [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased)
+ - **Resources for more information:**
+   - See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
+   - See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
+
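+ A minimal sketch to sanity-check the parameter reduction mentioned above (the paper's 40% figure is stated relative to the uncased BERT base model, so the cased numbers may differ slightly):
+
+ ```python
+ from transformers import AutoModel
+
+ # Compare parameter counts of the distilled and base cased checkpoints
+ # (assumes both can be downloaded from the Hugging Face Hub).
+ distil = AutoModel.from_pretrained("distilbert-base-cased")
+ bert = AutoModel.from_pretrained("bert-base-cased")
+
+ n_distil = sum(p.numel() for p in distil.parameters())
+ n_bert = sum(p.numel() for p in bert.parameters())
+ print(f"distilbert-base-cased: {n_distil / 1e6:.1f}M parameters")
+ print(f"bert-base-cased:       {n_bert / 1e6:.1f}M parameters")
+ print(f"reduction: {100 * (1 - n_distil / n_bert):.0f}%")
+ ```
+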
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```python
+ >>> from transformers import pipeline
+ >>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
+
+ >>> context = r"""
+ ... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
+ ... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
+ ... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
+ ... """
+
+ >>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
+ >>> print(
+ ...     f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
+ ... )
+
+ Answer: 'SQuAD dataset', score: 0.5152, start: 147, end: 160
+ ```
+
+ Here is how to use this model in PyTorch:
+
+ ```python
+ from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
+ import torch
+
+ tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased-distilled-squad')
+ model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-cased-distilled-squad')
+
+ question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
+
+ inputs = tokenizer(question, text, return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Pick the most likely start and end token positions and decode that span
+ answer_start_index = torch.argmax(outputs.start_logits)
+ answer_end_index = torch.argmax(outputs.end_logits)
+
+ predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
+ print(tokenizer.decode(predict_answer_tokens))
+ ```
+
+ And in TensorFlow:
+
+ ```python
+ from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
+ import tensorflow as tf
+
+ tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
+ model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
+
+ question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
+
+ inputs = tokenizer(question, text, return_tensors="tf")
+ outputs = model(**inputs)
+
+ answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
+ answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
+
+ predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
+ print(tokenizer.decode(predict_answer_tokens))
+ ```
+
+ ## Uses
+
+ This model can be used for question answering.
+
+ #### Misuse and Out-of-scope Use
+
+ The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.
+
+ ## Risks, Limitations and Biases
+
+ **CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
+
+ ```python
+ >>> from transformers import pipeline
+ >>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
+
+ >>> context = r"""
+ ... Alice is sitting on the bench. Bob is sitting next to her.
+ ... """
+
+ >>> result = question_answerer(question="Who is the CEO?", context=context)
+ >>> print(
+ ...     f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
+ ... )
+
+ Answer: 'Bob', score: 0.7527, start: 32, end: 35
+ ```
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
+
+ ## Training
+
+ #### Training Data
+
+ The [distilbert-base-cased model](https://huggingface.co/distilbert-base-cased) was trained on the same data as the [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased), whose model card describes its training data as:
+
+ > DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
+
+ To learn more about the SQuAD v1.1 dataset used for fine-tuning, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
+
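+ The fine-tuning set can also be inspected directly; a minimal sketch, assuming the `datasets` library is installed:
+
+ ```python
+ from datasets import load_dataset
+
+ # SQuAD v1.1: ~87.6k training examples and ~10.6k validation examples
+ squad = load_dataset("squad")
+ print(squad)
+ print(squad["train"][0]["question"])
+ print(squad["train"][0]["answers"])
+ ```
+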
+ #### Training Procedure
+
+ ##### Preprocessing
+
+ See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
+
+ ##### Pretraining
+
+ See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
+
+ ## Evaluation
+
+ As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md):
+
+ > This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).
+
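+ To compute a comparable number yourself, one option is the `squad` metric from the `evaluate` library; the snippet below is a sketch (it uses the pipeline rather than the original distillation evaluation script, and a small dev-set sample for speed):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import pipeline
+ import evaluate
+
+ squad_metric = evaluate.load("squad")
+ qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
+
+ # Small slice of the dev set for a quick check; drop .select() for the full 10,570 examples
+ dev = load_dataset("squad", split="validation").select(range(100))
+
+ predictions, references = [], []
+ for example in dev:
+     output = qa(question=example["question"], context=example["context"])
+     predictions.append({"id": example["id"], "prediction_text": output["answer"]})
+     references.append({"id": example["id"], "answers": example["answers"]})
+
+ print(squad_metric.compute(predictions=predictions, references=references))
+ ```
+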
+ ## Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details cover only the pretraining of DistilBERT, not the fine-tuning on SQuAD.
+
+ - **Hardware Type:** 8 16GB V100 GPUs
+ - **Hours used:** 90 hours
+ - **Cloud Provider:** Unknown
+ - **Compute Region:** Unknown
+ - **Carbon Emitted:** Unknown
+
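+ Without a known provider or region, only a rough, assumption-laden estimate is possible; the sketch below plugs the reported hardware and hours into the usual energy-times-carbon-intensity formula, with assumed values for GPU power draw, datacenter overhead (PUE), and grid carbon intensity:
+
+ ```python
+ # Back-of-envelope estimate only; everything below the first two values is assumed,
+ # not reported (provider, region, and actual emissions are unknown).
+ num_gpus = 8
+ hours = 90
+
+ gpu_power_kw = 0.30        # assumed average draw per 16GB V100
+ pue = 1.5                  # assumed datacenter power usage effectiveness
+ carbon_intensity = 0.432   # assumed kg CO2eq per kWh (rough global average)
+
+ energy_kwh = num_gpus * hours * gpu_power_kw * pue
+ emissions_kg = energy_kwh * carbon_intensity
+ print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2eq")
+ ```
+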
+ ## Technical Specifications
+
+ See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training procedure.
+
+ ## Citation Information
+
+ ```bibtex
+ @inproceedings{sanh2019distilbert,
+   title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
+   author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
+   booktitle={NeurIPS EMC^2 Workshop},
+   year={2019}
+ }
+ ```
+
+ APA:
+ - Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
+
+ ## Model Card Authors
+
+ This model card was written by the Hugging Face team.