---
datasets:
- squad_v2
---

# MiniLM-L12-H384-uncased for QA

## Overview
**Language model:** microsoft/MiniLM-L12-H384-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure:** 1x Tesla V100

## Hyperparameters

```
seed = 42
batch_size = 12
n_epochs = 4
base_LM_model = "microsoft/MiniLM-L12-H384-uncased"
max_seq_len = 384
learning_rate = 4e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
grad_acc_steps = 4
```
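
The model itself was trained with FARM (see the code link above). As a rough, illustrative mapping only — not the original training script — these settings correspond to Hugging Face `TrainingArguments` along the following lines; the sequence-length settings belong to the SQuAD feature preparation rather than the trainer:

```python
# Illustrative sketch: the FARM hyperparameters above expressed as Hugging Face
# TrainingArguments (assumes a recent transformers version; this is NOT the
# script that produced the released weights).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="minilm-uncased-squad2",
    seed=42,
    per_device_train_batch_size=12,
    gradient_accumulation_steps=4,   # effective batch size = 12 * 4 = 48
    num_train_epochs=4,
    learning_rate=4e-5,
    lr_scheduler_type="linear",      # linear schedule with warmup
    warmup_ratio=0.2,                # warmup_proportion in FARM
)
# max_seq_len=384, doc_stride=128 and max_query_length=64 are applied when the
# SQuAD examples are converted to features (tokenizer side), not in TrainingArguments.
```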

## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 76.13071675229513,
"f1": 79.49786500219953,
"total": 11873,
"HasAns_exact": 78.35695006747639,
"HasAns_f1": 85.10090269418276,
"HasAns_total": 5928,
"NoAns_exact": 73.91084945332211,
"NoAns_f1": 73.91084945332211,
"NoAns_total": 5945
```
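
To get predictions in the format the official script expects, one can run the model over the dev set and dump an `id -> answer` mapping. A minimal sketch (assuming the `datasets` library and the eval script downloaded as `evaluate-v2.0.py`; not the original evaluation code):

```python
# Sketch: generate SQuAD 2.0 dev-set predictions and evaluate them with the
# official script. File and package names here are assumptions.
import json
from datasets import load_dataset
from transformers import pipeline

nlp = pipeline("question-answering", model="deepset/minilm-uncased-squad2")
dev = load_dataset("squad_v2", split="validation")

predictions = {}
for example in dev:
    # handle_impossible_answer=True lets the pipeline return an empty string
    # when the question is unanswerable from the context (SQuAD 2.0 setting).
    pred = nlp(question=example["question"], context=example["context"],
               handle_impossible_answer=True)
    predictions[example["id"]] = pred["answer"]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)

# then, roughly: python evaluate-v2.0.py dev-v2.0.json predictions.json
```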

## Usage

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/minilm-uncased-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
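
The pipeline call in a) returns a dict with the keys `answer`, `score`, `start` and `end` (character offsets of the answer span in the context). For SQuAD 2.0-style questions without an answer, `handle_impossible_answer=True` can be passed to the pipeline call so that an empty answer can be returned.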

### In FARM

```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/minilm-uncased-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```

### In haystack
For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
# exact import path depends on the haystack version (e.g. haystack.nodes in v1.x)
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/minilm-uncased-squad2")
# or
reader = TransformersReader(model="deepset/minilm-uncased-squad2", tokenizer="deepset/minilm-uncased-squad2")
```
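
Used end to end, the reader is combined with a document store and a retriever. A minimal sketch, assuming a haystack v1.x-style API (`InMemoryDocumentStore`, `TfidfRetriever`, `ExtractiveQAPipeline`; import paths and class names differ in older releases):

```python
# Hedged sketch of an extractive QA pipeline around the reader (haystack v1.x API assumed).
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import FARMReader, TfidfRetriever
from haystack.pipelines import ExtractiveQAPipeline

# index a few documents; in practice this would be your full corpus
document_store = InMemoryDocumentStore()
document_store.write_documents([
    {"content": "The option to convert models between FARM and transformers gives freedom to the user."},
])

retriever = TfidfRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/minilm-uncased-squad2")
pipe = ExtractiveQAPipeline(reader=reader, retriever=retriever)

prediction = pipe.run(
    query="Why is model conversion important?",
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
)
```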

## Authors
Vaishali Pal: `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`

## About us
![deepset logo](https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/deepset_logo.png)

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)