---
language: en
datasets:
- squad_v2
- covid_qa_deepset
license: cc-by-4.0
---

# minilm-uncased-squad2 for QA on COVID-19

## Overview
**Language model:** deepset/minilm-uncased-squad2
**Language:** English
**Downstream task:** Extractive QA
**Training data:** [SQuAD-style COVID-19 QA](https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/COVID-QA.json)
**Infrastructure:** A4000

Initially fine-tuned for https://github.com/CDCapobianco/COVID-Question-Answering-REST-API
## Hyperparameters
```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/minilm-uncased-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
dev_split = 0
x_val_splits = 5
no_ans_boost = -100
```
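The schedule above (`lr_schedule = LinearWarmup` with `warmup_proportion = 0.1`) ramps the learning rate linearly from 0 to its peak over the first 10% of training steps, then decays it back toward 0. A minimal sketch of that behavior (illustrative only; the function name and the exact linear-decay tail are assumptions, not taken from the training code):

```python
def linear_warmup_lr(step, total_steps, peak_lr=3e-5, warmup_proportion=0.1):
    """Learning rate at `step`: linear warmup, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_proportion)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # ramp up to the peak
    # decay linearly from the peak back to zero at the final step
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# e.g. with 100 total steps: lr is 0 at step 0, peaks at 3e-5 at step 10, and is 0 again at step 100
```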

## Performance

Scores come from 5-fold cross-validation on COVID-QA (`x_val_splits = 5`): per-fold values are listed first, followed by the aggregated cross-validation (XVAL) scores.

**Single EM scores:** [0.7441, 0.7938, 0.6666, 0.6576, 0.6445]
**Single F1 scores:** [0.8261, 0.8748, 0.8188, 0.7633, 0.7935]
**Single top_3_recall scores:** [0.827, 0.776, 0.860, 0.771, 0.777]
**XVAL EM:** 0.7013
**XVAL F1:** 0.8153
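The XVAL numbers above are consistent with a simple mean over the five folds, which you can verify directly:

```python
em_scores = [0.7441, 0.7938, 0.6666, 0.6576, 0.6445]
f1_scores = [0.8261, 0.8748, 0.8188, 0.7633, 0.7935]

# mean over the five cross-validation folds, rounded to 4 decimals
xval_em = round(sum(em_scores) / len(em_scores), 4)
xval_f1 = round(sum(f1_scores) / len(f1_scores), 4)
print(xval_em, xval_f1)  # 0.7013 0.8153
```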
## Usage

### In Haystack
For QA at scale (i.e. over many documents rather than a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader

reader = FARMReader(model_name_or_path="Frizio/minilm-uncased-squad2-covidqa")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "Frizio/minilm-uncased-squad2-covidqa"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
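Under the hood, contexts longer than `max_seq_len = 384` tokens are split into overlapping windows that advance by `doc_stride = 128` tokens, so every answer span falls entirely inside at least one window. A toy sketch of that windowing (plain token lists stand in for real tokenizer output; the helper name is made up for illustration):

```python
def sliding_windows(tokens, max_seq_len=384, doc_stride=128):
    """Split a long token sequence into overlapping windows of at most
    `max_seq_len` tokens, each starting `doc_stride` tokens after the last."""
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_seq_len])
        if start + max_seq_len >= len(tokens):
            break
        start += doc_stride
    return windows

chunks = sliding_windows(list(range(1000)))
print(len(chunks))   # 6 windows for a 1000-token context
print(chunks[1][0])  # 128 -- the second window starts one stride in
```

Consecutive windows overlap by 384 - 128 = 256 tokens, which is why a smaller `doc_stride` trades more compute for less risk of splitting an answer.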