---
language: en
datasets:
- squad_v2
- covid_qa_deepset
license: cc-by-4.0
---
# minilm-uncased-squad2 for QA on COVID-19
## Overview
**Language model:** deepset/minilm-uncased-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** [SQuAD-style COVID-QA dataset](https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/COVID-QA.json)
**Infrastructure:** A4000
Initially fine-tuned for the [COVID-Question-Answering-REST-API](https://github.com/CDCapobianco/COVID-Question-Answering-REST-API) project.
## Hyperparameters
```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/minilm-uncased-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
dev_split = 0
x_val_splits = 5
no_ans_boost = -100
```
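The model was fine-tuned FARM-style and evaluated with 5-fold cross-validation on the COVID-QA data. As a rough orientation only, the sketch below shows how these hyperparameters would map onto Haystack v1.x's `FARMReader.train()`; the file locations and the omitted cross-validation loop are assumptions, not the original training script.
```python
# Rough sketch (assumption, not the original training script): mapping the
# hyperparameters above onto Haystack v1.x's FARMReader. Paths are placeholders.
from haystack.nodes import FARMReader

reader = FARMReader(
    model_name_or_path="deepset/minilm-uncased-squad2",  # base_LM_model
    max_seq_len=384,
    doc_stride=128,
    no_ans_boost=-100,  # strongly discourage "no answer" predictions
)

reader.train(
    data_dir="data",                 # placeholder directory holding the SQuAD-style file
    train_filename="COVID-QA.json",  # local copy of the COVID-QA dataset
    batch_size=24,
    n_epochs=3,
    learning_rate=3e-5,
    warmup_proportion=0.1,           # linear warmup over 10% of training steps
    dev_split=0,                     # evaluation was done via 5-fold cross-validation instead
    save_dir="minilm-uncased-squad2-covidqa",
)
```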
## Performance
**Single-fold EM scores:** [0.7441, 0.7938, 0.6666, 0.6576, 0.6445]
**Single-fold F1 scores:** [0.8261, 0.8748, 0.8188, 0.7633, 0.7935]
**XVAL EM:** 0.7013
**XVAL F1:** 0.8153
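The XVAL numbers are the plain mean over the five folds (assuming that is how they were aggregated), which you can verify quickly:
```python
# Sanity check: cross-validation scores as the mean over the 5 folds
em_scores = [0.7441, 0.7938, 0.6666, 0.6576, 0.6445]
f1_scores = [0.8261, 0.8748, 0.8188, 0.7633, 0.7935]

print(round(sum(em_scores) / len(em_scores), 4))  # 0.7013
print(round(sum(f1_scores) / len(f1_scores), 4))  # 0.8153
```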
## Usage
### In Haystack
For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
# Assumes Haystack v1.x; the import path for FARMReader may differ in other versions
from haystack.nodes import FARMReader

reader = FARMReader(model_name_or_path="Frizio/minilm-uncased-squad2-covidqa")
```
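A minimal end-to-end call with the loaded reader (a sketch assuming Haystack v1.x; the `Document` class and `predict()` signature may differ in other versions):
```python
from haystack import Document

docs = [Document(content="Coronaviruses are a large family of viruses that can cause respiratory illness in humans.")]

# Ask the reader to extract an answer span from the documents
prediction = reader.predict(
    query="What kind of illness can coronaviruses cause?",
    documents=docs,
    top_k=3,
)
print(prediction["answers"][0].answer)
```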
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "Frizio/minilm-uncased-squad2-covidqa"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
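If you load the model and tokenizer manually as in b), a short sketch of the standard extractive-QA decoding (the pipeline in a) does this for you):
```python
import torch

# Encode question and context together, run the model, and take the most likely span
inputs = tokenizer(QA_input["question"], QA_input["context"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```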