---
language:
- en
- ru
- multilingual
license: apache-2.0
---
# XLM-RoBERTa large model whole word masking finetuned on SQuAD
Pretrained using a masked language modeling (MLM) objective and fine-tuned on English and Russian question-answering (QA) datasets.
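A minimal usage sketch with the standard Hugging Face `question-answering` pipeline; the repo ID below is a placeholder for this model's actual Hub ID.

```python
from transformers import pipeline

# Load the extractive QA pipeline (replace the placeholder with this
# model's actual repository ID on the Hugging Face Hub).
qa = pipeline("question-answering", model="<this-model-repo-id>")

# English, SQuAD-style example.
result = qa(
    question="What objective was the model pretrained with?",
    context="The model was pretrained using a masked language modeling objective.",
)
print(result["answer"], result["score"])
```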
## Used QA Datasets
SQuAD + SberQuAD

The [original SberQuAD paper](https://arxiv.org/pdf/1912.09723.pdf) is recommended reading.
## Evaluation results
The results obtained on SberQuAD are the following:
```
f1 = 84.3
exact_match = 65.3
```
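A sketch of how SQuAD-style `f1` and `exact_match` scores like these can be computed with the Hugging Face `evaluate` library; this is an assumption for illustration, as the card does not state which evaluation script produced the numbers above.

```python
import evaluate

# The "squad" metric reports both exact_match and f1.
squad_metric = evaluate.load("squad")

# Toy prediction/reference pair in the format the metric expects.
predictions = [{"id": "0", "prediction_text": "Moscow"}]
references = [{"id": "0", "answers": {"text": ["Moscow"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```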