---
license: mit
---

## EQA-PMR-large

EQA-PMR-large is initialized with [PMR-large](https://huggingface.co/DAMO-NLP-SG/PMR-large) and further fine-tuned on the training splits of six Extractive Question Answering (EQA) datasets from [MRQA](https://aclanthology.org/D19-5801).

The model performance on the in-domain dev sets is:

| Model | SQuAD | NewsQA | HotpotQA | NaturalQuestions | TriviaQA | SearchQA |
|--|-------|--------|----------|------------------|----------|----------|
| RoBERTa-large (single-task model) | 94.2 | 73.8 | 81.6 | 83.3 | 85.1 | 85.7 |
| PMR-large (single-task model) | 94.5 | 74.0 | 83.6 | 83.8 | 85.1 | 88.3 |
| EQA-PMR-large (multi-task model) | 94.2 | 73.7 | 66.9 | 82.3 | 85.4 | 88.7 |

Note that the RoBERTa-large and PMR-large results come from single-task fine-tuning (one model per dataset), while EQA-PMR-large is a single multi-task fine-tuned model.

Because it is fine-tuned on multiple datasets, we believe that EQA-PMR-large generalizes better to other EQA tasks than PMR-large and RoBERTa-large.

### How to use

You can use the code from [this repo](https://github.com/DAMO-NLP-SG/PMR/QA) for both training and inference.
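
For a quick sanity check, the minimal sketch below loads the checkpoint through the standard `transformers` extractive-QA interface. It is only illustrative: the hub id `DAMO-NLP-SG/EQA-PMR-large`, compatibility with `AutoModelForQuestionAnswering`, and the question/context strings are assumptions; the authoritative training and inference scripts are the ones in the repo linked above.

```python
# Illustrative sketch only -- the official training/inference code lives in the
# repo linked above. Assumptions: the checkpoint is published under the hub id
# "DAMO-NLP-SG/EQA-PMR-large" and loads with the standard extractive-QA head;
# if it does not, fall back to the repo's own loading code.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "DAMO-NLP-SG/EQA-PMR-large"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)
model.eval()

question = "Where is the Eiffel Tower located?"  # toy example
context = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: take the highest-scoring start and end positions.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1], skip_special_tokens=True)
print(answer)
```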

### BibTeX entry and citation info

```bibtex
@article{xu2022clozing,
  title={From Clozing to Comprehending: Retrofitting Pre-trained Language Model to Pre-trained Machine Reader},
  author={Xu, Weiwen and Li, Xin and Zhang, Wenxuan and Zhou, Meng and Bing, Lidong and Lam, Wai and Si, Luo},
  journal={arXiv preprint arXiv:2212.04755},
  year={2022}
}
```