# MuLD
> The Multitask Long Document Benchmark
MuLD (Multitask Long Document Benchmark) is a set of 6 NLP tasks where the inputs consist of at least 10,000 words. The benchmark covers a wide variety of task types including translation, summarization, question answering, and classification. Additionally, output lengths range from a single-word classification label all the way up to an output that is longer than the input text.
- **Repository:** https://github.com/ghomasHudson/muld
- **Paper:** https://arxiv.org/abs/2202.07362
### Supported Tasks and Leaderboards
The 6 MuLD tasks consist of:
- **NarrativeQA** - A question answering dataset requiring an understanding of the plot of books and films.
- **HotpotQA** - An expanded version of HotpotQA requiring multi-hop reasoning across multiple Wikipedia pages. This expanded version includes the full Wikipedia pages.
- **OpenSubtitles** - A translation dataset based on the OpenSubtitles 2018 dataset. The entire set of subtitles for each TV show is provided, one subtitle per line, in both English and German.
- **VLSP (Very Long Scientific Papers)** - An expanded version of the Scientific Papers summarization dataset. Instead of removing very long papers (e.g. theses), we explicitly include them and instead remove any short papers.
- **AO3 Style Change Detection**
- **Movie Character Types**
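As a rough sketch of how a single task from the list above might be loaded with the Hugging Face `datasets` library (assuming the dataset is available on the Hub as `ghomasHudson/muld` and that task names double as configuration names; check the repository for the exact identifiers):

```python
from datasets import load_dataset

# Load one MuLD task. The configuration name "NarrativeQA" is an assumption;
# the repository lists the exact configuration names available.
narrativeqa = load_dataset("ghomasHudson/muld", "NarrativeQA")

print(narrativeqa)                             # DatasetDict with this task's splits
print(narrativeqa["train"][0]["input"][:200])  # first 200 characters of a long input
```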
### Dataset Structure
The data is presented in a text-to-text format where each instance contains an input string, an output string, and (optionally) JSON-encoded metadata.
```
{'input': 'Who was wearing the blue shirt? The beginning...', 'output': ['John'], 'metadata': ''}
```
### Data Fields
- `input`: a string whose exact structure differs per task but is presented in a unified format
- `output`: a list of strings, each of which is a possible answer. Most instances have only a single answer, but some, such as NarrativeQA and VLSP, may have multiple.
- `metadata`: Additional metadata which may be helpful for evaluation. In this version, only the OpenSubtitles task contains metadata (for the ContraPro annotations).
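As an illustrative helper (the function name is hypothetical, not part of the dataset), the fields of an instance can be read and the metadata decoded like this:

```python
import json

def read_instance(example):
    """Return the input text, the list of reference answers, and decoded metadata."""
    text = example["input"]
    answers = example["output"]  # always a list of acceptable answers
    # metadata is a JSON-encoded string; it is empty for every task except OpenSubtitles
    metadata = json.loads(example["metadata"]) if example["metadata"] else None
    return text, answers, metadata

# Usage on an instance shaped like the example above
example = {"input": "Who was wearing the blue shirt? The beginning...",
           "output": ["John"],
           "metadata": ""}
print(read_instance(example))  # ('Who was wearing the blue shirt? ...', ['John'], None)
```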
### Data Splits
Each task contains different splits depending on what was available in the source datasets:
| Task Name | Train | Validation | Test |
|----------------------------|----|----|-----|
| NarrativeQA | ✔️ | ✔️ | ✔️ |
| HotpotQA | ✔️ | ✔️ | |
| AO3 Style Change Detection | ✔️ | ✔️ | ✔️ |
| Movie Character Types | ✔️ | ✔️ | ✔️ |
| VLSP | | | ✔️ |
| OpenSubtitles | ✔️ | | ✔️ |
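Because split availability varies per task, it can be checked programmatically before loading (again a sketch, assuming the Hub identifier and configuration names used above):

```python
from datasets import get_dataset_split_names

# The configuration name "OpenSubtitles" is an assumption; see the repository
# for the exact names. Per the table above this should report train and test only.
print(get_dataset_split_names("ghomasHudson/muld", "OpenSubtitles"))
```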
### Citation Information
```
@misc{hudson2022muld,
title={MuLD: The Multitask Long Document Benchmark},
author={G Thomas Hudson and Noura Al Moubayed},
year={2022},
eprint={2202.07362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```