|
---
language:
- en
pretty_name: qangaroo
dataset_info:
- config_name: masked_medhop
  features:
  - name: query
    dtype: string
  - name: supports
    sequence: string
  - name: candidates
    sequence: string
  - name: answer
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 95813556
    num_examples: 1620
  - name: validation
    num_bytes: 16800542
    num_examples: 342
  download_size: 58801723
  dataset_size: 112614098
- config_name: masked_wikihop
  features:
  - name: query
    dtype: string
  - name: supports
    sequence: string
  - name: candidates
    sequence: string
  - name: answer
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 348073986
    num_examples: 43738
  - name: validation
    num_bytes: 43663600
    num_examples: 5129
  download_size: 211302995
  dataset_size: 391737586
- config_name: medhop
  features:
  - name: query
    dtype: string
  - name: supports
    sequence: string
  - name: candidates
    sequence: string
  - name: answer
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 93937294
    num_examples: 1620
  - name: validation
    num_bytes: 16461612
    num_examples: 342
  download_size: 57837760
  dataset_size: 110398906
- config_name: wikihop
  features:
  - name: query
    dtype: string
  - name: supports
    sequence: string
  - name: candidates
    sequence: string
  - name: answer
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 325777822
    num_examples: 43738
  - name: validation
    num_bytes: 40843303
    num_examples: 5129
  download_size: 202454962
  dataset_size: 366621125
configs:
- config_name: masked_medhop
  data_files:
  - split: train
    path: masked_medhop/train-*
  - split: validation
    path: masked_medhop/validation-*
- config_name: masked_wikihop
  data_files:
  - split: train
    path: masked_wikihop/train-*
  - split: validation
    path: masked_wikihop/validation-*
- config_name: medhop
  data_files:
  - split: train
    path: medhop/train-*
  - split: validation
    path: medhop/validation-*
- config_name: wikihop
  data_files:
  - split: train
    path: wikihop/train-*
  - split: validation
    path: wikihop/validation-*
---
|
|
|
# Dataset Card for "qangaroo" |
|
|
|
## Table of Contents |
|
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [http://qangaroo.cs.ucl.ac.uk/index.html](http://qangaroo.cs.ucl.ac.uk/index.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 530.40 MB
- **Size of the generated dataset:** 981.37 MB
- **Total amount of disk used:** 1.51 GB
|
|
|
### Dataset Summary |
|
|
|
We have created two new Reading Comprehension datasets focusing on multi-hop (also known as multi-step) inference.
|
|
|
Several pieces of information often jointly imply another fact. In multi-hop inference, a new fact is derived by combining facts via a chain of multiple steps: for example, knowing in which city a person was born and in which country that city lies lets one infer the person's country of birth.
|
|
|
Our aim is to build Reading Comprehension methods that perform multi-hop inference on text, where individual facts are spread out across different documents. |
|
|
|
The two QAngaroo datasets provide a training and evaluation resource for such methods. |
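
Each configuration can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under the id `qangaroo`:

```python
from datasets import load_dataset

# One of: "wikihop", "medhop", "masked_wikihop", "masked_medhop"
dataset = load_dataset("qangaroo", "wikihop")

# Each configuration has a "train" and a "validation" split.
print(dataset)
```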
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
The datasets support multiple-choice, multi-hop reading comprehension: given a `query` and a set of `supports` documents, a system must select the correct `answer` from the provided `candidates`, which typically requires combining evidence spread across several of the support documents.
|
|
|
### Languages |
|
|
|
The dataset is in English (`en`).
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
#### masked_medhop |
|
|
|
- **Size of downloaded dataset files:** 58.80 MB
- **Size of the generated dataset:** 112.61 MB
- **Total amount of disk used:** 171.42 MB
|
|
|
An example of 'validation' has the following structure (schematic: field values are elided here, since the support documents are long; in the masked configurations, candidate entity mentions in the text are replaced by placeholder tokens).

```
{
    "query": "...",
    "supports": ["...", "..."],
    "candidates": ["...", "..."],
    "answer": "...",
    "id": "..."
}
```
|
|
|
#### masked_wikihop |
|
|
|
- **Size of downloaded dataset files:** 211.30 MB
- **Size of the generated dataset:** 391.74 MB
- **Total amount of disk used:** 603.04 MB
|
|
|
An example of 'validation' has the same structure as the `masked_medhop` example shown above.
|
|
|
#### medhop |
|
|
|
- **Size of downloaded dataset files:** 57.84 MB
- **Size of the generated dataset:** 110.40 MB
- **Total amount of disk used:** 168.24 MB
|
|
|
An example of 'validation' has the same structure as the `masked_medhop` example shown above, with original entity mentions rather than mask tokens.
|
|
|
#### wikihop |
|
|
|
- **Size of downloaded dataset files:** 202.45 MB
- **Size of the generated dataset:** 366.62 MB
- **Total amount of disk used:** 569.08 MB
|
|
|
An example of 'validation' has the same structure as the `masked_medhop` example shown above, with original entity mentions rather than mask tokens.
|
|
|
### Data Fields |
|
|
|
The data fields are the same among all splits. |
|
|
|
#### masked_medhop |
|
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
|
|
|
#### masked_wikihop |
|
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
|
|
|
#### medhop |
|
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
|
|
|
#### wikihop |
|
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
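
As a quick orientation, here is a sketch of reading one example's fields, assuming the Hub id `qangaroo`:

```python
from datasets import load_dataset

dataset = load_dataset("qangaroo", "medhop")
example = dataset["validation"][0]

print(example["query"])          # the query string
print(len(example["supports"]))  # number of support documents
print(example["candidates"])     # the candidate answer strings
print(example["answer"])         # the correct candidate
print(example["id"])             # example identifier
```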
|
|
|
### Data Splits |
|
|
|
| name |train|validation|
|--------------|----:|---------:|
|masked_medhop | 1620| 342|
|masked_wikihop|43738| 5129|
|medhop | 1620| 342|
|wikihop |43738| 5129|
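
The counts above can be reproduced with a short sketch (again assuming the Hub id `qangaroo`):

```python
from datasets import load_dataset

for config in ["masked_medhop", "masked_wikihop", "medhop", "wikihop"]:
    ds = load_dataset("qangaroo", config)
    print(config, ds["train"].num_rows, ds["validation"].num_rows)
```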
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
#### Who are the source language producers? |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
#### Who are the annotators? |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Personal and Sensitive Information |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Discussion of Biases |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Other Known Limitations |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Licensing Information |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Citation Information |
|
|
|
```
@article{welbl-etal-2018-constructing,
    title = "Constructing Datasets for Multi-hop Reading Comprehension Across Documents",
    author = "Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "6",
    year = "2018",
    pages = "287--302",
}
```
|
|
|
|
|
### Contributions |
|
|
|
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |