language:
- en
pretty_name: qangaroo
dataset_info:
- config_name: masked_medhop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 95813556
num_examples: 1620
- name: validation
num_bytes: 16800542
num_examples: 342
download_size: 58801723
dataset_size: 112614098
- config_name: masked_wikihop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 348073986
num_examples: 43738
- name: validation
num_bytes: 43663600
num_examples: 5129
download_size: 211302995
dataset_size: 391737586
- config_name: medhop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 93937294
num_examples: 1620
- name: validation
num_bytes: 16461612
num_examples: 342
download_size: 57837760
dataset_size: 110398906
- config_name: wikihop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 325777822
num_examples: 43738
- name: validation
num_bytes: 40843303
num_examples: 5129
download_size: 202454962
dataset_size: 366621125
configs:
- config_name: masked_medhop
data_files:
- split: train
path: masked_medhop/train-*
- split: validation
path: masked_medhop/validation-*
- config_name: masked_wikihop
data_files:
- split: train
path: masked_wikihop/train-*
- split: validation
path: masked_wikihop/validation-*
- config_name: medhop
data_files:
- split: train
path: medhop/train-*
- split: validation
path: medhop/validation-*
- config_name: wikihop
data_files:
- split: train
path: wikihop/train-*
- split: validation
path: wikihop/validation-*
Dataset Card for "qangaroo"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: http://qangaroo.cs.ucl.ac.uk/index.html
- Repository: More Information Needed
- Paper: Constructing Datasets for Multi-hop Reading Comprehension Across Documents (https://arxiv.org/abs/1710.06481)
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 1.36 GB
- Size of the generated dataset: 981.89 MB
- Total amount of disk used: 2.34 GB
Dataset Summary
We have created two new Reading Comprehension datasets focusing on multi-hop (also known as multi-step) inference.
Several pieces of information often jointly imply another fact. In multi-hop inference, a new fact is derived by combining facts via a chain of multiple steps.
Our aim is to build Reading Comprehension methods that perform multi-hop inference on text, where individual facts are spread out across different documents.
The two QAngaroo datasets provide a training and evaluation resource for such methods.
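The four configurations can be loaded with the `datasets` library; a minimal sketch, assuming the Hub identifier `qangaroo`:
```python
from datasets import load_dataset

# One of: "wikihop", "medhop", "masked_wikihop", "masked_medhop"
dataset = load_dataset("qangaroo", "wikihop")

print(dataset)  # DatasetDict with "train" and "validation" splits
```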
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
masked_medhop
- Size of downloaded dataset files: 339.84 MB
- Size of the generated dataset: 112.63 MB
- Total amount of disk used: 452.47 MB
An example from the 'validation' split can be retrieved as shown in the snippet below.
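A minimal sketch for inspecting a validation instance with the `datasets` library (the Hub identifier `qangaroo` is assumed; actual field values are not reproduced here):
```python
from datasets import load_dataset

ds = load_dataset("qangaroo", "masked_medhop")
example = ds["validation"][0]

print(example["query"])           # the query, with entity surface forms masked
print(example["candidates"][:3])  # a few of the candidate answers
print(example["answer"])          # the correct candidate
print(len(example["supports"]))   # number of supporting documents
```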
masked_wikihop
- Size of downloaded dataset files: 339.84 MB
- Size of the generated dataset: 391.98 MB
- Total amount of disk used: 731.82 MB
Validation examples have the same structure; reuse the snippet above with the 'masked_wikihop' configuration.
medhop
- Size of downloaded dataset files: 339.84 MB
- Size of the generated dataset: 110.42 MB
- Total amount of disk used: 450.26 MB
Validation examples have the same structure; reuse the snippet above with the 'medhop' configuration.
wikihop
- Size of downloaded dataset files: 339.84 MB
- Size of the generated dataset: 366.87 MB
- Total amount of disk used: 706.71 MB
Validation examples have the same structure; reuse the snippet above with the 'wikihop' configuration.
Data Fields
The data fields are the same among all splits.
masked_medhop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
masked_wikihop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
medhop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
wikihop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
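Because the schema is shared across configurations, it can be checked programmatically; a small sketch:
```python
from datasets import load_dataset

ds = load_dataset("qangaroo", "wikihop")

# Expected: query, answer, and id are string features;
# supports and candidates are sequences of strings.
print(ds["train"].features)
```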
Data Splits
| name           | train | validation |
|----------------|------:|-----------:|
| masked_medhop  |  1620 |        342 |
| masked_wikihop | 43738 |       5129 |
| medhop         |  1620 |        342 |
| wikihop        | 43738 |       5129 |
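The split sizes in the table above can be reproduced with a short loop (each configuration is downloaded on first use):
```python
from datasets import load_dataset

for config in ["masked_medhop", "masked_wikihop", "medhop", "wikihop"]:
    ds = load_dataset("qangaroo", config)
    print(config, {split: ds[split].num_rows for split in ds})
```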
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
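The WikiHop and MedHop datasets were introduced by Welbl, Stenetorp, and Riedel; a BibTeX entry for the TACL 2018 article:
```bibtex
@article{welbl2018constructing,
  title   = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
  author  = {Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian},
  journal = {Transactions of the Association for Computational Linguistics},
  volume  = {6},
  pages   = {287--302},
  year    = {2018}
}
```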
Contributions
Thanks to @thomwolf, @jplu, @lewtun, @lhoestq, @mariamabarham for adding this dataset.