---
languages:
- en
paperswithcode_id: commonsenseqa
---
# Dataset Card for "commonsense_qa"

## Table of Contents

- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description

- **Homepage:** https://www.tau-nlp.org/commonsenseqa
- **Repository:** More Information Needed
- **Paper:** More Information Needed
- **Point of Contact:** More Information Needed
- **Size of downloaded dataset files:** 4.46 MB
- **Size of the generated dataset:** 2.08 MB
- **Total amount of disk used:** 6.54 MB
### Dataset Summary

CommonsenseQA is a multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains 12,102 questions, each with one correct answer and four distractor answers. The dataset is provided with two main train/validation/test splits: the "random split", which is the main evaluation split, and the "question token split"; see the paper for details.
### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

We show detailed information for up to 5 configurations of the dataset.

### Data Instances

#### default

- **Size of downloaded dataset files:** 4.46 MB
- **Size of the generated dataset:** 2.08 MB
- **Total amount of disk used:** 6.54 MB
An example of 'train' looks as follows.

```json
{
    "answerKey": "B",
    "choices": {
        "label": ["A", "B", "C", "D", "E"],
        "text": ["mildred's coffee shop", "mexico", "diner", "kitchen", "canteen"]
    },
    "question": "In what Spanish speaking North American country can you get a great cup of coffee?"
}
```
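Given this structure, the correct answer text can be recovered by matching `answerKey` against the `choices` labels. A minimal Python sketch (the `answer_text` helper is illustrative, not part of the dataset tooling):

```python
# Illustrative helper: look up the answer text for a CommonsenseQA example.
# The example dict mirrors the 'train' instance shown above.
example = {
    "answerKey": "B",
    "choices": {
        "label": ["A", "B", "C", "D", "E"],
        "text": ["mildred's coffee shop", "mexico", "diner", "kitchen", "canteen"],
    },
    "question": "In what Spanish speaking North American country can you get a great cup of coffee?",
}

def answer_text(ex):
    """Return the choice text matching the example's answerKey."""
    idx = ex["choices"]["label"].index(ex["answerKey"])
    return ex["choices"]["text"][idx]

print(answer_text(example))  # -> mexico
```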
### Data Fields

The data fields are the same among all splits.

#### default

- `answerKey`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a dictionary feature containing:
  - `label`: a `string` feature.
  - `text`: a `string` feature.
### Data Splits

| name    | train | validation | test |
|---------|------:|-----------:|-----:|
| default |  9741 |       1221 | 1140 |
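The split sizes above can be cross-checked against the 12,102 questions stated in the summary with a few lines of Python (a quick sanity-check sketch, not part of the dataset tooling):

```python
# Split sizes from the table above; the totals should match the
# 12,102 questions mentioned in the dataset summary.
splits = {"train": 9741, "validation": 1221, "test": 1140}

total = sum(splits.values())
fractions = {name: round(n / total, 3) for name, n in splits.items()}

print(total)      # -> 12102
print(fractions)  # roughly 80% / 10% / 10%
```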
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information
```bibtex
@InProceedings{commonsense_QA,
  title={COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge},
  author={Talmor, Alon and Herzig, Jonathan and Lourie, Nicholas and Berant, Jonathan},
  journal={arXiv preprint arXiv:1811.00937v2},
  year={2019}
}
```
### Contributions

Thanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset.