Commit cfc8fbe (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,180 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|wikipedia
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ ---
+
+ # Dataset Card for ROPES
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [ROPES dataset](https://allenai.org/data/ropes)
+ - **Paper:** [Reasoning Over Paragraph Effects in Situations](https://arxiv.org/abs/1908.05852)
+ - **Leaderboard:** [ROPES leaderboard](https://leaderboard.allenai.org/ropes)
+
+ ### Dataset Summary
+
+ ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset which tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented with a background passage containing one or more causal or qualitative relations (e.g., "animal pollinators increase efficiency of fertilization in flowers"), a novel situation that uses this background, and questions that require reasoning about the effects of the relationships in the background passage in the context of the situation.
+
+ ### Supported Tasks and Leaderboards
+
+ The reading comprehension task is framed as an extractive question answering problem.
+
+ Models are evaluated by computing word-level F1 and exact match (EM) metrics, following common practice for recent reading comprehension datasets (e.g., SQuAD).
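The word-level F1 and EM metrics mentioned above can be sketched in a few lines. This is a minimal illustration of the token-overlap core only; the official SQuAD-style evaluation additionally normalizes punctuation and articles, and the helper names here are ours, not from the ROPES release:

```python
from collections import Counter


def exact_match(prediction: str, gold: str) -> int:
    """1 if the normalized prediction equals the gold answer, else 0."""
    return int(prediction.strip().lower() == gold.strip().lower())


def word_f1(prediction: str, gold: str) -> float:
    """Word-level F1: harmonic mean of token precision and recall."""
    pred_tokens = prediction.strip().lower().split()
    gold_tokens = gold.strip().lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


print(exact_match("Jason", "Jason"))  # 1
print(round(word_f1("Jason and Charlotte", "Jason"), 2))  # 0.5
```

A prediction that contains the gold span plus extra words keeps perfect recall but loses precision, which is why F1 is the more forgiving of the two metrics.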
+ ### Languages
+
+ The text in the dataset is in English. The associated BCP-47 code is `en`.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The data closely follow the SQuAD v1.1 format. An example looks like this:
+
+ ```
+ {
+     "id": "2058517998",
+     "background": "Cancer is a disease that causes cells to divide out of control. Normally, the body has systems that prevent cells from dividing out of control. But in the case of cancer, these systems fail. Cancer is usually caused by mutations. Mutations are random errors in genes. Mutations that lead to cancer usually happen to genes that control the cell cycle. Because of the mutations, abnormal cells divide uncontrollably. This often leads to the development of a tumor. A tumor is a mass of abnormal tissue. As a tumor grows, it may harm normal tissues around it. Anything that can cause cancer is called a carcinogen . Carcinogens may be pathogens, chemicals, or radiation.",
+     "situation": "Jason recently learned that he has cancer. After hearing this news, he convinced his wife, Charlotte, to get checked out. After running several tests, the doctors determined Charlotte has no cancer, but she does have high blood pressure. Relieved at this news, Jason was now focused on battling his cancer and fighting as hard as he could to survive.",
+     "question": "Whose cells are dividing more rapidly?",
+     "answers": {
+         "text": ["Jason"]
+     }
+ }
+ ```
+ ### Data Fields
+
+ - `id`: a unique identifier for the example
+ - `background`: the background passage
+ - `situation`: the grounding situation
+ - `question`: the question to answer
+ - `answers`: the answer text, a span from either the situation or the question. The text list always contains a single element.
+
+ Note that the answers for the test set are hidden (and thus represented as an empty list). Predictions for the test set should be submitted to the leaderboard.
+
+ ### Data Splits
+
+ The dataset contains 14k QA pairs over 1.7k paragraphs, split between train (10k QAs), development (1.6k QAs) and a hidden test partition (1.7k QAs).
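The approximate figures above correspond to the exact per-split counts recorded in this repository's `dataset_infos.json`, and they sum to the 14,322 questions reported in the paper:

```python
# Exact per-split example counts, as recorded in this repo's dataset_infos.json.
split_counts = {"train": 10924, "validation": 1688, "test": 1710}

total = sum(split_counts.values())
print(total)  # 14322, the figure quoted in the ROPES paper
```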
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ From the original paper:
+
+ *ROPES challenges reading comprehension models to handle more difficult phenomena: understanding the implications of a passage of text. ROPES is also particularly related to datasets focusing on "multi-hop reasoning", as by construction answering questions in ROPES requires connecting information from multiple parts of a given passage.*
+
+ *We constructed ROPES by first collecting background passages from science textbooks and Wikipedia articles that describe causal relationships. We showed the collected paragraphs to crowd workers and asked them to write situations that involve the relationships found in the background passage, and questions that connect the situation and the background using the causal relationships. The answers are spans from either the situation or the question. The dataset consists of 14,322 questions from various domains, mostly in science and economics.*
+
+ ### Source Data
+
+ From the original paper:
+
+ *We automatically scraped passages from science textbooks and Wikipedia that contained causal connectives, e.g. "causes," "leads to," and keywords that signal qualitative relations, e.g. "increases," "decreases." We then manually filtered out the passages that do not have at least one relation. The passages can be categorized into physical science (49%), life science (45%), economics (5%) and other (1%). In total, we collected over 1,000 background passages.*
+
+ #### Initial Data Collection and Normalization
+
+ From the original paper:
+
+ *We used Amazon Mechanical Turk (AMT) to generate the situations, questions, and answers. The AMT workers were given background passages and asked to write situations that involved the relation(s) in the background passage. The AMT workers then authored questions about the situation that required both the background and the situation to answer. In each human intelligence task (HIT), AMT workers are given 5 background passages to select from and are asked to create a total of 10 questions. To mitigate the potential for easy lexical shortcuts in the dataset, the workers were encouraged via instructions to write questions in minimal pairs, where a very small change in the question results in a different answer.*
+
+ *Most questions are designed to have two sensible answer choices (e.g. "more" vs. "less").*
+
+ To reduce annotator bias, the training and evaluation sets are written by different annotators.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The data is distributed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{Lin2019ReasoningOP,
+     title={Reasoning Over Paragraph Effects in Situations},
+     author={Kevin Lin and Oyvind Tafjord and Peter Clark and Matt Gardner},
+     booktitle={MRQA@EMNLP},
+     year={2019}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"plain_text": {"description": "ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset\nwhich tests a system's ability to apply knowledge from a passage\nof text to a new situation. A system is presented a background\npassage containing a causal or qualitative relation(s) (e.g.,\n\"animal pollinators increase efficiency of fertilization in flowers\"),\na novel situation that uses this background, and questions that require\nreasoning about effects of the relationships in the background\npassage in the background of the situation.\n", "citation": "@inproceedings{Lin2019ReasoningOP,\n title={Reasoning Over Paragraph Effects in Situations},\n author={Kevin Lin and Oyvind Tafjord and Peter Clark and Matt Gardner},\n booktitle={MRQA@EMNLP},\n year={2019}\n}\n", "homepage": "https://allenai.org/data/ropes", "license": "CC BY 4.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "background": {"dtype": "string", "id": null, "_type": "Value"}, "situation": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ropes", "config_name": "plain_text", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12231940, "num_examples": 10924, "dataset_name": "ropes"}, "test": {"name": "test", "num_bytes": 1928532, "num_examples": 1710, "dataset_name": "ropes"}, "validation": {"name": "validation", "num_bytes": 1643498, "num_examples": 1688, "dataset_name": "ropes"}}, "download_checksums": {"https://ropes-dataset.s3-us-west-2.amazonaws.com/train_and_dev/ropes-train-dev-v1.0.tar.gz": {"num_bytes": 3395072, "checksum": "ba26329832f84d8c2660de1120a1d7ff086cdf8580eabeebd721c092020763e2"}, "https://ropes-dataset.s3-us-west-2.amazonaws.com/test/ropes-test-questions-v1.0.tar.gz": {"num_bytes": 121845, "checksum": "a489b9d0c0f2bcf495f6eab089ba4915e7832dc656237b40497acdccb8a45f95"}}, "download_size": 3516917, "post_processing_size": null, "dataset_size": 15803970, "size_in_bytes": 19320887}}
dummy/plain_text/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b1a926ff3cadfec92aaa4bd3a5bea56a0491666c09369a46582e5dab6b4ee8a
+ size 5852
ropes.py ADDED
@@ -0,0 +1,135 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ROPES dataset.
+ Code is heavily inspired by https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{Lin2019ReasoningOP,
+   title={Reasoning Over Paragraph Effects in Situations},
+   author={Kevin Lin and Oyvind Tafjord and Peter Clark and Matt Gardner},
+   booktitle={MRQA@EMNLP},
+   year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset
+ which tests a system's ability to apply knowledge from a passage
+ of text to a new situation. A system is presented a background
+ passage containing a causal or qualitative relation(s) (e.g.,
+ "animal pollinators increase efficiency of fertilization in flowers"),
+ a novel situation that uses this background, and questions that require
+ reasoning about effects of the relationships in the background
+ passage in the context of the situation.
+ """
+
+ _LICENSE = "CC BY 4.0"
+
+ _URLs = {
+     "train+dev": "https://ropes-dataset.s3-us-west-2.amazonaws.com/train_and_dev/ropes-train-dev-v1.0.tar.gz",
+     "test": "https://ropes-dataset.s3-us-west-2.amazonaws.com/test/ropes-test-questions-v1.0.tar.gz",
+ }
+
+
+ class Ropes(datasets.GeneratorBasedBuilder):
+     """ROPES dataset: testing a system's ability
+     to apply knowledge from a passage of text to a new situation."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="plain_text", description="Plain text", version=VERSION),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "background": datasets.Value("string"),
+                     "situation": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "answers": datasets.features.Sequence(
+                         {
+                             "text": datasets.Value("string"),
+                         }
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://allenai.org/data/ropes",
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URLs)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir["train+dev"], "ropes-train-dev-v1.0", "train-v1.0.json"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir["test"], "ropes-test-questions-v1.0", "test-1.0.json"),
+                     "split": "test",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir["train+dev"], "ropes-train-dev-v1.0", "dev-v1.0.json"),
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             ropes = json.load(f)
+             for article in ropes["data"]:
+                 for paragraph in article["paragraphs"]:
+                     background = paragraph["background"].strip()
+                     situation = paragraph["situation"].strip()
+                     for qa in paragraph["qas"]:
+                         question = qa["question"].strip()
+                         id_ = qa["id"]
+                         answers = [] if split == "test" else [answer["text"].strip() for answer in qa["answers"]]
+
+                         yield id_, {
+                             "background": background,
+                             "situation": situation,
+                             "question": question,
+                             "id": id_,
+                             "answers": {
+                                 "text": answers,
+                             },
+                         }
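The flattening performed by `_generate_examples` above can be illustrated without any file I/O. Everything below is a sketch: the record is a made-up minimal example in the SQuAD-style ROPES layout, and `flatten` is our stand-in for the script's inner loop, including the empty-answers behavior on the hidden test split:

```python
# A made-up minimal record in the nested JSON layout that _generate_examples consumes.
ropes = {
    "data": [
        {
            "paragraphs": [
                {
                    "background": "Animal pollinators increase efficiency of fertilization in flowers. ",
                    "situation": "Field A has many bees; field B has none.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "Which field has more efficient fertilization?",
                            "answers": [{"text": "Field A "}],
                        }
                    ],
                }
            ]
        }
    ]
}


def flatten(ropes, split):
    """Mirror of the script's _generate_examples loop, operating in memory."""
    for article in ropes["data"]:
        for paragraph in article["paragraphs"]:
            background = paragraph["background"].strip()
            situation = paragraph["situation"].strip()
            for qa in paragraph["qas"]:
                # Test-set answers are hidden, so the script emits an empty list.
                answers = [] if split == "test" else [a["text"].strip() for a in qa["answers"]]
                yield qa["id"], {
                    "background": background,
                    "situation": situation,
                    "question": qa["question"].strip(),
                    "id": qa["id"],
                    "answers": {"text": answers},
                }


examples = dict(flatten(ropes, split="train"))
print(examples["q1"]["answers"]["text"])  # ['Field A']
```

Note that whitespace is stripped from passages and answers, and that each yielded example repeats the `id` both as the generator key and as a field, exactly as in the script.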