Updated the dataset card.

README.md CHANGED
@@ -31,15 +31,14 @@ task_ids:
# Dataset Card for Wino-X

## Table of Contents
-- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
-- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
-- [Data Fields](#data-fields)
-- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
@@ -53,100 +52,164 @@ task_ids:
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
-- [Contributions](#contributions)

## Dataset Description

-- **Homepage:**
-- **Repository:**
-- **Paper:**
-- **Leaderboard:**
-- **Point of Contact:**

# Dataset Card for Wino-X

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Repository:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Paper:** [Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution](https://aclanthology.org/2021.emnlp-main.670/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](https://demelin.github.io)

### Dataset Summary

Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English counterparts, used to examine whether neural machine translation models can perform coreference resolution that requires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across multiple languages.

### Supported Tasks and Leaderboards

- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose (see the sketch after this list).
- commonsense-reasoning: The dataset can also be used to evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An [XLM-based model](https://huggingface.co/xlm-roberta-base) can be used for this purpose.
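
As a rough illustration of the translation and coreference-resolution setup above, the sketch below scores both candidate translations of an *MT-Wino-X* instance with the [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) model and picks the candidate the model considers more likely. This is a minimal sketch, not the evaluation script used in the paper: the average per-token cross-entropy scoring rule is an assumption, and the field names follow the *MT-Wino-X* example shown under "Data Instances" below.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).eval()


def translation_loss(source: str, candidate: str) -> float:
    """Average token-level cross-entropy of `candidate` given `source`."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    return out.loss.item()


def pick_translation(instance: dict) -> int:
    """Return 1 or 2, whichever candidate translation the model prefers."""
    losses = [
        translation_loss(instance["sentence"], instance["translation1"]),
        translation_loss(instance["sentence"], instance["translation2"]),
    ]
    return 1 + losses.index(min(losses))
```

An instance counts as correctly resolved if `pick_translation(instance)` equals `instance["answer"]`.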

### Languages

The dataset (both its MT and LM portions) is available for the following translation pairs: English-German, English-French, and English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.

## Dataset Structure

### Data Instances

The following represents a typical *MT-Wino-X* instance (for the English-German translation pair):

{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"translation1": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.",
"translation2": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.",
"answer": 1,
"pronoun1": "sie",
"pronoun2": "er",
"referent1_en": "vase",
"referent2_en": "bouquet",
"true_translation_referent_of_pronoun1_de": "Vase",
"true_translation_referent_of_pronoun2_de": "Blumenstrauß",
"false_translation_referent_of_pronoun1_de": "Vase",
"false_translation_referent_of_pronoun2_de": "Blumenstrauß"}

The following represents a typical *LM-Wino-X* instance (for the English-French translation pair):

{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"context_en": "The woman looked for a different vase for the bouquet because _ was too small.",
"context_fr": "La femme a cherché un vase différent pour le bouquet car _ était trop petit.",
"option1_en": "the bouquet",
"option2_en": "the vase",
"option1_fr": "le bouquet",
"option2_fr": "le vase",
"answer": 2,
"context_referent_of_option1_fr": "bouquet",
"context_referent_of_option2_fr": "vase"}

### Data Fields

For *MT-Wino-X*:

- "qID": Unique identifier for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "translation1": First translation candidate.
- "translation2": Second translation candidate.
- "answer": ID of the correct translation.
- "pronoun1": Translation of the ambiguous source pronoun in translation1.
- "pronoun2": Translation of the ambiguous source pronoun in translation2.
- "referent1_en": English referent of the translation of the ambiguous source pronoun in translation1.
- "referent2_en": English referent of the translation of the ambiguous source pronoun in translation2.
- "true_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the correct translation.
- "true_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the correct translation.
- "false_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the incorrect translation.
- "false_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the incorrect translation.

For *LM-Wino-X*:

- "qID": Unique identifier for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "context_en": The same English sentence, with 'it' replaced by a gap.
- "context_[TGT-LANG]": Target language translation of the English sentence, with the translation of 'it' replaced by a gap.
- "option1_en": First filler option for the English sentence.
- "option2_en": Second filler option for the English sentence.
- "option1_[TGT-LANG]": First filler option for the target language sentence.
- "option2_[TGT-LANG]": Second filler option for the target language sentence.
- "answer": ID of the correct gap filler.
- "context_referent_of_option1_[TGT-LANG]": English translation of option1_[TGT-LANG].
- "context_referent_of_option2_[TGT-LANG]": English translation of option2_[TGT-LANG].

### Data Splits

*Wino-X* was designed as an evaluation-only benchmark and is therefore intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :)
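
Since the benchmark is meant for zero-shot evaluation, a typical use is to compute accuracy directly over the released instances. Below is a minimal sketch for the *LM-Wino-X* portion using [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), the model suggested under "Supported Tasks". The pseudo-log-likelihood scoring rule and the language-suffix handling are assumptions, not the protocol from the paper.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base").eval()


def pseudo_log_likelihood(sentence: str) -> float:
    """Average log-probability of each token when it is masked out in turn."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    positions = range(1, len(ids) - 1)  # skip the <s> and </s> special tokens
    total = 0.0
    for pos in positions:
        masked = ids.clone().unsqueeze(0)
        masked[0, pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked).logits
        total += logits[0, pos].log_softmax(dim=-1)[ids[pos]].item()
    return total / len(positions)


def pick_filler(instance: dict, lang: str = "fr") -> int:
    """Return 1 or 2, whichever filler makes the target-language context more likely."""
    scores = [
        # The gap in "context_*" fields is marked with "_" in the examples above.
        pseudo_log_likelihood(
            instance[f"context_{lang}"].replace("_", instance[f"option{i}_{lang}"])
        )
        for i in (1, 2)
    ]
    return 1 + scores.index(max(scores))
```

Accuracy is then the fraction of instances for which `pick_filler(instance)` matches `instance["answer"]`.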

## Dataset Creation

### Curation Rationale

Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

### Source Data

#### Initial Data Collection and Normalization

Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

#### Who are the source language producers?

Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

### Annotations

#### Annotation process

Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

#### Who are the annotators?

Annotations were generated automatically and verified by the dataset author / curator for correctness.

### Personal and Sensitive Information

[N/A]

## Considerations for Using the Data

### Social Impact of Dataset

Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

### Discussion of Biases

Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

### Other Known Limitations

Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).

## Additional Information

### Dataset Curators

[Denis Emelin](https://demelin.github.io)

### Licensing Information

MIT

### Citation Information

@inproceedings{Emelin2021WinoXMW,
  title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},
  author={Denis Emelin and Rico Sennrich},
  booktitle={EMNLP},
  year={2021}
}

wino_x.py CHANGED
@@ -13,7 +13,7 @@
# limitations under the License.
""" Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English
counterparts, used to examine whether neural machine translation models can perform coreference resolution that
-requires commonsense knowledge and whether multilingual language models are capable of commonsense reasoning across
+requires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across
multiple languages. """

import csv