Commit 7ada745 (1 parent: d4fecb2), committed by system (HF staff)

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1): README.md added (+304 lines)
---
---

# Dataset Card for "blimp"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/alexwarstadt/blimp/tree/master/](https://github.com/alexwarstadt/blimp/tree/master/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 28.21 MB
- **Size of the generated dataset:** 10.92 MB
- **Total amount of disk used:** 39.13 MB

### [Dataset Summary](#dataset-summary)

BLiMP is a challenge set for evaluating what language models (LMs) know about
major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal pairs isolating specific contrasts in syntax,
morphology, or semantics. The data is automatically generated according to
expert-crafted grammars.
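
Each of the 67 configurations can be loaded by name with the `datasets` library. A minimal sketch, using `adjunct_island` as the example configuration:

```python
from datasets import load_dataset

# Each BLiMP configuration ships as a single "train" split of 1000 minimal pairs.
adjunct_island = load_dataset("blimp", "adjunct_island")

example = adjunct_island["train"][0]
print(example["sentence_good"])
print(example["sentence_bad"])
```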

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for five of the dataset's 67 configurations.

### [Data Instances](#data-instances)

#### adjunct_island

- **Size of downloaded dataset files:** 0.34 MB
- **Size of the generated dataset:** 0.16 MB
- **Total amount of disk used:** 0.50 MB

An example from the 'train' split looks as follows:
```
{
    "UID": "tough_vs_raising_1",
    "field": "syntax_semantics",
    "lexically_identical": false,
    "linguistics_term": "control_raising",
    "one_prefix_method": false,
    "pair_id": 2,
    "sentence_bad": "Benjamin's tutor was certain to boast about.",
    "sentence_good": "Benjamin's tutor was easy to boast about.",
    "simple_LM_method": true,
    "two_prefix_method": false
}
```

#### anaphor_gender_agreement

- **Size of downloaded dataset files:** 0.42 MB
- **Size of the generated dataset:** 0.13 MB
- **Total amount of disk used:** 0.54 MB

An example from the 'train' split looks as follows:
```
{
    "UID": "tough_vs_raising_1",
    "field": "syntax_semantics",
    "lexically_identical": false,
    "linguistics_term": "control_raising",
    "one_prefix_method": false,
    "pair_id": 2,
    "sentence_bad": "Benjamin's tutor was certain to boast about.",
    "sentence_good": "Benjamin's tutor was easy to boast about.",
    "simple_LM_method": true,
    "two_prefix_method": false
}
```

#### anaphor_number_agreement

- **Size of downloaded dataset files:** 0.43 MB
- **Size of the generated dataset:** 0.13 MB
- **Total amount of disk used:** 0.56 MB

An example from the 'train' split looks as follows:
```
{
    "UID": "tough_vs_raising_1",
    "field": "syntax_semantics",
    "lexically_identical": false,
    "linguistics_term": "control_raising",
    "one_prefix_method": false,
    "pair_id": 2,
    "sentence_bad": "Benjamin's tutor was certain to boast about.",
    "sentence_good": "Benjamin's tutor was easy to boast about.",
    "simple_LM_method": true,
    "two_prefix_method": false
}
```

#### animate_subject_passive

- **Size of downloaded dataset files:** 0.44 MB
- **Size of the generated dataset:** 0.14 MB
- **Total amount of disk used:** 0.58 MB

An example from the 'train' split looks as follows:
```
{
    "UID": "tough_vs_raising_1",
    "field": "syntax_semantics",
    "lexically_identical": false,
    "linguistics_term": "control_raising",
    "one_prefix_method": false,
    "pair_id": 2,
    "sentence_bad": "Benjamin's tutor was certain to boast about.",
    "sentence_good": "Benjamin's tutor was easy to boast about.",
    "simple_LM_method": true,
    "two_prefix_method": false
}
```

#### animate_subject_trans

- **Size of downloaded dataset files:** 0.41 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.54 MB

An example from the 'train' split looks as follows:
```
{
    "UID": "tough_vs_raising_1",
    "field": "syntax_semantics",
    "lexically_identical": false,
    "linguistics_term": "control_raising",
    "one_prefix_method": false,
    "pair_id": 2,
    "sentence_bad": "Benjamin's tutor was certain to boast about.",
    "sentence_good": "Benjamin's tutor was easy to boast about.",
    "simple_LM_method": true,
    "two_prefix_method": false
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### adjunct_island
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.

#### anaphor_gender_agreement
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.

#### anaphor_number_agreement
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.

#### animate_subject_passive
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.

#### animate_subject_trans
- `sentence_good`: a `string` feature.
- `sentence_bad`: a `string` feature.
- `field`: a `string` feature.
- `linguistics_term`: a `string` feature.
- `UID`: a `string` feature.
- `simple_LM_method`: a `bool` feature.
- `one_prefix_method`: a `bool` feature.
- `two_prefix_method`: a `bool` feature.
- `lexically_identical`: a `bool` feature.
- `pair_id`: an `int32` feature.
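
When `simple_LM_method` is `true`, a pair is meant to be scored by comparing full-sentence probabilities: the LM gets credit when it assigns higher probability to `sentence_good` than to `sentence_bad`. A minimal sketch of that comparison, assuming a GPT-2 model from `transformers` (the model choice and the `sentence_log_prob` helper are illustrative, not part of this card):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability of `sentence` under the LM."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean negative
        # log-likelihood per predicted token; scale back up to a total.
        outputs = model(**inputs, labels=inputs["input_ids"])
    num_predicted = inputs["input_ids"].size(1) - 1
    return -outputs.loss.item() * num_predicted

good = "Benjamin's tutor was easy to boast about."
bad = "Benjamin's tutor was certain to boast about."
# The LM "passes" the minimal pair when it prefers the acceptable sentence.
print(sentence_log_prob(good) > sentence_log_prob(bad))
```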

### [Data Splits Sample Size](#data-splits-sample-size)

| name                     | train |
|--------------------------|------:|
| adjunct_island           |  1000 |
| anaphor_gender_agreement |  1000 |
| anaphor_number_agreement |  1000 |
| animate_subject_passive  |  1000 |
| animate_subject_trans    |  1000 |
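
Since every configuration is a single 1000-pair 'train' split, per-phenomenon accuracy is just the fraction of pairs whose acceptable sentence the model prefers. A sketch combining the two snippets above, reusing the illustrative `sentence_log_prob` helper:

```python
from datasets import load_dataset

def blimp_accuracy(config_name: str) -> float:
    # Fraction of minimal pairs where the LM prefers the acceptable sentence;
    # relies on the illustrative sentence_log_prob helper defined earlier.
    pairs = load_dataset("blimp", config_name)["train"]
    correct = sum(
        sentence_log_prob(ex["sentence_good"]) > sentence_log_prob(ex["sentence_bad"])
        for ex in pairs
    )
    return correct / len(pairs)

print(blimp_accuracy("adjunct_island"))
```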

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{warstadt2019blimp,
  title={BLiMP: The Benchmark of Linguistic Minimal Pairs for English},
  author={Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R},
  journal={arXiv preprint arXiv:1912.00582},
  year={2019}
}
```

### Contributions

Thanks to [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.