parquet-converter committed
Commit 201ec2a
1 Parent(s): acd4175

Update parquet files

.gitattributes DELETED
@@ -1,51 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,244 +0,0 @@
- ---
- pretty_name: TyDi QA
- annotations_creators:
- - crowdsourced
- language_creators:
- - crowdsourced
- language:
- - ar
- - bn
- - en
- - fi
- - id
- - ja
- - ko
- - ru
- - sw
- - te
- - th
- license:
- - apache-2.0
- multilinguality:
- - multilingual
- size_categories:
- - unknown
- source_datasets:
- - extended|wikipedia
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- paperswithcode_id: tydi-qa
- ---
-
- # Dataset Card for "tydiqa"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 3726.74 MB
- - **Size of the generated dataset:** 5812.92 MB
- - **Total amount of disk used:** 9539.67 MB
-
- ### Dataset Summary
-
- TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
- The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
- expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
- in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
- information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
- don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without
- the use of translation (unlike MLQA and XQuAD).
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### primary_task
-
- - **Size of downloaded dataset files:** 1863.37 MB
- - **Size of the generated dataset:** 5757.59 MB
- - **Total amount of disk used:** 7620.96 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "annotations": {
-         "minimal_answers_end_byte": [-1, -1, -1],
-         "minimal_answers_start_byte": [-1, -1, -1],
-         "passage_answer_candidate_index": [-1, -1, -1],
-         "yes_no_answer": ["NONE", "NONE", "NONE"]
-     },
-     "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
-     "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
-     "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
-     "language": "thai",
-     "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
-     "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
- }
- ```
-
- #### secondary_task
-
- - **Size of downloaded dataset files:** 1863.37 MB
- - **Size of the generated dataset:** 55.34 MB
- - **Total amount of disk used:** 1918.71 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answers": {
-         "answer_start": [394],
-         "text": ["بطولتين"]
-     },
-     "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
-     "id": "arabic-2387335860751143628-1",
-     "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
-     "title": "قائمة نهائيات كأس العالم"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### primary_task
- - `passage_answer_candidates`: a dictionary feature containing:
-   - `plaintext_start_byte`: a `int32` feature.
-   - `plaintext_end_byte`: a `int32` feature.
- - `question_text`: a `string` feature.
- - `document_title`: a `string` feature.
- - `language`: a `string` feature.
- - `annotations`: a dictionary feature containing:
-   - `passage_answer_candidate_index`: a `int32` feature.
-   - `minimal_answers_start_byte`: a `int32` feature.
-   - `minimal_answers_end_byte`: a `int32` feature.
-   - `yes_no_answer`: a `string` feature.
- - `document_plaintext`: a `string` feature.
- - `document_url`: a `string` feature.
-
- #### secondary_task
- - `id`: a `string` feature.
- - `title`: a `string` feature.
- - `context`: a `string` feature.
- - `question`: a `string` feature.
- - `answers`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `answer_start`: a `int32` feature.
-
- ### Data Splits
-
- | name           |  train | validation |
- | -------------- | -----: | ---------: |
- | primary_task   | 166916 |      18670 |
- | secondary_task |  49881 |       5077 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @article{tydiqa,
-   title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
-   author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}
-   year = {2020},
-   journal = {Transactions of the Association for Computational Linguistics}
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
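The dataset card removed above documents two configurations, `primary_task` and `secondary_task`, along with their fields and split sizes. As an illustrative sketch only (it is not part of this commit), loading them through the `datasets` library might look like the following; the `tydiqa` loader name and config names are taken from the card, and their continued availability alongside this Parquet mirror is an assumption.

```python
# Illustrative sketch only, not part of this commit.
# Loader name ("tydiqa") and config names come from the dataset card above;
# whether that loader is still served next to this Parquet conversion is an assumption.
from datasets import load_dataset

primary = load_dataset("tydiqa", "primary_task")      # byte-offset annotations over full documents
secondary = load_dataset("tydiqa", "secondary_task")  # SQuAD-style extractive QA over gold passages

# Split sizes reported in the card: 166916/18670 (primary), 49881/5077 (secondary).
print(primary)
print(secondary["validation"][0]["question"], secondary["validation"][0]["answers"])
```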
data/validation_answerable.csv → copenlu--tydiqa_copenlu/csv-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6d5170fb2c2f26f85bc9a8676366fba442b1c13b1233839988e2b646008b8ee5
- size 14182184
+ oid sha256:b5514b0d1c8d835ad3f42b24cb62a1df520fb269154312521754e00eb5873f0e
+ size 74001075
data/train_answerable.csv → copenlu--tydiqa_copenlu/csv-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8073f9077f36e5a7d4d1da42f99622c175443ab7ddb78c62cfd5a185221ea3e6
- size 130083504
+ oid sha256:ff0d118db80ad5dd118a4d97d46c349cb92ac08c6766c0f6e1ea1da8ba0a1cb2
+ size 8091071
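The two renamed entries above replace the raw CSV splits with Parquet files under `copenlu--tydiqa_copenlu/`; only the Git-LFS pointers (oid and size) change in the diff. A minimal sketch of reading one of the converted files directly is shown below, assuming the Parquet files live in a `copenlu/tydiqa_copenlu` dataset repository on the Hub (the repo id is inferred from the directory prefix and is an assumption).

```python
# Minimal sketch, assuming the Parquet files from this commit are hosted in the
# "copenlu/tydiqa_copenlu" dataset repo (repo id inferred from the
# "copenlu--tydiqa_copenlu/" prefix above) under the file names shown in the diff.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="copenlu/tydiqa_copenlu",
    filename="copenlu--tydiqa_copenlu/csv-train.parquet",
    repo_type="dataset",
)

df = pd.read_parquet(path)  # requires pyarrow or fastparquet
print(df.shape)
print(df.columns.tolist())
```

The same files could also be loaded as a `datasets.Dataset` via `load_dataset("parquet", data_files=...)` if Arrow-backed access is preferred over a pandas DataFrame.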