Commit e84bd97 (1 parent: 0c8deb3), committed by system (HF staff)

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1):
  1. README.md +204 −0
README.md ADDED
@@ -0,0 +1,204 @@
---
---

# Dataset Card for "tydiqa"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB

### [Dataset Summary](#dataset-summary)

TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the world's
languages. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer but
do not yet know it (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for the two configurations of the dataset: `primary_task` and `secondary_task`.

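As a hedged illustration, here is a minimal sketch of loading both configurations with the 🤗 `datasets` library; the dataset and configuration names are the ones documented on this card:

```python
# Minimal sketch: loading the two TyDi QA configurations described in
# this card with the Hugging Face `datasets` library.
from datasets import load_dataset

primary = load_dataset("tydiqa", "primary_task")
secondary = load_dataset("tydiqa", "secondary_task")

# Each config is a DatasetDict with "train" and "validation" splits.
print(primary)
print(secondary["validation"][0])  # one SQuAD-style example
```
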
### [Data Instances](#data-instances)

#### primary_task

- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB

An example from the 'validation' split looks as follows.
```
This example was too long and was cropped:

{
    "annotations": {
        "minimal_answers_end_byte": [-1, -1, -1],
        "minimal_answers_start_byte": [-1, -1, -1],
        "passage_answer_candidate_index": [-1, -1, -1],
        "yes_no_answer": ["NONE", "NONE", "NONE"]
    },
    "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
    "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
    "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
    "language": "thai",
    "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
    "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```

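The field names suggest that the `minimal_answers_*_byte` values index into the UTF-8 bytes of `document_plaintext`, with `-1` marking annotations for which no minimal answer was found (as in the cropped example above). A hedged sketch of recovering a minimal answer span under that reading, not an authoritative decoder:

```python
# Sketch: decoding a minimal answer from a primary_task example.
# Assumption (from the field names and the -1s in the example above):
# the *_byte offsets index the UTF-8 bytes of `document_plaintext`,
# and -1 means the annotator found no minimal answer.
from datasets import load_dataset

primary = load_dataset("tydiqa", "primary_task")
example = primary["validation"][0]

doc_bytes = example["document_plaintext"].encode("utf-8")
ann = example["annotations"]
for start, end in zip(ann["minimal_answers_start_byte"],
                      ann["minimal_answers_end_byte"]):
    if start == -1:
        print("no minimal answer for this annotation")
    else:
        print(doc_bytes[start:end].decode("utf-8"))
```
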
#### secondary_task

- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 55.34 MB
- **Total amount of disk used:** 1918.71 MB

An example from the 'validation' split looks as follows.
```
This example was too long and was cropped:

{
    "answers": {
        "answer_start": [394],
        "text": ["بطولتين"]
    },
    "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
    "id": "arabic-2387335860751143628-1",
    "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
    "title": "قائمة نهائيات كأس العالم"
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
  - `plaintext_start_byte`: an `int32` feature.
  - `plaintext_end_byte`: an `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
  - `passage_answer_candidate_index`: an `int32` feature.
  - `minimal_answers_start_byte`: an `int32` feature.
  - `minimal_answers_end_byte`: an `int32` feature.
  - `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.

#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.

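Since `secondary_task` mirrors the SQuAD format, `answer_start` is presumably a character offset into `context`, as is usual for that format. A short sketch, under that assumption, verifying that each answer string appears at its recorded offset:

```python
# Sketch: checking that each answer string occurs at its recorded
# character offset in `context` (the usual SQuAD convention, which
# the secondary_task fields mirror).
from datasets import load_dataset

secondary = load_dataset("tydiqa", "secondary_task")
example = secondary["validation"][0]

for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    assert example["context"][start:start + len(text)] == text
    print(repr(text), "found at character offset", start)
```
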
### [Data Splits Sample Size](#data-splits-sample-size)

| name           |  train | validation |
|----------------|-------:|-----------:|
| primary_task   | 166916 |      18670 |
| secondary_task |  49881 |       5077 |

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{tydiqa,
    title   = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
    author  = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
    year    = {2020},
    journal = {Transactions of the Association for Computational Linguistics}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.