system HF staff committed on
Commit
fd15583
1 Parent(s): 4940254

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +223 -0
README.md ADDED
@@ -0,0 +1,223 @@
---
---

# Dataset Card for "ted_hrlr"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/neulab/word-embeddings-for-nmt](https://github.com/neulab/word-embeddings-for-nmt)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1749.12 MB
- **Size of the generated dataset:** 268.61 MB
- **Total amount of disk used:** 2017.73 MB

### [Dataset Summary](#dataset-summary)

Datasets derived from TED talk transcripts for comparing similar language pairs,
where one is high-resource and the other is low-resource.
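
As a quick orientation, here is a minimal sketch of loading one configuration with the `datasets` library; the configuration names (`az_to_en`, `aztr_to_en`, `be_to_en`, `beru_to_en`, `es_to_pt`) are the ones documented below.

```python
from datasets import load_dataset

# Load the Azerbaijani-to-English configuration; the other
# configurations listed below work the same way.
dataset = load_dataset("ted_hrlr", "az_to_en")

# Each row holds a sentence pair under a single "translation" key.
print(dataset["train"][0]["translation"])
```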

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### az_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 1.46 MB
- **Total amount of disk used:** 126.40 MB

An example of 'train' looks as follows.
```
{
    "translation": {
        "az": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```

#### aztr_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 38.28 MB
- **Total amount of disk used:** 163.22 MB

An example of 'train' looks as follows.
```
{
    "translation": {
        "az_tr": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```

#### be_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 1.36 MB
- **Total amount of disk used:** 126.29 MB

An example of 'train' looks as follows.
```
{
    "translation": {
        "be": "zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .",
        "en": "please raise your hand if something applies to you ."
    }
}
```

#### beru_to_en

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 57.41 MB
- **Total amount of disk used:** 182.35 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "translation": "{\"be_ru\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"en\": \"when i was..."
}
```

#### es_to_pt

- **Size of downloaded dataset files:** 124.94 MB
- **Size of the generated dataset:** 8.71 MB
- **Total amount of disk used:** 133.65 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "translation": "{\"es\": \"11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .\", \"pt\": \"when i was 11..."
}
```
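
The last two examples above were cropped for display; to inspect full instances yourself, a short sketch using the same loading pattern as before:

```python
from datasets import load_dataset

dataset = load_dataset("ted_hrlr", "beru_to_en")

# Print a few uncropped validation instances; each row is a dict
# mapping language codes ("be_ru", "en") to sentence strings.
for example in dataset["validation"].select(range(3)):
    print(example["translation"])
```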

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### az_to_en
- `translation`: a multilingual `string` variable, with possible languages including `az`, `en`.

#### aztr_to_en
- `translation`: a multilingual `string` variable, with possible languages including `az_tr`, `en`.

#### be_to_en
- `translation`: a multilingual `string` variable, with possible languages including `be`, `en`.

#### beru_to_en
- `translation`: a multilingual `string` variable, with possible languages including `be_ru`, `en`.

#### es_to_pt
- `translation`: a multilingual `string` variable, with possible languages including `es`, `pt`.
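
In terms of the library's feature types, `translation` is a `Translation` feature, i.e. a dict keyed by language code. A minimal sketch of inspecting and accessing it:

```python
from datasets import load_dataset

dataset = load_dataset("ted_hrlr", "az_to_en")

# The column's feature type records which language codes are present,
# e.g. Translation(languages=['az', 'en']).
print(dataset["train"].features["translation"])

# Access one side of a sentence pair by language code.
print(dataset["train"][0]["translation"]["en"])
```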

### [Data Splits Sample Size](#data-splits-sample-size)

| name       |  train | validation | test |
|------------|-------:|-----------:|-----:|
| az_to_en   |   5947 |        672 |  904 |
| aztr_to_en | 188397 |        672 |  904 |
| be_to_en   |   4510 |        249 |  665 |
| beru_to_en | 212615 |        249 |  665 |
| es_to_pt   |  44939 |       1017 | 1764 |
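
These counts can be checked programmatically; a small sketch, assuming the same loading pattern as above:

```python
from datasets import load_dataset

dataset = load_dataset("ted_hrlr", "es_to_pt")

# Expected from the table above: train 44939, validation 1017, test 1764.
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```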

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{Ye2018WordEmbeddings,
  author    = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
  title     = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
  booktitle = {HLT-NAACL},
  year      = {2018},
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.