---
---

# Dataset Card for "wiki_dpr"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/facebookresearch/DPR](https://github.com/facebookresearch/DPR)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 406068.98 MB
- **Size of the generated dataset:** 448718.73 MB
- **Total amount of disk used:** 932739.13 MB

### [Dataset Summary](#dataset-summary)

This is the Wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model.
It contains 21M passages from Wikipedia along with their DPR embeddings.
The Wikipedia articles were split into multiple, disjoint text blocks of 100 words each, which serve as the passages.

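The 100-word splitting described above can be sketched as follows. This is a minimal illustration of the idea, not the actual preprocessing script used to build the dataset:

```python
def split_into_passages(text, words_per_block=100):
    """Split an article into disjoint blocks of at most `words_per_block` words."""
    words = text.split()
    return [
        " ".join(words[i:i + words_per_block])
        for i in range(0, len(words), words_per_block)
    ]

# A toy 250-word "article" yields 3 disjoint blocks (100 + 100 + 50 words).
article = " ".join(f"w{i}" for i in range(250))
blocks = split_into_passages(article)
```

Because the blocks are disjoint, concatenating them recovers the original word sequence exactly.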
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### psgs_w100.multiset.compressed

- **Size of downloaded dataset files:** 67678.16 MB
- **Size of the generated dataset:** 74786.45 MB
- **Total amount of disk used:** 145204.14 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
    "id": "0",
    "text": "his is the text of a dummy passag",
    "title": "Title of the article"
}
```

#### psgs_w100.multiset.exact

- **Size of downloaded dataset files:** 67678.16 MB
- **Size of the generated dataset:** 74786.45 MB
- **Total amount of disk used:** 178700.81 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
    "id": "0",
    "text": "his is the text of a dummy passag",
    "title": "Title of the article"
}
```

#### psgs_w100.multiset.no_index

- **Size of downloaded dataset files:** 67678.16 MB
- **Size of the generated dataset:** 74786.45 MB
- **Total amount of disk used:** 142464.62 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
    "id": "0",
    "text": "his is the text of a dummy passag",
    "title": "Title of the article"
}
```

#### psgs_w100.nq.compressed

- **Size of downloaded dataset files:** 67678.16 MB
- **Size of the generated dataset:** 74786.45 MB
- **Total amount of disk used:** 145204.14 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
    "id": "0",
    "text": "his is the text of a dummy passag",
    "title": "Title of the article"
}
```

#### psgs_w100.nq.exact

- **Size of downloaded dataset files:** 67678.16 MB
- **Size of the generated dataset:** 74786.45 MB
- **Total amount of disk used:** 178700.81 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
    "id": "0",
    "text": "his is the text of a dummy passag",
    "title": "Title of the article"
}
```

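Every configuration yields records with the same four fields. A toy record mirroring the schema above (the values are made up; the real `embeddings` lists are far longer, since DPR's encoders are BERT-base models with 768-dimensional outputs):

```python
# Hypothetical record with the wiki_dpr schema; values are illustrative only.
record = {
    "id": "0",
    "text": "his is the text of a dummy passag",
    "title": "Title of the article",
    "embeddings": [1.0] * 8,  # real vectors are much longer than 8 floats
}

# The embedding is a flat list of floats, ready for inner-product scoring.
norm_sq = sum(x * x for x in record["embeddings"])
```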
### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### psgs_w100.multiset.compressed
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.

#### psgs_w100.multiset.exact
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.

#### psgs_w100.multiset.no_index
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.

#### psgs_w100.nq.compressed
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.

#### psgs_w100.nq.exact
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.

### [Data Splits Sample Size](#data-splits-sample-size)

| name                          |    train |
|-------------------------------|---------:|
| psgs_w100.multiset.compressed | 21015300 |
| psgs_w100.multiset.exact      | 21015300 |
| psgs_w100.multiset.no_index   | 21015300 |
| psgs_w100.nq.compressed       | 21015300 |
| psgs_w100.nq.exact            | 21015300 |

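Whichever configuration is used, retrieval with these embeddings amounts to maximum-inner-product search: score each passage embedding against a question embedding and keep the top hits. A minimal sketch of that scoring with toy 3-dimensional vectors in place of real DPR embeddings (plain Python, no index library):

```python
def top_k(question_emb, passage_embs, k=2):
    """Return indices of the k passages with the highest inner product."""
    scores = [
        sum(q * p for q, p in zip(question_emb, emb))
        for emb in passage_embs
    ]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# Toy passage embeddings; passage 0 matches the question best, passage 2 next.
passages = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0],
]
best = top_k([1.0, 0.0, 0.0], passages, k=2)  # -> [0, 2]
```

At 21M passages this brute-force loop is impractical, which is why the configurations ship with prebuilt indexes over the embeddings (or none, for `no_index`).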
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@misc{karpukhin2020dense,
    title={Dense Passage Retrieval for Open-Domain Question Answering},
    author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
    year={2020},
    eprint={2004.04906},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### [Contributions](#contributions)

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq) for adding this dataset.