thomasw21 committed on
Commit ff240c8
1 Parent(s): 3c05b3b
Files changed (4)
  1. README.md +344 -1
  2. dataset_infos.json +1 -0
  3. dummy/0.0.0/dummy_data.zip +3 -0
  4. wit.py +97 -0
README.md CHANGED
@@ -1,3 +1,346 @@
  ---
- license: cc-by-sa-3.0
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - found
+ languages:
+ - af
+ - ar
+ - ast
+ - azb
+ - be
+ - bg
+ - bn
+ - br
+ - ca
+ - cs
+ - cy
+ - da
+ - de
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fr
+ - fy
+ - ga
+ - gl
+ - hr
+ - hu
+ - hy
+ - id
+ - it
+ - iw
+ - ja
+ - ka
+ - ko
+ - la
+ - lt
+ - lv
+ - mk
+ - ml
+ - ms
+ - nl
+ - nn
+ - 'no'
+ - pl
+ - pt
+ - ro
+ - ru
+ - sk
+ - sl
+ - sr
+ - sv
+ - th
+ - tr
+ - uk
+ - ur
+ - vi
+ - vo
+ - zh
+ licenses:
+ - cc-by-sa-3.0
+ multilinguality:
+ - multilingual
+ paperswithcode_id: wit
+ pretty_name: Wikipedia-based Image Text
+ size_categories:
+ - 10M<n<100M
+ source_datasets:
+ - original
+ - extended|wikipedia
+ task_categories:
+ - text-retrieval
+ - image-to-text
+ task_ids:
+ - text-retrieval-other-text-image-retrieval
+ - image-captioning
  ---
+
+ # Dataset Card for WIT
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Dataset Preprocessing](#dataset-preprocessing)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
+ - **Repository:** [WIT repository](https://github.com/google-research-datasets/wit)
+ - **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
+ - **Leaderboard:** [WIT leaderboard](https://www.kaggle.com/c/wikipedia-image-caption)
+ - **Point of Contact:** [WIT e-mail](mailto:wit-dataset@google.com)
+
+ ### Dataset Summary
+
+ Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
+
+ A few unique advantages of WIT:
+
+ * The largest multimodal dataset (at the time of this writing) by the number of image-text examples.
+ * Massively multilingual (the first of its kind), with coverage for 100+ languages.
+ * A diverse collection of concepts and real-world entities.
+ * Brings forth challenging real-world test sets.
+
+ ### Dataset Preprocessing
+
+ This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
+
+ ```python
+ from concurrent.futures import ThreadPoolExecutor
+ from functools import partial
+ import io
+ import urllib
+
+ import PIL.Image
+
+ from datasets import load_dataset
+ from datasets.utils.file_utils import get_datasets_user_agent
+
+
+ def fetch_single_image(image_url, timeout=None, retries=0):
+     for _ in range(retries + 1):
+         try:
+             request = urllib.request.Request(
+                 image_url,
+                 data=None,
+                 headers={"user-agent": get_datasets_user_agent()},
+             )
+             with urllib.request.urlopen(request, timeout=timeout) as req:
+                 image = PIL.Image.open(io.BytesIO(req.read()))
+             break
+         except Exception:
+             image = None
+     return image
+
+
+ def fetch_images(batch, num_threads, timeout=None, retries=0):
+     fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
+     with ThreadPoolExecutor(max_workers=num_threads) as executor:
+         batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
+     return batch
+
+
+ num_threads = 20
+ dset = load_dataset("wit")
+ dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
+ ```
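+
+ Some downloads may fail (dead links, timeouts), in which case the snippet above leaves `None` in the `image` column. A minimal follow-up sketch (assuming `dset` was produced by the `map` call above) to keep only examples whose image was fetched successfully:
+
+ ```python
+ # Drop rows whose image could not be downloaded.
+ dset = dset.filter(lambda example: example["image"] is not None)
+ ```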
+
+ ### Supported Tasks and Leaderboards
+
+ - `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
+
+ - `text-retrieval`: The goal in this task is to build a model that retrieves the text closest to an image.
+
+ In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption.
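+
+ Since any of the three caption fields may be missing for a given example, one simple way to build a single target text is to fall back through them in order of preference. This is only a sketch (the `get_caption` helper is not part of the dataset or its loading script):
+
+ ```python
+ def get_caption(example):
+     # Prefer the visible caption, then the attribution text, then the alt text.
+     for key in (
+         "caption_reference_description",
+         "caption_attribution_description",
+         "caption_alt_text_description",
+     ):
+         if example[key]:
+             return example[key]
+     return None
+
+ # Add a `caption` column; examples with no usable text get None.
+ dset = dset.map(lambda example: {"caption": get_caption(example)})
+ ```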
+
+ ### Languages
+
+ The dataset contains examples from 108 Wikipedia languages, with the following distribution of image-text pairs and unique images per language:
+
+ Image-Text | # Lang | Uniq. Images | # Lang
+ ------------ | ------ | ------------- | ------
+ total > 1M | 9 | images > 1M | 6
+ total > 500K | 10 | images > 500K | 12
+ total > 100K | 36 | images > 100K | 35
+ total > 50K | 15 | images > 50K | 17
+ total > 14K | 38 | images > 13K | 38
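+
+ If you only need one language, you can restrict the dataset with a filter on the `language` field. A minimal sketch (using French, `fr`, purely as an example):
+
+ ```python
+ from datasets import load_dataset
+
+ wit = load_dataset("wit", split="train")
+ # Keep only the examples whose page language is French.
+ wit_fr = wit.filter(lambda example: example["language"] == "fr")
+ ```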
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+   'language': 'en',
+   'page_url': 'https://en.wikipedia.org/wiki/Oxydactylus',
+   'image_url': 'https://upload.wikimedia.org/wikipedia/commons/5/5f/Oxydactylus_longipes_fm.jpg',
+   'page_title': 'Oxydactylus',
+   'section_title': None,
+   'hierarchical_section_title': 'Oxydactylus',
+   'caption_reference_description': None,
+   'caption_attribution_description': 'English: Mounted skeleton of Oxydactylus longipes in the Field Museum of Natural History.',
+   'caption_alt_text_description': None,
+   'mime_type': 'image/jpeg',
+   'original_height': 3564,
+   'original_width': 2748,
+   'is_main_image': True,
+   'attribution_passes_lang_id': True,
+   'page_changed_recently': True,
+   'context_page_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene, existing for approximately 14 million years. The name is from the Ancient Greek οξύς and δάκτυλος.\nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.',
+   'context_section_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene (28.4–13.7 mya), existing for approximately 14 million years. The name is from the Ancient Greek οξύς (oxys, "sharp")and δάκτυλος (daktylos, "finger").\n \nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.'
+ }
+ ```
+
+ ### Data Fields
+
+ - `language`: Language code of the Wikipedia page
+ - `page_url`: URL of the Wikipedia page
+ - `image_url`: URL of the image
+ - `page_title`: Wikipedia page's title
+ - `section_title`: Section's title
+ - `hierarchical_section_title`: Hierarchical section's title
+ - `caption_reference_description`: This is the caption that is visible on the wiki page directly below the image.
+ - `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different from that of the original page article.
+ - `caption_alt_text_description`: This is the "alt" text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers.
+ - `mime_type`: MIME type of the image.
+ - `original_height`: Image height
+ - `original_width`: Image width
+ - `is_main_image`: Flag indicating whether the image is the first image of the page, usually displayed at the top-right of the page in web browsers.
+ - `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description).
+ - `page_changed_recently`: [More Information Needed]
+ - `context_page_description`: Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
+ - `context_section_description`: Text within the image's section.
+
+ <p align='center'>
+   <img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt="WIT annotation example" /> <br />
+   <b>Figure: WIT annotation example.</b>
+ </p>
+
+ Details on the field content can be found directly in the [paper, figure 5 and table 12](https://arxiv.org/abs/2103.01913).
+
+ ### Data Splits
+
+ All data is held in the `train` split, with a total of 37,046,386 rows.
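+
+ Note that the download is roughly 27 GB of compressed TSV shards (about 70 GB of data once loaded). If you only need to iterate over examples, streaming avoids materializing the split first; a minimal sketch, assuming your version of the `datasets` library supports streaming for this loading script:
+
+ ```python
+ from datasets import load_dataset
+
+ # Iterate over examples without downloading the full dataset up front.
+ wit_stream = load_dataset("wit", split="train", streaming=True)
+ for example in wit_stream:
+     print(example["page_title"], example["image_url"])
+     break
+ ```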
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ From the [repository](https://github.com/google-research-datasets/wit#motivation):
+
+ > Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.
+ >
+ > To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.
+ >
+ > The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
+
+ > We started with all Wikipedia content pages (i.e., ignoring other pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.
+
+ #### Who are the source language producers?
+
+ Text was extracted from Wikipedia.
+
+ ### Annotations
+
+ #### Annotation process
+
+ WIT was constructed using an automatic process. However, it was human-validated.
+
+ From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
+
+ > To further verify the quality of the WIT dataset we performed a study using (crowd-sourced) human annotators. As seen in Fig. 3, we asked raters to answer 3 questions. Given an image and the page title, raters first evaluate the quality of the attribution description and reference description in the first two questions (order randomized). The third question understands the contextual quality of these text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes the image, "Maybe" if it is sufficiently explanatory and "No" if it is irrelevant or the image is inappropriate.
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
+
+ > Lastly we found that certain image-text pairs occurred very frequently. These were often generic images that did not have much to do with the main article page. Common examples included flags, logos, maps, insignia and such. To prevent biasing the data, we heavily under-sampled all such images.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```bibtex
+ @article{srinivasan2021wit,
+     title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
+     author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
+     journal={arXiv preprint arXiv:2103.01913},
+     year={2021}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@thomasw21](https://github.com/thomasw21), [@nateraw](https://github.com/nateraw) and [@hassiahk](https://github.com/hassiahk) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset.\nWIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages.\nIts size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n", "citation": "@article{srinivasan2021wit,\n title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},\n author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},\n journal={arXiv preprint arXiv:2103.01913},\n year={2021}\n}\n", "homepage": "https://github.com/google-research-datasets/wit", "license": "Data is available under the Creative Commons Attribution-ShareAlike 3.0 Unported license.", "features": {"language": {"dtype": "string", "id": null, "_type": "Value"}, "page_url": {"dtype": "string", "id": null, "_type": "Value"}, "image_url": {"dtype": "string", "id": null, "_type": "Value"}, "page_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "hierarchical_section_title": {"dtype": "string", "id": null, "_type": "Value"}, "caption_reference_description": {"dtype": "string", "id": null, "_type": "Value"}, "caption_attribution_description": {"dtype": "string", "id": null, "_type": "Value"}, "caption_alt_text_description": {"dtype": "string", "id": null, "_type": "Value"}, "mime_type": {"dtype": "string", "id": null, "_type": "Value"}, "original_height": {"dtype": "int32", "id": null, "_type": "Value"}, "original_width": {"dtype": "int32", "id": null, "_type": "Value"}, "is_main_image": {"dtype": "bool", "id": null, "_type": "Value"}, "attribution_passes_lang_id": {"dtype": "bool", "id": null, "_type": "Value"}, "page_changed_recently": {"dtype": "bool", "id": null, "_type": "Value"}, "context_page_description": {"dtype": "string", "id": null, "_type": "Value"}, "context_section_description": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wit", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 69619778881, "num_examples": 37046386, "dataset_name": "wit"}}, "download_checksums": {"https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00000-of-00010.tsv.gz": {"num_bytes": 2672819495, "checksum": "1fdd379b55e559fa6d0884aa3c57066bb1f206b183b5b4ce6a8128f486f2e8b3"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00001-of-00010.tsv.gz": {"num_bytes": 2667931762, "checksum": "2fb22ceab0cd33168367fd6d268c8d803982cfe924b5d01cdf43457c32591f27"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00002-of-00010.tsv.gz": {"num_bytes": 2669251466, "checksum": "316fd4471585df14c33425a199e1fbb843ea8ae1f42b2800ddea1d959d403dcd"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00003-of-00010.tsv.gz": {"num_bytes": 2670373763, "checksum": "fdf2fb4e667de19e9dca276257322687fff597ce43d91c0ad456b5da32e3a028"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00004-of-00010.tsv.gz": {"num_bytes": 2668172723, "checksum": "16033a528ed0cabec9571f4a41c5dd039fc4c0ef84ee8e62a802ba760c79c170"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00005-of-00010.tsv.gz": {"num_bytes": 2673104331, "checksum": 
"a749c620fae1773d5818324c717c33ea23ce1c286daa34dcea290a056f0934ba"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00006-of-00010.tsv.gz": {"num_bytes": 2670156092, "checksum": "90e00f3f82afd21e1322ebd969958960238abaeb9c5f00cb3f053ce7e1c5e32c"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00007-of-00010.tsv.gz": {"num_bytes": 2669891774, "checksum": "b3728292b163f98858ff4c1f9f619259376ed063f9f36f0bdd66169982c40187"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00008-of-00010.tsv.gz": {"num_bytes": 2669091199, "checksum": "6104064c2981696c2a91f1aa35d952aae488ea07c2263cdc0c66b202cdb43170"}, "https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-00009-of-00010.tsv.gz": {"num_bytes": 2670659115, "checksum": "3388614d12905c9a1ddb4c27445899a1f73ed75d12eaa36dd112e481330ecfaa"}}, "download_size": 26701451720, "post_processing_size": null, "dataset_size": 69619778881, "size_in_bytes": 96321230601}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a32bc09f4e331daafad101808ef218731c7990f23e81b84efd31069d27ee3e3
+ size 931
wit.py ADDED
@@ -0,0 +1,97 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset"""
+ import csv
+
+ import datasets
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @article{srinivasan2021wit,
+     title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
+     author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
+     journal={arXiv preprint arXiv:2103.01913},
+     year={2021}
+ }
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset.
+ WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages.
+ Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
+ """
+
+ _HOMEPAGE = "https://github.com/google-research-datasets/wit"
+
+ _LICENSE = "Data is available under the Creative Commons Attribution-ShareAlike 3.0 Unported license."
+
+ _URLs = [f"https://storage.googleapis.com/gresearch/wit/wit_v1.train.all-{i:05}-of-00010.tsv.gz" for i in range(0, 10)]
+
+ _FEATURES = datasets.Features(
+     {
+         "language": datasets.Value("string"),
+         "page_url": datasets.Value("string"),
+         "image_url": datasets.Value("string"),
+         "page_title": datasets.Value("string"),
+         "section_title": datasets.Value("string"),
+         "hierarchical_section_title": datasets.Value("string"),
+         "caption_reference_description": datasets.Value("string"),
+         "caption_attribution_description": datasets.Value("string"),
+         "caption_alt_text_description": datasets.Value("string"),
+         "mime_type": datasets.Value("string"),
+         "original_height": datasets.Value("int32"),
+         "original_width": datasets.Value("int32"),
+         "is_main_image": datasets.Value("bool"),
+         "attribution_passes_lang_id": datasets.Value("bool"),
+         "page_changed_recently": datasets.Value("bool"),
+         "context_page_description": datasets.Value("string"),
+         "context_section_description": datasets.Value("string"),
+     }
+ )
+
+
+ class WIT(datasets.GeneratorBasedBuilder):
+     """Builder for WIT."""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=_FEATURES,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         files = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "files": files,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, files):
+         idx = 0
+         for file in files:
+             with open(file, "r", encoding="utf-8") as f:
+                 examples = csv.DictReader(f, delimiter="\t")
+                 for example in examples:
+                     yield idx, {k: v if v != "" else None for k, v in example.items()}
+                     idx += 1