parquet-converter committed
Commit: 9b9e011
Parent: e1a96a4

Update parquet files

README.md DELETED
@@ -1,307 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- license:
- - other
- multilinguality:
- - monolingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- task_categories:
- - image-to-text
- task_ids:
- - image-captioning
- paperswithcode_id: conceptual-captions
- pretty_name: Conceptual Captions
- dataset_info:
- - config_name: default
-   features:
-   - name: id
-     dtype: string
-   - name: caption
-     dtype: string
-   - name: url
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 623230370
-     num_examples: 3318333
-   - name: validation
-     num_bytes: 2846024
-     num_examples: 15840
-   download_size: 0
-   dataset_size: 626076394
- - config_name: unlabeled
-   features:
-   - name: image_url
-     dtype: string
-   - name: caption
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 584520156
-     num_examples: 3318333
-   - name: validation
-     num_bytes: 2698726
-     num_examples: 15840
-   download_size: 567211172
-   dataset_size: 587218882
- - config_name: labeled
-   features:
-   - name: image_url
-     dtype: string
-   - name: caption
-     dtype: string
-   - name: labels
-     sequence: string
-   - name: MIDs
-     sequence: string
-   - name: confidence_scores
-     sequence: float64
-   splits:
-   - name: train
-     num_bytes: 1199330856
-     num_examples: 2007090
-   download_size: 1282463277
-   dataset_size: 1199330856
- ---
-
- # Dataset Card for Conceptual Captions
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Dataset Preprocessing](#dataset-preprocessing)
- - [Supported Tasks](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-instances)
- - [Data Splits](#data-instances)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- - **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- - **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- - **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- - **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
-
- ### Dataset Summary
-
- Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
-
- ### Dataset Preprocessing
-
- This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
-
- ```python
- from concurrent.futures import ThreadPoolExecutor
- from functools import partial
- import io
- import urllib
-
- import PIL.Image
-
- from datasets import load_dataset
- from datasets.utils.file_utils import get_datasets_user_agent
-
-
- USER_AGENT = get_datasets_user_agent()
-
-
- def fetch_single_image(image_url, timeout=None, retries=0):
-     for _ in range(retries + 1):
-         try:
-             request = urllib.request.Request(
-                 image_url,
-                 data=None,
-                 headers={"user-agent": USER_AGENT},
-             )
-             with urllib.request.urlopen(request, timeout=timeout) as req:
-                 image = PIL.Image.open(io.BytesIO(req.read()))
-             break
-         except Exception:
-             image = None
-     return image
-
-
- def fetch_images(batch, num_threads, timeout=None, retries=0):
-     fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
-     with ThreadPoolExecutor(max_workers=num_threads) as executor:
-         batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
-     return batch
-
-
- num_threads = 20
- dset = load_dataset("conceptual_captions")
- dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
- ```
-
- ### Supported Tasks and Leaderboards
-
- - `image-captioning`: This dataset can be used to train a model for the Image Captioning task. The leaderboard for this task is available [here](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard). Official submission output captions are scored against the reference captions from the hidden test set using [this](https://github.com/tylin/coco-caption) implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.
-
- ### Languages
-
- All captions are in English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### `unlabeled`
-
- Each instance in this configuration represents a single image with a caption:
-
- ```
- {
-   'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
-   'caption': 'a very typical bus station'
- }
- ```
-
- #### `labeled`
-
- Each instance in this configuration represents a single image with a caption and additional machine-generated image labels and confidence scores:
186
-
187
- ```
188
- {
189
- 'image_url': 'https://thumb1.shutterstock.com/display_pic_with_logo/261388/223876810/stock-vector-christmas-tree-on-a-black-background-vector-223876810.jpg',
190
- 'caption': 'christmas tree on a black background .',
191
- 'labels': ['christmas tree', 'christmas decoration', 'font', 'text', 'graphic design', 'illustration','interior design', 'tree', 'christmas eve', 'ornament', 'fir', 'plant', 'pine', 'pine family', 'graphics'],
192
- 'MIDs': ['/m/025nd', '/m/05fc9mj', '/m/03gq5hm', '/m/07s6nbt', '/m/03c31', '/m/01kr8f', '/m/0h8nzzj', '/m/07j7r', '/m/014r1s', '/m/05ykl4', '/m/016x4z', '/m/05s2s', '/m/09t57', '/m/01tfm0', '/m/021sdg'],
193
- 'confidence_scores': [0.9818305373191833, 0.952756941318512, 0.9227379560470581, 0.8524878621101379, 0.7597672343254089, 0.7493422031402588, 0.7332468628883362, 0.6869218349456787, 0.6552258133888245, 0.6357356309890747, 0.5992692708969116, 0.585474967956543, 0.5222904086112976, 0.5113164782524109, 0.5036579966545105]
194
- }
195
- ```
196
-
197
- ### Data Fields
198
-
199
- #### `unlabeled`
200
-
201
- - `image_url`: Static URL for downloading the image associated with the post.
202
- - `caption`: Textual description of the image.
203
-
204
- #### `labeled`
205
-
206
- - `image_url`: Static URL for downloading the image associated with the post.
207
- - `caption`: Textual description of the image.
208
- - `labels`: A sequence of machine-generated labels obtained using the [Google Cloud Vision API](https://cloud.google.com/vision).
209
- - `MIDs`: A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry.
210
- - `confidence_scores`: A sequence of confidence scores denoting how likely the corresponding labels are present in the image.
-
- ### Data Splits
-
- #### `unlabeled`
-
- The basic version of the dataset is split into Training and Validation splits. The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs.
-
- #### `labeled`
-
- The labeled version of the dataset has a single split. The entire data is contained in the Training split, which is a subset of 2,007,090 image-URL/caption pairs from the Training set of the `unlabeled` config.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- From the paper:
- > In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations, Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M ⟨image, description⟩ pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- From the homepage:
- >For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.
- >
- >To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.
- >
- >We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.
- >
- >In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding around 60% of the candidates that reach this stage.
- >
- >The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.
- >
- >We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles” becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.
- >
- >Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.
-
- #### Who are the source language producers?
-
- Not specified.
-
- ### Annotations
-
- #### Annotation process
-
- Annotations are extracted jointly with the images using the automatic pipeline.
-
- #### Who are the annotators?
-
- Not specified.
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.
-
- ### Licensing Information
-
- The dataset may be freely used for any purpose, although acknowledgement of
- Google LLC ("Google") as the data source would be appreciated. The dataset is
- provided "AS IS" without any warranty, express or implied. Google disclaims all
- liability for any damages, direct or indirect, resulting from the use of the
- dataset.
-
- ### Citation Information
-
- ```bibtex
- @inproceedings{sharma2018conceptual,
-   title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
-   author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
-   booktitle = {Proceedings of ACL},
-   year = {2018},
- }
- ```
-
- ### Contributions
-
- Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) and [@mariosasko](https://github.com/mariosasko) for adding this dataset.
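
As a supplement to the Supported Tasks section of the card above: official scoring uses the linked coco-caption toolkit, and a minimal sketch of computing the primary CIDEr metric is shown below, assuming the pip-installable `pycocoevalcap` port of that toolkit; the image ids and captions are made up for illustration.

```python
from pycocoevalcap.cider.cider import Cider

# Both dicts map an image id to a list of caption strings.
references = {
    "img0": ["a very typical bus station"],
    "img1": ["christmas tree on a black background ."],
}
candidates = {
    "img0": ["a bus station with people waiting"],  # exactly one candidate per image
    "img1": ["christmas tree on a dark background"],
}

scorer = Cider()
corpus_score, per_image_scores = scorer.compute_score(references, candidates)
print(f"CIDEr: {corpus_score:.3f}")
```

The same package also ships ROUGE-L and SPICE scorers with the same `compute_score` interface (SPICE additionally requires a Java runtime).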
conceptual_captions.py DELETED
@@ -1,159 +0,0 @@
- # coding=utf-8
- # Copyright 2020 HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Conceptual Captions dataset."""
-
- import csv
- import textwrap
-
- import datasets
-
-
- _DESCRIPTION = """\
- Google's Conceptual Captions dataset has more than 3 million images, paired with natural-language captions.
- In contrast with the curated style of the MS-COCO images, Conceptual Captions images and their raw descriptions are harvested from the web,
- and therefore represent a wider variety of styles. The raw descriptions are harvested from the Alt-text HTML attribute associated with web images.
- The authors developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness,
- informativeness, fluency, and learnability of the resulting captions.
- """
-
- _HOMEPAGE = "http://data.statmt.org/cc-100/"
-
- _LICENSE = """\
- The dataset may be freely used for any purpose, although acknowledgement of
- Google LLC ("Google") as the data source would be appreciated. The dataset is
- provided "AS IS" without any warranty, express or implied. Google disclaims all
- liability for any damages, direct or indirect, resulting from the use of the
- dataset.
- """
-
- _CITATION = """\
- @inproceedings{sharma2018conceptual,
-   title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
-   author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
-   booktitle = {Proceedings of ACL},
-   year = {2018},
- }
- """
-
- _URLS = {
-     "unlabeled": {
-         "train": "https://storage.googleapis.com/gcc-data/Train/GCC-training.tsv?_ga=2.191230122.-1896153081.1529438250",
-         "validation": "https://storage.googleapis.com/gcc-data/Validation/GCC-1.1.0-Validation.tsv?_ga=2.141047602.-1896153081.1529438250",
-     },
-     "labeled": {
-         "train": "https://storage.googleapis.com/conceptual-captions-v1-1-labels/Image_Labels_Subset_Train_GCC-Labels-training.tsv?_ga=2.234395421.-20118413.1607637118",
-     },
- }
-
- _DESCRIPTIONS = {
-     "unlabeled": textwrap.dedent(
-         """\
-         The basic version of the dataset split into Training, Validation, and Test splits.
-         The Training split consists of 3,318,333 image-URL/caption pairs, with a total number of 51,201 total token types in the captions (i.e., total vocabulary).
-         The average number of tokens per captions is 10.3 (standard deviation of 4.5), while the median is 9.0 tokens per caption.
-         The Validation split consists of 15,840 image-URL/caption pairs, with similar statistics.
-         """
-     ),
-     "labeled": textwrap.dedent(
-         """\
-         A subset of 2,007,090 image-URL/caption pairs from the training set with machine-generated image labels.
-         The image labels are obtained using the Google Cloud Vision API.
-         Each image label has a machine-generated identifier (MID) corresponding to the label's Google Knowledge Graph entry and a confidence score for its presence in the image.
-
-         Note: 2,007,528 is the number of image-URL/caption pairs specified by the authors, but some rows are missing labels, so they are not included.
-         """
-     ),
- }
-
-
- class ConceptualCaptions(datasets.GeneratorBasedBuilder):
-     """Builder for Conceptual Captions dataset."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig("unlabeled", version=VERSION, description=_DESCRIPTIONS["unlabeled"]),
-         datasets.BuilderConfig("labeled", version=VERSION, description=_DESCRIPTIONS["labeled"]),
-     ]
-
-     DEFAULT_CONFIG_NAME = "unlabeled"
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "image_url": datasets.Value("string"),
-                 "caption": datasets.Value("string"),
-             },
-         )
-         if self.config.name == "labeled":
-             features.update(
-                 {
-                     "labels": datasets.Sequence(datasets.Value("string")),
-                     "MIDs": datasets.Sequence(datasets.Value("string")),
-                     "confidence_scores": datasets.Sequence(datasets.Value("float64")),
-                 }
-             )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_data = dl_manager.download(_URLS[self.config.name])
-         splits = [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"annotations_file": downloaded_data["train"]},
-             ),
-         ]
-         if self.config.name == "unlabeled":
-             splits += [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     gen_kwargs={"annotations_file": downloaded_data["validation"]},
-                 ),
-             ]
-         return splits
-
-     def _generate_examples(self, annotations_file):
-         if self.config.name == "unlabeled":
-             with open(annotations_file, encoding="utf-8") as f:
-                 for i, row in enumerate(csv.reader(f, delimiter="\t")):
-                     # Sanity check
-                     assert len(row) == 2
-                     caption, image_url = row
-                     yield i, {
-                         "image_url": image_url,
-                         "caption": caption,
-                     },
-         else:
-             with open(annotations_file, encoding="utf-8") as f:
-                 for i, row in enumerate(csv.reader(f, delimiter="\t")):
-                     caption, image_url, labels, MIDs, confidence_scores = row
-                     if not labels:
-                         continue
-                     yield i, {
-                         "image_url": image_url,
-                         "caption": caption,
-                         "labels": labels.split(","),
-                         "MIDs": MIDs.split(","),
-                         "confidence_scores": [float(x) for x in confidence_scores.split(",")],
-                     },
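
The script above is what `load_dataset("conceptual_captions", ...)` executed before this parquet conversion; the field names below come from its `_info` definition. A minimal usage sketch, assuming the dataset is still loadable from the Hub under that name:

```python
from datasets import load_dataset

# "labeled" has only a train split; "unlabeled" (the default config) also has validation.
labeled = load_dataset("conceptual_captions", "labeled", split="train")

example = labeled[0]
print(example["image_url"])
print(example["caption"])
# labels, MIDs and confidence_scores are parallel sequences.
for label, mid, score in zip(example["labels"], example["MIDs"], example["confidence_scores"]):
    print(f"{label} ({mid}): {score:.3f}")
```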
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "Image captioning dataset\nThe resulting dataset (version 1.1) has been split into Training, Validation, and Test splits. The Training split consists of 3,318,333 image-URL/caption pairs, with a total number of 51,201 total token types in the captions (i.e., total vocabulary). The average number of tokens per captions is 10.3 (standard deviation of 4.5), while the median is 9.0 tokens per caption. The Validation split consists of 15,840 image-URL/caption pairs, with similar statistics.\n", "citation": "@inproceedings{sharma-etal-2018-conceptual,\n title = \"Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning\",\n author = \"Sharma, Piyush and\n Ding, Nan and\n Goodman, Sebastian and\n Soricut, Radu\",\n booktitle = \"Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n month = jul,\n year = \"2018\",\n address = \"Melbourne, Australia\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/P18-1238\",\n doi = \"10.18653/v1/P18-1238\",\n pages = \"2556--2565\",\n abstract = \"We present a new dataset of image caption annotations, Conceptual Captions, which contains an order of magnitude more images than the MS-COCO dataset (Lin et al., 2014) and represents a wider variety of both images and image caption styles. We achieve this by extracting and filtering image caption annotations from billions of webpages. We also present quantitative evaluations of a number of image captioning models and show that a model architecture based on Inception-ResNetv2 (Szegedy et al., 2016) for image-feature extraction and Transformer (Vaswani et al., 2017) for sequence modeling achieves the best performance when trained on the Conceptual Captions dataset.\",\n}\n", "homepage": "http://data.statmt.org/cc-100/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "caption": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conceptual_captions", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 623230370, "num_examples": 3318333, "dataset_name": "conceptual_captions"}, "validation": {"name": "validation", "num_bytes": 2846024, "num_examples": 15840, "dataset_name": "conceptual_captions"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 626076394, "size_in_bytes": 626076394}, "unlabeled": {"description": "Google's Conceptual Captions dataset has more than 3 million images, paired with natural-language captions.\nIn contrast with the curated style of the MS-COCO images, Conceptual Captions images and their raw descriptions are harvested from the web,\nand therefore represent a wider variety of styles. 
The raw descriptions are harvested from the Alt-text HTML attribute associated with web images.\nThe authors developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness,\ninformativeness, fluency, and learnability of the resulting captions.\n", "citation": "@inproceedings{sharma2018conceptual,\n title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},\n author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},\n booktitle = {Proceedings of ACL},\n year = {2018},\n}\n", "homepage": "http://data.statmt.org/cc-100/", "license": "The dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.\n", "features": {"image_url": {"dtype": "string", "id": null, "_type": "Value"}, "caption": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conceptual_captions", "config_name": "unlabeled", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 584520156, "num_examples": 3318333, "dataset_name": "conceptual_captions"}, "validation": {"name": "validation", "num_bytes": 2698726, "num_examples": 15840, "dataset_name": "conceptual_captions"}}, "download_checksums": {"https://storage.googleapis.com/gcc-data/Train/GCC-training.tsv?_ga=2.191230122.-1896153081.1529438250": {"num_bytes": 564607502, "checksum": "eab84e5ebc713a41a6b1f6ae6fa3d6617821a13b03fe24e16004cc4aac189635"}, "https://storage.googleapis.com/gcc-data/Validation/GCC-1.1.0-Validation.tsv?_ga=2.141047602.-1896153081.1529438250": {"num_bytes": 2603670, "checksum": "528a0c939ec2ad8d1740bd3f459a51e9fe67643050e29f68fabb6da3f8ac985d"}}, "download_size": 567211172, "post_processing_size": null, "dataset_size": 587218882, "size_in_bytes": 1154430054}, "labeled": {"description": "Google's Conceptual Captions dataset has more than 3 million images, paired with natural-language captions.\nIn contrast with the curated style of the MS-COCO images, Conceptual Captions images and their raw descriptions are harvested from the web,\nand therefore represent a wider variety of styles. The raw descriptions are harvested from the Alt-text HTML attribute associated with web images.\nThe authors developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness,\ninformativeness, fluency, and learnability of the resulting captions.\n", "citation": "@inproceedings{sharma2018conceptual,\n title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},\n author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},\n booktitle = {Proceedings of ACL},\n year = {2018},\n}\n", "homepage": "http://data.statmt.org/cc-100/", "license": "The dataset may be freely used for any purpose, although acknowledgement of\nGoogle LLC (\"Google\") as the data source would be appreciated. The dataset is\nprovided \"AS IS\" without any warranty, express or implied. 
Google disclaims all\nliability for any damages, direct or indirect, resulting from the use of the\ndataset.\n", "features": {"image_url": {"dtype": "string", "id": null, "_type": "Value"}, "caption": {"dtype": "string", "id": null, "_type": "Value"}, "labels": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "MIDs": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "confidence_scores": {"feature": {"dtype": "float64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "conceptual_captions", "config_name": "labeled", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 1199330856, "num_examples": 2007090, "dataset_name": "conceptual_captions"}}, "download_checksums": {"https://storage.googleapis.com/conceptual-captions-v1-1-labels/Image_Labels_Subset_Train_GCC-Labels-training.tsv?_ga=2.234395421.-20118413.1607637118": {"num_bytes": 1282463277, "checksum": "d63f475306f376e4df2d365003f321468032278cd241d4c9eefc3c3e232baa38"}}, "download_size": 1282463277, "post_processing_size": null, "dataset_size": 1199330856, "size_in_bytes": 2481794133}}
labeled/conceptual_captions-train-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db3ebb6e95496fbf7b6d292e4831e3ef600e1de3e9a9fbe096fa81ec37896453
+ size 222195794
labeled/conceptual_captions-train-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32651126337566d9424d303a1b5ff3fce5e73a901ea541cd4a5ea58a898b6949
+ size 222152246
labeled/conceptual_captions-train-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4fabeb3c40e47c0fb24e38e5b72479a7702a223067e0d869bc62cc74ce5d019
+ size 88385898
unlabeled/conceptual_captions-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fdd5033b0152bd1d54a9828cd8636e77935a438029abf23177b03530f7bb755
+ size 319561397
unlabeled/conceptual_captions-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd60b90b7ab71c63ae6c7a95d454e4e9b3fda31af94008e244a08c811a642c96
+ size 53937857
unlabeled/conceptual_captions-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81994b99b78888e0f991c4bb00ee9f3e4c51f1bc35bfb6950f3dd8d7ddef2562
+ size 1774068
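
The added files are Git LFS pointers to the converted parquet shards. A minimal sketch for reading them directly, assuming a local checkout of this repository with the LFS objects pulled; the glob pattern matches the shard paths added above:

```python
import pandas as pd
from datasets import load_dataset

# Shard layout follows the paths added in this commit.
labeled = load_dataset(
    "parquet",
    data_files={"train": "labeled/conceptual_captions-train-*.parquet"},
    split="train",
)
print(labeled)

# A single shard can also be read directly, e.g. the unlabeled validation split:
validation = pd.read_parquet("unlabeled/conceptual_captions-validation.parquet")
print(validation.head())
```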