---
language:
- en
pretty_name: COCO2017
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
task_ids:
- image-captioning
tags:
- coco
- image-captioning
dataset_info:
features:
- name: license
dtype: int64
- name: file_name
dtype: string
- name: coco_url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: date_captured
dtype: string
- name: flickr_url
dtype: string
- name: image_id
dtype: int64
- name: ids
sequence: int64
- name: captions
sequence: string
splits:
- name: train
num_bytes: 64026361
num_examples: 118287
- name: validation
num_bytes: 2684731
num_examples: 5000
download_size: 30170127
dataset_size: 66711092
---
# coco2017
Image-text pairs from [MS COCO2017](https://cocodataset.org/#download).
## Data origin
* Data originates from [cocodataset.org](http://images.cocodataset.org/annotations/annotations_trainval2017.zip)
* `phiyodr/coco2017` uses a dense format: one row corresponds to one image, with all of its captions (and their annotation `ids`) collected in sequences.
* `phiyodr/coco2017-long` uses a long format: one row corresponds to one caption, so there are 5 rows (sometimes more) with the same image details, making it roughly five times as long as `phiyodr/coco2017`.
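The relationship between the two formats can be sketched as follows. The example row and the long-format column names (`sentid`, `sentence`) are illustrative assumptions, not values taken from the actual datasets:

```python
# A made-up dense-format row, shaped like a phiyodr/coco2017 example.
dense_row = {
    "image_id": 1,
    "file_name": "000000000001.jpg",
    "ids": [10, 11, 12, 13, 14],
    "captions": ["a", "b", "c", "d", "e"],
}

def dense_to_long(row):
    """Expand one dense row into one row per caption (long format)."""
    image_cols = {k: v for k, v in row.items() if k not in ("ids", "captions")}
    return [
        {**image_cols, "sentid": i, "sentence": c}
        for i, c in zip(row["ids"], row["captions"])
    ]

long_rows = dense_to_long(dense_row)
print(len(long_rows))  # 5 — one long-format row per caption
```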
## Format
```python
DatasetDict({
train: Dataset({
features: ['license', 'file_name', 'coco_url', 'height', 'width', 'date_captured', 'flickr_url', 'image_id', 'ids', 'captions'],
num_rows: 118287
})
validation: Dataset({
features: ['license', 'file_name', 'coco_url', 'height', 'width', 'date_captured', 'flickr_url', 'image_id', 'ids', 'captions'],
num_rows: 5000
})
})
```
## Usage
* Download image data and unzip
```bash
cd PATH_TO_IMAGE_FOLDER
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
#wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip # zip not needed: everything you need is in load_dataset("phiyodr/coco2017")
unzip train2017.zip
unzip val2017.zip
```
* Load dataset in Python
```python
import os
from datasets import load_dataset
PATH_TO_IMAGE_FOLDER = "COCO2017"
def create_full_path(example):
    """Create the full image path by joining `PATH_TO_IMAGE_FOLDER` and `file_name`."""
    example["image_path"] = os.path.join(PATH_TO_IMAGE_FOLDER, example["file_name"])
    return example
dataset = load_dataset("phiyodr/coco2017")
dataset = dataset.map(create_full_path)
```
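With `image_path` added, each example can be opened as an image. A minimal, self-contained sketch with Pillow (the dummy file below only stands in for a real unzipped COCO image):

```python
import os
import tempfile

from PIL import Image

# Stand-in for the unzipped image folder; a real setup would point
# PATH_TO_IMAGE_FOLDER at the directory containing train2017/val2017.
folder = tempfile.mkdtemp()
file_name = "000000000001.jpg"  # hypothetical COCO-style file name
Image.new("RGB", (640, 480)).save(os.path.join(folder, file_name))

# Mirrors what `create_full_path` adds to each dataset example.
example = {"file_name": file_name,
           "image_path": os.path.join(folder, file_name)}
img = Image.open(example["image_path"])
print(img.size)  # (640, 480)
```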