---
language:
- en
- th
- ru
- ja
- it
- de
- vi
- zh
- ar
- es
license: cc-by-4.0
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
pretty_name: multilingual-coco
dataset_info:
  features:
  - name: cocoid
    dtype: int64
  - name: filename
    dtype: string
  - name: en
    sequence: string
  - name: th
    sequence: string
  - name: ru
    sequence: string
  - name: jp-stair
    sequence: string
  - name: it
    sequence: string
  - name: de
    sequence: string
  - name: vi
    sequence: string
  - name: cn
    sequence: string
  - name: jp-yj
    sequence: string
  - name: ar
    sequence: string
  - name: es
    sequence: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 13852882321.001
    num_examples: 82783
  - name: val
    num_bytes: 811780220
    num_examples: 5000
  - name: restval
    num_bytes: 5123622277.68
    num_examples: 30504
  - name: test
    num_bytes: 823623386
    num_examples: 5000
  download_size: 20265033594
  dataset_size: 20611908204.681
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: restval
    path: data/restval-*
  - split: test
    path: data/test-*
---
# Multilingual Common Objects in Context (COCO) Dataset
This dataset is a collection of open-source captions for the [COCO](https://cocodataset.org/) dataset in multiple languages.
The splits follow [Andrej Karpathy's split](https://www.kaggle.com/datasets/shtvkumar/karpathy-splits) from the `dataset_coco.json` file. The collection was created for ease of use in training and evaluation pipelines for non-commercial and research purposes. The COCO images are licensed under a Creative Commons Attribution 4.0 License.
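The collection can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch assuming the dataset is loaded by its Hub repository id (written here as `romrawinjp/multilingual-coco`); the field names follow the schema above.
```python
from datasets import load_dataset

# Repository id assumed from this dataset card; adjust if the dataset is hosted elsewhere.
ds = load_dataset("romrawinjp/multilingual-coco", split="val")

sample = ds[0]
print(sample["cocoid"], sample["filename"])
print(sample["en"])   # list of English captions for this image
print(sample["th"])   # list of Thai captions
sample["image"].save("example.jpg")  # decoded PIL image
```
Each language column holds a list of caption strings per image, so caption counts can differ between languages and some lists may be empty.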
# Language Feature Codes and Sources
If you use any part of this dataset, we recommend citing the original source of each language in the collection directly.
## English `en`
The English captions are retrieved from the annotation file of the original [COCO dataset repository](http://images.cocodataset.org/annotations/stuff_annotations_trainval2017.zip).
```
@misc{lin2015microsoftcococommonobjects,
title={Microsoft COCO: Common Objects in Context},
author={Tsung-Yi Lin and Michael Maire and Serge Belongie and Lubomir Bourdev and Ross Girshick and James Hays and Pietro Perona and Deva Ramanan and C. Lawrence Zitnick and Piotr Dollár},
year={2015},
eprint={1405.0312},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/1405.0312},
}
```
## Thai `th`
The Thai captions are part of Romrawin Chumpu’s work at NECTEC. This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107]. <br> The captions were translated from English to Thai using the Google Translate API.
## Russian `ru`
Source: [AlexWortega/ru_COCO: Translated coco dataset with "facebook/wmt19-en-ru" model](https://github.com/AlexWortega/ru_COCO) <br> The captions were translated using the `facebook/wmt19-en-ru` model.
## Japanese STAIR `jp-stair`
Source: [STAIR Captions](https://stair-lab-cit.github.io/STAIR-captions-web/) <br> The captions were written in Japanese by crowdsourced annotators for the COCO images.
```
@InProceedings{Yoshikawa2017,
title = {STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset},
author = {Yoshikawa, Yuya and Shigeto, Yutaro and Takeuchi, Akikazu},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {417--421},
url = {http://aclweb.org/anthology/P17-2066}
}
```
## Japanese YJ `jp-yj`
Source: [yahoojapan/YJCaptions](https://github.com/yahoojapan/YJCaptions) by Yahoo Japan. <br> This version (YJ Captions 26k) provides Japanese captions for around 26k of the COCO images.
## Italian `it`
Source: [crux82/mscoco-it: A large scale dataset for Image Captioning in Italian](https://github.com/crux82/mscoco-it) <br> The captions were obtained through semi-automatic translation from English to Italian.
## German `de`
Source: [Jotschi/coco-karpathy-opus-de · Datasets at Hugging Face](https://huggingface.co/datasets/Jotschi/coco-karpathy-opus-de) <br> The captions were translated using the [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) model.
## Vietnamese `vi`
Source: [dinhanhx/coco-2017-vi · Datasets at Hugging Face](https://huggingface.co/datasets/dinhanhx/coco-2017-vi) <br> The captions were translated from English to Vietnamese by VinAI's translation model.
```
@software{dinhanhx_VisualRoBERTa_2022,
title = {{VisualRoBERTa}},
author = {dinhanhx},
year = 2022,
month = 9,
url = {https://github.com/dinhanhx/VisualRoBERTa}
}
```
## Chinese `cn`
Source: [li-xirong/coco-cn: Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks](https://github.com/li-xirong/coco-cn) <br> Only the human-written captions were selected.
## Arabic `ar`
Source: [canesee-project/Arabic-COCO: MS COCO captions in Arabic](https://github.com/canesee-project/Arabic-COCO) <br> The captions were fully translated with Google's Advanced Cloud Translation API.
## Spanish `es`
Source: [carlosGarciaHe/MS-COCO-ES: MS-COCO-ES is a dataset created from the original MS-COCO dataset](https://github.com/carlosGarciaHe/MS-COCO-ES) <br> The captions were translated into Spanish by human annotators; the subset comprises 20,000 captions for 4,000 images.
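Several of these caption sets cover only a subset of the COCO images (for example, the YJ Japanese and Spanish captions), so a pipeline targeting one language may want to keep only the examples that actually have captions in it. A minimal sketch, assuming the same repository id and field names as above:
```python
from datasets import load_dataset

ds = load_dataset("romrawinjp/multilingual-coco", split="train")

# Keep only examples with at least one Spanish caption.
ds_es = ds.filter(lambda example: len(example["es"]) > 0)
print(f"{len(ds_es)} of {len(ds)} training images have Spanish captions")
```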