
Dataset Card for "COCO Stuff"

Quick Start

Usage

>>> from datasets import load_dataset

>>> dataset = load_dataset('whyen-wang/coco_stuff')
>>> example = dataset['train'][500]
>>> print(example)
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x426>,
 'bboxes': [
  [192.4199981689453, 220.17999267578125,
   129.22999572753906, 148.3800048828125],
  [76.94000244140625, 146.6300048828125,
   104.55000305175781, 109.33000183105469],
  [302.8800048828125, 115.2699966430664,
   99.11000061035156, 119.2699966430664],
  [0.0, 0.800000011920929,
   592.5700073242188, 420.25]],
 'categories': [46, 46, 46, 55],
 'inst.rles': {
  'size': [[426, 640], [426, 640], [426, 640], [426, 640]],
  'counts': [
   'gU`2b0d;...', 'RXP16m<=...', ']Xn34S=4...', 'n:U2o8W2...'
  ]}}

Visualization

>>> import cv2
>>> import numpy as np
>>> from PIL import Image
>>> from pycocotools import mask as maskUtils

>>> def transforms(examples):
        # Decode the run-length-encoded (RLE) masks into binary mask arrays.
        sem_rles = examples.pop('sem.rles')
        annotation = []
        for example_rles in sem_rles:
            rles = [
                {'size': size, 'counts': counts}
                for size, counts in zip(
                    example_rles['size'], example_rles['counts'])
            ]
            # maskUtils.decode stacks the masks along the last axis: (H, W, N).
            annotation.append(maskUtils.decode(rles))
        examples['annotation'] = annotation
        return examples

>>> def visualize(example, colors):
        image = np.array(example['image'])
        categories = example['categories']
        masks = example['annotation']
        n = len(categories)
        for i in range(n):
            c = categories[i]
            color = colors[c]
            # Blend each mask's category color into the image at half opacity.
            image[masks[..., i] == 1] = image[masks[..., i] == 1] // 2 + color // 2
        return image

>>> dataset.set_transform(transforms)

>>> names = dataset['train'].features['categories'].feature.names

>>> # One distinct color per category, generated by sweeping the hue channel.
>>> colors = np.ones((92, 3), np.uint8) * 255
>>> colors[:, 0] = np.linspace(0, 255, 92)
>>> colors = cv2.cvtColor(colors[None], cv2.COLOR_HSV2RGB)[0]

>>> example = dataset['train'][500]
>>> Image.fromarray(visualize(example, colors))

Dataset Summary

COCO is a large-scale object detection, segmentation, and captioning dataset. COCO-Stuff augments the COCO images with pixel-level annotations for "stuff" categories (e.g. grass, sky, wall) in addition to the original "thing" categories.

Supported Tasks and Leaderboards

Image Segmentation

Languages

en

Dataset Structure

Data Instances

An example looks as follows.

{
    "image": PIL.Image(mode="RGB"),
    "categories": [29, 73, 91],
    "sem.rles": {
        "size": [[426, 640], [426, 640], [426, 640]],
        "counts": [
          "S=7T=O1O0000000000...",
          "c1Y3P:10O1O010O100...",
          "n:U2o8W2N1O1O2M2N2..."
        ]
    }
}
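
For many segmentation workflows a single per-pixel label map is more convenient than one binary mask per category. The sketch below shows one way to build it with pycocotools from the "sem.rles" and "categories" fields shown above; the ignore_index value (255 for pixels not covered by any mask) is a choice made here, not part of the dataset.

>>> from pycocotools import mask as maskUtils
>>> import numpy as np

>>> def to_label_map(example, ignore_index=255):
        # Combine the per-category RLE masks into one (H, W) label map.
        # `ignore_index` is a hypothetical choice for unlabeled pixels, and
        # uint8 assumes category indices stay below 255.
        rles = example['sem.rles']
        categories = example['categories']
        height, width = rles['size'][0]
        label_map = np.full((height, width), ignore_index, dtype=np.uint8)
        for category, size, counts in zip(
                categories, rles['size'], rles['counts']):
            mask = maskUtils.decode({'size': size, 'counts': counts})
            label_map[mask == 1] = category
        return label_map

>>> label_map = to_label_map(example)  # an example as shown above, before any set_transform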

Data Fields

[More Information Needed]

Data Splits

name      train     validation
default   118,287   5,000
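
After loading, the split sizes can be checked directly; the counts below are the ones from the table above.

>>> dataset = load_dataset('whyen-wang/coco_stuff')
>>> {split: dataset[split].num_rows for split in dataset}
{'train': 118287, 'validation': 5000}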

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

Creative Commons Attribution 4.0 License

Citation Information

@article{cocodataset,
  author    = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick},
  title     = {Microsoft {COCO:} Common Objects in Context},
  journal   = {CoRR},
  volume    = {abs/1405.0312},
  year      = {2014},
  url       = {http://arxiv.org/abs/1405.0312},
  archivePrefix = {arXiv},
  eprint    = {1405.0312},
  timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Contributions

Thanks to @github-whyen-wang for adding this dataset.
