Tasks: Visual Question Answering (visual-question-answering)
Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: extended|other-guesswhat
Dataset Card for "compguesswhat"
Dataset Summary
CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, with a particular focus on attribute grounding. Use this dataset if you want the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: https://compguesswhat.github.io
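The dataset can be loaded with the Hugging Face datasets library. A minimal sketch, assuming a standard `datasets` installation and the two configuration names listed in this card; `load_compguesswhat` is a hypothetical helper, not part of the library:

```python
# Configuration names as listed in this card.
CONFIGS = ("compguesswhat-original", "compguesswhat-zero_shot")

def load_compguesswhat(config="compguesswhat-original"):
    """Download and load one CompGuessWhat?! configuration.

    Note: triggers a download of roughly 100 MB per configuration.
    """
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config}")
    from datasets import load_dataset  # lazy import; pip install datasets
    return load_dataset("compguesswhat", config)

# games = load_compguesswhat()  # uncomment to download the original games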
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
compguesswhat-original
- Size of downloaded dataset files: 107.21 MB
- Size of the generated dataset: 174.37 MB
- Total amount of disk used: 281.57 MB
An example of 'validation' looks as follows.
This example was too long and was cropped:
{
"id": 2424,
"image": "{\"coco_url\": \"http://mscoco.org/images/270512\", \"file_name\": \"COCO_train2014_000000270512.jpg\", \"flickr_url\": \"http://farm6.stat...",
"objects": "{\"area\": [1723.5133056640625, 4838.5361328125, 287.44476318359375, 44918.7109375, 3688.09375, 522.1935424804688], \"bbox\": [[5.61...",
"qas": {
"answer": ["Yes", "No", "No", "Yes"],
"id": [4983, 4996, 5006, 5017],
"question": ["Is it in the foreground?", "Does it have wings?", "Is it a person?", "Is it a vehicle?"]
},
"status": "success",
"target_id": 1197044,
"timestamp": "2016-07-08 15:07:38"
}
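In the original configuration, the image and objects fields are serialized as JSON strings (as the cropped example above shows), so they must be decoded before use. A minimal stdlib-only sketch on a stand-in record built from the values visible in the example; real records carry more keys:

```python
import json

# Stand-in record using only values visible in the (cropped) example above.
record = {
    "id": 2424,
    "image": json.dumps({
        "coco_url": "http://mscoco.org/images/270512",
        "file_name": "COCO_train2014_000000270512.jpg",
    }),
    "qas": {
        "answer": ["Yes", "No", "No", "Yes"],
        "id": [4983, 4996, 5006, 5017],
        "question": ["Is it in the foreground?", "Does it have wings?",
                     "Is it a person?", "Is it a vehicle?"],
    },
    "status": "success",
}

def decode_image(rec):
    """Parse the JSON-encoded image metadata of one game record."""
    return json.loads(rec["image"])

def qa_pairs(rec):
    """Zip the columnar qas field into (question, answer) tuples."""
    return list(zip(rec["qas"]["question"], rec["qas"]["answer"]))

image = decode_image(record)
pairs = qa_pairs(record)
```

The qas field is columnar (parallel lists of questions, answers, and ids), so pairing them with zip recovers the dialogue turn by turn.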
compguesswhat-zero_shot
- Size of downloaded dataset files: 4.84 MB
- Size of the generated dataset: 96.74 MB
- Total amount of disk used: 101.59 MB
An example of 'nd_valid' looks as follows.
This example was too long and was cropped:
{
"id": 0,
"image": {
"coco_url": "https://s3.amazonaws.com/nocaps/val/004e21eb2e686f40.jpg",
"date_captured": "2018-11-06 11:04:33",
"file_name": "004e21eb2e686f40.jpg",
"height": 1024,
"id": 6,
"license": 0,
"open_images_id": "004e21eb2e686f40",
"width": 768
},
"objects": "{\"IsOccluded\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"IsTruncated\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"area\": [3...",
"status": "incomplete",
"target_id": "004e21eb2e686f40_30"
}
Data Fields
The data fields are the same among all splits.
compguesswhat-original
- id: an int32 feature.
- target_id: an int32 feature.
- timestamp: a string feature.
- status: a string feature.
- image: a dictionary feature containing:
  - id: an int32 feature.
  - file_name: a string feature.
  - flickr_url: a string feature.
  - coco_url: a string feature.
  - height: an int32 feature.
  - width: an int32 feature.
  - visual_genome: a dictionary feature containing:
    - width: an int32 feature.
    - height: an int32 feature.
    - url: a string feature.
    - coco_id: an int32 feature.
    - flickr_id: a string feature.
    - image_id: a string feature.
- qas: a dictionary feature containing:
  - question: a string feature.
  - answer: a string feature.
  - id: an int32 feature.
- objects: a dictionary feature containing:
  - id: an int32 feature.
  - bbox: a list of float32 features.
  - category: a string feature.
  - area: a float32 feature.
  - category_id: an int32 feature.
  - segment: a dictionary feature containing:
    - feature: a float32 feature.
compguesswhat-zero_shot
- id: an int32 feature.
- target_id: a string feature.
- status: a string feature.
- image: a dictionary feature containing:
  - id: an int32 feature.
  - file_name: a string feature.
  - coco_url: a string feature.
  - height: an int32 feature.
  - width: an int32 feature.
  - license: an int32 feature.
  - open_images_id: a string feature.
  - date_captured: a string feature.
- objects: a dictionary feature containing:
  - id: a string feature.
  - bbox: a list of float32 features.
  - category: a string feature.
  - area: a float32 feature.
  - category_id: an int32 feature.
  - IsOccluded: an int32 feature.
  - IsTruncated: an int32 feature.
  - segment: a dictionary feature containing:
    - MaskPath: a string feature.
    - LabelName: a string feature.
    - BoxID: a string feature.
    - BoxXMin: a string feature.
    - BoxXMax: a string feature.
    - BoxYMin: a string feature.
    - BoxYMax: a string feature.
    - PredictedIoU: a string feature.
    - Clicks: a string feature.
Data Splits
compguesswhat-original
|                        | train | validation | test |
|------------------------|-------|------------|------|
| compguesswhat-original | 46341 | 9738       | 9621 |
compguesswhat-zero_shot
|                         | nd_valid | od_valid | nd_test | od_test |
|-------------------------|----------|----------|---------|---------|
| compguesswhat-zero_shot | 5343     | 5372     | 13836   | 13300   |
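As a quick consistency check, the split sizes above can be tallied; the numbers are copied directly from the two tables:

```python
# Split sizes as reported in the tables above.
ORIGINAL_SPLITS = {"train": 46341, "validation": 9738, "test": 9621}
ZERO_SHOT_SPLITS = {"nd_valid": 5343, "od_valid": 5372,
                    "nd_test": 13836, "od_test": 13300}

original_total = sum(ORIGINAL_SPLITS.values())    # 65700 games
zero_shot_total = sum(ZERO_SHOT_SPLITS.values())  # 37851 games
print(original_total, zero_shot_total)
```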
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@inproceedings{suglia-etal-2020-compguesswhat,
title = "{C}omp{G}uess{W}hat?!: A Multi-task Evaluation Framework for Grounded Language Learning",
author = "Suglia, Alessandro and
Konstas, Ioannis and
Vanzo, Andrea and
Bastianelli, Emanuele and
Elliott, Desmond and
Frank, Stella and
Lemon, Oliver",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.682",
pages = "7625--7641",
abstract = "Approaches to Grounded Language Learning are commonly focused on a single task-based final performance measure which may not depend on desirable properties of the learned hidden representations, such as their ability to predict object attributes or generalize to unseen situations. To remedy this, we present GroLLA, an evaluation framework for Grounded Language Learning with Attributes based on three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular with respect to attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with several attributes from resources such as VISA and ImSitu. We then compare several hidden state representations from current state-of-the-art approaches to Grounded Language Learning. By using diagnostic classifiers, we show that current models{'} learned representations are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06{\%}).",
}
Contributions
Thanks to @thomwolf, @aleSuglia, @lhoestq for adding this dataset.