arxiv:1511.02283

Generation and Comprehension of Unambiguous Object Descriptions

Published on Nov 7, 2015
Authors: Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, Kevin Murphy

Abstract

We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MS-COCO. We have released the dataset and a toolbox for visualization and evaluation; see https://github.com/mjhucla/Google_Refexp_toolbox.
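The abstract notes that the same model both generates a referring expression for a region and comprehends one by inferring which region is being described. As a rough illustration of the comprehension side only, the sketch below reduces it to scoring every candidate region with a description model and returning the highest-scoring one. The `comprehend` helper and `log_prob` scorer are hypothetical stand-ins for exposition, not the released Google_Refexp_toolbox API; in the paper's setting the scorer would be the captioning model's log-probability of the expression given the region and image.

```python
# Minimal sketch of comprehension-by-generation: pick the candidate region
# whose description model assigns the highest probability to the expression.
# The scorer below is a toy stand-in, not the paper's LSTM captioning model.

from typing import Callable, Sequence, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)


def comprehend(
    expression: str,
    candidate_boxes: Sequence[Box],
    log_prob: Callable[[str, Box], float],
) -> Box:
    """Return the candidate box maximizing log p(expression | box, image)."""
    return max(candidate_boxes, key=lambda box: log_prob(expression, box))


if __name__ == "__main__":
    # Toy scorer just to make the example runnable: favors larger boxes
    # when the expression mentions "big", smaller boxes otherwise.
    def toy_log_prob(expression: str, box: Box) -> float:
        area = box[2] * box[3]
        return float(area if "big" in expression else -area)

    boxes = [(10, 10, 50, 80), (200, 40, 120, 150)]
    print(comprehend("the big dog on the right", boxes))  # (200, 40, 120, 150)
```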

Models citing this paper: 43

Datasets citing this paper: 0

Spaces citing this paper: 16

Collections including this paper: 0
