arxiv:1401.3590
This paper has been withdrawn

An Enhanced Method For Evaluating Automatic Video Summaries

Published on Jan 14, 2014
Authors:

Abstract

Evaluation of automatic video summaries is a challenging problem. In past years, several evaluation methods have been presented that rely on a single feature, such as color, to measure the similarity between automatic video summaries and ground-truth user summaries. One drawback of relying on a single feature is that it can produce false similarity detections, making the assessment of the generated summary's quality less perceptual and less accurate. In this paper, a novel method for evaluating automatic video summaries is presented. The method compares automatic video summaries generated by video summarization techniques with ground-truth user summaries. Its objective is to quantify the quality of video summaries and to allow comparison of different video summarization techniques, using both the color and texture features of the video frames and the Bhattacharyya distance as a dissimilarity measure due to its advantages. Our experiments show that the proposed evaluation method overcomes the drawbacks of other methods and gives a more perceptual evaluation of the quality of automatic video summaries.
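To make the distance measure concrete, the sketch below (an illustration, not the paper's exact procedure) computes a normalized color histogram for two frames and the Bhattacharyya distance between them, D_B = -ln(BC) with BC = sum_i sqrt(p_i * q_i). The frame arrays, bin count, and histogram layout are assumptions for the example; the paper additionally uses texture features, which this sketch omits.

```python
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized per-channel histogram of an RGB frame (H x W x 3, uint8),
    concatenated into a single feature vector."""
    hists = []
    for c in range(3):
        h, _ = np.histogram(frame[..., c], bins=bins, range=(0, 256))
        hists.append(h)
    hist = np.concatenate(hists).astype(np.float64)
    return hist / hist.sum()

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Bhattacharyya distance D_B = -ln(BC), where BC = sum_i sqrt(p_i * q_i)."""
    bc = np.sum(np.sqrt(p * q))
    return float(-np.log(max(bc, 1e-12)))  # clamp to avoid log(0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "frames" standing in for an automatic-summary keyframe
    # and a ground-truth user-summary keyframe (hypothetical data).
    frame_a = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
    frame_b = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
    d = bhattacharyya_distance(color_histogram(frame_a), color_histogram(frame_b))
    print(f"Bhattacharyya distance: {d:.4f}")  # smaller means more similar
```

A smaller distance indicates a closer match between an automatic-summary frame and a ground-truth frame; an evaluation method along these lines would aggregate such per-frame comparisons into an overall quality score.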
