arxiv:1909.07957

An Internal Learning Approach to Video Inpainting

Published on Sep 17, 2019
Abstract

We propose a novel video inpainting algorithm that simultaneously hallucinates missing appearance and motion (optical flow) information, building upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network architectures to enforce plausible texture in static images. In extending DIP to video we make two important contributions. First, we show that coherent video inpainting is possible without a priori training. We take a generative approach to inpainting based on internal (within-video) learning without reliance upon an external corpus of visual data to train a one-size-fits-all model for the large space of general videos. Second, we show that such a framework can jointly generate both appearance and flow, whilst exploiting these complementary modalities to ensure mutual consistency. We show that leveraging appearance statistics specific to each video achieves visually plausible results whilst handling the challenging problem of long-term consistency.
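
To make the internal-learning idea concrete, below is a minimal PyTorch sketch of a DIP-style setup for video inpainting: a small convolutional generator is overfit to a single video, mapping fixed per-frame noise codes to appearance and flow jointly, with a masked reconstruction loss on known pixels and a flow-based warping term coupling the two modalities. This is not the paper's released implementation; the `TinyGenerator` architecture, loss weights, and warping scheme are simplified assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative DIP-style internal learning for video inpainting.
# Not the paper's implementation: the network, loss weights, and
# warping scheme below are simplified assumptions.

class TinyGenerator(nn.Module):
    """Maps a fixed per-frame noise code to appearance (3 ch) + flow (2 ch)."""
    def __init__(self, in_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 5, 3, padding=1),  # 3 RGB + 2 flow channels
        )

    def forward(self, z):
        out = self.net(z)
        return torch.sigmoid(out[:, :3]), out[:, 3:]  # rgb in [0,1], flow

def backward_warp(img, flow):
    """Sample img at locations displaced by flow (in pixels) via grid_sample."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]  # (B,H,W) sampling x-coordinates
    y = ys.unsqueeze(0) + flow[:, 1]  # (B,H,W) sampling y-coordinates
    grid = torch.stack(  # normalize to [-1, 1] as grid_sample expects
        (2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1
    )
    return F.grid_sample(img, grid, align_corners=True)

# Toy video: T frames in [0,1]; mask == 1 marks known (non-hole) pixels.
T, H, W = 4, 64, 64
frames = torch.rand(T, 3, H, W)
masks = (torch.rand(T, 1, H, W) > 0.2).float()
noise = torch.randn(T, 32, H, W)  # fixed input codes, one per frame

gen = TinyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(200):  # optimize on this single video only
    rgb, flow = gen(noise)
    # appearance loss: reconstruct known pixels
    loss_app = (masks * (rgb - frames) ** 2).mean()
    # consistency loss: frame t should match frame t+1 warped back by flow_t
    loss_con = ((backward_warp(rgb[1:], flow[:-1]) - rgb[:-1]) ** 2).mean()
    loss = loss_app + 0.1 * loss_con
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper, the jointly generated flow additionally guides the propagation of known content into the holes across frames; the simple warping consistency term above is a stand-in for that coupling, not a faithful reproduction of it.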
