arxiv:2406.13457

EvTexture: Event-driven Texture Enhancement for Video Super-Resolution

Published on Jun 19
· Submitted by BoyDachun on Jun 24

Abstract

Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. Recently, it has been used in video super-resolution (VSR) to enhance flow estimation and temporal alignment. In this paper, rather than using events for motion learning, we propose the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages the high-frequency details of events to better recover texture regions in VSR. EvTexture introduces a new texture enhancement branch, together with an iterative texture enhancement module that progressively exploits the high-temporal-resolution event information for texture restoration. This allows texture regions to be gradually refined across multiple iterations, yielding more accurate and richer high-resolution details. Experimental results show that EvTexture achieves state-of-the-art performance on four datasets. On the texture-rich Vid4 dataset, our method achieves up to a 4.67 dB gain over recent event-based methods. Code: https://github.com/DachunKai/EvTexture.
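The gradual, multi-iteration refinement described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch only: the function names, the closed-form residual rule, and the array shapes are assumptions for demonstration, not the paper's learned module (which is a trained network operating on event voxel grids).

```python
import numpy as np

def event_texture_residual(texture, events, step):
    """Stand-in for one refinement step: estimate a high-frequency
    residual for the current texture estimate from event detail.
    (Hypothetical rule; EvTexture learns this with a network.)"""
    # Shrinking step size so later iterations make finer corrections.
    return 0.5 ** (step + 1) * (events - texture)

def iterative_texture_enhancement(texture, events, num_iters=3):
    """Progressively refine a texture estimate over several iterations,
    mirroring the gradual-refinement idea described in the abstract."""
    for k in range(num_iters):
        texture = texture + event_texture_residual(texture, events, k)
    return texture

# Toy example: the estimate moves monotonically toward the event detail.
lr_texture = np.zeros((4, 4))      # coarse initial texture estimate
event_detail = np.ones((4, 4))     # target high-frequency detail
refined = iterative_texture_enhancement(lr_texture, event_detail)
print(float(refined[0, 0]))  # 0.671875
```

Each iteration adds a progressively smaller correction, so the estimate converges toward the event-derived detail rather than overshooting, which is the intuition behind refining textures "across multiple iterations."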

Community

Paper author and submitter:

Texture restoration is a very challenging problem in video super-resolution (VSR). In this paper, we propose the first event-driven scheme for texture restoration in VSR.
Project Page: https://dachunkai.github.io/evtexture.github.io/.
Code: https://github.com/DachunKai/EvTexture.


Thanks for sharing! Really cool work 🔥
Would be great to build a demo and share the model on the hub.

