arxiv:2411.18412

Adaptive Blind All-in-One Image Restoration

Published on Nov 27
· Submitted by davidserra9 on Nov 28
Abstract

Blind all-in-one image restoration models aim to recover a high-quality image from an input degraded with unknown distortions. However, these models require all the possible degradation types to be defined during the training stage, while showing limited generalization to unseen degradations, which limits their practical application in complex cases. In this paper, we propose a simple but effective adaptive blind all-in-one restoration (ABAIR) model, which addresses multiple degradations, generalizes well to unseen degradations, and efficiently incorporates new degradations by training only a small fraction of parameters. First, we train our baseline model on a large dataset of natural images with multiple synthetic degradations, augmented with a segmentation head to estimate per-pixel degradation types, resulting in a powerful backbone able to generalize to a wide range of degradations. Second, we adapt our baseline model to individual image restoration tasks using independent low-rank adapters. Third, we learn to adaptively combine adapters for diverse input images via a flexible and lightweight degradation estimator. Our model is both powerful in handling specific distortions and flexible in adapting to complex tasks. It not only outperforms the state-of-the-art by a large margin on five- and three-task IR setups, but also shows improved generalization to unseen degradations and composite distortions.

Community

Paper author and submitter:

Our Adaptive Blind All-in-One Image Restoration (ABAIR) model combines a powerful baseline trained on images with synthetic degradations, low-rank decompositions for task-specific adaptation, and a lightweight estimator to handle complex distortions.

We propose a pre-training strategy based on synthetic degradations, which are parameterized to control both the type and severity of degradations, including rain, blur, noise, haze, and low-light conditions. To establish a strong weight initialization for subsequent fine-tuning, we propose a degradation CutMix method. This approach seamlessly blends multiple degradation types and severity levels within a single image, enhancing the model's generalization ability. Additionally, we incorporate an auxiliary segmentation head and optimize the model using a cross-entropy loss to perform pixel-wise degradation-type estimation. Next, we adapt our baseline model to individual image restoration tasks using independent low-rank adapters (LoRA). Finally, we learn to adaptively combine adapters for diverse input images via a flexible and lightweight degradation estimator.
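The degradation CutMix idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes two copies of the same scene degraded with different (type, severity) settings and pastes a random rectangle from one into the other, CutMix-style, while building the per-pixel degradation-type label map that the auxiliary segmentation head would be trained on. All function and variable names here are hypothetical.

```python
import numpy as np

def degradation_cutmix(img_a, img_b, label_a, label_b, rng=None):
    """Blend two differently degraded copies of one scene.

    img_a, img_b: HxWxC float arrays of the same scene under two
    different degradation settings.
    label_a, label_b: integer degradation-type ids for each copy.
    Returns the mixed image and a per-pixel degradation label map.
    """
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]

    # Sample a random rectangle whose area shrinks as the mix
    # ratio grows, as in the original CutMix formulation.
    cut_ratio = np.sqrt(1.0 - rng.uniform(0.0, 1.0))
    ch, cw = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y0, y1 = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x0, x1 = max(cx - cw // 2, 0), min(cx + cw // 2, w)

    # Paste the rectangle from img_b into img_a.
    mixed = img_a.copy()
    mixed[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]

    # Per-pixel degradation-type targets for the segmentation head.
    label_map = np.full((h, w), label_a, dtype=np.int64)
    label_map[y0:y1, x0:x1] = label_b
    return mixed, label_map
```

The resulting `label_map` would then be supervised with a pixel-wise cross-entropy loss against the segmentation head's output.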

Our approach is highly effective at handling known degradations while remaining adaptable to new image restoration tasks, enabled by its ability to learn a disentangled representation for each degradation type. In contrast, current methods require retraining the entire architecture with all degradation types to accommodate a new task, making them computationally expensive and inefficient. By using a robust baseline model and a disentangled representation, our method requires only training a small set of parameters for new tasks, preserving the knowledge acquired from previous degradations.
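The adapter mechanism above can be illustrated with a small sketch. Under the standard LoRA formulation, each task contributes a low-rank update B·A to a frozen base weight, and a degradation estimator predicts mixing coefficients over the adapters; adding a new degradation type then only requires training a new (B, A) pair. The names and the assumption that the estimator outputs normalized weights are ours, not taken from the paper.

```python
import numpy as np

def combine_lora_adapters(W0, adapters, weights):
    """Form an effective weight from a frozen base plus adapters.

    W0: frozen base weight, shape (d_out, d_in).
    adapters: list of (B, A) pairs with B (d_out, r) and A (r, d_in),
    one low-rank adapter per degradation type.
    weights: mixing coefficients from a degradation estimator
    (hypothetical; assumed non-negative and summing to 1).
    """
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, adapters))
    return W0 + delta

# Incorporating a new degradation type: W0 and the existing
# adapters stay frozen; only a fresh (B_new, A_new) pair is
# trained and appended to the adapter list.
```

This is what makes the representation disentangled in practice: each degradation type lives in its own low-rank subspace, so new tasks do not overwrite knowledge acquired from previous ones.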

