Abstract
Blind all-in-one image restoration models aim to recover a high-quality image from an input degraded by unknown distortions. However, these models require all possible degradation types to be defined at training time and show limited generalization to unseen degradations, which restricts their practical use in complex cases. In this paper, we propose a simple but effective adaptive blind all-in-one restoration (ABAIR) model that addresses multiple degradations, generalizes well to unseen degradations, and efficiently incorporates new degradations by training only a small fraction of its parameters. First, we train our baseline model on a large dataset of natural images with multiple synthetic degradations, augmented with a segmentation head that estimates per-pixel degradation types, yielding a powerful backbone able to generalize to a wide range of degradations. Second, we adapt the baseline model to different image restoration tasks using independent low-rank adapters. Third, we learn to adaptively combine these adapters for diverse input images via a flexible and lightweight degradation estimator. Our model is both powerful on specific distortions and flexible on complex tasks: it not only outperforms the state of the art by a large margin on five- and three-task IR setups, but also shows improved generalization to unseen degradations and composite distortions.
Community
Our Adaptive Blind All-in-One Image Restoration (ABAIR) model combines a powerful baseline trained on images with synthetic degradations, low-rank decompositions for task-specific adaptation, and a lightweight estimator to handle complex distortions.
We propose a pre-training strategy based on synthetic degradations, which are parameterized to control both the type and severity of degradations, including rain, blur, noise, haze, and low-light conditions. To establish a strong weight initialization for subsequent fine-tuning, we propose a degradation CutMix method. This approach seamlessly blends multiple degradation types and severity levels within a single image, enhancing the model's generalization ability. Additionally, we incorporate an auxiliary segmentation head and optimize the model with a cross-entropy loss to perform pixel-wise degradation-type estimation. Second, we adapt our baseline model to different image restoration tasks using independent low-rank adapters (LoRA). Third, we learn to adaptively combine these adapters for diverse input images via a flexible and lightweight degradation estimator.
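The degradation CutMix idea can be sketched roughly as follows: apply two different synthetic degradations to the same clean image, paste a random region of one onto the other, and record a per-pixel label map that the auxiliary segmentation head is trained against. This is a minimal illustration, not the paper's code; the function names, the rectangular mask shape, and the label encoding are all our assumptions.

```python
import numpy as np

def degradation_cutmix(clean, degrade_fns, rng=None):
    """Blend two synthetic degradations of one clean image (CutMix-style sketch).

    clean       : H x W x C float array in [0, 1]
    degrade_fns : list of (label, fn) pairs; each fn maps an image to a
                  degraded version (rain, blur, noise, haze, low-light, ...)
    Returns the mixed degraded image and an H x W per-pixel label map
    for supervising a pixel-wise degradation-type segmentation head.
    """
    rng = np.random.default_rng(rng)
    h, w = clean.shape[:2]
    # pick two distinct degradation types
    ia, ib = rng.choice(len(degrade_fns), size=2, replace=False)
    label_a, fn_a = degrade_fns[ia]
    label_b, fn_b = degrade_fns[ib]
    img_a, img_b = fn_a(clean), fn_b(clean)
    # paste a random box of degradation B onto degradation A
    bh = int(rng.integers(h // 4, h // 2 + 1))
    bw = int(rng.integers(w // 4, w // 2 + 1))
    y = int(rng.integers(0, h - bh + 1))
    x = int(rng.integers(0, w - bw + 1))
    mixed = img_a.copy()
    mixed[y:y + bh, x:x + bw] = img_b[y:y + bh, x:x + bw]
    labels = np.full((h, w), label_a, dtype=np.int64)
    labels[y:y + bh, x:x + bw] = label_b
    return mixed, labels
```

The resulting `labels` map is exactly the kind of per-pixel target a cross-entropy loss on a segmentation head expects, so the same augmented sample supervises both restoration and degradation-type estimation.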
Our approach is highly effective at handling known degradations while remaining adaptable to new image restoration tasks, enabled by its ability to learn a disentangled representation for each degradation type. In contrast, current methods require retraining the entire architecture with all degradation types to accommodate a new task, making them computationally expensive and inefficient. By using a robust baseline model and a disentangled representation, our method requires only training a small set of parameters for new tasks, preserving the knowledge acquired from previous degradations.
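The adapter-combination step described above can be sketched with plain linear algebra: the degradation estimator produces one weight per task-specific LoRA, and the effective layer weight is the frozen base weight plus the weighted sum of the low-rank updates. This is a simplified, per-layer sketch under our own assumptions (the real model applies this inside a full restoration network, and the estimator is a learned module rather than a bare softmax).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def combine_lora(w0, loras, alphas):
    """Merge a frozen base weight with estimator-weighted low-rank updates.

    w0     : (d_out, d_in) frozen pre-trained weight
    loras  : list of (B, A) pairs, B: (d_out, r), A: (r, d_in), one per task
    alphas : per-adapter weights from the degradation estimator
    Returns the effective weight w0 + sum_k alphas[k] * B_k @ A_k.
    """
    delta = sum(a * (B @ A) for a, (B, A) in zip(alphas, loras))
    return w0 + delta

# Hypothetical usage: three task adapters (e.g. rain / noise / haze),
# estimator scores turned into mixing weights.
rng = np.random.default_rng(0)
w0 = rng.standard_normal((8, 8))
loras = [(rng.standard_normal((8, 2)), rng.standard_normal((2, 8)))
         for _ in range(3)]
alphas = softmax(np.array([2.0, 0.5, -1.0]))
w_effective = combine_lora(w0, loras, alphas)
```

Because only the small `(B, A)` pairs are trained per task while `w0` stays frozen, adding a new degradation type means fitting one more low-rank pair, which is why previously acquired knowledge is preserved.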
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration (2024)
- GenDeg: Diffusion-Based Degradation Synthesis for Generalizable All-in-One Image Restoration (2024)
- Mixed Degradation Image Restoration via Local Dynamic Optimization and Conditional Embedding (2024)
- Towards Unsupervised Blind Face Restoration using Diffusion Prior (2024)
- A Survey on All-in-One Image Restoration: Taxonomy, Evaluation and Future Trends (2024)
- All-in-one Weather-degraded Image Restoration via Adaptive Degradation-aware Self-prompting Model (2024)
- Complexity Experts are Task-Discriminative Learners for Any Image Restoration (2024)