IFAdapter: Instance Feature Control for Grounded Text-to-Image Generation
Abstract
While Text-to-Image (T2I) diffusion models excel at generating visually appealing images of individual instances, they struggle to accurately position and control the feature generation of multiple instances. The Layout-to-Image (L2I) task was introduced to address these positioning challenges by incorporating bounding boxes as spatial control signals, but it still falls short in generating precise instance features. In response, we propose the Instance Feature Generation (IFG) task, which aims to ensure both positional accuracy and feature fidelity in generated instances. To address the IFG task, we introduce the Instance Feature Adapter (IFAdapter). The IFAdapter enhances feature depiction by incorporating additional appearance tokens and utilizing an Instance Semantic Map to align instance-level features with spatial locations. The IFAdapter guides the diffusion process as a plug-and-play module, making it adaptable to various community models. For evaluation, we contribute an IFG benchmark and develop a verification pipeline to objectively compare models' abilities to generate instances with accurate positioning and features. Experimental results demonstrate that the IFAdapter outperforms other models in both quantitative and qualitative evaluations.
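The page does not include implementation details, so the following PyTorch sketch is only our own illustration of the mechanism the abstract describes: per-instance appearance tokens injected into the diffusion features via gated cross-attention, with an instance semantic map restricting each token's influence to its spatial region. The class name `InstanceFeatureAdapter`, the tensor shapes, and the single-head attention are all assumptions, not the authors' code.

```python
# Hedged sketch (not the authors' implementation): gated cross-attention
# from flattened U-Net features to instance appearance tokens, masked by
# an instance semantic map so each token only affects its own region.
import math

import torch
import torch.nn as nn


class InstanceFeatureAdapter(nn.Module):
    """Single-head gated cross-attention; shapes and names are assumptions."""

    def __init__(self, dim: int, token_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(token_dim, dim, bias=False)
        self.to_v = nn.Linear(token_dim, dim, bias=False)
        # Zero-initialized gate: the adapter starts as an identity mapping,
        # a common trick for attaching modules to a pretrained backbone.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x, tokens, semantic_map):
        # x:            (B, HW, C)  flattened spatial features
        # tokens:       (B, T, D)   appearance tokens for all instances
        # semantic_map: (B, HW, T)  >0 where a token may influence a location
        q = self.to_q(self.norm(x))
        k, v = self.to_k(tokens), self.to_v(tokens)
        scores = q @ k.transpose(1, 2) / math.sqrt(q.shape[-1])
        scores = scores.masked_fill(semantic_map <= 0, float("-inf"))
        # Rows with no visible token softmax to NaN; zero them out so
        # locations covered by no instance receive no update.
        attn = torch.softmax(scores, dim=-1).nan_to_num(0.0)
        return x + torch.tanh(self.gate) * (attn @ v)


if __name__ == "__main__":
    B, HW, C, T = 1, 256, 320, 8
    adapter = InstanceFeatureAdapter(dim=C, token_dim=768)
    x, tokens = torch.randn(B, HW, C), torch.randn(B, T, 768)
    smap = torch.zeros(B, HW, T)
    smap[:, :128, :4] = 1  # instance 1 owns the top half of the feature map
    smap[:, 128:, 4:] = 1  # instance 2 owns the bottom half
    print(adapter(x, tokens, smap).shape)  # torch.Size([1, 256, 320])
```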
Community
We present IFAdapter, a novel approach for fine-grained control over localized content generation in pretrained diffusion models. (a) IFAdapter generates intricate instance features with precision. (b) Its plug-and-play design allows it to be seamlessly applied to various community models; a sketch of this wiring follows below.
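As a rough illustration of the plug-and-play claim, the snippet below freezes a pretrained backbone and trains only the adapter parameters, so the same adapter weights could be re-attached to any community checkpoint sharing the base architecture. The helper `attach_adapters`, the `layer_dims` argument, and the reuse of the `InstanceFeatureAdapter` class from the sketch above are all hypothetical.

```python
# Hedged sketch: freeze the community model and register one adapter per
# attention layer; only the returned adapter parameters are optimized.
import torch.nn as nn


def attach_adapters(base_model: nn.Module,
                    layer_dims: dict[str, int],
                    token_dim: int) -> nn.ModuleDict:
    for p in base_model.parameters():
        p.requires_grad_(False)  # the pretrained weights stay untouched
    return nn.ModuleDict(
        {name.replace(".", "_"): InstanceFeatureAdapter(dim, token_dim)
         for name, dim in layer_dims.items()}
    )
```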
The following papers were recommended by the Semantic Scholar API:
- Rethinking The Training And Evaluation of Rich-Context Layout-to-Image Generation (2024)
- ReCorD: Reasoning and Correcting Diffusion for HOI Generation (2024)
- Training-free Composite Scene Generation for Layout-to-Image Synthesis (2024)
- Training-Free Sketch-Guided Diffusion with Latent Optimization (2024)
- Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion (2024)