FashionComposer: Compositional Fashion Image Generation
Abstract
We present FashionComposer for compositional fashion image generation. Unlike previous methods, FashionComposer is highly flexible: it takes multi-modal input (i.e., text prompt, parametric human model, garment image, and face image) and supports personalizing the appearance, pose, and figure of the human and assigning multiple garments in a single pass. To achieve this, we first develop a universal framework capable of handling diverse input modalities, and we construct scaled training data to strengthen the model's compositional capabilities. To accommodate multiple reference images (garments and faces) seamlessly, we organize these references in a single image as an "asset library" and employ a reference UNet to extract their appearance features. To inject the appearance features into the correct pixels of the generated result, we propose subject-binding attention, which binds the appearance features from different "assets" to the corresponding text features. In this way, the model can understand each asset according to its semantics, supporting arbitrary numbers and types of reference images. As a comprehensive solution, FashionComposer also supports many other applications, such as human album generation and diverse virtual try-on tasks.
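As a rough illustration of the subject-binding idea described above, the sketch below tags each asset's appearance features with the text embedding of its subject word before running cross-attention, so attention can route each region of the image to the right reference. The function name, the additive binding, and all shapes are assumptions made for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def subject_binding_attention(q, text_kv, asset_kvs, subject_ids, n_heads=8):
    """Minimal sketch (assumed interface): bind per-asset appearance features
    to their subject tokens, then run standard multi-head cross-attention.

    q          : (B, Nq, C)  spatial queries from the denoising UNet
    text_kv    : (B, Nt, C)  text token features from the prompt encoder
    asset_kvs  : list of (B, Na_i, C) appearance features, one per reference asset
    subject_ids: list of int, index of each asset's subject word in the prompt
    """
    B, _, C = q.shape
    # Tag each asset's features with its subject word embedding so that
    # attention can route, e.g., "dress" pixels to the dress reference.
    bound = [text_kv]
    for feats, tok in zip(asset_kvs, subject_ids):
        subject_tok = text_kv[:, tok:tok + 1, :]   # (B, 1, C)
        bound.append(feats + subject_tok)          # broadcast over Na_i
    kv = torch.cat(bound, dim=1)                   # (B, Nt + sum(Na_i), C)

    def split_heads(x):                            # (B, N, C) -> (B, H, N, C//H)
        return x.view(B, x.shape[1], n_heads, C // n_heads).transpose(1, 2)

    out = F.scaled_dot_product_attention(
        split_heads(q), split_heads(kv), split_heads(kv)
    )
    return out.transpose(1, 2).reshape(B, -1, C)   # back to (B, Nq, C)
```

A toy call such as `subject_binding_attention(torch.randn(1, 64, 256), torch.randn(1, 77, 256), [torch.randn(1, 32, 256)], [5])` returns a `(1, 64, 256)` tensor, one output feature per query pixel.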
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- UniReal: Universal Image Generation and Editing via Learning Real-world Dynamics (2024)
- AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models (2024)
- FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on (2024)
- RelationBooth: Towards Relation-Aware Customized Object Generation (2024)
- IGR: Improving Diffusion Model for Garment Restoration from Person Image (2024)
- LocRef-Diffusion: Tuning-Free Layout and Appearance-Guided Generation (2024)
- DreamBlend: Advancing Personalized Fine-tuning of Text-to-Image Diffusion Models (2024)