---
license: apache-2.0
tags:
- text2image
- stable-diffusion
---

# Model Card for ELLA

<!-- Provide a quick summary of what the model is/does. -->

ELLA (*Efficient Large Language Model Adapter*) equips text-to-image diffusion models with a powerful Large Language Model (LLM) to enhance text alignment, without training either the U-Net or the LLM.

[**Project Page**](https://ella-diffusion.github.io/) **|** [**Paper (arXiv)**](https://arxiv.org/abs/2403.05135) **|** [**Code**](https://github.com/TencentQQGYLab/ELLA)
18
+
19
+ ---
20
+
21
+ ## Model Details
22
+
23
+ ### Model Description
24
+
25
+ <!-- Provide a longer summary of what this model is. -->
26
+
27
+ Diffusion models have demonstrated remarkable performance in the domain of text-to-image generation. However, the majority of these models still employ CLIP as their text encoder, which constrains their ability to comprehend dense prompts, which encompass multiple objects, detailed attributes, complex relationships, long-text alignment, etc. In this paper, We introduce an Efficient Large Language Model Adapter, termed ELLA, which equips text-to-image diffusion models with powerful Large Language Models (LLM) to enhance text alignment without training of either U-Net or LLM. To seamlessly bridge two pre-trained models, we investigate a range of semantic alignment connector designs and propose a novel module, the Timestep-Aware Semantic Connector (TSC), which dynamically extracts timestep-dependent conditions from LLM. Our approach adapts semantic features at different stages of the denoising process, assisting diffusion models in interpreting lengthy and intricate prompts over sampling timesteps. Additionally, ELLA can be readily incorporated with community models and tools to improve their prompt-following capabilities. To assess text-to-image models in dense prompt following, we introduce Dense Prompt Graph Benchmark (DPG-Bench), a challenging benchmark consisting of 1K dense prompts. Extensive experiments demonstrate the superiority of ELLA in dense prompt following compared to state-of-the-art methods, particularly in multiple object compositions involving diverse attributes and relationships.
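
To make the TSC idea concrete, here is a minimal, untrained NumPy sketch of a timestep-aware connector: frozen LLM token features are modulated by a timestep-dependent scale and shift, then pooled down to a fixed number of condition tokens for cross-attention. The AdaLN-style modulation, the attention pooling, and all function names here are illustrative assumptions, not the actual trained architecture; see the official repository for the real implementation.

```python
import numpy as np

def timestep_embedding(t: int, dim: int) -> np.ndarray:
    """Standard DDPM-style sinusoidal embedding of a diffusion timestep
    (an assumption here; see the official code for ELLA's exact choice)."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)  # (half,)
    args = t * freqs
    return np.concatenate([np.cos(args), np.sin(args)])        # (dim,)

def tsc_sketch(llm_tokens: np.ndarray, t: int, n_queries: int = 4) -> np.ndarray:
    """Toy timestep-aware connector: condition frozen LLM token features on
    the timestep via a scale/shift, then pool them to a fixed number of
    condition tokens. All weights are random placeholders; the real TSC is
    trained end-to-end with the frozen U-Net and LLM."""
    n_tokens, dim = llm_tokens.shape
    rng = np.random.default_rng(0)
    t_emb = timestep_embedding(t, dim)                       # (dim,)
    w_scale = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    w_shift = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    scale, shift = w_scale @ t_emb, w_shift @ t_emb          # (dim,), (dim,)
    modulated = llm_tokens * (1.0 + scale) + shift           # (n_tokens, dim)
    # Cross-attention-style pooling: fixed queries attend over the tokens.
    queries = rng.standard_normal((n_queries, dim))
    attn = queries @ modulated.T / np.sqrt(dim)              # (n_queries, n_tokens)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                  # softmax over tokens
    return attn @ modulated                                  # (n_queries, dim)
```

Because the timestep embedding feeds the modulation, the connector produces different condition tokens early versus late in the denoising trajectory, which is the core intuition behind timestep-aware conditioning.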

- **Developed by:** [Xiwei Hu*](https://openreview.net/profile?id=~Xiwei_Hu1), [Rui Wang*](https://wrong.wang/), [Yixiao Fang*](https://openreview.net/profile?id=~Yixiao_Fang1), [Bin Fu*](https://openreview.net/profile?id=~BIN_FU2), [Pei Cheng](https://openreview.net/profile?id=~Pei_Cheng1), [Gang Yu✦](https://www.skicyyu.org/)
- **License:** apache-2.0

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/TencentQQGYLab/ELLA
- **Paper:** https://arxiv.org/abs/2403.05135

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@misc{hu2024ella,
  title={ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment},
  author={Xiwei Hu and Rui Wang and Yixiao Fang and Bin Fu and Pei Cheng and Gang Yu},
  year={2024},
  eprint={2403.05135},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```