---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- cvpr
- text-to-image
- image-generation
- compositionality
---
# 🧩 TokenCompose SD14 Model Card
## 🎬 CVPR 2024
[TokenCompose_SD14_A](https://mlpc-ucsd.github.io/TokenCompose/) is a [latent text-to-image diffusion model](https://arxiv.org/abs/2112.10752) finetuned from the [**Stable-Diffusion-v1-4**](https://huggingface.co/CompVis/stable-diffusion-v1-4) checkpoint at resolution 512x512 on the [VSR](https://github.com/cambridgeltl/visual-spatial-reasoning) split of [COCO image-caption pairs](https://cocodataset.org/#download) for 24,000 steps with a learning rate of 5e-6. In addition to the standard denoising loss, the training objective includes token-level grounding terms that enhance multi-category instance composition and photorealism. The "_A"/"_B" suffix distinguishes different finetuning runs that use the same configuration described above.
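To give an intuition for what a token-level grounding term looks like, below is a minimal schematic sketch (not the authors' implementation; see the paper for the actual objective). It assumes a cross-attention map for one prompt token and a binary segmentation mask for that token's object, and penalizes attention mass that falls outside the mask:

```python
import torch

def token_grounding_loss(attn, mask):
    """Schematic token-level grounding term (illustrative only).

    attn: (H, W) non-negative cross-attention map for one token.
    mask: (H, W) binary mask marking the token's grounded region.
    Returns 0 when all attention mass lies inside the mask.
    """
    attn = attn / (attn.sum() + 1e-8)   # normalize to a distribution
    inside = (attn * mask).sum()        # attention mass inside the mask
    return 1.0 - inside

# Toy example with dummy tensors: uniform attention over a 16x16 map,
# with 64 of the 256 cells inside the object mask.
attn = torch.ones(16, 16)
mask = torch.zeros(16, 16)
mask[4:12, 4:12] = 1.0
loss = token_grounding_loss(attn, mask)
print(round(loss.item(), 2))  # uniform attention: 1 - 64/256 = 0.75
```

In training, a term of this kind would be summed over grounded tokens and added to the denoising loss with a weighting coefficient.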
# 📄 Paper
The paper is available [on arXiv](https://arxiv.org/abs/2312.03626).
# 🧨 Example Usage
We strongly recommend using the [🤗 Diffusers](https://github.com/huggingface/diffusers) library to run our model.
```python
import torch
from diffusers import StableDiffusionPipeline

model_id = "mlpc-lab/TokenCompose_SD14_A"
device = "cuda"

# Load the finetuned checkpoint; pass torch_dtype=torch.float16
# instead to reduce GPU memory usage.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)
pipe = pipe.to(device)

prompt = "A cat and a wine glass"
image = pipe(prompt).images[0]
image.save("cat_and_wine_glass.png")
```
# ⬆️ Improvements over SD 1.4
<table>
<tr>
<th rowspan="3" align="center">Method</th>
<th colspan="9" align="center">Multi-category Instance Composition</th>
<th colspan="2" align="center">Photorealism</th>
<th colspan="1" align="center">Efficiency</th>
</tr>
<tr>
<!-- <th align="center">&nbsp;</th> -->
<th rowspan="2" align="center">Object Accuracy</th>
<th colspan="4" align="center">COCO</th>
<th colspan="4" align="center">ADE20K</th>
<th rowspan="2" align="center">FID (COCO)</th>
<th rowspan="2" align="center">FID (Flickr30K)</th>
<th rowspan="2" align="center">Latency</th>
</tr>
<tr>
<!-- <th align="center">&nbsp;</th> -->
<th align="center">MG2</th>
<th align="center">MG3</th>
<th align="center">MG4</th>
<th align="center">MG5</th>
<th align="center">MG2</th>
<th align="center">MG3</th>
<th align="center">MG4</th>
<th align="center">MG5</th>
</tr>
<tr>
<td align="center"><a href="https://huggingface.co/CompVis/stable-diffusion-v1-4">SD 1.4</a></td>
<td align="center">29.86</td>
<td align="center">90.72<sub>1.33</sub></td>
<td align="center">50.74<sub>0.89</sub></td>
<td align="center">11.68<sub>0.45</sub></td>
<td align="center">0.88<sub>0.21</sub></td>
<td align="center">89.81<sub>0.40</sub></td>
<td align="center">53.96<sub>1.14</sub></td>
<td align="center">16.52<sub>1.13</sub></td>
<td align="center">1.89<sub>0.34</sub></td>
<td align="center"><u>20.88</u></td>
<td align="center"><u>71.46</u></td>
<td align="center"><b>7.54</b><sub>0.17</sub></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/mlpc-ucsd/TokenCompose"><strong>TokenCompose (Ours)</strong></a></td>
<td align="center"><b>52.15</b></td>
<td align="center"><b>98.08</b><sub>0.40</sub></td>
<td align="center"><b>76.16</b><sub>1.04</sub></td>
<td align="center"><b>28.81</b><sub>0.95</sub></td>
<td align="center"><u>3.28</u><sub>0.48</sub></td>
<td align="center"><b>97.75</b><sub>0.34</sub></td>
<td align="center"><b>76.93</b><sub>1.09</sub></td>
<td align="center"><b>33.92</b><sub>1.47</sub></td>
<td align="center"><b>6.21</b><sub>0.62</sub></td>
<td align="center"><b>20.19</b></td>
<td align="center"><b>71.13</b></td>
<td align="center"><b>7.56</b><sub>0.14</sub></td>
</tr>
</table>
# 📰 Citation
```bibtex
@InProceedings{Wang2024TokenCompose,
    author    = {Wang, Zirui and Sha, Zhizhou and Ding, Zheng and Wang, Yilin and Tu, Zhuowen},
    title     = {TokenCompose: Text-to-Image Diffusion with Token-level Supervision},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {8553-8564}
}
```