---
license: artistic-2.0
datasets:
- fka/awesome-chatgpt-prompts
metrics:
- character
pipeline_tag: image-to-image
---
# InstanceDiffusion: Instance-level Control for Image Generation
We introduce **InstanceDiffusion**, which adds precise instance-level control to text-to-image diffusion models. InstanceDiffusion supports free-form language conditions per instance and allows flexible ways to specify instance locations: simple **single points**, **scribbles**, **bounding boxes**, intricate **instance segmentation masks**, and combinations thereof.
Compared to the previous SOTA, InstanceDiffusion achieves **2.0 times** higher AP50 for box inputs and **1.7 times** higher IoU for mask inputs.
> [**InstanceDiffusion: Instance-level Control for Image Generation**](http://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/)
> [Xudong Wang](https://people.eecs.berkeley.edu/~xdwang/), [Trevor Darrell](https://people.eecs.berkeley.edu/~trevor/), [Saketh Rambhatla](https://rssaketh.github.io/),
> [Rohit Girdhar](https://rohitgirdhar.github.io/), [Ishan Misra](https://imisra.github.io/)
> GenAI, Meta; BAIR, UC Berkeley
> Preprint
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/frank-xwang/InstanceDiffusion](https://github.com/frank-xwang/InstanceDiffusion)
- **Paper:** [https://arxiv.org/pdf/2402.03290.pdf](https://arxiv.org/pdf/2402.03290.pdf)
- **Project Page:** [https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/](https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/)
## Model Description
InstanceDiffusion enhances text-to-image models by providing additional instance-level control. In addition to a global text prompt, InstanceDiffusion allows paired instance-level prompts and their locations (e.g., points, boxes, scribbles, or instance masks) to be specified when generating images.
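As a rough illustration, the per-instance conditioning described above pairs a free-form prompt with one location format per instance. The sketch below is purely illustrative: the field names and structure are assumptions for exposition, not the repository's actual input schema.

```python
# Hypothetical per-instance conditioning spec (field names are illustrative,
# NOT the InstanceDiffusion repository's real input format).
global_prompt = "a dog and a car parked under a tree"

instances = [
    {
        "prompt": "a red vintage car",     # free-form per-instance text
        "box": [0.10, 0.45, 0.55, 0.90],   # normalized [x0, y0, x1, y1]
    },
    {
        "prompt": "a golden retriever",
        "point": [0.75, 0.60],             # a single click location
    },
    {
        "prompt": "a tall oak tree",
        "mask": "oak_mask.png",            # path to a binary instance mask
    },
]

# Each instance carries exactly one location cue; formats can be mixed
# freely within a single generation request.
for inst in instances:
    location_keys = [k for k in inst if k != "prompt"]
    print(inst["prompt"], "->", location_keys)
```

The key point is that location formats of different kinds (box, point, mask) can coexist in one request, each tied to its own text prompt.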
We add our proposed learnable UniFusion blocks to handle the additional per-instance conditioning. UniFusion fuses the instance conditioning with the backbone and modulates its features to enable instance-conditioned image generation. Additionally, we propose ScaleU blocks that improve the UNet's ability to respect instance conditioning by rescaling the skip-connection and backbone feature maps produced in the UNet. At inference, we propose a Multi-instance Sampler that reduces information leakage across multiple instances.
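To make the ScaleU idea concrete, here is a minimal NumPy sketch under a simplifying assumption: one learned scale per channel applied to the backbone features and to the skip connection before the usual decoder concatenation. The real block in the paper learns these parameters inside the UNet (and treats skip features more carefully); this is only a shape-level illustration.

```python
import numpy as np

def scaleu_sketch(backbone_feat, skip_feat, scale_b, scale_s):
    """Channel-wise rescaling of backbone and skip features,
    followed by the channel concatenation a UNet decoder performs.

    backbone_feat, skip_feat: arrays of shape (C, H, W)
    scale_b, scale_s: learned per-channel scales, shape (C,)
    """
    b = backbone_feat * scale_b[:, None, None]  # broadcast (C,) -> (C, 1, 1)
    s = skip_feat * scale_s[:, None, None]
    return np.concatenate([b, s], axis=0)       # (2C, H, W)

# Toy example: 4 channels of 8x8 features on each path.
C, H, W = 4, 8, 8
out = scaleu_sketch(
    np.ones((C, H, W)), np.ones((C, H, W)),
    np.full(C, 1.1), np.full(C, 0.9),
)
print(out.shape)  # (8, 8, 8)
```

The design intent is that the network can learn to amplify or suppress each path per channel, rebalancing how much the skip connection versus the main backbone contributes when instance conditions must be respected.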
Please check our [paper](https://arxiv.org/abs/2402.03290) and [project page](http://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/) for more details.
## Citation
If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐ and a citation.
```bibtex
@misc{wang2024instancediffusion,
title={InstanceDiffusion: Instance-level Control for Image Generation},
author={Xudong Wang and Trevor Darrell and Sai Saketh Rambhatla and Rohit Girdhar and Ishan Misra},
year={2024},
eprint={2402.03290},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Disclaimer
This repository represents a re-implementation of InstanceDiffusion conducted by the first author during his time at UC Berkeley. Minor performance discrepancies may exist (differences of ~1% in AP) compared to the results reported in the original paper. The goal of this repository is to replicate the original paper's findings and insights, primarily for academic and research purposes.