Commit d177f6b · Nupur Kumari committed
Parent(s): 4b2c52c
concept ablation
README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-title:
+title: Ablating Concepts in Text-to-Image Diffusion Models
 emoji: 💡
 colorFrom: indigo
 colorTo: gray
@@ -12,24 +12,21 @@ license: mit
 
 
 
-#
+# Ablating Concepts in Text-to-Image Diffusion Models
 
-Project Website [https://
+Project Website [https://www.cs.cmu.edu/~concept-ablation/](https://www.cs.cmu.edu/~concept-ablation/) <br>
-Arxiv Preprint [https://arxiv.org/
+Arxiv Preprint [https://arxiv.org/abs/2303.13516](https://arxiv.org/abs/2303.13516) <br>
-Fine-tuned Weights [https://erasing.baulab.info/weights/esd_models/](https://erasing.baulab.info/weights/esd_models/) <br>
 <div align='center'>
 <img src = 'images/applications.png'>
 </div>
 
-
-
-
-Given only a short text description of an undesired visual concept and no additional data, our method fine-tunes model weights to erase the targeted concept. Our method can avoid NSFW content, stop imitation of a specific artist's style, or even erase a whole object class from model output, while preserving the model's behavior and capabilities on other topics.
+Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability. However, these models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed images, and personal photos. Furthermore, they have been found to replicate the style of various living artists or memorize exact training samples. How can we remove such copyrighted concepts or images without retraining the model from scratch?
+
+We propose an efficient method of ablating concepts in the pretrained model, i.e., preventing the generation of a target concept. Our algorithm learns to match the image distribution for a given target style, instance, or text prompt we wish to ablate to the distribution corresponding to an anchor concept, e.g., Grumpy Cat to Cats.
 
 ## Demo vs github
 
-This demo uses
+This demo uses different hyper-parameters than the github version for faster training.
 
 ## Running locally
 
@@ -39,15 +36,15 @@ This demo uses an updated implementation from the original Erasing codebase the
 
 3.) Open the application in browser at `http://127.0.0.1:7860/`
 
-4.) Train, evaluate, and save models
+4.) Train, evaluate, and save models
 
 ## Citing our work
 The preprint can be cited as follows
 ```
-@
-
-
-
-year={2023}
+@inproceedings{kumari2023conceptablation,
+author = {Kumari, Nupur and Zhang, Bingliang and Wang, Sheng-Yu and Shechtman, Eli and Zhang, Richard and Zhu, Jun-Yan},
+title = {Ablating Concepts in Text-to-Image Diffusion Models},
+booktitle = ICCV,
+year = {2023},
 }
 ```
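The new README describes the method as matching the image distribution of the target concept (e.g. "Grumpy Cat") to that of an anchor concept (e.g. "cat"). The toy sketch below is not the authors' code: the function names and the simplified scalar loss are illustrative assumptions. It only shows the shape of the objective, minimizing the distance between the fine-tuned prediction for the target prompt and a frozen prediction for the anchor prompt, with short lists of floats standing in for U-Net noise predictions.

```python
# Toy illustration (assumption): concept ablation read as distribution
# matching. Real implementations operate on diffusion-model noise
# predictions over images; here each "prediction" is a short list of
# floats so the objective and its gradient are easy to see.

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def ablation_step(eps_target, eps_anchor, lr=0.2):
    """One gradient step on L = MSE(eps_target, eps_anchor) w.r.t. the
    target-prompt prediction; the anchor prediction stays frozen (no
    gradient flows into it)."""
    n = len(eps_target)
    grad = [2.0 * (t - a) / n for t, a in zip(eps_target, eps_anchor)]
    return [t - lr * g for t, g in zip(eps_target, grad)]

# Fine-tuning drives the target-prompt prediction toward the anchor.
target = [1.0, -2.0, 0.5]   # stand-in for the "Grumpy Cat" prediction
anchor = [0.2, 0.1, -0.3]   # stand-in for the frozen "cat" prediction
losses = []
for _ in range(100):
    losses.append(mse(target, anchor))
    target = ablation_step(target, anchor)
print(losses[0] > losses[-1])  # prints True: loss shrinks toward the anchor
```

In the actual system the same idea is applied to model weights rather than to the predictions directly, so that prompts for the ablated concept come to generate the anchor concept instead.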