This model card focuses on the model associated with the Stable Diffusion v2-1 model.

This `stable-diffusion-2-1-unclip` model is a finetuned version of Stable Diffusion 2.1, modified to accept (noisy) CLIP image embeddings in addition to the text prompt. It can be used to create image variations ([examples](https://github.com/Stability-AI/stablediffusion/blob/main/doc/UNCLIP.MD)) or chained with text-to-image CLIP priors. The amount of noise added to the image embedding is specified via the `noise_level` parameter (0 means no noise, 1000 means full noise).
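The noising described above can be pictured as the standard diffusion forward process applied to the CLIP image embedding, with `noise_level` indexing the timestep. A minimal NumPy sketch — the function name is hypothetical and the schedule values are the common Stable Diffusion defaults, not taken from this model card:

```python
import numpy as np

def noise_image_embedding(embedding, noise_level, num_train_timesteps=1000,
                          beta_start=0.00085, beta_end=0.012, seed=0):
    """Apply the DDPM forward process q(z_t | z_0) to a CLIP image embedding.

    noise_level indexes the diffusion timestep: 0 leaves the embedding
    almost untouched, num_train_timesteps - 1 is close to pure noise.
    (Sketch only; the "scaled linear" schedule below is an assumption.)
    """
    betas = np.linspace(beta_start ** 0.5, beta_end ** 0.5, num_train_timesteps) ** 2
    alphas_cumprod = np.cumprod(1.0 - betas)
    abar = alphas_cumprod[noise_level]
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(embedding.shape)
    # Interpolate between the clean embedding and Gaussian noise.
    return np.sqrt(abar) * embedding + np.sqrt(1.0 - abar) * noise

emb = np.ones(768)                                   # stand-in for a CLIP image embedding
low = noise_image_embedding(emb, noise_level=0)      # nearly the original embedding
high = noise_image_embedding(emb, noise_level=999)   # close to pure Gaussian noise
```

Low `noise_level` keeps the variation faithful to the input image; high values let the text prompt and the prior dominate.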
- A public web demo of SD-unCLIP is available at [clipdrop.co/stable-diffusion-reimagine](https://clipdrop.co/stable-diffusion-reimagine)
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the [sd21-unclip-h.ckpt](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip/resolve/main/sd21-unclip-h.ckpt) and [sd21-unclip-l.ckpt](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip/resolve/main/sd21-unclip-l.ckpt).
- Use it with 🧨 [`diffusers`](#examples)
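The `diffusers` route in the last bullet can be sketched as below, assuming diffusers' `StableUnCLIPImg2ImgPipeline`. The helper name `reimagine` is hypothetical, and actually calling it requires downloading the model weights and a CUDA device:

```python
def reimagine(image_path: str, prompt: str = "", noise_level: int = 0):
    """Generate a variation of the image at image_path.

    noise_level controls how strongly the CLIP image embedding is noised
    (0 = no noise, 1000 = full noise), trading fidelity for diversity.
    """
    # Imports are local so the sketch can be read without the heavy deps installed.
    import torch
    from PIL import Image
    from diffusers import StableUnCLIPImg2ImgPipeline

    pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    init_image = Image.open(image_path).convert("RGB")
    # The pipeline noises the image's CLIP embedding by noise_level steps,
    # then runs the finetuned SD 2.1 model conditioned on that embedding.
    return pipe(init_image, prompt=prompt, noise_level=noise_level).images[0]
```

Under these assumptions, `reimagine("photo.png", noise_level=200)` would return a PIL image variation of the input, looser than one generated at `noise_level=0`.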
If you plan to build applications on top of this model that the general public may use, you are responsible for adding guardrails to minimize or prevent misuse, especially for the use cases highlighted in the sections below (Misuse, Malicious Use, and Out-of-Scope Use).

![Example](./sd_unclip_examples.jpeg)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model