---
library_name: diffusers
pipeline_tag: text-to-image
inference: true
base_model: stabilityai/stable-diffusion-2-1
---
# DPO LoRA Stable Diffusion v2-1
Model trained with the LoRA implementation of Diffusion DPO. Read more [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/diffusion_dpo).


Base Model: https://huggingface.co/stabilityai/stable-diffusion-2-1

## Running with [🧨 diffusers library](https://github.com/huggingface/diffusers)


```python
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/sd-turbo",  # SD Turbo is a distilled version of Stable Diffusion 2.1
    # "stabilityai/stable-diffusion-2-1", # for the original stable diffusion 2.1 model
    torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")
pipe.load_lora_weights("radames/sd-21-DPO-LoRA", adapter_name="dpo-lora-sd21")
pipe.set_adapters(["dpo-lora-sd21"], adapter_weights=[1.0])  # adjust adapter_weights to scale the LoRA's effect
seed = 123123
prompt = "portrait headshot professional of elon musk"
negative_prompt = "3d render, cartoon, drawing, art, low light"
generator = torch.Generator().manual_seed(seed)
images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=512,
    height=512,
    num_inference_steps=2,
    generator=generator,
    guidance_scale=1.0,
    num_images_per_prompt=4
).images
make_image_grid(images, 1, 4)
```

## Guidance Scale vs. LoRA Weights

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/DoSPw5PiShRckeqjVperr.jpeg)
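A grid like the one above can be reproduced by sweeping `guidance_scale` and the LoRA adapter weight together. This is a minimal sketch, not part of the original card: the sweep values are assumptions you can adjust, and the commented loop reuses `pipe`, `prompt`, `negative_prompt`, and `seed` from the snippet above.

```python
from itertools import product

# Assumed sweep values; pick whatever range you want to compare.
guidance_scales = [1.0, 2.0, 3.0]
lora_weights = [0.0, 0.5, 0.9, 1.0]

# One (guidance_scale, adapter_weight) pair per cell of the grid.
combos = list(product(guidance_scales, lora_weights))

# With the pipeline set up as in the snippet above, each cell would be:
# for gs, w in combos:
#     pipe.set_adapters(["dpo-lora-sd21"], adapter_weights=[w])
#     image = pipe(
#         prompt=prompt,
#         negative_prompt=negative_prompt,
#         num_inference_steps=2,
#         guidance_scale=gs,
#         generator=torch.Generator().manual_seed(seed),
#     ).images[0]
```

Fixing the seed per cell keeps the initial noise identical, so any difference between cells comes from the guidance scale and LoRA weight alone.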

  

## Examples

Left: without DPO LoRA; right: with DPO LoRA.

<img src="https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/R8E0hRpWIE6OhhtvgJeEU.png" style="max-width: 60rem;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/Eg4LbyxCfhmsk2INzqODw.png" style="max-width: 60rem;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/GD7KumSCNweBWMJ1TArI-.png" style="max-width: 60rem;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/SO7QoA9lZJY9hI0U4fBLy.png" style="max-width: 60rem;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/ZWbQwIQ5OklEgF9RW581R.png" style="max-width: 60rem;">