---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
# Overview 📃✏️
This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237). See the original page for more information.

Keep in mind that this is an [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning) checkpoint model, so using fewer steps (around 12 to 25) and a low guidance scale (around 4 to 6) is recommended for the best results. Using a clip skip of 2 is also recommended.

This repo uses DPM++ 2M Karras as its default scheduler (Diffusers only).

# Diffusers Installation 🧨
### Dependencies Installation 📁
First, you'll need to install a few dependencies. This is a one-time setup; you won't need to run it again in the same environment.
```py
!pip install -q diffusers transformers accelerate
```
### Model Installation 💿
After the installation, you can run SDXL with the Yiffymix v51 model using the code below:
```py
from diffusers import StableDiffusionXLPipeline
import torch

model = "IDK-ab0ut/Yiffymix_v51-XL"
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16).to("cuda")

prompt = "a cat, detailed background, dynamic lighting"
negative_prompt = "low resolution, bad quality, deformed"
steps = 25
guidance_scale = 4
image = pipeline(prompt=prompt, negative_prompt=negative_prompt,
        num_inference_steps=steps, guidance_scale=guidance_scale,
        clip_skip=2).images[0]
image  # in a notebook, this displays the generated image
```
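If you're running this in a plain Python script rather than a notebook, save the result explicitly instead of relying on the trailing `image` expression:
```py
image.save("output.png")  # writes the generated image to disk
```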

Feel free to adjust the generation settings to your liking. For example, you can fix a seed for reproducibility or change the output resolution, as sketched below.
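A minimal sketch (the seed and resolution here are illustrative values, not recommendations from the original page):
```py
# Fix the random seed for reproducible results and request a 1024x1024 output.
generator = torch.Generator("cuda").manual_seed(1234)
image = pipeline(prompt=prompt, negative_prompt=negative_prompt,
        num_inference_steps=steps, guidance_scale=guidance_scale,
        clip_skip=2, width=1024, height=1024,
        generator=generator).images[0]
```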

### Scheduler's Customization ⚙️
<small>🧨</small><b>For Diffusers</b><small>🧨</small>

You can see all available schedulers [here](https://huggingface.co/docs/diffusers/v0.11.0/en/api/schedulers/overview).

To use a scheduler other than DPM++ 2M Karras with this repo, import the corresponding scheduler class from Diffusers. For example, to use Euler, first import [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/v0.29.2/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) by adding this line of code:
```py
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
```

The next step is to load the scheduler:
```py
model = "IDK-ab0ut/Yiffymix_v51-XL"
euler = EulerDiscreteScheduler.from_pretrained(
        model, subfolder="scheduler")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=euler, torch_dtype=torch.float16
           ).to("cuda")
```
Now you can generate images using the scheduler of your choice.
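If the pipeline is already loaded, you can also swap the scheduler in place instead of reloading everything, using the standard Diffusers `from_config` pattern:
```py
# Rebuild the scheduler from the pipeline's current scheduler config,
# keeping the checkpoint's noise-schedule settings intact.
pipeline.scheduler = EulerDiscreteScheduler.from_config(
    pipeline.scheduler.config)
```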

Another example uses DPM++ 2M SDE Karras. For that, first import [DPMSolverMultistepScheduler](https://huggingface.co/docs/diffusers/v0.29.2/api/schedulers/multistep_dpm_solver) from Diffusers:
```py
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
```
Next, load the scheduler into the pipeline:
```py
model = "IDK-ab0ut/Yiffymix_v51-XL"
dpmsolver = DPMSolverMultistepScheduler.from_pretrained(
            model, subfolder="scheduler", use_karras_sigmas=True,
            algorithm_type="sde-dpmsolver++")
# 'use_karras_sigmas=True' makes the scheduler use Karras
# sigmas during sampling. Schedulers run on the CPU and
# don't need to be moved to CUDA.
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=dpmsolver, torch_dtype=torch.float16,
           ).to("cuda")
```
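For reference, the repo's default DPM++ 2M Karras sampler maps to the same scheduler class with the non-SDE algorithm, so you can switch back to it at any time (a sketch of the equivalent configuration):
```py
# DPM++ 2M Karras = multistep DPM-Solver++ with Karras sigmas.
pipeline.scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model, subfolder="scheduler", use_karras_sigmas=True,
    algorithm_type="dpmsolver++")
```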
# That's all for this repository. Thank you for reading my silly note. Have a nice day!