Update README.md
license: apache-2.0
---

Pre-trained models and output samples of ControlNet-LLLite.

Note: The model structure is highly experimental and may be subject to change in the future.

Inference with ComfyUI: https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI

For AUTOMATIC1111's Web UI, the [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) extension supports ControlNet-LLLite.

Training: https://github.com/kohya-ss/sd-scripts/blob/sdxl/docs/train_lllite_README.md

The recommended preprocessing for the blur model is Gaussian blur.
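
For illustration, a minimal sketch of that preprocessing with OpenCV (the kernel size is an arbitrary example value, not one documented for this repository):

```python
import cv2

# Read the source image and blur it to create the conditioning image.
# The kernel size (27, 27) is only an example; adjust it to control how
# much detail is removed before the model reconstructs it.
image = cv2.imread("source.png")
blurred = cv2.GaussianBlur(image, (27, 27), 0)
cv2.imwrite("blur_condition.png", blurred)
```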

# Naming Rules

Example: `controllllite_v01032064e_sdxl_blur_500-1000.safetensors` (a parsing sketch follows the list below).

- `v01` : Version flag.
- `032` : Dimensions of the conditioning.
- `064` : Dimensions of the control module.
- `sdxl` : Base model.
- `blur` : The control method. `anime` means the LLLite model is trained with an anime SDXL model and anime images.
- `500-1000` : (Optional) Timesteps used for training. If this is `500-1000`, apply the control only during the first half of the sampling steps.
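
As a concrete illustration of the rule above, a small parsing sketch (the regular expression and field names are my own and are not part of this repository):

```python
import re

# Pattern mirroring the naming rule described above:
# controllllite_v<version><cond dims><control dims>e_<base>_<method>[_<timesteps>].safetensors
NAME_RE = re.compile(
    r"controllllite_v(?P<version>\d{2})(?P<cond_dim>\d{3})(?P<control_dim>\d{3})e"
    r"_(?P<base>[a-z0-9]+)_(?P<method>[a-z0-9_-]+?)(?:_(?P<timesteps>\d+-\d+))?"
    r"\.safetensors"
)

match = NAME_RE.match("controllllite_v01032064e_sdxl_blur_500-1000.safetensors")
print(match.groupdict())
# {'version': '01', 'cond_dim': '032', 'control_dim': '064',
#  'base': 'sdxl', 'method': 'blur', 'timesteps': '500-1000'}
```

A few released file names deviate slightly from this pattern (for example, a hyphen before the timesteps in `controllllite_v01032064e_sdxl_blur-500-1000.safetensors`), so treat the sketch as illustrative only.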

# Models

## Trained on sdxl base

- controllllite_v01032064e_sdxl_blur-500-1000.safetensors
  - trained with 3,919 generated images and Gaussian blur preprocessing.
- controllllite_v01032064e_sdxl_canny.safetensors
  - trained with 3,919 generated images and Canny preprocessing (a Canny sketch follows this list).
- controllllite_v01032064e_sdxl_depth_500-1000.safetensors
  - trained with 3,919 generated images and MiDaS v3 - Large preprocessing.
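
For the canny models, the conditioning image is a Canny edge map; a minimal sketch with OpenCV (the thresholds are arbitrary example values, not the ones used for training):

```python
import cv2

# Produce a Canny edge map to use as the conditioning image.
# The thresholds (100, 200) are illustrative; tune them per image.
image = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 100, 200)
cv2.imwrite("canny_condition.png", edges)
```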

## Trained on anime model

The model that ControlNet-LLLite was trained on is our custom model.

- controllllite_v01016032e_sdxl_blur_anime_beta.safetensors
  - beta version.
- controllllite_v01032064e_sdxl_blur-anime_500-1000.safetensors
  - trained with 2,836 generated images and Gaussian blur preprocessing.
- controllllite_v01032064e_sdxl_canny_anime.safetensors
  - trained with 921 generated images and Canny preprocessing.
- controllllite_v01008016e_sdxl_depth_anime.safetensors
  - trained with 1,433 generated images and MiDaS v3 - Large preprocessing.
- controllllite_v01032064e_sdxl_fake_scribble_anime.safetensors
  - trained with 921 generated images and PiDiNet preprocessing.
- controllllite_v01032064e_sdxl_pose_anime.safetensors
  - trained with 921 generated images and MMPose preprocessing.
- controllllite_v01032064e_sdxl_pose_anime_v2_500-1000.safetensors
  - trained with 1,415 generated images and MMPose preprocessing.

# Samples

## sdxl base

![blur](./bs.jpg)

![canny](./cs.jpg)

![depth](./ds.jpg)

## anime model

![source 1](./canny1.png)

![sample 1](./sample1.jpg)