AisingioroHao0 committed
Commit • 4de728a
1 Parent(s): 1522391
init
Files changed:
- README.assets/3x9_blueprint.png +0 -0
- README.assets/3x9_prompt.png +0 -0
- README.assets/3x9_result.png +0 -0
- README.md +49 -12
- app.py +182 -0
- requirements.txt +2 -0
README.assets/3x9_blueprint.png ADDED
README.assets/3x9_prompt.png ADDED
README.assets/3x9_result.png ADDED
README.md CHANGED
@@ -1,12 +1,49 @@
# StableDiffusionReferenceOnly

A general model for secondary creation.

No training is needed to achieve style transfer for any anime character, or to color line drawings.

Code: https://github.com/aihao2000/StableDiffusionReferenceOnly

Model: https://huggingface.co/AisingioroHao0/stable-diffusion-reference-only-automatic-coloring-0.1.2

| prompt | blueprint | result |
| :---------------------------------: | :------------------------------------: | :---------------------------------: |
| ![](./README.assets/3x9_prompt.png) | ![](./README.assets/3x9_blueprint.png) | ![](./README.assets/3x9_result.png) |

### Instructions

Secondary creation requires two images.

The first is the prompt image: a reference image whose character you want to carry into the new image. We provide a ```character segment``` function to clear the background, which often yields better results.

The second is the blueprint image, which controls the structure of the new picture. Using ```character segment``` on it is also recommended. Two more buttons apply to the blueprint: if you upload a hand-drawn line drawing, just click ```color inversion``` to ensure white lines on a black background; if you upload a color image of another character, click ```get line art``` first and then ```color inversion```. Finally, click ```inference``` to get the result.
Alternatively, you can directly upload a reference image and a line-art image and click ```automatic coloring``` to get results without the steps above.

You can also upload two color character pictures and try ```style transfer``` directly.
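If you prefer to call the model from Python instead of the UI, here is a minimal sketch mirroring the calls app.py makes. It assumes the dependencies in requirements.txt are installed and that the blueprint is already inverted line art (white lines on black):

```python
from PIL import Image
from diffusers.schedulers import UniPCMultistepScheduler
from stable_diffusion_reference_only.pipelines.stable_diffusion_reference_only_pipeline import (
    StableDiffusionReferenceOnlyPipeline,
)

# Load the automatic-coloring pipeline and use the UniPC scheduler, as in app.py.
pipeline = StableDiffusionReferenceOnlyPipeline.from_pretrained(
    "AisingioroHao0/stable-diffusion-reference-only-automatic-coloring-0.1.2"
)
pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config)

prompt = Image.open("README.assets/3x9_prompt.png").convert("RGB")
blueprint = Image.open("README.assets/3x9_blueprint.png").convert("RGB")

result = pipeline(
    prompt=prompt, blueprint=blueprint, num_inference_steps=20
).images[0]
result.save("result.png")
```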
|
34 |
+
|
35 |
+
## 介绍
|
36 |
+
|
37 |
+
二次创作需要两张图片。
|
38 |
+
|
39 |
+
一是提示图像。 它是您希望迁移到新图像的参考图像。 我们提供了角色分割```character segment```功能来清除背景,这往往会带来更好的效果。
|
40 |
+
|
41 |
+
另一种是蓝图图像。 它将控制新图片的图片结构。还建议使用```character segment```来增强效果。 还有另外两个按钮。 如果您输入的图纸是手动画线,则只需点击```color inversion```按钮即可保证黑底白线。 如果您要输入另一个角色的彩色图像,则需要单击“获取线条艺术”按钮,然后单击```color inversion```按钮。 然后点击```inference```按钮即可得到结果。
|
42 |
+
|
43 |
+
|
44 |
+
|
45 |
+
您也可以直接上传参考图和线稿图,点击```automatic coloring```即可得到结果,无需进行上述操作。
|
46 |
+
|
47 |
+
也可以直接上传两张彩色人物图片来试试风格迁移```style transfer```。
|
48 |
+
|
49 |
+
##
|
app.py ADDED
@@ -0,0 +1,182 @@
import huggingface_hub
import gradio as gr
from stable_diffusion_reference_only.pipelines.stable_diffusion_reference_only_pipeline import (
    StableDiffusionReferenceOnlyPipeline,
)
import anime_segmentation
from diffusers.schedulers import UniPCMultistepScheduler
from PIL import Image
import cv2
import numpy as np
import os

# Load the reference-only automatic-coloring pipeline and switch to the
# UniPC multistep scheduler.
automatic_coloring_pipeline = StableDiffusionReferenceOnlyPipeline.from_pretrained(
    "AisingioroHao0/stable-diffusion-reference-only-automatic-coloring-0.1.2"
)
automatic_coloring_pipeline.scheduler = UniPCMultistepScheduler.from_config(
    automatic_coloring_pipeline.scheduler.config
)

# Anime character segmentation model, used to remove image backgrounds.
segment_model = anime_segmentation.get_model(
    model_path=huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.ckpt")
)


def character_segment(img):
    # Keep only the character, clearing the background.
    if img is None:
        return None
    img = anime_segmentation.character_segment(segment_model, img)
    img = cv2.cvtColor(img, cv2.COLOR_RGBA2RGB)
    return img


def color_inversion(img):
    # Invert colors so line art becomes white lines on a black background.
    if img is None:
        return None
    return 255 - img


def get_line_art(img):
    # Extract line art from a color image via adaptive mean thresholding.
    if img is None:
        return None
    img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    img = cv2.adaptiveThreshold(
        img,
        255,
        cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY,
        blockSize=5,
        C=7,
    )
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
    return img


def inference(prompt, blueprint, num_inference_steps):
    # Run the pipeline on an already preprocessed prompt/blueprint pair.
    if prompt is None or blueprint is None:
        return None
    return np.array(
        automatic_coloring_pipeline(
            prompt=Image.fromarray(prompt),
            blueprint=Image.fromarray(blueprint),
            num_inference_steps=num_inference_steps,
        ).images[0]
    )


def automatic_coloring(prompt, blueprint, num_inference_steps):
    # For a line-art blueprint, only color inversion is needed before inference.
    if prompt is None or blueprint is None:
        return None
    blueprint = color_inversion(blueprint)
    return inference(prompt, blueprint, num_inference_steps)


def style_transfer(prompt, blueprint, num_inference_steps):
    # For two color images: segment both, turn the blueprint into inverted
    # line art, then run inference.
    if prompt is None or blueprint is None:
        return None
    prompt = character_segment(prompt)
    blueprint = character_segment(blueprint)
    blueprint = get_line_art(blueprint)
    blueprint = color_inversion(blueprint)
    return inference(prompt, blueprint, num_inference_steps)


with gr.Blocks() as demo:
    gr.Markdown(
        """
# Stable Diffusion Reference Only Automatic Coloring 0.1.2\n\n
demo for [<svg height="32" aria-hidden="true" viewBox="0 0 16 16" version="1.1" width="32" data-view-component="true" class="octicon octicon-mark-github v-align-middle color-fg-default">
<path d="M8 0c4.42 0 8 3.58 8 8a8.013 8.013 0 0 1-5.45 7.59c-.4.08-.55-.17-.55-.38 0-.27.01-1.13.01-2.2 0-.75-.25-1.23-.54-1.48 1.78-.2 3.65-.88 3.65-3.95 0-.88-.31-1.59-.82-2.15.08-.2.36-1.02-.08-2.12 0 0-.67-.22-2.2.82-.64-.18-1.32-.27-2-.27-.68 0-1.36.09-2 .27-1.53-1.03-2.2-.82-2.2-.82-.44 1.1-.16 1.92-.08 2.12-.51.56-.82 1.28-.82 2.15 0 3.06 1.86 3.75 3.64 3.95-.23.2-.44.55-.51 1.07-.46.21-1.61.55-2.33-.66-.15-.24-.6-.83-1.23-.82-.67.01-.27.38.01.53.34.19.73.9.82 1.13.16.45.68 1.31 2.69.94 0 .67.01 1.3.01 1.49 0 .21-.15.45-.55.38A7.995 7.995 0 0 1 0 8c0-4.42 3.58-8 8-8Z"></path>
</svg>](https://github.com/aihao2000/StableDiffusionReferenceOnly)
"""
    )
    with gr.Row():
        with gr.Column():
            prompt_input_component = gr.Image(shape=(512, 512), label="prompt")
            prompt_character_segment_button = gr.Button(
                "character segment",
            )
            prompt_character_segment_button.click(
                character_segment,
                inputs=prompt_input_component,
                outputs=prompt_input_component,
            )
        with gr.Column():
            blueprint_input_component = gr.Image(shape=(512, 512), label="blueprint")
            blueprint_character_segment_button = gr.Button("character segment")
            blueprint_character_segment_button.click(
                character_segment,
                inputs=blueprint_input_component,
                outputs=blueprint_input_component,
            )
            get_line_art_button = gr.Button(
                "get line art",
            )
            get_line_art_button.click(
                get_line_art,
                inputs=blueprint_input_component,
                outputs=blueprint_input_component,
            )
            color_inversion_button = gr.Button(
                "color inversion",
            )
            color_inversion_button.click(
                color_inversion,
                inputs=blueprint_input_component,
                outputs=blueprint_input_component,
            )
        with gr.Column():
            result_output_component = gr.Image(shape=(512, 512), label="result")
            num_inference_steps_input_component = gr.Number(
                20, label="num inference steps", minimum=1, maximum=1000, step=1
            )
            inference_button = gr.Button("inference")
            inference_button.click(
                inference,
                inputs=[
                    prompt_input_component,
                    blueprint_input_component,
                    num_inference_steps_input_component,
                ],
                outputs=result_output_component,
            )
            automatic_coloring_button = gr.Button("automatic coloring")
            automatic_coloring_button.click(
                automatic_coloring,
                inputs=[
                    prompt_input_component,
                    blueprint_input_component,
                    num_inference_steps_input_component,
                ],
                outputs=result_output_component,
            )
            style_transfer_button = gr.Button("style transfer")
            style_transfer_button.click(
                style_transfer,
                inputs=[
                    prompt_input_component,
                    blueprint_input_component,
                    num_inference_steps_input_component,
                ],
                outputs=result_output_component,
            )
    with gr.Row():
        # Cached example pair taken from the README assets.
        gr.Examples(
            examples=[
                [
                    os.path.join(
                        os.path.dirname(__file__), "README.assets", "3x9_prompt.png"
                    ),
                    os.path.join(
                        os.path.dirname(__file__), "README.assets", "3x9_blueprint.png"
                    ),
                ],
            ],
            inputs=[prompt_input_component, blueprint_input_component],
            outputs=result_output_component,
            fn=lambda x, y: None,
            cache_examples=True,
        )


demo.launch()
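Note: `demo.launch()` starts the Gradio server. On a Hugging Face Space the app starts automatically when the Space boots; locally, you can install the requirements below and run `python app.py`.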
requirements.txt ADDED
@@ -0,0 +1,2 @@
git+https://github.com/aihao2000/StableDiffusionReferenceOnly.git
git+https://github.com/aihao2000/anime_segmentation.git
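These two Git dependencies provide the `stable_diffusion_reference_only` and `anime_segmentation` modules that app.py imports; the remaining imports (`gradio`, `diffusers`, `opencv`, `numpy`, `Pillow`, `huggingface_hub`) are presumably pulled in transitively or preinstalled in the Spaces base image.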