Update README.md
README.md
CHANGED
@@ -85,52 +85,16 @@ widget:
     margin-bottom: 0em;
   }
 
-  .
-
-
-    left: 0;
-    right: 0;
-    color: white;
-    width: 100%;
-    height: 40%;
-    display: flex;
-    flex-direction: column;
-    justify-content: center;
-    align-items: center;
-    font-size: 1em;
-    font-style: bold;
-    text-align: center;
-    opacity: 0;
-    /* Keep the text fully opaque */
-    background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
-    transition: opacity .5s;
-  }
-
-  .custom-image-container:hover .overlay {
-    opacity: 1;
-    /* Make the overlay always visible */
+  .nsfw-filter {
+    filter: blur(8px); /* Apply a blur effect */
+    transition: filter 0.3s ease; /* Smooth transition for the blur effect */
   }
 
-  .
-
-    -webkit-background-clip: text;
-    color: transparent;
-    /* Fallback for browsers that do not support this effect */
-    text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
-    /* Enhanced text shadow for better legibility */
-  }
-
-  .overlay-subtext {
-    font-size: 0.75em;
-    margin-top: 0.5em;
-    font-style: italic;
-  }
-
-  .overlay,
-  .overlay-subtext {
-    text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
+  .custom-image-container:hover .nsfw-filter {
+    filter: none; /* Remove the blur effect on hover */
   }
 </style>
+
 <h1 class="title">
   <span>Style Enhancer XL LoRA</span>
 </h1>
@@ -138,34 +102,25 @@ widget:
 <tr>
   <td>
     <div class="custom-image-container">
-      <
-      <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image1.png" alt="sample1">
-      <div class="overlay"> Twilight Whispers <div class="overlay-subtext">"A serene gaze into the dusky lights"</div>
-      </div>
-      </a>
+      <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/kMqCcN3CpMPHO1qyEgnGk.png" alt="sample1">
     </div>
     <div class="custom-image-container">
-      <
-      <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image4.png" alt="sample4">
-      <div class="overlay"> Bloom of Youth <div class="overlay-subtext">"Amidst the dance of petals and sunbeams"</div>
-      </div>
-      </a>
+      <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/pfLIDf7ZX6WJHgTlfqiQk.png" alt="sample4">
     </div>
   </td>
   <td>
     <div class="custom-image-container">
-      <
-      <img class="custom-image" src="https://huggingface.co/Linaqruf/animagine-xl/resolve/main/sample_images/image2.png" alt="sample2">
-      <div class="overlay"> Starry-eyed Dreams <div class="overlay-subtext">"Lost in the constellation of imagination"</div>
-      </div>
-      </a>
+      <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/s5ZCIaETb_eKijgbLbXwU.png" alt="sample2">
     </div>
     <div class="custom-image-container">
-      <
-
-
-
-
+      <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/qjC5jKExA_JE5BuQJ6-Ue.png" alt="sample3">
+  </td>
+  <td>
+    <div class="custom-image-container">
+      <img class="custom-image nsfw-filter" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/1vvUY1qbgot5np4CPmORh.png" alt="sample1">
+    </div>
+    <div class="custom-image-container">
+      <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/7OQJdRKKpZponZiZ8cOre.png" alt="sample4">
     </div>
   </td>
 </tr>
@@ -175,38 +130,33 @@ widget:
 
 ## Overview
 
-**Style Enhancer XL LoRA** is
-
-Like other anime-style Stable Diffusion models, it also supports Danbooru tags to generate images.
-
-e.g. _**face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck**_
+**Style Enhancer XL LoRA** is an advanced, high-resolution LoRA (Low-Rank Adaptation) adapter designed to enhance the capabilities of Animagine XL 2.0. This innovative model excels in fine-tuning and refining anime-style images, producing unparalleled quality and detail. It seamlessly integrates with the Stable Diffusion XL framework, and uniquely supports Danbooru tags for precise and creative image generation.
 
+Example tags include _**face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck**_.
 
 <hr>
 
 ## Model Details
 
 - **Developed by:** [Linaqruf](https://github.com/Linaqruf)
-- **Model type:** LoRA adapter
-- **Model Description:**
+- **Model type:** LoRA adapter for Stable Diffusion XL
+- **Model Description:** A compact yet powerful adapter designed to augment and enhance the output of large models like Animagine XL 2.0. This adapter not only improves the style and quality of anime-themed images but also allows users to recreate the distinct 'old-school' art style of SD 1.5. It's the perfect tool for generating high-fidelity, anime-inspired visual content.
 - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
 - **Finetuned from model:** [Animagine XL 2.0](https://huggingface.co/Linaqruf/animagine-xl-2.0)
 
 <hr>
 
-## 🧨 Diffusers
+## 🧨 Diffusers Installation
 
-
-
+Ensure the installation of the latest `diffusers` library, along with other essential packages:
+
+```bash
 pip install diffusers --upgrade
+pip install transformers accelerate safetensors
 ```
 
-
-```
-pip install invisible_watermark transformers accelerate safetensors
-```
+The following Python script demonstrates how to utilize the Style Enhancer XL LoRA with Animagine XL 2.0. The default scheduler is EulerAncestralDiscreteScheduler, but it can be explicitly defined for clarity.
 
-Running the pipeline (The default scheduler for Animagine XL 2.0 is **EulerAncestralDiscreteScheduler** but you may also declare it in the code if you want to make sure)*:
 ```py
 import torch
 from diffusers import (
@@ -215,14 +165,17 @@ from diffusers import (
     AutoencoderKL
 )
 
+# Initialize LoRA model and weights
 lora_model_id = "Linaqruf/style-enhancer-xl-lora"
 lora_filename = "style-enhancer-xl.safetensors"
 
+# Load VAE component
 vae = AutoencoderKL.from_pretrained(
     "madebyollin/sdxl-vae-fp16-fix",
     torch_dtype=torch.float16
 )
 
+# Configure the pipeline
 pipe = StableDiffusionXLPipeline.from_pretrained(
     "Linaqruf/animagine-xl-2.0",
     vae=vae,
@@ -230,13 +183,14 @@ pipe = StableDiffusionXLPipeline.from_pretrained(
     use_safetensors=True,
     variant="fp16"
 )
-
 pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
 pipe.to('cuda')
 
+# Load and fuse LoRA weights
 pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
 pipe.fuse_lora(lora_scale=0.6)
 
+# Define prompts and generate image
 prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
 negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
 
@@ -249,9 +203,10 @@ image = pipe(
     num_inference_steps=50
 ).images[0]
 
+# Unfuse LoRA before saving the image
 pipe.unfuse_lora()
-
 image.save("anime_girl.png")
+
 ```
 <hr>
 
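A usage note on the script added in this commit: because `pipe.fuse_lora(lora_scale=...)` bakes the adapter into the base weights and `pipe.unfuse_lora()` restores them, the two calls can be paired to compare several adapter strengths with the same prompt. The sketch below is a minimal illustration, not part of the committed README; it assumes the `pipe`, `prompt`, and `negative_prompt` objects defined in the script above, and the scale values, fixed seed, and output filenames are arbitrary choices.

```py
# Minimal sketch: reuse `pipe`, `prompt`, and `negative_prompt` from the script above
# to compare a few adapter strengths. Scale values, seed, and filenames are illustrative.
for scale in (0.3, 0.6, 0.9):
    # Bake the adapter into the base weights at the chosen strength
    pipe.fuse_lora(lora_scale=scale)

    image = pipe(
        prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=50,
        # Same starting noise for every run so only the LoRA strength changes
        generator=torch.Generator(device="cuda").manual_seed(0),
    ).images[0]

    # Restore the original base weights before fusing at the next strength
    pipe.unfuse_lora()

    image.save(f"style_enhancer_scale_{scale}.png")
```

Re-fusing at a new scale works because the LoRA weights stay loaded after `unfuse_lora()`; a lower `lora_scale` keeps more of the base Animagine XL 2.0 look, while a higher one leans further into the adapter's style.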