Update README.md

README.md
negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
example_title: 1boy
---

<style>
.title-container {
  display: flex;
  background: transparent;
}

.title span {
  background: -webkit-linear-gradient(45deg, #7ed56f, #28b485);
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}

.custom-table {
  table-layout: fixed;
  width: 100%;
  padding: 10px;
  box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}

.custom-image-container {
  position: relative;
  width: 100%;
  margin-bottom: 0em;
  overflow: hidden;
  border-radius: 10px;
  transition: transform .7s; /* Smooth transition for the container */
}

.custom-image-container:hover {
  transform: scale(1.05); /* Scale the container on hover */
}

.custom-image {
  width: 100%;
  height: auto;
  transition: transform .7s;
  margin-bottom: 0em;
}

.nsfw-filter {
  filter: blur(8px); /* Apply a blur effect */
  transition: filter 0.3s ease; /* Smooth transition for the blur effect */
}

.custom-image-container:hover .nsfw-filter {
  filter: none; /* Remove the blur effect on hover */
}

.custom-image-container:hover .overlay {
  opacity: 1;
}

.overlay-text {
  background: linear-gradient(45deg, #7ed56f, #28b485);
  -webkit-background-clip: text;
  color: transparent;
  text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
}

.overlay-subtext {
  font-size: 0.75em;
  margin-top: 0.5em;
}

.overlay,
.overlay-subtext {
  text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
}
</style>

<h1 class="title">
  <span>Animagine XL 3.1</span>
</h1>
<table class="custom-table">
  <tr>
    <td></td>
  </tr>
</table>
**Animagine XL 3.1** is an update in the Animagine XL V3 series, enhancing the previous version, Animagine XL 3.0. This open-source, anime-themed text-to-image model has been improved for generating anime-style images with higher quality. It includes a broader range of characters from well-known anime series, an optimized dataset, and new aesthetic tags for better image creation. Built on Stable Diffusion XL, Animagine XL 3.1 aims to be a valuable resource for anime fans, artists, and content creators by producing accurate and detailed representations of anime characters.
## Model Details
- **Developed by**: [Cagliostro Research Lab](https://huggingface.co/cagliostrolab)
- **In collaboration with**: [SeaArt.ai](https://www.seaart.ai/)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Animagine XL 3.1 generates high-quality anime images from textual prompts. It boasts enhanced hand anatomy, improved concept understanding, and advanced prompt interpretation.
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Fine-tuned from**: [Animagine XL 3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0)
## Gradio & Colab Integration

Try the demo powered by Gradio in Hugging Face Spaces: [![Open In Spaces](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/Linaqruf/Animagine-XL)

Or open the demo in Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/animagine_xl_demo.ipynb)
## 🧨 Diffusers Installation

First install the required libraries:

```bash
pip install diffusers transformers accelerate safetensors --upgrade
```

Then run image generation with the following example code:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.to('cuda')

prompt = "1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, v, smile, looking at viewer, outdoors, night"
negative_prompt = "nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    guidance_scale=7,
    num_inference_steps=28
).images[0]

image.save("./output/asuka_test.png")
```
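The example above requests an 832x1216 canvas. As a quick, illustrative sanity check (a convention of SDXL-family models, not an official rule from this card), both sides should be multiples of 64 and the total pixel count should stay near the 1024x1024 base resolution:

```python
# Illustrative check of the example resolution against common SDXL
# conventions: sides divisible by 64, pixel count near 1024*1024.
width, height = 832, 1216

assert width % 64 == 0 and height % 64 == 0  # both sides bucket-aligned
area_ratio = (width * height) / (1024 * 1024)
print(f"{area_ratio:.3f}")  # -> 0.965, close to the 1024x1024 pixel budget
```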
## Usage Guidelines
## Special Tags

Animagine XL 3.1 utilizes special tags to steer the result toward quality, rating, creation date, and aesthetics. While the model can generate images without these tags, using them can help achieve better results.
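As a purely illustrative sketch (the `build_prompt` helper below is hypothetical, not part of the model or this card), the special-tag categories described here can be combined with subject tags programmatically; the default tag values are examples of tags mentioned in this card:

```python
# Hypothetical helper showing how special-tag categories (rating,
# aesthetic, quality) might be combined with subject tags into one prompt.
def build_prompt(subject, quality="masterpiece, best quality",
                 rating="safe", aesthetic="very aesthetic"):
    parts = [subject, rating, aesthetic, quality]
    return ", ".join(p for p in parts if p)

prompt = build_prompt("1girl, souryuu asuka langley, neon genesis evangelion, solo")
print(prompt)
# 1girl, souryuu asuka langley, neon genesis evangelion, solo, safe, very aesthetic, masterpiece, best quality
```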
### Quality Modifiers
| Rating Modifier   | Rating Criterion |
|-------------------|------------------|
| `safe`            | General          |
| `sensitive`       | Sensitive        |
| `nsfw`            | Questionable     |
| `explicit, nsfw`  | Explicit         |
### Aesthetic Tags

We've enhanced our tagging system with aesthetic tags to refine content categorization based on visual appeal. These tags are derived from evaluations made by a specialized ViT (Vision Transformer) image classification model, specifically trained on anime data. For this purpose, we utilized the model [shadowlilac/aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2), which assesses the aesthetic value of content before it undergoes training. This ensures that each piece of content is not only relevant and accurate but also visually appealing.

| Aesthetic Tag     | Score Range       |
|-------------------|-------------------|
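To illustrate how classifier scores map to the tags above, here is a hypothetical bucketing function; the threshold values are invented placeholders for the example, not the card's actual score ranges (those are given in the table):

```python
# Hypothetical mapping from an aesthetic-classifier score in [0, 1]
# to one of the four aesthetic tags. Thresholds are illustrative only.
def aesthetic_tag(score: float) -> str:
    if score > 0.71:
        return "very aesthetic"
    if score > 0.45:
        return "aesthetic"
    if score > 0.27:
        return "displeasing"
    return "very displeasing"

print(aesthetic_tag(0.9))  # very aesthetic
print(aesthetic_tag(0.1))  # very displeasing
```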
## Training and Hyperparameters

**Animagine XL 3.1** was trained on 2x A100 80GB GPUs for approximately 15 days, totaling over 350 GPU hours. The training process consisted of three stages:

- **Pretraining**: Utilized a data-rich collection of 870k ordered and tagged images to increase Animagine XL 3.0's model knowledge.
- **Finetuning - First Stage**: Employed labeled and curated aesthetic datasets to refine the broken U-Net after pretraining.
- **Finetuning - Second Stage**: Utilized labeled and curated aesthetic datasets to refine the model's art style and improve hand and anatomy rendering.
### Hyperparameters

| Stage                    | Epochs | UNet lr | Train Text Encoder | Batch Size | Noise Offset | Optimizer | LR Scheduler                  | Grad Acc Steps | GPUs |
|--------------------------|--------|---------|--------------------|------------|--------------|-----------|-------------------------------|----------------|------|
| **Pretraining**          | 10     | 1e-5    | True               | 16         | N/A          | AdamW     | Cosine Annealing Warm Restart | 3              | 2    |
| **Finetuning 1st Stage** | 10     | 2e-6    | False              | 48         | 0.0357       | Adafactor | Constant with Warmup          | 1              | 1    |
| **Finetuning 2nd Stage** | 15     | 1e-6    | False              | 48         | 0.0357       | Adafactor | Constant with Warmup          | 1              | 1    |
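A quick derived figure from the table above, assuming the listed batch size is per device: the effective batch size per optimizer step is the per-device batch size times the gradient-accumulation steps times the number of GPUs:

```python
# Effective batch size per optimizer step, derived from the table above:
# per-device batch * gradient-accumulation steps * GPUs (assumed per-device).
stages = {
    "Pretraining":          (16, 3, 2),
    "Finetuning 1st Stage": (48, 1, 1),
    "Finetuning 2nd Stage": (48, 1, 1),
}
effective = {s: b * acc * gpus for s, (b, acc, gpus) in stages.items()}
print(effective)
# {'Pretraining': 96, 'Finetuning 1st Stage': 48, 'Finetuning 2nd Stage': 48}
```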
## Model Comparison (Pretraining only)

Source code and training config are available here: https://github.com/cagliostrolab/sd-scripts/tree/main/notebook
### Acknowledgements

The development and release of Animagine XL 3.1 would not have been possible without the invaluable contributions and support from the following individuals and organizations:

- **[SeaArt.ai](https://www.seaart.ai/)**: Our collaboration partner and sponsor.
- **[Shadow Lilac](https://huggingface.co/shadowlilac)**: For providing the aesthetic classification model, [aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2).
- **[Derrian Distro](https://github.com/derrian-distro)**: For their custom learning rate scheduler, adapted from [LoRA Easy Training Scripts](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts/blob/main/custom_scheduler/LoraEasyCustomOptimizer/CustomOptimizers.py).
- **[Kohya SS](https://github.com/kohya-ss)**: For their comprehensive training scripts.
- **Cagliostrolab Collaborators**: For their dedication to model training, project management, and data curation.
- **Early Testers**: For their valuable feedback and quality assurance efforts.
- **NovelAI**: For their innovative approach to aesthetic tagging, which served as an inspiration for our implementation.

Thank you all for your support and expertise in pushing the boundaries of anime-style image generation.
## Collaborators

- [Kayfahaarukku](https://huggingface.co/kayfahaarukku)
- [Kriz](https://huggingface.co/Kr1SsSzz)
## Limitations

While Animagine XL 3.1 represents a significant advancement in anime-style image generation, it is important to acknowledge its limitations:

1. **Anime-Focused**: This model is specifically designed for generating anime-style images and is not suitable for creating realistic photos.
2. **Prompt Complexity**: This model may not be suitable for users who expect high-quality results from short or simple prompts. The training focus was on concept understanding rather than aesthetic refinement, so more detailed and specific prompts may be required to achieve the desired output.
3. **Prompt Format**: Animagine XL 3.1 is optimized for Danbooru-style tags rather than natural language prompts. For best results, users are encouraged to format their prompts using the appropriate tags and syntax.
4. **Anatomy and Hand Rendering**: Despite the improvements made in anatomy and hand rendering, there may still be instances where the model produces suboptimal results in these areas.
5. **Dataset Size**: The dataset used for training Animagine XL 3.1 consists of approximately 870,000 images. Combined with the previous iteration's dataset (1.2 million), the total training data amounts to around 2.1 million images. While substantial, this dataset size may still be considered limited in scope for an "ultimate" anime model.
6. **NSFW Content**: Animagine XL 3.1 has been designed to generate more balanced NSFW content. However, it is important to note that the model may still produce NSFW results, even if not explicitly prompted.

By acknowledging these limitations, we aim to provide transparency and set realistic expectations for users of Animagine XL 3.1. Despite these constraints, we believe that the model represents a significant step forward in anime-style image generation and offers a powerful tool for artists, designers, and enthusiasts alike.
## License

Based on Animagine XL 3.0, Animagine XL 3.1 falls under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/), which is compatible with the Stable Diffusion models' license. Key points:

1. **Modification Sharing:** If you modify Animagine XL 3.1, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.

The choice of this license aims to keep Animagine XL 3.1 open and modifiable, aligning with the open-source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.