---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
tags:
  - text-to-image
  - stable-diffusion
  - safetensors
  - stable-diffusion-xl
base_model: cagliostrolab/animagine-xl-3.0
widget:
- text: 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck, masterpiece, best quality, very aesthetic, absurdres
  parameters:
    negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
  example_title: 1girl
- text: 1boy, male focus, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck, masterpiece, best quality, very aesthetic, absurdres
  parameters:
    negative_prompt: nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
  example_title: 1boy
---
<style>
  .title-container {
    display: flex;
    justify-content: center;
    align-items: center;
    height: 100vh; /* Adjust this value to position the title vertically */
  }
  
  .title {
    font-size: 2.5em;
    text-align: center;
    color: #333;
    font-family: 'Helvetica Neue', sans-serif;
    text-transform: uppercase;
    letter-spacing: 0.1em;
    padding: 0.5em 0;
    background: transparent;
  }
  
  h1.title {
      margin-bottom: 0px;
      line-height: 0.4em;
  }
  
  .title span {
    background: -webkit-linear-gradient(45deg, #7ed56f, #28b485);
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
  }
  .subtitle {
  font-size: 1.5em;
  text-align: center;
  color: #777;
  font-family: 'Helvetica Neue', sans-serif;
  text-transform: uppercase;
  margin-top: 0em;
  letter-spacing: 0.2em;
  background: transparent;
  }
  
  .subtitle span {
    background: -webkit-linear-gradient(45deg, #7ed56f, #28b485);
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
  }
  .custom-table {
    table-layout: fixed;
    width: 100%;
    border-collapse: collapse;
    margin-top: 2em;
  }
  
  .custom-table td {
    width: 50%;
    vertical-align: top;
    padding: 10px;
    box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
  }
  .custom-image-container {
    position: relative;
    width: 100%;
    margin-bottom: 0em;
    overflow: hidden;
    border-radius: 10px;
    transition: transform .4s;
    /* Smooth transition for the container */
  }
  .custom-image-container:hover {
    transform: scale(1.17);
    /* Scale the container on hover */
  }
  .custom-image {
    width: 100%;
    height: auto;
    object-fit: cover;
    border-radius: 10px;
    transition: transform .7s;
    margin-bottom: 0em;
  }
  .nsfw-filter {
    filter: blur(8px); /* Apply a blur effect */
    transition: filter 0.3s ease; /* Smooth transition for the blur effect */
  }
  .custom-image-container:hover .nsfw-filter {
    filter: none; /* Remove the blur effect on hover */
  }
  
  .overlay {
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
    color: white;
    width: 100%;
    height: 40%;
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
    font-size: 1vw;
    font-style: bold;
    text-align: center;
    opacity: 0;
    /* Keep the text fully opaque */
    background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
    transition: opacity .5s;
  }
  .custom-image-container:hover .overlay {
    opacity: 1;
    /* Make the overlay always visible */
  }
  .overlay-text {
    background: linear-gradient(45deg, #7ed56f, #28b485);
    -webkit-background-clip: text;
    color: transparent;
    /* Fallback for browsers that do not support this effect */
    text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
    /* Enhanced text shadow for better legibility */
  }
  .overlay-subtext {
    font-size: 0.75em;
    margin-top: 0.5em;
    font-style: italic;
  }
    
  .overlay,
  .overlay-subtext {
    text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
  } 
</style>
<h1 class="title">
  <span>Animagine XL 3.1</span>
</h1>
<p class="subtitle">
  <span>Imagine Beyond 3.0</span>
</p>
<table class="custom-table">
  <tr>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/ep_oy_NVSMQaU162w8Gwp.png" alt="sample1">
      </div>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/FGFZgsqrhOcor5mid5eap.png" alt="sample4">
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/EuvINvBsCKZQuspZHN-uF.png" alt="sample2">
      </div>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/yyRqdHJfePKl7ytB6ieX9.png" alt="sample3">
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/2oWmFh728T0hzEkUtSmgy.png" alt="sample1">
      </div>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/3yaZxWkUOenZSSNtGQR_3.png" alt="sample4">
      </div>
    </td>
  </tr>
</table>
        
**Animagine XL 3.1** is the latest version of the sophisticated open-source anime text-to-image model, building upon the capabilities of its predecessor, Animagine XL 3.0. Developed on top of Stable Diffusion XL, this iteration boasts superior image generation with notable improvements in hand anatomy, efficient tag ordering, and enhanced knowledge of anime concepts. Unlike the previous iteration, we focused on making the model learn concepts rather than aesthetics.
## What’s New in Animagine XL 3.1?

### Aesthetic Tags
In addition to special tags, we are introducing aesthetic tags based on [ShadowLilac’s Aesthetic Shadow V2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2). These tags, combined with the quality tags, can be used to guide the model toward better results. Below is the list of aesthetic tags included in this model, sorted from best to worst:

- very aesthetic
- aesthetic
- displeasing
- very displeasing

### Anime-focused Dataset Additions
In Animagine XL 3.0, we mostly added characters from popular gacha games. Based on user feedback, we have added many popular anime franchises to the dataset for this model. We will soon publish the full list of characters this iteration can generate on our HuggingFace page, so be sure to check it out when it’s up!

## Model Details
- **Developed by**: [Cagliostro Research Lab](https://huggingface.co/cagliostrolab)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Animagine XL 3.1 is engineered to generate high-quality anime images from textual prompts. It features enhanced hand anatomy, better concept understanding, and prompt interpretation, making it the most advanced model in its series.
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Finetuned from model**: [Animagine XL 3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0)

## Gradio & Colab Integration

Animagine XL 3.1 is accessible through user-friendly platforms such as Gradio and Google Colab:

- **Gradio Web UI**: [Open In Spaces](https://huggingface.co/spaces/Linaqruf/Animagine-XL)
- **Google Colab**: [Open In Colab](https://colab.research.google.com/#fileId=https%3A//huggingface.co/Linaqruf/animagine-xl/blob/main/Animagine_XL_demo.ipynb)

## 🧨 Diffusers Installation

To use Animagine XL 3.1, install the required libraries as follows:

```bash
pip install diffusers transformers accelerate safetensors --upgrade
```

Example script for generating images with Animagine XL 3.1:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", 
    torch_dtype=torch.float16, 
    use_safetensors=True, 
)
pipe.to('cuda')

prompt = "1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, v, smile, looking at viewer, outdoors, night"
negative_prompt = "nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]"
image = pipe(
    prompt, 
    negative_prompt=negative_prompt, 
    width=832,
    height=1216,
    guidance_scale=7,
    num_inference_steps=28
).images[0]

image.save("./asuka_test.png")
```

## Usage Guidelines

### Tag Ordering

For optimal results, it's recommended to follow this structured prompt template, since the model was trained with tags in this order:

```
1girl/1boy, character name, from what series, everything else in any order.
```
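As an illustration, assembling a prompt in this order can be scripted with a small helper (a hypothetical function for this card, not part of any library):

```python
def build_prompt(subject, character, series, extra_tags, quality_tags):
    """Assemble a prompt following the recommended tag order:
    subject (1girl/1boy), character name, series, then everything else."""
    parts = [subject, character, series] + list(extra_tags) + list(quality_tags)
    return ", ".join(parts)

prompt = build_prompt(
    "1girl",
    "souryuu asuka langley",
    "neon genesis evangelion",
    ["solo", "upper body", "smile"],
    ["masterpiece", "best quality"],
)
# prompt == "1girl, souryuu asuka langley, neon genesis evangelion, solo, upper body, smile, masterpiece, best quality"
```

Quality and aesthetic tags can simply be appended at the end, matching the example prompt in the Diffusers section above.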

## Special Tags

Like the previous iteration, this model was trained with special tags to steer results toward quality, rating, and the period when the source posts were created. The model can still work without these special tags, but using them is recommended to make the model easier to control.

### Quality Modifiers

Quality tags now consider both scores and post ratings to ensure a balanced quality distribution. We've refined labels for greater clarity, such as changing 'high quality' to 'great quality'.

| Quality Modifier | Score Criterion   |
|------------------|-------------------|
| `masterpiece`    | > 95%             |
| `best quality`   | > 85% & ≤ 95%     |
| `great quality`  | > 75% & ≤ 85%     |
| `good quality`   | > 50% & ≤ 75%     |
| `normal quality` | > 25% & ≤ 50%     |
| `low quality`    | > 10% & ≤ 25%     |
| `worst quality`  | ≤ 10%             |

### Rating Modifiers

We've also streamlined our rating tags for simplicity and clarity, aiming to establish global rules that can be applied across different models. For example, the tag 'rating: general' is now simply 'general', and 'rating: sensitive' has been condensed to 'sensitive'. 

| Rating Modifier   | Rating Criterion |
|-------------------|------------------|
| `general`         | General          |
| `sensitive`       | Sensitive        |
| `nsfw`            | Questionable     |
| `explicit, nsfw`  | Explicit         |

### Year Modifier

We've also redefined the year range to steer results towards specific modern or vintage anime art styles more accurately. This update simplifies the range, focusing on relevance to current and past eras.

| Year Tag | Year Range       |
|----------|------------------|
| `newest` | 2021 to 2024     |
| `recent` | 2018 to 2020     |
| `mid`    | 2015 to 2017     |
| `early`  | 2011 to 2014     |
| `oldest` | 2005 to 2010     |

### Aesthetic Tags

We've enhanced our tagging system with aesthetic tags to refine content categorization based on visual appeal. These tags—`very aesthetic`, `aesthetic`, `displeasing`, and `very displeasing`—are derived from evaluations made by a specialized ViT (Vision Transformer) image classification model, specifically trained on anime data. For this purpose, we utilized the model [shadowlilac/aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2), which assesses the aesthetic value of content before it undergoes training. This ensures that each piece of content is not only relevant and accurate but also visually appealing.

| Aesthetic Tag     | Score Range       |
|-------------------|-------------------|
| `very aesthetic`  | > 0.71            |
| `aesthetic`       | > 0.45 & < 0.71   |
| `displeasing`     | > 0.27 & < 0.45   |
| `very displeasing`| ≤ 0.27            |
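The thresholds above amount to a simple lookup on the classifier's 0–1 score; a minimal sketch (hypothetical helper, mirroring the table rather than any official code):

```python
def aesthetic_tag(score: float) -> str:
    """Map an aesthetic-shadow-v2 score in [0, 1] to the aesthetic
    tag used during training, per the thresholds in the table above."""
    if score > 0.71:
        return "very aesthetic"
    if score > 0.45:
        return "aesthetic"
    if score > 0.27:
        return "displeasing"
    return "very displeasing"
```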

## Recommended settings

To guide the model towards generating high-aesthetic images, use negative prompts like:

```
nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, extra digits, artistic error, username, scan, [abstract]
```

For higher quality outcomes, prepend prompts with:

```
masterpiece, best quality, very aesthetic, absurdres
```

It’s recommended to use a lower classifier-free guidance (CFG) scale of around 5-7, keep sampling steps below 30, and use Euler Ancestral (Euler a) as the sampler.

### Multi Aspect Resolution

This model supports generating images at the following dimensions:

| Dimensions        | Aspect Ratio    |
|-------------------|-----------------|
| `1024 x 1024`     | 1:1 Square      |
| `1152 x 896`      | 9:7             |
| `896 x 1152`      | 7:9             |
| `1216 x 832`      | 19:13           |
| `832 x 1216`      | 13:19           |
| `1344 x 768`      | 7:4 Horizontal  |
| `768 x 1344`      | 4:7 Vertical    |
| `1536 x 640`      | 12:5 Horizontal |
| `640 x 1536`      | 5:12 Vertical   |
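When scripting generation, one way to stay on these supported dimensions is to pick the entry whose aspect ratio is closest to the one you want (an illustrative helper, assuming the table above is exhaustive):

```python
# Supported (width, height) pairs from the table above
SUPPORTED_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152),
    (1216, 832), (832, 1216), (1344, 768),
    (768, 1344), (1536, 640), (640, 1536),
]

def closest_resolution(target_ratio: float) -> tuple:
    """Return the supported (width, height) whose width/height ratio
    is closest to target_ratio."""
    return min(SUPPORTED_RESOLUTIONS,
               key=lambda wh: abs(wh[0] / wh[1] - target_ratio))
```

For example, a 2:3 portrait target resolves to `832 x 1216`, the dimensions used in the Diffusers example above.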

## Training and Hyperparameters

- **Animagine XL 3.1** was trained on 2x A100 80GB GPUs for roughly 15 days, over 350 GPU hours for the pretraining stage alone. The training process encompassed three stages:
  - Continual Pretraining:
    - **Pretraining Stage**: Used a data-rich collection of roughly 870k ordered, tagged images to expand the knowledge of the Animagine XL 3.0 base model.
  - Finetuning:
    - **First Stage**: Used labeled and curated aesthetic datasets to repair the U-Net after pretraining.
    - **Second Stage**: Used labeled and curated aesthetic datasets to refine the model’s art style and fix hand and anatomy issues.

### Hyperparameters

| Stage                 | Epochs | UNet lr | Train Text Encoder | Batch Size | Noise Offset | Optimizer  | LR Scheduler                  | Grad Acc Steps | GPUs |
|-----------------------|--------|---------|--------------------|------------|--------------|------------|-------------------------------|----------------|------|
| **Pretraining Stage** | 10     | 1e-5    | True               | 16         | N/A          | AdamW      | Cosine Annealing Warm Restart | 3              | 2    |
| **First Stage**       | 10     | 2e-6    | False              | 48         | 0.0357       | Adafactor  | Constant with Warmup          | 1              | 1    |
| **Second Stage**      | 15     | 1e-6    | False              | 48         | 0.0357       | Adafactor  | Constant with Warmup          | 1              | 1    |
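The effective batch size per stage follows from batch size × gradient accumulation steps × GPU count, which is how the `16 x 3 x 2` figure in the comparison table below is derived. A quick illustrative check:

```python
def effective_batch_size(batch_size: int, grad_acc_steps: int, gpus: int) -> int:
    """Effective batch size = per-GPU batch x grad accumulation x GPU count."""
    return batch_size * grad_acc_steps * gpus

# Pretraining stage: 16 x 3 x 2 = 96
pretrain = effective_batch_size(16, 3, 2)
# Finetuning stages: 48 x 1 x 1 = 48
finetune = effective_batch_size(48, 1, 1)
```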

## Model Comparison (Pretraining only)

### Training Config

| Configuration Item              | Animagine XL 3.0                         | Animagine XL 3.1                               |
|---------------------------------|------------------------------------------|------------------------------------------------|
| **GPU**                         | 2 x A100 80G                             | 2 x A100 80G                                   |
| **Dataset**                     | 1,271,990                                | 873,504                                        |
| **Shuffle Separator**           | True                                     | True                                           |
| **Num Epochs**                  | 10                                       | 10                                             |
| **Learning Rate**               | 7.5e-6                                   | 1e-5                                           |
| **Text Encoder Learning Rate**  | 3.75e-6                                  | 1e-5                                           |
| **Effective Batch Size**        | 48 x 1 x 2                               | 16 x 3 x 2                                     |
| **Optimizer**                   | Adafactor                                | AdamW                                          |
| **Optimizer Args**              | Scale Parameter: False, Relative Step: False, Warmup Init: False | Weight Decay: 0.1, Betas: (0.9, 0.99)   |
| **LR Scheduler**                | Constant with Warmup                     | Cosine Annealing Warm Restart                  |
| **LR Scheduler Args**           | Warmup Steps: 100                        | Num Cycles: 10, Min LR: 1e-6, LR Decay: 0.9, First Cycle Steps: 9,099 |

Source code and training config are available here: https://github.com/cagliostrolab/sd-scripts/tree/main/notebook 

## Limitations

While "Animagine XL 3.1" represents a significant advancement in anime text-to-image generation, it's important to acknowledge its limitations to understand its best use cases and potential areas for future improvement.

1. **Concept Over Artstyle Focus**: The model prioritizes learning concepts rather than specific art styles, which might lead to variations in aesthetic appeal compared to its predecessor.
2. **Non-Photorealistic Design**: Animagine XL 3.1 is not designed for generating photorealistic or realistic images, focusing instead on anime-style artwork.
3. **Anatomical Challenges**: Despite improvements, the model can still struggle with complex anatomical structures, particularly in dynamic poses, resulting in occasional inaccuracies.
4. **Dataset Limitations**: The training dataset of roughly 870k images may not encompass all anime characters or series, limiting the model's ability to generate lesser-known or newer characters.
5. **Natural Language Processing**: The model is not optimized for interpreting natural language, requiring more structured and specific prompts for best results.
6. **NSFW Content Risk**: Using high-quality tags like 'masterpiece' or 'best quality' carries a risk of generating NSFW content inadvertently, due to the prevalence of such images in high-scoring training datasets.

These limitations highlight areas for potential refinement in future iterations and underscore the importance of careful prompt crafting for optimal results. Understanding these constraints can help users better navigate the model's capabilities and tailor their expectations accordingly.

## Acknowledgements

We extend our gratitude to the entire team and community that contributed to the development of Animagine XL 3.1, including our partners and collaborators who provided resources and insights crucial for this iteration.

- **Main:** For the open source grant supporting our research, thank you so much.
- **Cagliostro Lab Collaborators:** For helping with quality checking during pretraining and dataset curation during fine-tuning.
- **Kohya SS:** For providing the essential training scripts and merging our PR on `keep_tokens_separator` (the Shuffle Separator).
- **Camenduru Server Community:** For invaluable insights, support, and quality checking.
- **NovelAI:** For inspiring our approach to building datasets and labeling them using tag ordering.

## Collaborators

- [Linaqruf](https://huggingface.co/Linaqruf)
- [ItsMeBell](https://huggingface.co/ItsMeBell)
- [Asahina2K](https://huggingface.co/Asahina2K)
- [DamarJati](https://huggingface.co/DamarJati)
- [Zwicky18](https://huggingface.co/Zwicky18)
- [Scipius2121](https://huggingface.co/Scipius2121)
- [Raelina](https://huggingface.co/Raelina)

## License

Based on Animagine XL 3.0, Animagine XL 3.1 falls under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/), compatible with Stable Diffusion models. Key points:
1. **Modification Sharing:** If you modify Animagine XL 3.1, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.

The choice of this license aims to keep Animagine XL 3.1 open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.