---
language: en
thumbnail: ./assets/example.png
tags:
- text-to-image
- diffusion models
- LoRA fine-tuning
- animagine-xl-3.0
- stable_diffusion_xl
- kohya_ss
- waifu2x
license: apache-2.0
model:
name: GirlsFrontline2-SDXL-LoRA
description: An SDXL-based model, LoRA fine-tuned for Girls' Frontline 2 text-to-image generation.
pipeline_tag: text-to-image
repo: https://huggingface.co/TfiyuenLau/GirlsFrontline2_SDXL_LoRA
library: huggingface
framework: pytorch
version: 1.0.0
pretrained_model: stable_diffusion_xl
base_model: animagine-xl-3.0
fine_tuner: kohya_ss
data_augmentation: waifu2x
task: text-to-image
---
# Girls' Frontline 2: Exilium Text-to-Image Generation via SDXL LoRA Fine-Tuning
![example](./assets/example.png)
## 1. Model Library
1. Fine-tuning dataset: [Girls' Frontline 2: Exilium LoRA fine-tuning dataset for SDXL](https://www.kaggle.com/datasets/yukikonata/sdxl2lora)
2. Pretrained model: [stable_diffusion_xl](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl)
3. Base model: [animagine-xl-3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0)
4. SDXL LoRA fine-tuning trainer: [kohya_ss](https://github.com/bmaltais/kohya_ss)
5. Dataset image-quality enhancement: [waifu2x](https://github.com/nagadomi/waifu2x)
## 2. Prompt Dict
1. Girls' Frontline 2: Exilium characters (trigger words)
* 佩里缇亚: PKPSP
* 塞布丽娜: SPAS12
* 托洛洛: AKAlfa
* 桑朵莱希: G36
* 琼玖: QBZ191
* 维普雷: Vepr12
* 莫辛纳甘: MosinNagant
* 黛烟: QBZ95
* 克罗丽科: Kroliko
* 夏克里: XCRL
* 奇塔: MP7
* 寇尔芙: TaurusCurve
* 科谢尼娅: APS
* 纳甘: Nagant1895
* 纳美西丝: OM50
* 莉塔拉: GalilARM
* 闪电: OTs14
2. Pixiv artist styles (a prompt-composition sketch combining both lists follows below)
* おにねこ(鬼猫): Onineko26
* 麻生: AsouAsabu
* mignon: Mignon
* migolu: Migolu
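The character triggers and artist tags above are plain prompt tokens, so they can be combined freely with the quality tags used elsewhere in this card. Below is a minimal sketch of composing a prompt that way; the `build_prompt` helper and the tag selection are illustrative assumptions, not part of the model.
~~~python
# Minimal sketch: assemble a prompt from a character trigger word and an
# optional Pixiv artist style tag. The helper and tag choices are
# illustrative assumptions, not shipped with the model.
character_triggers = ["OTs14", "G36", "SPAS12", "AKAlfa"]       # from the list above
artist_styles = ["Onineko26", "AsouAsabu", "Mignon", "Migolu"]  # from the list above

def build_prompt(trigger: str, style: str = "") -> str:
    """Join the trigger word, an optional style tag, and common quality tags."""
    tags = ["1girl", trigger]
    if style:
        tags.append(style)
    tags += ["masterpiece", "best quality"]
    return ", ".join(tags)

print(build_prompt("OTs14", style="Onineko26"))
# 1girl, OTs14, Onineko26, masterpiece, best quality
~~~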
## 3. Usage
1. Install the required packages (this assumes PyTorch and the other prerequisites are already installed)
~~~sh
pip install diffusers --upgrade
pip install transformers accelerate safetensors
~~~
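Optionally, sanity-check the environment before moving on; this is only a quick verification, and the exact versions printed depend on what pip resolved.
~~~python
# Optional check: confirm the key libraries import and that CUDA is visible.
import torch
import diffusers
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("diffusers:", diffusers.__version__, "| transformers:", transformers.__version__)
~~~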
2. Download the base model (animagine-xl-3.0) and the LoRA weights from Hugging Face and build the pipeline
~~~python
import torch
from PIL import Image
import matplotlib.pyplot as plt
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    AutoencoderKL,
)

# LoRA repository ID on Hugging Face
lora_id = "TfiyuenLau/GirlsFrontline2_SDXL_LoRA"

# Load the fp16-fixed SDXL VAE
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)

# Build the SDXL pipeline on top of the animagine-xl-3.0 base model
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0",
    vae=vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
)

# Attach the LoRA weights, swap in the Euler Ancestral scheduler, and move to GPU
pipe.load_lora_weights(lora_id)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
~~~
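If GPU memory is tight at 1024x1024, diffusers offers a few opt-in memory savers; whether you need them, and which are available, depends on your hardware and diffusers version, so treat the following as an optional sketch.
~~~python
# Optional memory savers (availability depends on the installed diffusers version).
# Offload pipeline submodules to the CPU between forward passes (requires accelerate):
pipe.enable_model_cpu_offload()

# Decode the VAE output in slices to lower peak memory during the final decode:
pipe.enable_vae_slicing()
~~~
Note that with CPU offload enabled, the explicit `pipe.to("cuda")` call above is no longer needed, since the pipeline manages device placement itself.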
3. Generate an image
~~~python
# Define the output path and prompts
output = "./output.png"
prompt = "1girl, OTs14, gloves, looking at viewer, smile, food, holding, solo, closed mouth, sitting, yellow eyes, black gloves, masterpiece, best quality"
negative_prompt = "nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

# Generate the image
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=28,
).images[0]

# Save the result, then display the saved file
image.save(output)
with Image.open(output) as saved:
    plt.axis("off")
    plt.imshow(saved)
plt.show()
~~~
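For reproducible outputs, the pipeline call also accepts a seeded `torch.Generator`; the sketch below reuses the prompt from above, with the seed value and output filename chosen arbitrarily.
~~~python
# Sketch: reproducible generation with a fixed seed (seed and filename are arbitrary).
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=28,
    generator=generator,
).images[0]
image.save("./output_seed42.png")
~~~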