style training
Mixer/magnet anon here. I've been doing so many LoRAs for the past week or two that my eyes hurt, and my SSD space does as well.
I found good settings for concepts if you have a 4090/3090/3090Ti (basically 24GB VRAM).
These worked for 150-200 images at 20 repeats, or for ~800 images at an average of 14 repeats (split across 5 different concept folders):
$learning_rate = 1e-4 (or 1.2e-3)
$lr_warmup_ratio = 0.05
$train_batch_size = 12
$num_epochs = 4
$save_every_n_epochs = 1
$scheduler = "cosine_with_restarts"
$network_dim = 176 (or 192)
$text_encoder_lr = 1.5e-5
$unet_lr = 1.5e-4
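To see what those numbers mean in training time, here's a rough step-count sketch using the 150-image case above (assuming the usual kohya-ss convention that one epoch is images × repeats ÷ batch size steps; the variable names mirror the settings, not any real script):

```python
# Sketch: how the settings above translate into optimizer steps.
images = 150          # dataset size from the note
repeats = 20          # folder repeat count
batch_size = 12       # $train_batch_size
epochs = 4            # $num_epochs
warmup_ratio = 0.05   # $lr_warmup_ratio

steps_per_epoch = images * repeats // batch_size  # 3000 images seen / 12
total_steps = steps_per_epoch * epochs
warmup_steps = int(total_steps * warmup_ratio)

print(steps_per_epoch, total_steps, warmup_steps)  # 250 1000 50
```

So with these settings you get about 1000 steps total, with the first ~50 spent warming the LR up.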
You have to be careful to get the concept but not the style, so play with the LR and repeats.
For characters, the settings are similar, but: fewer repeats, $network_dim at 96-128, and always 1e-4 LR.
Also, for concepts I didn't prune tags, but I added a trigger token as the first tag to help the LoRA. I felt the results were better (though obviously you have to use more prompts) — good results with prompts beat mediocre results without them.
(For characters I still prune the tags that define the character; it works better in that case.)
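If you want to add that first trigger token across a whole folder, a minimal sketch looks like this (the function name, token, and layout of one `.txt` caption per image are my assumptions, not anything from the note):

```python
# Hypothetical helper: prepend a trigger token to every caption file
# in a dataset folder, skipping files that already start with it.
from pathlib import Path

def prepend_token(caption_dir: str, token: str) -> int:
    """Add `token, ` to the front of each .txt caption; returns how many changed."""
    changed = 0
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        tags = txt.read_text(encoding="utf-8").strip()
        if not tags.startswith(token):
            txt.write_text(f"{token}, {tags}", encoding="utf-8")
            changed += 1
    return changed
```

Run it once over the concept folder before training, and remember to use the same token in your prompts afterwards.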
------------
Have I missed any new model that's worth trying to merge/mix?