Training details
Hi, thanks for your wonderful LoRA! Could you kindly share more details on the training configs (like images / learning rates / techniques, etc.)?
Many thanks
Hey, I believe this LoRA was actually trained on mostly the default configs in AI-toolkit.
Batch size may have been increased to 4. I think the learning rate was 0.0004, with dim/alpha both set to 16. The optimizer was adamw8bit. Training ran up to 5000 steps, though this LoRA is intentionally overtrained. The dataset was kept to around 30 images, an even balance of different photos from around the 2000s-2010s, all captioned with just the word 'photo'. I was trying to match the quality of my older versions, so I ended up switching to an older AI-toolkit commit and just copied over the two lines of code that fix the latents shift bug. (Some other commit was probably the main influence on the better quality; honestly, it may just amount to a faster effective learning rate and nothing more.)
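For anyone wanting to reproduce this, the settings above would roughly map onto an AI-toolkit YAML config like the sketch below. This is an approximation, not the author's actual file: the field names follow ai-toolkit's published example configs, and the job name and paths are placeholders.

```yaml
# Rough ai-toolkit config matching the settings described above.
# Job name and paths are placeholders; field names follow the
# example configs shipped with ostris/ai-toolkit.
job: extension
config:
  name: "photo_lora"            # hypothetical name
  process:
    - type: "sd_trainer"
      training_folder: "output"
      device: "cuda:0"
      network:
        type: "lora"
        linear: 16              # dim (rank)
        linear_alpha: 16        # alpha
      train:
        batch_size: 4
        steps: 5000             # intentionally overtrained
        lr: 0.0004
        optimizer: "adamw8bit"
      datasets:
        - folder_path: "/path/to/dataset"  # ~30 images, each captioned "photo"
          caption_ext: "txt"
```

Everything else would be left at the defaults, per the note above that this run mostly used the stock config.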
I do not believe the training techniques used for this LoRA version are the best approach for better realism, though. I was mostly trying to get a quick workaround for the VAE shift issue that would still give reliable results comparable to the previous version. I am starting to develop better techniques that use slower learning rates, larger datasets, and a higher rank. I may have a much better version, with improved prompt understanding, less hand disfigurement, and more creativity, hopefully within the next few weeks.
Thank you for sharing the detailed training configs! It's really helpful for understanding and experimenting with my own setups. I'm looking forward to the new version you mentioned, especially with the improved prompt understanding and detail handling. Thanks again for sharing!