Question about LoRA training scripts?
Could you kindly share which training scripts you used, and also the training config in detail?
Good morning, @quocanh34
I use multiple trainer toolkits based on the type and characteristics of the image dataset.
- For multi-character/multilingual datasets, I use the kohya_ss scripts on an L4 GPU.
- For single-type image datasets, I use one of the following:
The configuration depends on the quality of the dataset. For training, I usually go through 3-4 iterations with the same dataset using different configurations; it's rare to get the best result on the first try! On average, reaching the best result takes 2,700-3,400 steps, though the output may still contain artifacts or fail in many cases. The required steps often increase with the number of images in the dataset.
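For reference, a kohya_ss LoRA run (assuming the sd-scripts `train_network.py` entry point) can be launched roughly like the sketch below. The base model, paths, and hyperparameter values are placeholders meant only to show the shape of one iteration, not the exact config I used:

```python
# Hypothetical sketch of one LoRA training iteration with kohya_ss / sd-scripts.
# Run from inside the sd-scripts checkout; all paths and values are placeholders.
import subprocess

cmd = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",  # example base model
    "--train_data_dir", "./dataset/img",        # expects subfolders like "10_myconcept" (repeats_name)
    "--output_dir", "./output",
    "--output_name", "my_lora",
    "--network_module", "networks.lora",
    "--network_dim", "32",                      # LoRA rank, tuned per dataset
    "--network_alpha", "16",
    "--resolution", "768,768",
    "--train_batch_size", "2",
    "--max_train_steps", "3000",                # typically ends up in the 2,700-3,400 range
    "--learning_rate", "1e-4",
    "--mixed_precision", "fp16",
    "--save_model_as", "safetensors",
]
subprocess.run(cmd, check=True)
```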
For datasets with variable image dimensions or varying aspect ratios, I adjust the dimensions using outpainting. To fill the expanded canvas, I use tools like:
- Magic Expand from Canva,
- Firefly's Generative Fill, or
- Flux's outpainting.
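The preparation behind all of these tools is essentially the same: pad the image to the target aspect ratio and mark the new area for the fill model. A minimal Pillow sketch of that padding step (the 1:1 target ratio and file names are only illustrative):

```python
# Hypothetical sketch: pad an image to a target aspect ratio and build the mask
# that an outpainting / generative-fill model would use to fill the new area.
from PIL import Image

def pad_to_aspect(img: Image.Image, target_ratio: float = 1.0):
    w, h = img.size
    if w / h < target_ratio:                    # too narrow -> widen the canvas
        new_w, new_h = int(round(h * target_ratio)), h
    else:                                       # too wide -> make the canvas taller
        new_w, new_h = w, int(round(w / target_ratio))

    canvas = Image.new("RGB", (new_w, new_h), (255, 255, 255))
    offset = ((new_w - w) // 2, (new_h - h) // 2)
    canvas.paste(img, offset)

    # White = region for the fill model to generate, black = keep the original pixels.
    mask = Image.new("L", (new_w, new_h), 255)
    mask.paste(Image.new("L", (w, h), 0), offset)
    return canvas, mask

image = Image.open("sample.png").convert("RGB")
padded, mask = pad_to_aspect(image, target_ratio=1.0)
padded.save("sample_padded.png")
mask.save("sample_mask.png")
```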