Add generated example (#30) · 14d1096 · verified
V3-HF-Inference-images · /D-ART-18DART5_LoRA_Flux1 Repo update. Uploading the following: sample images from the 2000-4000 training-step session; the LoRA file; README.md; the _latent_cache archive; the optimizer.pt archive; the config.yaml file; and an iterations archive (LoRA checkpoints from steps 2000-4000). Style accuracy while minimizing anatomical degradation seems to have improved in this session. I've also just now realized that I incorrectly titled the last major commit `PixelNinjaArt_LoRA_Flux1 Repo update.` instead of `/D-ART-18DART5_LoRA_Flux1`. I hope that doesn't cause any confusion later on.
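Since each update ships the LoRA file itself, a minimal inference sketch with diffusers may help readers try it. The repo id passed to load_lora_weights is a hypothetical placeholder, and the prompt and sampler settings are illustrative, not taken from this repo's README:

    import torch
    from diffusers import FluxPipeline

    # Load the FLUX.1 base model, then attach the LoRA from this repo.
    # The LoRA repo id below is a placeholder, not the actual path.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("your-username/D-ART-18DART5_LoRA_Flux1")

    image = pipe(
        "character portrait in the trained style",  # illustrative prompt
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save("sample.png")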
images · Add generated example (#30)
samples_0-2000 · PixelNinjaArt_LoRA_Flux1 Repo update. Uploading the following: sample images from the 0-2000 training-step session; the LoRA file; README.md; the _latent_cache archive; the optimizer.pt archive; and the config.yaml file. Style accuracy is noticeably better than the previous versions I trained on Civitai. Something I also did differently this session: I removed a few images from the old dataset and made sure the image count was even. According to my own research, as well as my own experience, an odd number of images causes memory-management issues, presumably because the images no longer divide evenly into batches (think of baking an uneven number of muffins and why that would be inefficient). Given the sheer amount of VRAM I had available, this probably wasn't strictly necessary, but I feel it's a good practice and habit. The original dataset had 93 images; I trimmed it down to 86 this time. Anatomical degradation seems far less prevalent in this version. This version is trained to 2000 steps, but I will likely train it up to 4000 in the future to see if I can get even better accuracy without any anatomical degradation. In past training sessions on the on-site Civitai trainer, having images that were even remotely NSFW caused generated images to match the dataset's style but with severe anatomical degradation: extra limbs, bubbly-looking limbs, missing limbs in general, and so on. While that did happen a little with this version, it was nowhere near as bad as in previous versions. Perhaps I used different settings, or perhaps something different is going on in the ostris ai-toolkit trainer (https://github.com/ostris/ai-toolkit & https://github.com/AiArtFactory/ai-toolkit). To make this very long commit message short: it looks promising.
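The muffin analogy above is just batch arithmetic: when the image count doesn't divide evenly by the batch size, every epoch ends with a ragged, partially filled batch. A minimal sketch of that arithmetic in Python (the batch size of 2 is hypothetical; the session's actual config.yaml value isn't quoted in the commit):

    def batch_plan(num_images: int, batch_size: int) -> tuple[int, int]:
        """Return (full_batches, leftover_images) for one epoch over the dataset."""
        return divmod(num_images, batch_size)

    # Hypothetical batch size of 2, comparing the old and trimmed dataset sizes.
    for count in (93, 86):
        full, leftover = batch_plan(count, batch_size=2)
        print(f"{count} images -> {full} full batches, {leftover} leftover")
    # 93 images -> 46 full batches, 1 leftover
    # 86 images -> 43 full batches, 0 leftover

With a batch size of 1, any image count divides evenly, which matches the note above that the trim was more good habit than necessity.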
samples_2000-4000 · /D-ART-18DART5_LoRA_Flux1 Repo update (full message above)
2.31 kB · initial commit
1.25 GB · /D-ART-18DART5_LoRA_Flux1 Repo update (full message above)
1.25 GB · PixelNinjaArt_LoRA_Flux1 Repo update (full message above)
172 MB · /D-ART-18DART5_LoRA_Flux1 Repo update (full message above)
10.6 kB · Add generated example (#30)
53.1 MB · PixelNinjaArt_LoRA_Flux1 Repo update (full message above)
3.68 kB · /D-ART-18DART5_LoRA_Flux1 Repo update (full message above)
155 MB · /D-ART-18DART5_LoRA_Flux1 Repo update (full message above)