Add generated example
f526ba8
verified
-
images
Add generated example
-
samples_0-2000
PixelNinjaArt_LoRA_Flux1 repo update. Uploading the following: sample images from the 0-2000-step training session, the LoRA file, README.md, the _latent_cache archive, the optimizer.pt archive, and the config.yaml file. Style accuracy is noticeably better than the previous versions I trained on Civitai. Something I also did differently in this session is that I removed a few images from the old dataset and made sure the number of images was even. Based on my own research and experience, an odd number of images can cause issues with memory management (think of baking an uneven number of muffins and why that would be inefficient). Given the sheer amount of VRAM I had available, this probably wasn't strictly necessary, but I feel it's a good practice and habit. The original dataset had 93 images; I trimmed it down to 86 this time. Anatomical degradation seems far less prevalent in this version. This version is trained to 2000 steps, but I will likely train up to 4000 in the future to see if I can get even better accuracy without any anatomical degradation. In past training sessions on the on-site Civitai trainer, having images that were even remotely NSFW caused generated images to match the dataset's style but show severe anatomical degradation: extra limbs, bubbly-looking limbs, missing limbs, and so on. While it did happen a little with this version, it was nowhere near as bad as previous versions. Perhaps I had different settings, or perhaps something different is going on with the ostris ai-toolkit trainer (https://github.com/ostris/ai-toolkit & https://github.com/AiArtFactory/ai-toolkit). To make this very long commit message short: it looks promising.
-
2.31 kB
initial commit
-
1.25 GB
PixelNinjaArt_LoRA_Flux1 repo update.
-
172 MB
PixelNinjaArt_LoRA_Flux1 repo update.
-
6.76 kB
Add generated example
-
53.1 MB
PixelNinjaArt_LoRA_Flux1 repo update.
-
3.68 kB
PixelNinjaArt_LoRA_Flux1 repo update.
-
155 MB
PixelNinjaArt_LoRA_Flux1 repo update.
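The "uneven muffins" reasoning in the commit message can be sketched as a tiny batching exercise. This is a hypothetical illustration, not part of the repo: a dataset size that does not divide evenly by the batch size leaves one ragged final batch, which trainers must either pad or drop, while an evenly divisible count yields uniform batches.

```python
# Hypothetical sketch: how dataset size interacts with batch size.
# A remainder means a ragged final batch (the "uneven muffins" case).
def batch_sizes(num_images: int, batch_size: int) -> list[int]:
    """Return the size of each batch for one epoch, keeping the remainder."""
    full, remainder = divmod(num_images, batch_size)
    return [batch_size] * full + ([remainder] if remainder else [])

# 93 images at batch size 4: 23 full batches plus one ragged batch of 1.
print(batch_sizes(93, 4))
# 86 images at batch size 2: 43 uniform batches, nothing ragged.
print(batch_sizes(86, 2))
```

Whether an odd image count actually hurts memory management depends on the trainer; many data loaders simply pad or drop the last batch, so this is a habit rather than a hard requirement.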