Will there be an EasyFluff a10 version?
I see that you made an a10 version for non-vpred models, but it looks like a9 is still the latest for EasyFluff. Will there be an a10 version?
EF + HLL is by far the most flexible model I've tried. PonyXL just makes me miserable in comparison. Thanks for all your work fam.
No. I'm not planning to make anything for 1.5, EF or SDXL at the moment.
Waiting for SD3
Understandable. I hope SD3 is actually decent to work with and that the censorship can be undone without too much damage. SDXL was kind of a disaster.
Well in any case thanks for making these!
I know you already said no, but it looks like EasyFluff just got updated.
https://civitai.com/models/129996/easyfluff
It looks like a merge with another model that some anon made:
https://rentry.org/fluffusion
It also looks like Fluffusion might get a Stable Cascade model with a 1024x base resolution. I'm just letting you know in case you don't browse the boards I found these on (which you probably do).
HLL-fluff was trained on fluffyrock, not on EF itself. And old EF is a merge based on fluffyrock.
New EF is based on a different model so it doesn't work properly anymore - https://litter.catbox.moe/qj174w.png
I can try resuming on the new EF.
Fluffusion might get a Stable Cascade model
The one called "Resonance"? It looks interesting. It will be useful if SD3 is bad or if SD3 never releases.
I was testing the new models last night (including the anime model linked in that Rentry), and yeah, they are definitely different. I was getting some pretty weird results even without using HLL with it, but maybe I just set it up wrong. I can't tell if the new versions are better or worse yet, but the anon that made Fluffusion said that it had "everything and more," so I'm assuming it's probably good.
It will be useful if SD3 is bad or if SD3 never releases
https://twitter.com/chrlaf/status/1772228848387522728
Looks like 3-5 more weeks until SD3 releases with the weights and source code. It seems they are taking their time to make it as "safe" as possible, so I expect it to suck really badly by default. Hopefully they open source enough that people can undo the damage, because the prompt understanding capabilities do look pretty great.
After messing with the new EasyFluff a bit and comparing the tag lists between Fluffusion and Fluffyrock, it does seem like Fluffusion covers more niche tags, whereas Fluffyrock (and thus the old EasyFluff version) has some undocumented omissions.
A very clear weakness of the new EasyFluff is generating realistic women (it looks like photos aren't allowed on e621), and that's the main reason I think the old EasyFluff + HLL mogs every other model I've tried. The ability to generate realistic women with full booru tagging for cosplays, positions, etc. is unlike any other "realistic" model I've tried. It usually comes out uncanny at first, but after a second denoising pass (usually around 0.25-0.3) with epiCRealism Natural Sin (and sometimes a third pass with FaceDetailer), it comes out looking like a good photo.
TL;DR: If you are updating your dataset at all, I think the photo realism part is a standout feature. No rush though. If you release anything at all I'll consider it an act of God, because cycling through Civitai garbage makes me want to end myself.
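For anyone wondering why a 0.25-0.3 second pass is so gentle: in typical img2img implementations, the denoise strength decides how far back into the noise schedule the image gets pushed, so only roughly strength × num_inference_steps steps actually run. A minimal sketch of that relationship (the function name and the exact rounding are illustrative assumptions, not taken from any specific UI or library):

```python
# Illustrative sketch: approximate how many denoising steps an
# img2img pass actually performs at a given denoise strength.
# Exact behavior varies by implementation; this assumes the common
# "run the last strength-fraction of the schedule" convention.

def img2img_steps_run(num_inference_steps: int, strength: float) -> int:
    """Approximate number of denoising steps an img2img pass performs."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(round(num_inference_steps * strength), num_inference_steps)

# A 30-step pass at 0.3 denoise only reworks the last ~9 steps,
# which is why it restyles textures without changing composition.
print(img2img_steps_run(30, 0.3))   # 9
print(img2img_steps_run(30, 1.0))   # 30 (full generation from noise)
```

That's why 0.25-0.3 re-renders skin and lighting while leaving the pose and composition from the first pass intact.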
Ok, I'll probably finish the new version in a day or two.
So far, it looks like this:
Those are some pretty cool results. Looks great!
Is the "source_photo" tag something you came up with? I don't see that tag on e621, Gelbooru, or Danbooru. For the a9 version of the lora I have been using "realistic, photo real" near the end of the prompt as a way to front-load anime concepts like specific clothing into the image and then "style" it to realism. I have never been sure if that's the correct way, though, since "photo real" isn't an actual tag on any of those websites either; I just took it from that outdated rentry guide (https://rentry.org/5exa3).
Relevant booru tags are "real_life", "photo_(medium)", "cosplay_photo" and "realistic"
I used original booru tags + autotagger and added some extra tags like "source_photo", "source_wallpaper", and "bright"/"dark" + some images also had VLM-generated descriptions.
It's messy and inconsistent, so I don't know which way of prompting is correct
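Since the tagging is messy and inconsistent, the cheapest way to find out which style tag actually works is to script the variants and rerun the same seed across them instead of hand-editing prompts. A tiny sketch (the tag names are the ones mentioned in this thread; the helper itself is just string glue and not tied to any particular UI):

```python
# Build comma-separated prompt variants from a base tag list plus one
# candidate realism tag at a time, so the same seed can be compared
# across them. Tag names come from the thread above.

REALISM_TAGS = ["source_photo", "real_life", "photo_(medium)", "realistic"]

def build_prompt(base_tags, style_tags):
    """Join tags with ', ', dropping duplicates while keeping order."""
    seen, out = set(), []
    for tag in list(base_tags) + list(style_tags):
        if tag not in seen:
            seen.add(tag)
            out.append(tag)
    return ", ".join(out)

base = ["1girl", "cosplay", "outdoors"]
for style in REALISM_TAGS:
    print(build_prompt(base, [style]))
# first line: 1girl, cosplay, outdoors, source_photo
```

Dedup matters because autotaggers often emit "realistic" on their own, and doubling a tag can skew its weight in some frontends.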
Thanks, that's helpful. Will any of those new tags show up in the CSV file, or do I just have to know/guess what the added tags are?
SD3 turned out to be a sad but expected disappointment. Looks like some people have already been testing lora training for Resonance for a few months; found this rentry: https://rentry.org/resonance-lora-training