Mixup augmentation

#1
by Ash2703 - opened

Did you use mixup only within the same category, or across safe and unsafe as well? I assume combining a safe and an unsafe image will always result in an unsafe image, or was the confidence still distributed between the two labels?

I can see that for male NSFW images the model is not predicting correctly. Any idea how I can improve this?

Sorry for the delayed response. The mixup implementation from timm was used.

@akhileshpulsemusic For the data we sought to have good representation of a variety of different forms of NSFW content; however, it is possible that male NSFW content is under-represented in the data. I don't have a way to quantify this at the moment, but I observed the model making far fewer errors on male NSFW content than other models I compared it to, so there are definitely a good number of training examples for it to learn from. If you have more representative training data, it would definitely be possible to fine-tune this model further for your use case; I recommend timm for training code.

OwenElliott changed discussion status to closed

I don't think the mixup implementation in timm is beneficial, or even correct, for this use case.
Mixup creates soft labels (e.g. 60% safe, 40% NSFW), which is not meaningful here: an image is either safe or unsafe, and mixing a safe image with an unsafe one is 100% unsafe every time.
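To make the objection concrete, here is a sketch of the label arithmetic standard mixup performs, in plain Python with an illustrative (hypothetical) mixing coefficient of 0.6:

```python
# Standard mixup blends two inputs and their one-hot labels with the
# same coefficient lam, drawn from a Beta(alpha, alpha) distribution.
# lam = 0.6 is just an illustrative value, not from the actual model.
lam = 0.6
safe = [1.0, 0.0]  # one-hot label for a safe image
nsfw = [0.0, 1.0]  # one-hot label for an NSFW image

# Soft label produced by standard mixup:
mixed = [lam * s + (1 - lam) * n for s, n in zip(safe, nsfw)]
print(mixed)  # [0.6, 0.4] -- a "60% safe / 40% NSFW" training target

# The argument above: any visible NSFW content makes the composite
# image unsafe, so moderation would want a hard label instead:
hard = nsfw if mixed[1] > 0.0 else safe
print(hard)  # [0.0, 1.0]
```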

Either mixup should happen only within the same category, i.e. (nsfw, nsfw) and (safe, safe) pairs, or not happen at all.

Is this a fair point?
