A discussion of some frequently asked questions about DIS training
I prepared a training set of 60,000 images containing people, plants, cars, and other objects, but the results were not good and showed some problems. I tested the same images on other people's trained DIS models and the same problem occurred, and I haven't found the cause yet. Does anyone have an idea what is causing this and how to build a training set that fixes it? The image below shows the problem:
How did you create the training set of 60,000? Did you use real photos?
Did you apply the "intermediate feature supervision"?
hypar["interm_sup"] = False ## in-dicate if activate intermediate feature supervision
According to the paper it helps to improve the model's understanding, but for me the results were horrible when I turned it on.
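For context, intermediate (deep) supervision generally means adding auxiliary losses on the decoder's side outputs in addition to the final prediction. The sketch below is not the DIS/IS-Net implementation; it only illustrates the general idea with hypothetical names (`side_outputs`, `gt_mask`) and a simple BCE loss.

```python
import torch
import torch.nn.functional as F

def multi_stage_loss(side_outputs, gt_mask, interm_sup=True):
    """Hypothetical deep-supervision loss: sum BCE over all side outputs.

    side_outputs: list of logit tensors from different decoder stages,
                  each of shape (B, 1, H_i, W_i), final output first.
    gt_mask:      ground-truth mask of shape (B, 1, H, W) with values in [0, 1].
    """
    # Always supervise the final output; add the intermediate stages only
    # when intermediate supervision is enabled.
    outputs = side_outputs if interm_sup else side_outputs[:1]
    total = 0.0
    for logits in outputs:
        # Resize the ground truth to each stage's resolution before comparing.
        gt = F.interpolate(gt_mask, size=logits.shape[2:],
                           mode="bilinear", align_corners=False)
        total = total + F.binary_cross_entropy_with_logits(logits, gt)
    return total
```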
My next attempt is to provide more data rendered from 3D models and to apply grid dropout so that the model is forced to learn more robust cues (see the sketch below).
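For reference, grid dropout is available in common augmentation libraries. The snippet below is a minimal sketch using albumentations; the parameter values are illustrative, not tuned for DIS.

```python
import albumentations as A

# Grid dropout zeroes out a regular grid of square regions in the image,
# pushing the network toward global shape cues rather than local texture.
train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.GridDropout(
        ratio=0.3,            # relative size of the dropped hole within each grid cell
        random_offset=True,   # shift the grid randomly on each call
        fill_value=0,         # dropped image pixels become black
        mask_fill_value=None, # leave the mask untouched so the model must
                              # still predict the full object silhouette
        p=0.5,
    ),
])

# Usage: image is an HxWx3 uint8 array, mask is an HxW array.
# augmented = train_transform(image=image, mask=mask)
# image_aug, mask_aug = augmented["image"], augmented["mask"]
```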
I don't have interm_sup enabled.
I am training on a mix of publicly available datasets and a large number of PNG foregrounds that I composite onto background images myself. The results have not been very good; the translucency problem is still unsolved. I tried asking the author of DIS, but he has not replied.
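For reference, a minimal sketch of the compositing step described above, assuming RGBA foreground PNGs and RGB backgrounds (the function name, paths, and output size are placeholders). Whether to keep the soft alpha or binarize it when generating the ground truth is a design choice that directly affects how translucent regions are learned.

```python
import numpy as np
from PIL import Image

def composite_pair(fg_path, bg_path, out_size=(1024, 1024)):
    """Composite an RGBA foreground onto a background; return (image, gt).

    The soft alpha channel is kept as the ground-truth mask instead of being
    binarized, so translucent regions (glass, hair, thin fabric) retain their
    partial-opacity values in the label.
    """
    fg = Image.open(fg_path).convert("RGBA").resize(out_size, Image.BILINEAR)
    bg = Image.open(bg_path).convert("RGB").resize(out_size, Image.BILINEAR)

    fg_np = np.asarray(fg, dtype=np.float32) / 255.0
    bg_np = np.asarray(bg, dtype=np.float32) / 255.0
    alpha = fg_np[..., 3:4]                       # soft alpha in [0, 1]

    # Standard alpha compositing: fg over bg.
    comp = fg_np[..., :3] * alpha + bg_np * (1.0 - alpha)
    image = Image.fromarray((comp * 255).astype(np.uint8))
    gt = Image.fromarray((alpha[..., 0] * 255).astype(np.uint8))  # no thresholding
    return image, gt
```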
I'm looking for a place to communicate about DIS training and discuss model training issues.
Sounds good! I created a discord channel: https://discord.gg/3PaqGFza