Loss function #37
Comments
I was reading the paper and wondering the same thing; I came here and didn't find it either.
Hi, I will begin trying some loss function designs next week. Once I have some useful results, I will report them here.
I read the paper and found that, in the experimental section, the authors mention following the training method of "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image", and an L1 loss was used in that paper. I tried to train the network with an L1 loss, and the result was very bad.
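For reference, a masked L1 depth loss of the kind mentioned above can be written in a few lines of PyTorch. This is only a sketch: it assumes depth tensors of shape (N, 1, H, W) with zeros marking invalid ground-truth pixels (a common NYU-Depth-style convention), not the repository's actual training code.

```python
import torch.nn as nn

class MaskedL1Loss(nn.Module):
    """L1 loss computed only over pixels with valid (non-zero) ground-truth depth."""

    def forward(self, pred, target):
        # pred, target: (N, 1, H, W) depth maps; zeros in target mark missing depth
        valid = target > 0
        return (pred - target)[valid].abs().mean()
```

Usage would look like `loss = MaskedL1Loss()(model(rgb), depth_gt)`.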
I used the "depth loss" from this paper and the training seems to be starting to converge. I also suspect that my dataset is poor, which is why I am getting so much noise (you can more or less make out the depth at a high level, but there is a lot of pixel-wise noise).
Which dataset are you using?
Thanks for this great work. I am currently trying to train fast-depth on my own dataset. I have noticed that there are no training scripts, so I would like to ask: which depth losses are used in training?
It would be very nice if anyone could suggest which losses I should pick.
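Since the repository does not ship training scripts, the following is only a hypothetical sketch of how a chosen depth criterion (here a masked L1, as discussed in the comments above) might be wired into a training step. The `model`, `train_loader`, and SGD hyperparameters are placeholder assumptions, not the authors' released configuration.

```python
import torch
import torch.nn.functional as F

# Hypothetical training step: `model` and `train_loader` are placeholders,
# not the fast-depth authors' actual setup.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

model.train()
for rgb, depth_gt in train_loader:
    rgb, depth_gt = rgb.cuda(), depth_gt.cuda()
    pred = model(rgb)                               # predicted depth, same size as depth_gt
    valid = depth_gt > 0                            # ignore pixels with missing ground truth
    loss = F.l1_loss(pred[valid], depth_gt[valid])  # masked L1 criterion under discussion
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```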