Open
Description
Thanks for your work! I was reading the "DepthLab" paper and was curious about the depth results in Fig. 3, e.g., how is Depth Anything V2 used to implement the depth completion task? Also, is there any comparison against predicting depth directly from the RGB image, which usually produces smoother boundaries?