In this work we present a comprehensive exploration of generalization to unseen shapes in single-view 3D reconstruction. We introduce SDFNet, an architecture that combines 2.5D sketch estimation with a continuous shape regressor predicting the signed distance function of an object. We present new findings on how rendering variability and a 3-DOF VC (3-Degree-of-Freedom Viewer-Centered) coordinate representation affect generalization to object shapes not seen during training. Our model generalizes to objects of unseen categories and to objects from a significantly different shape dataset. Link to our paper and link to our project webpage.
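To make the continuous-regressor idea concrete, below is a minimal sketch of an SDF regressor: an MLP conditioned on an image latent code that maps 3D query points to signed distance values. This is an illustrative simplification, not the code in this repository; all names (`SDFRegressor`, `latent_dim`, etc.) are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of a conditional SDF
# regressor: an image encoder (omitted) yields a latent code, and an MLP
# maps (3D query point, latent code) pairs to signed distance values.
import torch
import torch.nn as nn

class SDFRegressor(nn.Module):
    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        # MLP conditioned on the image latent: input is xyz + latent code.
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # one scalar signed distance per point
        )

    def forward(self, points, latent):
        # points: (B, N, 3) query locations, e.g. in 3-DOF viewer-centered coords
        # latent: (B, latent_dim) code from a 2.5D-sketch encoder
        B, N, _ = points.shape
        latent = latent.unsqueeze(1).expand(B, N, latent.shape[-1])
        return self.mlp(torch.cat([points, latent], dim=-1)).squeeze(-1)

# Example query (random inputs, for shape-checking only):
# model = SDFRegressor()
# sdf = model(torch.rand(2, 1024, 3) - 0.5, torch.randn(2, 256))  # -> (2, 1024)
```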
This repository contains the code for rendering, training, and evaluating SDFNet, as well as the baseline method Occupancy Networks. Code to reproduce results for the baseline method GenRe can be found here.
Follow the instructions in the SDFNet README.
Follow the instructions in the GenRe README.
Follow the instructions in the Rendering README.