By Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, and Yasutaka Furukawa
This paper presents the first end-to-end neural architecture for piece-wise planar reconstruction from a single RGB image. The proposed network, PlaneNet, learns to directly infer a set of plane parameters and corresponding plane segmentation masks. For more details, please refer to our CVPR 2018 paper or visit our project website.
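For orientation, the network predicts a fixed number of planes per image together with a segmentation over those planes. The Python sketch below only illustrates plausible output shapes; the plane count and resolution are assumptions following the paper's setup, not the exact interface of the released code.
import numpy as np
K = 10                                       # assumed fixed number of predicted planes
plane_params = np.zeros((K, 3))              # one 3-D plane parameter vector per plane
segmentation = np.zeros((192, 256, K + 1))   # per-pixel probabilities: K planes + non-planar
non_planar_depth = np.zeros((192, 256))      # depth used for the non-planar region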
Python 2.7, TensorFlow (>= 1.0), numpy, opencv 3.
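A quick sanity check that the dependencies are importable (run inside your Python 2.7 environment; the exact versions printed depend on what you installed):
import tensorflow as tf
import numpy as np
import cv2
print(tf.__version__)   # expect >= 1.0
print(np.__version__)
print(cv2.__version__)  # expect 3.x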
Please run the following commands to compile the library for the crfasrnn module.
cd cpp
sh compile.sh
cd ..
To train the network, you also need to run the following commands to compile the library for computing the set matching loss. (See here for details.)
cd nndistance
make
cd ..
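The compiled nndistance op computes nearest-neighbor distances between two sets, which is what the set matching loss is built on. Conceptually it behaves like the NumPy sketch below (illustrative only, not the compiled CUDA implementation):
import numpy as np
def chamfer_distance(pred, gt):
    # pred: (N, 3) predicted plane parameters, gt: (M, 3) ground-truth plane parameters.
    # Pairwise squared distances, shape (N, M).
    d = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)
    # Match each prediction to its nearest ground truth and vice versa.
    return d.min(axis=1).mean() + d.min(axis=0).mean()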
We convert ScanNet data to .tfrecords files for training and testing. The .tfrecords files can be downloaded from here.
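To verify a download, you can iterate over a file with the TensorFlow 1.x record reader; the filename below is only an example, substitute whichever file you downloaded:
import tensorflow as tf
path = 'planes_scannet_train.tfrecords'  # example filename; use your downloaded file
num_records = sum(1 for _ in tf.python_io.tf_record_iterator(path))
print('%s contains %d records' % (path, num_records))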
To train the network from the pretrained DeepLab network, please first download the DeepLab model here (under the Caffe to TensorFlow conversion), and then run the following command.
python train_planenet.py --restore=0 --modelPathDeepLab="path to the deep lab model" --dataFolder="folder which contains tfrecords files"
Please first download our trained network from here and put the uncompressed folder under the ./checkpoint folder.
To evaluate the performance against existing methods, please run:
python evaluate.py --dataFolder="folder which contains tfrecords files"
Please first download our trained network (see the [Evaluation](#evaluation) section for details). The script predict.py predicts and visualizes either custom images (if "customImageFolder" is specified) or ScanNet testing images (if "dataFolder" is specified).
python predict.py --customImageFolder="folder which contains custom images"
python predict.py --dataFolder="folder which contains tfrecords files" [--startIndex=0] [--numImages=30]
This will generate visualization images, a webpage containing all the visualizations, and cache files under the folder "predict/".
The same command can be used for various applications by providing the optional arguments applicationType, imageIndex, textureImageFilename, and some application-specific arguments. The following commands were used to generate the visualizations in the submission. (The TV application needs more manual specification for better visualization.)
python predict.py --dataFolder=/mnt/vision/Data/PlaneNet/ --textureImageFilename=texture_images/CVPR.jpg --imageIndex=118 --applicationType=logo_texture --startIndex=118 --numImages=1
python predict.py --dataFolder=/mnt/vision/Data/PlaneNet/ --textureImageFilename=texture_images/CVPR.jpg --imageIndex=118 --applicationType=logo_video --startIndex=118 --numImages=1
python predict.py --dataFolder=/mnt/vision/Data/PlaneNet/ --textureImageFilename=texture_images/checkerboard.jpg --imageIndex=72 --applicationType=wall_texture --wallIndices=7,9 --startIndex=72 --numImages=1
python predict.py --dataFolder=/mnt/vision/Data/PlaneNet/ --textureImageFilename=texture_images/checkerboard.jpg --imageIndex=72 --applicationType=wall_video --wallIndices=7,9 --startIndex=72 --numImages=1
python predict.py --customImageFolder=my_images/TV/ --textureImageFilename=texture_images/TV.mp4 --imageIndex=0 --applicationType=TV --wallIndices=2,9
python predict.py --customImageFolder=my_images/ruler --textureImageFilename=texture_images/ruler_36.png --imageIndex=0 --applicationType=ruler --startPixel=950,444 --endPixel=1120,2220
Note that the above script generates image sequences for video applications. Run the following command inside the image sequence folder to generate a video:
ffmpeg -r 60 -f image2 -s 640x480 -i %04d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p video.mp4
To check out the pool ball application, please run the following commands.
python predict.py --customImageFolder=my_images/pool --imageIndex=0 --applicationType=pool --estimateFocalLength=False
cd pool
python pool.py
Use the mouse to play. :)
If you have any questions, please contact me at [email protected].