
How to obtain mesh and mesh normal images from the paper #45

Open
angieAAAAA opened this issue Oct 6, 2024 · 6 comments

Comments

@angieAAAAA

Thank you for your excellent work and for providing the code. I have a question regarding the results shown in the paper, specifically the mesh and mesh normal images.

Could you please provide more details on how to obtain the mesh and mesh normal visualizations (like the ones shown in readme)? Are there any specific steps or tools in the code that need to be used to generate these visualizations?

I would really appreciate any guidance or scripts you could share to help me reproduce those results.

@danpeng2
Contributor

danpeng2 commented Oct 9, 2024

Hello, we use the Open3D tool to display the mesh and normals. After opening the mesh in Open3D, press Ctrl+9 to toggle the normal display, and press 9 to save a screenshot.

@angieAAAAA
Author

Thanks for your reply. However, when running experiments on the Courthouse dataset, the generated normal maps are noticeably less smooth than the results shown in the paper. Could this be related to dataset processing, parameter settings, or model training issues? I would appreciate any insights or suggestions.
(Screenshot: 2024-10-14 112000)

@danpeng2
Contributor

Hello, the result is quite poor. Are your data preprocessing and training parameters consistent with the README?

@angieAAAAA
Author

I followed the instructions in the README and downloaded the Courthouse dataset from the official website, but I encountered a "CUDA out of memory" error. To work around this, I extracted 1/5 of the images from a mid-range distance. Then, I used the convert.py script from the 3DGS project to generate the corresponding camera parameters.

After that, I followed the Custom Dataset section in the PGSR README and ran the following commands:

python train.py -s data/courthouse -m output/courthouse --max_abs_split_points 0 --opacity_cull_threshold 0.05
python render.py -m output/courthouse --max_depth 10.0 --voxel_size 0.01

@danpeng2
Contributor

Extracting 1/5 of the images results in overly sparse data, making high-quality reconstruction impossible. You can refer to the script scripts/run_tnt.py, and add the argument --data_device cpu to cache the data on the CPU, preventing GPU memory overflow.
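Concretely, the earlier training command could be rerun on the full image set with that flag added (a sketch only; the other flags and paths are the ones from the command above):

```shell
# Full image set, with input data cached on the CPU to avoid CUDA OOM
# (--data_device cpu as suggested above)
python train.py -s data/courthouse -m output/courthouse \
    --max_abs_split_points 0 --opacity_cull_threshold 0.05 \
    --data_device cpu
```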

@angieAAAAA
Author

Thanks for your suggestions! I will try them out and see how it goes.
