
On indoor datasets, there will be holes in the walls. #10

Open

MrLihj opened this issue Jul 23, 2024 · 5 comments

Comments


MrLihj commented Jul 23, 2024

[Screenshot from 2024-07-23 09-38-45]

When I tested on the Auditorium scene from Tanks and Temples, as well as on some self-collected indoor datasets, the problem shown in the screenshot above occurred. How can I solve it?

MrLihj changed the title from "In indoor datasets, there will be holes in the walls." to "On indoor datasets, there will be holes in the walls." on Jul 23, 2024
@danpeng2 (Contributor) commented

Reconstructing weakly textured regions without any prior information can be quite challenging. A couple of suggestions might alleviate the issue: 1) disable the abs splitting strategy by setting max_abs_split_points to 0; 2) increase the opacity clipping threshold. Have you tried these two strategies?
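For concreteness, those two settings could be passed like this, assuming both parameters are exposed through the training script's argument parser (as in the stock 3DGS codebase); the values mirror the ones tried in the reply below:

```bash
# Sketch: disable abs splitting and raise the opacity clipping threshold.
# Flag names follow the parameter names used in this thread; depending on the
# code version they may live in the config file rather than on the CLI.
python train.py -s data_path -m out_path \
    --max_abs_split_points 0 \
    --opacity_cull_threshold 0.1
```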

@MrLihj (Author) commented Jul 23, 2024

> Reconstructing weakly textured regions without any prior information can be quite challenging. A couple of suggestions might alleviate the issue: 1) disable the abs splitting strategy by setting max_abs_split_points to 0; 2) increase the opacity clipping threshold. Have you tried these two strategies?

I tried max_abs_split_points = 0 and opacity_cull_threshold = 0.1, but the results are still not very good.

[Screenshot from 2024-07-23 11-42-56]
[Screenshot from 2024-07-23 11-43-08]

@danpeng2 (Contributor) commented

[Result screenshots: 00, 11]

Our training command:

```bash
python train.py -s data_path -m out_path -r 2 --densify_abs_grad_threshold 0.002 --single_view_weight_from_iter 0 --multi_view_weight_from_iter 0
```

We observed weak textures, complex textures, and non-global illumination variations in this scene. 3DGS performs poorly at texture fitting under these conditions, so we reduced the split threshold. Meanwhile, to prevent irreversible geometric overfitting caused by early splitting in weak-texture areas, we increased the smoothness of those areas and activated the geometric regularization term earlier. However, these measures only mitigate the issue. The fundamental problem is that the current 3DGS baseline does not account for weak textures and complex environmental lighting. Integrating more advanced 3DGS methods that handle complex environments, or exploiting prior knowledge, should further improve geometric accuracy.
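For readers mapping the command onto this explanation, a flag-by-flag reading (the interpretation of `-r` and the `*_from_iter` flags follows common 3DGS/PGSR conventions and the description above, not documentation quoted in this thread):

```bash
# -r 2                               : train at half the input image resolution
# --densify_abs_grad_threshold 0.002 : split threshold for the abs-gradient
#                                      densification, tuned for this scene
# --single_view_weight_from_iter 0   : apply the single-view geometric
#                                      regularization from the first iteration
# --multi_view_weight_from_iter 0    : apply the multi-view geometric
#                                      regularization from the first iteration
python train.py -s data_path -m out_path -r 2 \
    --densify_abs_grad_threshold 0.002 \
    --single_view_weight_from_iter 0 \
    --multi_view_weight_from_iter 0
```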

@LaFeuilleMorte commented

Just a discussion: as far as I know, indoor scenes suffer from several conditions that are harmful for optimization:

  1. Textureless surfaces. The influence is two-fold: (1) textureless regions introduce errors when SfM estimates camera poses and reconstructs the initial point cloud; (2) 3DGS is sensitive to the initial point cloud and camera poses, so those errors lead to suboptimal results.
  2. A very constrained field of view. Capturing photos inside a room is inherently limited: with little space for the camera to move, it cannot view the room from a far enough distance. As a consequence, the photos have much less overlap than in outdoor scenes, which can introduce errors in both SfM and 3DGS.
  3. Large changes in illumination, which cause discrepancies in the multi-view photometric loss that guides 3DGS optimization (see the toy sketch after this list).
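To make point 3 concrete, a toy numpy sketch (all numbers made up): even with perfect geometry and correspondence, a global exposure change between two views leaves a large photometric residual, which the optimizer may then "explain" by distorting geometry; a per-view affine brightness model is one common mitigation in robust SfM/NeRF/3DGS pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)
patch_a = rng.uniform(0.2, 0.8, size=(8, 8))  # a wall patch seen from view A
patch_b = 1.3 * patch_a + 0.05                # same surface, brighter exposure in view B

# Naive photometric residual: large despite perfect correspondence.
naive = np.abs(patch_a - patch_b).mean()

# Affine compensation: fit patch_a ~= g * patch_b + o before comparing.
g, o = np.polyfit(patch_b.ravel(), patch_a.ravel(), 1)
compensated = np.abs(patch_a - (g * patch_b + o)).mean()

print(f"naive residual: {naive:.3f}, affine-compensated: {compensated:.3f}")
# -> roughly 0.20 vs 0.00: the geometry is identical, only the lighting changed.
```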

@saprrow commented Aug 13, 2024

> Just a discussion: as far as I know, indoor scenes suffer from several conditions that are harmful for optimization: […]

How about using a learning-based matcher like LoFTR to initialize the camera poses?
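For what it's worth, a minimal sketch of that idea, assuming kornia's LoFTR implementation and OpenCV's two-view pose recovery (file names and intrinsics are placeholders; a full initialization would still need pose chaining or bundle adjustment, e.g. by injecting these matches into COLMAP in place of its SIFT matching):

```python
import cv2
import numpy as np
import torch
import kornia.feature as KF

def load_gray(path: str) -> torch.Tensor:
    """Load an image as a (1, 1, H, W) float tensor in [0, 1], the input format
    LoFTR expects (image sides should be divisible by 8; resize if needed)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return torch.from_numpy(img)[None, None].float() / 255.0

matcher = KF.LoFTR(pretrained="indoor")  # ScanNet weights, suited to indoor scenes

with torch.inference_mode():
    out = matcher({"image0": load_gray("frame_000.png"),   # placeholder file names
                   "image1": load_gray("frame_001.png")})

keep = out["confidence"].numpy() > 0.8       # keep confident matches only
pts0 = out["keypoints0"].numpy()[keep]       # (N, 2) pixel coordinates in image0
pts1 = out["keypoints1"].numpy()[keep]       # (N, 2) correspondences in image1

# Placeholder intrinsics; substitute the calibrated camera matrix.
fx = fy = 600.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

# Two-view relative pose from the essential matrix (RANSAC over the matches).
E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Whether this beats COLMAP's SIFT matching on weakly textured walls is an empirical question, but dense learned matchers generally degrade more gracefully in low-texture regions.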
