
Results on DTU datasets #56

Open
Tao-11-chen opened this issue Aug 6, 2024 · 1 comment

@Tao-11-chen

Hello, thanks very much for sharing your amazing work.
I'm trying to reproduce the DTU Cross-Generalization Test.

I followed all the data preparation instructions in the README, including using convert_dtu.py, and used this command to generate the final results:

python -m src.main +experiment=dtu checkpointing.load=checkpoints/re10k.ckpt mode=test dataset/view_sampler=evaluation dataset.view_sampler.index_path=assets/evaluation_index_dtu_nctx2.json dataset.view_sampler.num_context_views=2 test.compute_scores=true

However, some of the results look noisy, as in the images below, and the final average PSNR is 13.91. I wonder if there is something wrong with my testing procedure or if this is expected.

[Attached renderings: 000023, 000024]
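In case it helps with cross-checking, here is a minimal sketch of how one might recompute the average PSNR directly from saved rendering/ground-truth image pairs (the directory paths below are assumptions for illustration, not the repo's actual output layout):

```python
# Minimal sketch: recompute average PSNR from matching PNG pairs.
# The pred_dir / gt_dir paths are hypothetical; point them at wherever
# the test run writes its renderings and ground-truth frames.
from pathlib import Path

import numpy as np
from PIL import Image


def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)


pred_dir = Path("outputs/test/dtu/color")  # hypothetical path
gt_dir = Path("outputs/test/dtu/gt")       # hypothetical path

scores = []
for pred_path in sorted(pred_dir.glob("*.png")):
    pred = np.asarray(Image.open(pred_path)) / 255.0
    gt = np.asarray(Image.open(gt_dir / pred_path.name)) / 255.0
    scores.append(psnr(pred, gt))

print(f"average PSNR over {len(scores)} frames: {np.mean(scores):.2f}")
```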

Thanks in advance.

Sincerely

@donydchen
Copy link
Owner

Hi @Tao-11-chen, sorry for the late reply; I have been busy in the past few weeks.

I just ran the test on my machine using the released code and command, and below are the scores I got, which precisely match the ones reported in the paper.

psnr 13.942063219845295
ssim 0.47347488859668374
lpips 0.3857552495319396

For reference, I tested it on a 3090 GPU. Although I am not sure what differs between your experimental environment and ours, the score you got is close to the reported one and can be considered correct.

The rendered images you got are also as expected. The cross-dataset experiment is highly challenging, and our MVSplat only manages to show promising results in specific settings, e.g., nearby viewpoints, as illustrated in the paper.

If you suspect that a dataset preprocessing issue causes the difference in scores and you are keen to explore further, feel free to email me to get our preprocessed DTU data for comparison. Besides, if you are interested in improving MVSplat on the DTU dataset, more related discussion can be found in #18.
