Hi Brummi,
I tried to evaluate your provided model on the KITTI-raw and KITTI-360 datasets, and both yielded results below those reported in the paper:
- KITTI-360
  - testing images: the unzipped PNG images (without preprocessing)
  - my evaluated results: o_acc: 0.944 | ie_acc: 0.771 | ie_rec: 0.439
  - results in the paper: o_acc: 0.95 | ie_acc: 0.82 | ie_rec: 0.47
- KITTI-raw
  - testing images: the KITTI-raw images (converted to .jpg as in monodepth2; a conversion sketch is below this list)
  - my evaluated results: abs_rel: 0.102 | rmse: 4.409 | a1: 0.881
  - results in the paper: abs_rel: 0.102 | rmse: 4.407 | a1: 0.882
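For reference, this is roughly how I convert the KITTI-raw PNGs to JPGs. It is a minimal Python sketch meant to mirror the ImageMagick command in the monodepth2 README (quality 92, 4:2:0 chroma subsampling); the `kitti_data` path is just my local dataset layout.

```python
import glob
import os

from PIL import Image

# Convert every KITTI-raw PNG to JPG, roughly matching the monodepth2
# preprocessing (ImageMagick: convert -quality 92 -sampling-factor 2x2,1x1,1x1).
# "kitti_data" is my local dataset root; adjust as needed.
for png_path in glob.glob("kitti_data/**/*.png", recursive=True):
    jpg_path = os.path.splitext(png_path)[0] + ".jpg"
    Image.open(png_path).convert("RGB").save(
        jpg_path, quality=92, subsampling="4:2:0"
    )
    os.remove(png_path)  # monodepth2 also deletes the original PNGs
```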
Even using your provided model, there is a large evaluation gap on KITTI-360: for ie_acc, my result is 0.771 vs. 0.82. The KITTI-raw scores differ only slightly from yours, but the numbers are not exactly the same. I would like to confirm:
- whether I should use the preprocessed images for the KITTI-360 evaluation
- whether the Python environment influences the scores; I currently use PyTorch 2.0 (the snippet below shows how I record my environment)
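In case it helps, this is the kind of environment report I can provide. The determinism flags at the end are simply what I would toggle to rule out non-determinism on my side, not something taken from your code.

```python
import torch

# Versions that are most likely to affect the evaluation numbers.
print("torch:", torch.__version__)
print("cuda:", torch.version.cuda)
print("cudnn:", torch.backends.cudnn.version())
print("gpu:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu")

# Flags I would set when checking whether non-determinism explains the gap.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.manual_seed(0)
```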
I also observed a further performance decline with my own trained model: for KITTI-raw, abs_rel: 0.104 | rmse: 4.554 | a1: 0.874; for KITTI-360, o_acc: 0.948 | ie_acc: 0.784 | **ie_rec: 0.369**. Can you provide some suggestions for faithfully reproducing your results?
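For completeness, these are the depth metric definitions I use. They are just the standard monodepth2-style KITTI metrics, applied to the per-image ground-truth and predicted depths after whatever masking and scaling the evaluation script performs; please point out if your evaluation computes them differently.

```python
import numpy as np

def compute_depth_metrics(gt: np.ndarray, pred: np.ndarray):
    """abs_rel, rmse and a1 as commonly defined for KITTI depth evaluation."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()           # ratio of pixels within 1.25x of GT
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, rmse, a1
```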
Thank you for any information you can share!