Evaluation setting and performance #43

@ccc870206

Hi,
I downloaded the provided pre-trained models for the weakly supervised outdoor-scene setting, then ran test.py and evaluation.py to evaluate on the KITTI dataset.
However, I noticed that the default value of the vel_depth parameter in generate_depth_map is True.
I want to confirm whether this matches the setting used in your paper, since True and False have different meanings, as discussed in this issue in monodepth2.
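
For context, this is my understanding of what the flag changes, paraphrased from monodepth2's kitti_utils.generate_depth_map (a sketch only; the code in this repo may differ):

```python
import numpy as np

def project_velo(velo, P_velo2im, vel_depth=False):
    # velo: (N, 4) homogeneous LiDAR points; P_velo2im: (3, 4) velodyne-to-image projection.
    velo_pts_im = np.dot(P_velo2im, velo.T).T                   # (N, 3): [u*z, v*z, z]
    velo_pts_im[:, :2] /= velo_pts_im[:, 2][..., np.newaxis]    # normalize to pixel coords

    if vel_depth:
        # Depth = forward (x) coordinate in the *velodyne* frame,
        # i.e. the raw LiDAR range along the driving axis.
        velo_pts_im[:, 2] = velo[:, 0]
    # Otherwise depth stays as z in the *camera* frame after the
    # velodyne-to-camera rigid transform, so it is shifted by the extrinsics.
    return velo_pts_im
```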

Moreover, the evaluation results I get differ slightly from the ones reported in the paper.

| Setting | Abs Rel | Sq Rel | RMSE | RMSE log | δ < 1.25 | δ < 1.25² | δ < 1.25³ |
|---|---|---|---|---|---|---|---|
| My run (KITTI, cap 1-50 m, vel_depth=True) | 0.1724 | 1.1905 | 4.7413 | 0.2496 | 0.7648 | 0.9104 | 0.9653 |
| My run (KITTI, cap 1-50 m, vel_depth=False) | 0.1737 | 1.2287 | 4.7301 | 0.2510 | 0.7627 | 0.9074 | 0.9637 |
| Paper (KITTI, cap 1-50 m) | 0.169 | 1.23 | 4.717 | 0.245 | 0.769 | 0.912 | 0.965 |
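
For reference, this is essentially how I apply the 1-50 m cap and compute the metrics (a standard monodepth-style sketch; I assume evaluation.py does the equivalent):

```python
import numpy as np

def compute_errors(gt, pred, min_depth=1.0, max_depth=50.0):
    # Keep only ground-truth points inside the depth cap.
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], pred[mask]
    pred = np.clip(pred, min_depth, max_depth)

    # Threshold accuracies delta < 1.25, 1.25^2, 1.25^3.
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()
    d2 = (thresh < 1.25 ** 2).mean()
    d3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    return abs_rel, sq_rel, rmse, rmse_log, d1, d2, d3
```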

I'm not sure whether this difference is reasonable or whether I've made a mistake in the evaluation.
Many thanks.
