Evaluation results not matching as per the paper #24

@proxymallick

Hi,
I have a quick question related to the results shown in Table 1 and Table 2 of the paper.

  1. I trained the model without any changes, but on a single-GPU machine, for exactly the number of iterations mentioned in the log file, and my results are not close to the claimed ones. Do you think the performance drop comes from switching from a multi-GPU to a single-GPU run? (If so, my guess at a fix is sketched after the training command below.)
  2. For your information, here are my results and the results from the log file you provided on the GitHub README page.

My results after running for exactly 32k iterations:

| mAP | WI | AOSE | AP@K | P@K | R@K | AP@U | P@U | R@U |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 76.79 | 0.00 | 0.00 | 76.79 | 18.72 | 93.44 | 77.03 | 15.92 | 92.86 |

Your results (from the provided log):

| mAP | WI | AOSE | AP@K | P@K | R@K | AP@U | P@U | R@U |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 80.02 | 0.00 | 0.00 | 80.02 | 32.70 | 91.74 | 76.66 | 33.46 | 88.64 |

This is what I get when I run:

```shell
python tools/train_net.py --num-gpus 1 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml
```
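
In case the gap comes from the effective batch size: my understanding is that detectron2-style configs assume a fixed total batch size across all GPUs, so a single-GPU run should either keep `SOLVER.IMS_PER_BATCH` as-is or scale the learning rate and schedule along with it. The values below are my guesses, not numbers taken from your config or log:

```shell
# Guess 1: keep the multi-GPU effective batch size on one GPU.
# Assumes the released config was tuned for SOLVER.IMS_PER_BATCH=16;
# check the actual value in the yaml before relying on this.
python tools/train_net.py --num-gpus 1 \
    --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
    SOLVER.IMS_PER_BATCH 16

# Guess 2: if one GPU cannot fit the full batch, apply the linear scaling
# rule: 1/8 of the batch -> 1/8 of the LR and 8x the iterations.
# (0.0025 assumes the config's BASE_LR is 0.02, and 256000 assumes a 32k
# schedule; SOLVER.STEPS would need to scale the same way.)
python tools/train_net.py --num-gpus 1 \
    --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
    SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 SOLVER.MAX_ITER 256000
```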

  3. Also, what seed did you use? I see that `cfg.SEED` is set to -1 to get non-deterministic behaviour, so each time I run, detectron2 uses a randomly generated seed (see the screenshot; my guess at how to pin the seed is sketched below).

(Screenshot from 2023-08-11 08-55-21)
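
For reference, my understanding is that a deterministic rerun would pin the seed with a config override like this (assuming your `train_net.py` accepts detectron2-style command-line overrides; 42 is an arbitrary choice on my part):

```shell
# Setting SEED >= 0 makes detectron2 seed Python, NumPy, and PyTorch RNGs
# at startup instead of drawing a random seed.
python tools/train_net.py --num-gpus 1 \
    --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
    SEED 42
```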

Can you please help me out? Thank you.
Regards,
Prakash
