A question about the code #1
The localizers of the different MIL learners each output their own instance localization scores. For Localization Completeness, we add a discrepancy loss among these instance localization scores. But this may hurt the instance localization ability of some localizers, and in turn deteriorate the detection performance and the image classification ability of the MIL learners. So we add …
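To make that trade-off concrete, below is a minimal, hypothetical sketch of a discrepancy term computed over several localizers' instance scores. The function name, the softmax normalization over proposals, and the negated mean pairwise absolute difference are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn.functional as F

def discrepancy_loss(loc_scores, weight=1.0):
    """Hypothetical discrepancy term among MIL localizers.

    loc_scores: list of tensors, each (num_proposals, num_classes),
        the instance localization scores produced by one localizer.
    Minimizing the returned value pushes the localizers to *disagree*
    (negative mean pairwise absolute difference), which is one way to
    encourage complementary, more complete localization.
    """
    loss = loc_scores[0].new_zeros(())
    num_pairs = 0
    for i in range(len(loc_scores)):
        for j in range(i + 1, len(loc_scores)):
            # Softmax over proposals so the two score maps are comparable.
            p_i = F.softmax(loc_scores[i], dim=0)
            p_j = F.softmax(loc_scores[j], dim=0)
            loss = loss - (p_i - p_j).abs().mean()
            num_pairs += 1
    return weight * loss / max(num_pairs, 1)
```

Pushing the localizers apart in this way is exactly what can conflict with each learner's own classification objective, which is why some balancing of the loss weights is needed.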
Thanks very much!
Our implementation of PCL is almost the same as the official one (PCL), except that our code is more concise and less complicated. Besides, the way of getting the proposal cluster centers is a little different. There are two ways of getting the cluster centers, as mentioned in the PCL paper:
In our implementation of PCL, we choose the first method, which selects the highest-scoring proposals as the proposal cluster centers. To test the standard PCL via this repo, you may need to modify the code that gets the proposal cluster centers. For example, in D-MIL the relevant code is instane_selector or get_highest_score_proposals; it can be replaced with code borrowed from this link (PCL).
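As a rough illustration of that first method, here is a small sketch that picks the highest-scoring proposal of each image-level class as a proposal cluster center. The function name, the signature, the background-index shift, and the NumPy types are assumptions; the actual instane_selector / get_highest_score_proposals code in D-MIL, and the official PCL code, may differ in details.

```python
import numpy as np

def highest_score_cluster_centers(boxes, cls_prob, im_labels):
    """Pick proposal cluster centers as the top-scoring proposal per class.

    boxes:     (num_proposals, 4) proposal coordinates
    cls_prob:  (num_proposals, num_classes) proposal classification scores
    im_labels: (num_classes,) binary image-level labels
    """
    centers, center_labels, center_scores = [], [], []
    for c in np.where(im_labels > 0)[0]:
        idx = int(cls_prob[:, c].argmax())      # highest-scoring proposal for class c
        centers.append(boxes[idx])
        center_labels.append(c + 1)             # +1 assuming index 0 is background
        center_scores.append(cls_prob[idx, c])
    return (np.asarray(centers, dtype=np.float32),
            np.asarray(center_labels, dtype=np.int64),
            np.asarray(center_scores, dtype=np.float32))
```

If the second method from the PCL paper is preferred instead, this center-selection step is the piece to swap out, as the comment above suggests.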
I will upload another branch for testing the standard PCL method soon.
Hi Wei, can you tell me why the ce losses are multiplied by 0.00001?
```python
cls_loss_0 = 0.00001 * cross_entropy_losses(im_cls_prob_0, labels.type(im_cls_prob_0.dtype))
```
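For context on what that line does mechanically, here is a small self-contained sketch: multiplying a loss term by 1e-5 makes it, and its gradients, contribute far less to the total objective than unweighted terms. Using nn.BCELoss as the image-level classification loss, the tensor shapes, and the random labels are assumptions made only for this example, not the repo's actual definitions.

```python
import torch
import torch.nn as nn

# Stand-in for the repo's cross_entropy_losses (multi-label image classification).
cross_entropy_losses = nn.BCELoss()

im_cls_prob_0 = torch.rand(1, 20, requires_grad=True)   # fake image-class probabilities
labels = torch.randint(0, 2, (1, 20))                    # fake binary image-level labels

# The 0.00001 factor rescales this term so it is tiny relative to
# unweighted losses elsewhere in the objective.
cls_loss_0 = 0.00001 * cross_entropy_losses(im_cls_prob_0,
                                             labels.type(im_cls_prob_0.dtype))
cls_loss_0.backward()
print(im_cls_prob_0.grad.abs().max())   # gradients are scaled down by the same factor
```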