Hello,
I have been looking through your code for the cls loss and have come across two places where I have questions.
The first place is here. I am not quite sure why the sum of the probabilities is clamped. Is this better than, for example, normalizing it?
probs_ = torch.clamp(probs_, 0., 1.)
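To make the question concrete, here is a minimal numeric sketch (plain Python, with the equivalent torch call in comments) of how the two options differ; the values are made up for illustration and are not from the repository:

```python
# Illustrative sums of per-point probabilities; some entries exceed 1.
probs = [0.2, 0.7, 1.3]

# Clamping, as in the repo: torch.clamp(probs_, 0., 1.)
# Only out-of-range entries change; in-range entries are untouched.
clamped = [min(max(p, 0.0), 1.0) for p in probs]

# Normalizing: probs_ / probs_.sum()
# Every entry is rescaled so the vector sums to 1.
total = sum(probs)
normalized = [p / total for p in probs]

print(clamped)     # [0.2, 0.7, 1.0]
print(normalized)  # each entry divided by 2.2; sums to 1.0
```

Clamping preserves the relative values of in-range entries but silently caps overflow, whereas normalizing changes every entry, which is why the choice between them seems worth asking about.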
My second question concerns the same file, in this location. The variable data_balancing_weight
is declared as 0 and never assigned a new value. It is then multiplied with occ_pts_prob_error.mean(),
which, if I am not missing anything, means that term has no impact on the loss calculation. Is this intended?
data_balancing_weight=0
occ_pts_prob_error = torch.abs(1 - occ_pts_prob)
free_pts_prob_error = torch.abs(0 - free_pts_prob)
# L1 LOSS
cls_loss = occ_pts_prob_error.mean()*data_balancing_weight + free_pts_prob_error.mean()
loss += cls_loss
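To spell out the concern, here is a tiny sketch with made-up stand-in values (the means are hypothetical, not taken from the repository) showing that with the weight fixed at 0 the occupied-points term drops out entirely:

```python
# Stand-ins for occ_pts_prob_error.mean() and free_pts_prob_error.mean();
# the numbers are illustrative only.
occ_mean = 0.4
free_mean = 0.1

data_balancing_weight = 0

# Same arithmetic as the quoted cls_loss line.
cls_loss = occ_mean * data_balancing_weight + free_mean
print(cls_loss)  # 0.1 - identical to free_mean alone
```

With the weight at 0, cls_loss equals free_pts_prob_error.mean() exactly, so the occupied points would not contribute any gradient through this term.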
Thank you for your help!