> Thank you for sharing the code. I tried a quick test after following all the data preparation steps. However, the output is a bit strange: `IMG_1.jpg Gt 172.00 Pred 180049`, especially the count, as you can see.
There is a problem with your weight loading. The authors pass `strict=False` when loading their weights, which means that if the checkpoint keys are not compatible with the model, no error is raised and the mismatched weights are silently skipped. Change that to `strict=True` and you will see what the problem is.
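A minimal sketch of why `strict=False` hides the bug (this is generic PyTorch behavior, not this repo's code): checkpoints saved from an `nn.DataParallel`-wrapped model carry a `module.` prefix on every key, which is simulated below.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Simulate a DataParallel checkpoint by prefixing every key with "module.".
checkpoint_sd = {"module." + k: v for k, v in model.state_dict().items()}

# strict=False loads nothing useful and only *returns* the mismatch, silently.
result = model.load_state_dict(checkpoint_sd, strict=False)
print(result.missing_keys)     # ['weight', 'bias']
print(result.unexpected_keys)  # ['module.weight', 'module.bias']

# strict=True raises a RuntimeError that names the offending keys outright.
try:
    model.load_state_dict(checkpoint_sd, strict=True)
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError
```

So with `strict=True` the key mismatch surfaces immediately instead of producing a model with random weights and absurd predictions.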
This happens because the keys in the pre-trained model's state dict differ from the keys of the model defined in the code. I first set `strict=True` to see the mismatch, then solved it like this:
```python
from collections import OrderedDict

import torch

print("=> loading checkpoint '{}'".format(args['pre']))
checkpoint = torch.load(args['pre'])
pre_state_dict = checkpoint['state_dict']

# Remap checkpoint keys: the checkpoint was saved from a DataParallel model,
# so each key carries a "module." prefix that the bare model does not have.
new_pre_state_dict = OrderedDict()
for key in model.state_dict().keys():
    if "module." + key in pre_state_dict:
        new_pre_state_dict[key] = pre_state_dict["module." + key]

# strict=True now succeeds, and would raise if any key were still missing.
model.load_state_dict(new_pre_state_dict, strict=True)
args['start_epoch'] = checkpoint['epoch']
args['best_pred'] = checkpoint['best_prec1']
```
Hi,
Thank you for sharing the code. I tried a quick test after following all the data preparation steps. However, the output is a bit strange, especially the predicted count, as you can see:

```
IMG_1.jpg Gt 172.00 Pred 180049
```

Am I missing something?
P.S.: I am testing the model on a CPU.
Best,
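One thing worth checking when testing on a CPU-only machine (a general PyTorch note, not specific to this repo): a checkpoint saved during GPU training should be loaded with `map_location`, otherwise `torch.load` tries to restore tensors onto CUDA devices that do not exist. A small self-contained sketch, using a stand-in checkpoint file:

```python
import os
import tempfile

import torch

# Stand-in checkpoint (in practice this file would come from GPU training).
fd, path = tempfile.mkstemp(suffix='.pth.tar')
os.close(fd)
torch.save({'state_dict': {'w': torch.zeros(3)}, 'epoch': 5}, path)

# map_location remaps any CUDA tensors onto the CPU at load time.
checkpoint = torch.load(path, map_location=torch.device('cpu'))
print(checkpoint['epoch'])                              # 5
print(checkpoint['state_dict']['w'].device.type)        # cpu
```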