merge_type = "tsharpen" causes "nan" #4
Comments
Hi, I am not sure if that's right. Did you solve the problem?
I changed my code to use tta.transforms directly instead of TTAWrapper. That way I can normalize the tensors (scale them to [0, 1]), which avoids negative values in the tensors, so x = x ** 0.5 no longer produces "nan". Many models already normalize images before they are passed in, so I think we should scale the output tensors to [0, 1] to avoid this problem.
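A minimal sketch of that workaround, assuming ttach's per-transform API (augment_image / deaugment_mask) and a model and image tensor defined elsewhere:

import torch
import ttach as tta

# Iterate over the transforms directly instead of using SegmentationTTAWrapper,
# and squash the raw logits into [0, 1] before merging so x ** 0.5 stays finite.
transforms = tta.aliases.d4_transform()

outputs = []
for transformer in transforms:
    augmented = transformer.augment_image(image)        # apply one flip/rotation
    logits = model(augmented)                           # raw output, may be negative
    probs = torch.sigmoid(logits)                        # scale to [0, 1]
    outputs.append(transformer.deaugment_mask(probs))    # undo the transform

merged = torch.stack(outputs).mean(dim=0)               # plain mean merge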
The same story with gmean. In my case, ResNet's FC head is defined by nn.Linear, which outputs both positive and negative values, and ClassificationTTAWrapper with merge_mode='gmean' produces some NaNs.
Use a sigmoid/softmax activation on the model output to avoid NaN values.
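A small sketch of that suggestion, assuming a segmentation model that returns raw logits: append a sigmoid before wrapping so the merger only ever sees non-negative values.

import torch.nn as nn
import ttach as tta

# Wrap the existing model so its output is squashed to [0, 1] before TTA merging.
model_with_act = nn.Sequential(model, nn.Sigmoid())

tta_model = tta.SegmentationTTAWrapper(
    model_with_act,
    tta.aliases.d4_transform(),
    merge_mode="tsharpen",  # x ** 0.5 is now safe: inputs are non-negative
)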
I use the wrapper as in the example:
model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode="tsharpen")
When I use this model to predict, I found that the model output contains "nan" values.
Looking at the source code of this project, I found that the tsharpen merge does:
x = x**0.5
Is it the negative values in the tensors passing through this operation that cause the "nan"s?
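A quick standalone check of that suspicion (independent of ttach): raising a negative float tensor to a fractional power in PyTorch yields nan, which then propagates through any later mean or sum.

import torch

x = torch.tensor([4.0, -4.0])
print(x ** 0.5)           # tensor([2., nan])
print((x ** 0.5).mean())  # tensor(nan)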