When computing saliency maps (and likely also GradCAMs), the returned gradients are always min-max normalized to the range [0, 1]. Because this is an affine transformation, the original gradient values cannot be recovered: the information about where zero lies is lost. For example, we may want to compare the true gradient values to see which pixels in an image increase a class score and which decrease it, and by how much relative to one another. Right now we can get the negative values and the positive values separately, but I don't think we can infer their relative magnitudes.
This would likely be an easy fix. Perhaps you could add a boolean keyword argument that controls whether the data is normalized before output, as sketched below.
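For illustration only, here is a minimal sketch of what such a keyword might look like. The function name `visualize_saliency` and the `normalize` argument are assumptions for the sake of the example, not the library's actual API:

```python
import numpy as np

def visualize_saliency(gradients, normalize=True):
    """Hypothetical sketch of the proposed keyword.

    normalize=True  -> current behaviour: min-max scale to [0, 1].
    normalize=False -> return the raw signed gradients, so positive and
                       negative contributions keep their true magnitudes.
    """
    grads = np.asarray(gradients, dtype=np.float64)
    if not normalize:
        return grads
    span = grads.max() - grads.min()
    if span == 0:
        return np.zeros_like(grads)
    return (grads - grads.min()) / span

# Usage: raw signed gradients vs. the current normalized map.
grads = np.array([[-0.3, 0.0], [0.1, 0.5]])
raw = visualize_saliency(grads, normalize=False)  # keeps sign and scale
scaled = visualize_saliency(grads)                # squashed into [0, 1]
```

Defaulting `normalize=True` would keep the current behaviour and make the change fully backward compatible.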