Filter Visualizations and Heatmaps
Activation visualization shows how successive convolution layers transform their input. We do this by visualizing intermediate activations, which display the feature maps output by the various convolution and pooling layers in a network for a given input. The output of each of these layers is called an activation.
This shows us how an input is decomposed into the different filters learned by the network.
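A minimal sketch of how this might look in Keras follows; the VGG16 backbone and the random placeholder input are assumptions standing in for your own trained model and image:

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# Assumed stand-ins: a pretrained VGG16 and a random placeholder input.
model = keras.applications.VGG16(weights="imagenet")
img = np.random.rand(1, 224, 224, 3).astype("float32")

# Build a model that returns the activations of every conv/pool layer.
layer_outputs = [layer.output for layer in model.layers
                 if "conv" in layer.name or "pool" in layer.name]
activation_model = keras.Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model(img)

# Plot the first few feature maps of the first convolution layer.
first = activations[0][0]  # shape: (height, width, n_filters)
n = min(8, first.shape[-1])
fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
for i in range(n):
    axes[i].imshow(first[..., i], cmap="viridis")
    axes[i].axis("off")
plt.show()
```

Each panel is one filter's feature map, so scanning across layers shows the decomposition described above.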
To see what a filter responds to, we generate an input image that maximizes the filter's output activations: we compute the gradient of the activation with respect to the input and use that estimate to update the input. The Activation Maximization loss simply outputs small values for large filter activations (we are minimizing this loss during gradient descent iterations). This lets us understand what sort of input patterns activate a particular filter.
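Here is a minimal sketch of that optimization loop in TensorFlow/Keras; the VGG16 backbone, layer name, and filter index are illustrative assumptions, not prescribed by the text:

```python
import tensorflow as tf
from tensorflow import keras

# Assumed stand-ins: a pretrained VGG16 and an arbitrary filter choice.
model = keras.applications.VGG16(weights="imagenet", include_top=False)
layer_name, filter_index = "block3_conv1", 0

feature_extractor = keras.Model(
    inputs=model.input,
    outputs=model.get_layer(layer_name).output,
)

# Start from a gray image with small random noise and iterate.
img = tf.Variable(tf.random.uniform((1, 224, 224, 3)) * 0.2 + 0.4)
for _ in range(50):
    with tf.GradientTape() as tape:
        activation = feature_extractor(img)
        # Negative mean activation: minimizing this loss maximizes
        # the filter's response, matching the loss described above.
        loss = -tf.reduce_mean(activation[..., filter_index])
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)  # stabilize the step size
    img.assign_sub(0.1 * grads)          # gradient descent on the loss
```

After enough iterations, `img` converges toward a pattern that strongly activates the chosen filter.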
The best way to conceptualize what your CNN perceives is to generate dense layer visualizations, applying the same activation maximization to the final dense layer outputs so you see the pattern each class responds to.
Heatmaps of class activations are very useful for identifying which parts of an image led the CNN to its final classification, and they become especially important when analyzing misclassified data. We will use an implementation of Class Activation Mapping (CAM) described in the paper "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization".
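Below is a minimal Grad-CAM sketch along the lines of that paper, in TensorFlow/Keras; the VGG16 model, the layer name `block5_conv3`, and the random placeholder image are assumptions standing in for your own setup:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Assumed stand-ins: a pretrained VGG16, its last conv layer, and a
# random placeholder image in place of a real preprocessed input.
model = keras.applications.VGG16(weights="imagenet")
last_conv = "block5_conv3"
img = np.random.rand(1, 224, 224, 3).astype("float32")

# Model that exposes both the conv feature maps and the predictions.
grad_model = keras.Model(
    inputs=model.input,
    outputs=[model.get_layer(last_conv).output, model.output],
)

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    class_idx = tf.argmax(preds[0])     # top predicted class
    class_score = preds[:, class_idx]

# Gradient of the class score w.r.t. the feature maps, averaged per
# channel, gives each channel's importance weight.
grads = tape.gradient(class_score, conv_out)
weights = tf.reduce_mean(grads, axis=(0, 1, 2))

# Weighted sum of the feature maps, keeping only positive influence.
cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
heatmap = (cam / tf.reduce_max(cam)).numpy()  # normalized to [0, 1]
```

The resulting heatmap can be resized to the input resolution and overlaid on the original image to show which regions drove the classification.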