
# What-CNNs-See

Filter Visualizations and Heatmaps

## Activation Visualization (activation_maximization.py)

Activation visualization shows how successive convolution layers transform their input.

We do this by visualizing intermediate activations: the feature maps output by the various convolution and pooling layers of a network for a given input. The output of such a layer is called an activation.

This shows us how an input is decomposed into the different filters learned by the network.
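As an illustration, here is a minimal sketch of extracting and plotting intermediate activations with Keras. It assumes a trained model `model` and a preprocessed input batch `img`; the layer name `"conv2d_1"` and the number of plotted channels are placeholders.

```python
import matplotlib.pyplot as plt
from tensorflow import keras

# Build a model that returns the feature maps of one convolution layer.
layer = model.get_layer("conv2d_1")                      # hypothetical layer name
activation_model = keras.Model(inputs=model.input, outputs=layer.output)

feature_maps = activation_model.predict(img)             # shape: (1, h, w, n_filters)

# Plot the first few channels of that layer's activation.
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.matshow(feature_maps[0, :, :, i], cmap="viridis")
    ax.axis("off")
plt.show()
```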
We are generating an input image that maximizes the filter output activations. Thus we are computing

$$\frac{\partial \, \text{ActivationMaximizationLoss}}{\partial \, \text{input}}$$

and using that estimate to update the input.


The Activation Maximization loss simply outputs small values for large filter activations (we minimize the loss during the gradient-descent iterations). This lets us understand what sort of input patterns activate a particular filter.
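A minimal sketch of this gradient-based update loop is shown below. It assumes a trained Keras model `model`; the layer name, filter index, input shape (28×28 grayscale), learning rate, and iteration count are all placeholder choices.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

layer = model.get_layer("conv2d_3")                      # hypothetical layer name
feature_extractor = keras.Model(model.input, layer.output)

# Start from a near-gray random image (assumed 28x28x1 input).
img = tf.Variable(np.random.uniform(0.4, 0.6, (1, 28, 28, 1)).astype("float32"))

for _ in range(100):
    with tf.GradientTape() as tape:
        activation = feature_extractor(img)
        # Negative mean activation of one filter: minimizing this loss
        # maximizes that filter's response.
        loss = -tf.reduce_mean(activation[..., 7])       # hypothetical filter index
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)
    img.assign_sub(0.1 * grads)                          # gradient descent on the loss

result = img.numpy()[0]   # an input pattern that strongly activates the chosen filter
```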

The best way to conceptualize what your CNN perceives is through dense-layer visualizations: applying activation maximization to the outputs of the final Dense layer, so the generated image reflects what the network associates with each class.

*(Figures: activation maps 15, 21, 1, 2, 3.)*

These are the activation maps of some letters, generated while training on handwritten alphabets.


## Heat Maps of Class Activations (heatmaps.py)

Heat maps of class activations are very useful for identifying which parts of an image led the CNN to its final classification, and they become especially important when analyzing misclassified data. We use an implementation of the Class Activation Map (CAM) technique described in the paper "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization".
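For reference, here is a minimal Grad-CAM sketch following the paper's recipe: weight the last convolutional layer's feature maps by the pooled gradients of the predicted class score, then apply a ReLU. It assumes a trained Keras classifier `model` and a preprocessed input batch `img`; the layer name `"block5_conv3"` is a placeholder.

```python
import tensorflow as tf
from tensorflow import keras

last_conv = model.get_layer("block5_conv3")              # hypothetical layer name
grad_model = keras.Model(model.input, [last_conv.output, model.output])

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)                    # img: preprocessed batch of 1
    pred_index = tf.argmax(preds[0])
    class_channel = preds[:, pred_index]                 # score of the predicted class

grads = tape.gradient(class_channel, conv_out)           # d(score) / d(feature maps)
weights = tf.reduce_mean(grads, axis=(0, 1, 2))          # global-average-pool the grads

# Weighted sum of feature maps, then ReLU, then normalize to [0, 1].
cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
cam = tf.nn.relu(cam)
heatmap = (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # resize/overlay onto the image
```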

Images and their corresponding heat maps:


*(Figure pairs: tiger_man / tiger_cam, stripped_shark / shark_cam1, tiger / tiger_cam1.)*