Commit d8d722d

docs: Updated notebook ref in READMEs (#102)

* docs: Updated notebook ref in READMEs
* docs: Updated quicktour

1 parent 0f10df0 · commit d8d722d

File tree

3 files changed: +31 −20 lines


README.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 
 # TorchCAM: class activation explorer
 
-[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) [![Codacy Badge](https://app.codacy.com/project/badge/Grade/25324db1064a4d52b3f44d657c430973)](https://www.codacy.com/gh/frgfm/torch-cam/dashboard?utm_source=github.com&utm_medium=referral&utm_content=frgfm/torch-cam&utm_campaign=Badge_Grade) ![Build Status](https://github.com/frgfm/torch-cam/workflows/tests/badge.svg) [![codecov](https://codecov.io/gh/frgfm/torch-cam/branch/master/graph/badge.svg)](https://codecov.io/gh/frgfm/torch-cam) [![Docs](https://img.shields.io/badge/docs-available-blue.svg)](https://frgfm.github.io/torch-cam) [![Pypi](https://img.shields.io/badge/pypi-v0.2.0-blue.svg)](https://pypi.org/project/torchcam/) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/frgfm/torch-cam) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/frgfm/torch-cam/blob/master/notebooks/quicktour.ipynb)
+[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) [![Codacy Badge](https://app.codacy.com/project/badge/Grade/25324db1064a4d52b3f44d657c430973)](https://www.codacy.com/gh/frgfm/torch-cam/dashboard?utm_source=github.com&utm_medium=referral&utm_content=frgfm/torch-cam&utm_campaign=Badge_Grade) ![Build Status](https://github.com/frgfm/torch-cam/workflows/tests/badge.svg) [![codecov](https://codecov.io/gh/frgfm/torch-cam/branch/master/graph/badge.svg)](https://codecov.io/gh/frgfm/torch-cam) [![Docs](https://img.shields.io/badge/docs-available-blue.svg)](https://frgfm.github.io/torch-cam) [![Pypi](https://img.shields.io/badge/pypi-v0.2.0-blue.svg)](https://pypi.org/project/torchcam/) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/frgfm/torch-cam) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/frgfm/notebooks/blob/main/torch-cam/quicktour.ipynb)
 
 Simple way to leverage the class-specific activation of convolutional layers in PyTorch.
 

docs/source/notebooks/quicktour.rst

Lines changed: 28 additions & 17 deletions
@@ -13,7 +13,7 @@ Latest stable release
 
 .. code-block:: python
 
-    >>> !pip install torch-cam
+    >>> !pip install torchcam
 
 From source
 -----------
@@ -29,6 +29,20 @@ Now go to ``Runtime/Restart runtime`` for your changes to take effect!
 Basic usage
 ===========
 
+.. code-block:: python
+
+    >>> %matplotlib inline
+    >>> # All imports
+    >>> import matplotlib.pyplot as plt
+    >>> import torch
+    >>> from torch.nn.functional import softmax, interpolate
+    >>> from torchvision.io.image import read_image
+    >>> from torchvision.models import resnet18
+    >>> from torchvision.transforms.functional import normalize, resize, to_pil_image
+    >>>
+    >>> from torchcam.cams import SmoothGradCAMpp, LayerCAM
+    >>> from torchcam.utils import overlay_mask
+
 .. code-block:: python
 
     >>> # Download an image
@@ -40,24 +54,13 @@ Basic usage
 .. code-block:: python
 
     >>> # Instantiate your model here
-    >>> from torchvision.models import resnet18
     >>> model = resnet18(pretrained=True).eval()
 
 
 
 Illustrate your classifier capabilities
 ---------------------------------------
 
-.. code-block:: python
-
-    >>> %matplotlib inline
-    >>> # All imports
-    >>> from torchvision.io.image import read_image
-    >>> from torchvision.transforms.functional import normalize, resize, to_pil_image
-    >>> import matplotlib.pyplot as plt
-    >>> from torchcam.cams import SmoothGradCAMpp, LayerCAM
-    >>> from torchcam.utils import overlay_mask
-
 .. code-block:: python
 
     >>> cam_extractor = SmoothGradCAMpp(model)
@@ -69,6 +72,7 @@ Illustrate your classifier capabilities
     >>> out = model(input_tensor.unsqueeze(0))
     >>> # Retrieve the CAM by passing the class index and the model output
     >>> cams = cam_extractor(out.squeeze(0).argmax().item(), out)
+    WARNING:root:no value was provided for `target_layer`, thus set to 'layer4'.
 
 .. code-block:: python
 
@@ -104,18 +108,14 @@ Advanced tricks
 Extract localization cues
 -------------------------
 
-.. code-block::python
-
-    >>> import torch
-    >>> from torch.nn.functional import softmax, interpolate
-
 .. code-block::python
 
     >>> # Retrieve the CAM from several layers at the same time
     >>> cam_extractor = LayerCAM(model)
     >>> # Preprocess your data and feed it to the model
     >>> out = model(input_tensor.unsqueeze(0))
     >>> print(softmax(out, dim=1).max())
+    WARNING:root:no value was provided for `target_layer`, thus set to 'layer4'.
     tensor(0.9115, grad_fn=<MaxBackward1>)
 
 
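An aside on the `print(softmax(out, dim=1).max())` line in the hunk above: softmax turns the model's raw logits into probabilities, so the printed `tensor(0.9115, ...)` is the classifier's confidence in its top class. A minimal pure-Python sketch of the same computation (illustrative only; the quicktour uses `torch.nn.functional.softmax`):

```python
import math

def softmax(logits):
    """Map raw scores to probabilities that sum to 1."""
    # Subtracting the max keeps exp() from overflowing on large logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 3-class logits; the largest logit wins after softmax
probs = softmax([2.0, 1.0, 0.1])
print(max(probs))  # confidence of the top class, ~0.66
```

The same pattern scales to a 1000-class ImageNet output: a max probability near 0.91, as printed in the quicktour, indicates a fairly confident prediction.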
@@ -135,6 +135,11 @@ Extract localization cues
     >>> axes[1].imshow(seg); axes[1].axis('off'); axes[1].set_title(name)
     >>> plt.show()
 
+.. code-block:: python
+
+    >>> # Once you're finished, clear the hooks on your model
+    >>> cam_extractor.clear_hooks()
+
 
 Fuse CAMs from multiple layers
 ------------------------------
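The `clear_hooks()` block this commit adds matters because CAM extractors work by registering hooks on the model's layers; until the hooks are removed, every later forward pass keeps feeding the extractor and holding onto activations. A rough pure-Python sketch of the register/clear pattern (the `Layer` and `CamSketch` names are hypothetical stand-ins, not torchcam's implementation):

```python
class Layer:
    """Toy layer supporting torch-style forward hooks."""

    def __init__(self):
        self._hooks = {}
        self._next_id = 0

    def register_forward_hook(self, fn):
        hook_id = self._next_id
        self._next_id += 1
        self._hooks[hook_id] = fn
        return hook_id

    def remove_hook(self, hook_id):
        self._hooks.pop(hook_id, None)

    def forward(self, x):
        out = [2 * v for v in x]  # stand-in computation
        for fn in self._hooks.values():
            fn(out)  # every registered hook sees the output
        return out


class CamSketch:
    """Records the hooked layer's outputs, like a CAM extractor does."""

    def __init__(self, layer):
        self.activations = []
        self._layer = layer
        self._hook_id = layer.register_forward_hook(self.activations.append)

    def clear_hooks(self):
        # After this, forward passes no longer touch `activations`
        self._layer.remove_hook(self._hook_id)


layer = Layer()
extractor = CamSketch(layer)
layer.forward([1, 2])   # captured
extractor.clear_hooks()
layer.forward([3, 4])   # ignored
print(len(extractor.activations))  # 1
```

In actual torchcam usage, the equivalent cleanup is the `cam_extractor.clear_hooks()` line the diff introduces.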
@@ -153,6 +158,7 @@ Fuse CAMs from multiple layers
     >>> # This time, there are several CAMs
     >>> for cam in cams:
     >>> print(cam.shape)
+    torch.Size([28, 28])
     torch.Size([14, 14])
     torch.Size([7, 7])
 
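The three printed shapes make the fusion problem concrete: LayerCAM yields one CAM per hooked layer, each at that layer's spatial resolution, so fusing them requires resampling to a shared grid first. A pure-Python illustration of one simple strategy, nearest-neighbour upsampling followed by an element-wise max (an assumed strategy for illustration, not necessarily how torchcam's fusion is implemented):

```python
def upsample_nearest(cam, size):
    """Nearest-neighbour upsample a 2-D CAM (list of lists) to size x size."""
    h, w = len(cam), len(cam[0])
    return [
        [cam[i * h // size][j * w // size] for j in range(size)]
        for i in range(size)
    ]

def fuse_max(cams, size):
    """Upsample every CAM to a common grid, then take the element-wise max."""
    ups = [upsample_nearest(c, size) for c in cams]
    return [
        [max(u[i][j] for u in ups) for j in range(size)]
        for i in range(size)
    ]

coarse = [[0.2, 0.8], [0.4, 0.6]]     # e.g. a 2x2 CAM from a deep layer
fine = [[0.1] * 4 for _ in range(4)]  # a 4x4 CAM from an earlier layer
fused = fuse_max([coarse, fine], 4)
print(fused[0][3])  # 0.8
```

Element-wise max keeps the strongest evidence from any scale; other reductions (mean, weighted sum) are equally possible.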
@@ -175,3 +181,8 @@ Fuse CAMs from multiple layers
     >>> # Plot the overlayed version
     >>> result = overlay_mask(to_pil_image(img), to_pil_image(fused_cam, mode='F'), alpha=0.5)
     >>> plt.imshow(result); plt.axis('off'); plt.title(" + ".join(cam_extractor.target_names)); plt.show()
+
+.. code-block:: python
+
+    >>> # Once you're finished, clear the hooks on your model
+    >>> cam_extractor.clear_hooks()

notebooks/README.md

Lines changed: 2 additions & 2 deletions
@@ -4,5 +4,5 @@ Here are some notebooks compiled for users to better leverage the library capabilities
 
 | Notebook | Description | |
 |:----------|:-------------|------:|
-| [Quicktour](https://github.com/frgfm/torch-cam/blob/master/notebooks/quicktour.ipynb) | A presentation of the main features of TorchCAM | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/frgfm/torch-cam/blob/master/notebooks/quicktour.ipynb) |
-| [Latency benchmark](https://github.com/frgfm/torch-cam/blob/master/notebooks/latency_benchmark.ipynb) | How to benchmark the latency of a CAM method | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/frgfm/torch-cam/blob/master/notebooks/latency_benchmark.ipynb) |
+| [Quicktour](https://github.com/frgfm/notebooks/blob/main/torch-cam/quicktour.ipynb) | A presentation of the main features of TorchCAM | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/frgfm/notebooks/blob/main/torch-cam/quicktour.ipynb) |
+| [Latency benchmark](https://github.com/frgfm/notebooks/blob/main/torch-cam/latency_benchmark.ipynb) | How to benchmark the latency of a CAM method | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/frgfm/notebooks/blob/main/torch-cam/latency_benchmark.ipynb) |
