Commit 5f510c3

refactor: Reorganized package hierarchy (#106)

* refactor: Moved torchcam.cams to torchcam.methods
* test: Renamed unittests
* refactor: Updated scripts
* refactor: Updated demo app
* chore: Updated conda recipe
* docs: Updated documentation and README
* docs: Updated docstrings

1 parent 8abb3ea commit 5f510c3

File tree: 19 files changed, +73 -73 lines

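
Since this commit is a pure rename (everything public that lived under `torchcam.cams` is now exposed by `torchcam.methods`, with unchanged extractor APIs), downstream code should only need its imports updated. A minimal before/after sketch (the class name and constructor call mirror the README diff below; nothing else is assumed):

```python
# Before this commit
# from torchcam.cams import SmoothGradCAMpp

# After this commit
from torchvision.models import resnet18
from torchcam.methods import SmoothGradCAMpp

model = resnet18(pretrained=True).eval()
cam_extractor = SmoothGradCAMpp(model)  # wraps the model and hooks a target layer
```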

.conda/meta.yaml

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ test:
   # Python imports
   imports:
     - torchcam
-    - torchcam.cams
+    - torchcam.methods
     - torchcam.utils
   requires:
     - python

README.md

Lines changed: 22 additions & 22 deletions
@@ -20,15 +20,15 @@ Simple way to leverage the class-specific activation of convolutional layers in
 
 TorchCAM leverages [PyTorch hooking mechanisms](https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks) to seamlessly retrieve all required information to produce the class activation without additional efforts from the user. Each CAM object acts as a wrapper around your model.
 
-You can find the exhaustive list of supported CAM methods in the [documentation](https://frgfm.github.io/torch-cam/cams.html), then use it as follows:
+You can find the exhaustive list of supported CAM methods in the [documentation](https://frgfm.github.io/torch-cam/methods.html), then use it as follows:
 
 ```python
 # Define your model
 from torchvision.models import resnet18
 model = resnet18(pretrained=True).eval()
 
 # Set your CAM extractor
-from torchcam.cams import SmoothGradCAMpp
+from torchcam.methods import SmoothGradCAMpp
 cam_extractor = SmoothGradCAMpp(model)
 ```
 
@@ -44,7 +44,7 @@ Once your CAM extractor is set, you only need to use your model to infer on your
 from torchvision.io.image import read_image
 from torchvision.transforms.functional import normalize, resize, to_pil_image
 from torchvision.models import resnet18
-from torchcam.cams import SmoothGradCAMpp
+from torchcam.methods import SmoothGradCAMpp
 
 model = resnet18(pretrained=True).eval()
 cam_extractor = SmoothGradCAMpp(model)
@@ -131,7 +131,7 @@ This project is developed and maintained by the repo owner, but the implementati
 <img src="https://github.com/frgfm/torch-cam/releases/download/v0.2.0/video_example_wallaby.gif" /></a>
 </p>
 <p align="center">
-<em>Source: <a href="https://www.youtube.com/watch?v=hZJN5BzKfxk">YouTube video</a> (activation maps created by <a href="https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.LayerCAM">Layer-CAM</a> with a pretrained <a href="https://pytorch.org/vision/stable/models.html#torchvision.models.resnet18">ResNet-18</a>)</em>
+<em>Source: <a href="https://www.youtube.com/watch?v=hZJN5BzKfxk">YouTube video</a> (activation maps created by <a href="https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.LayerCAM">Layer-CAM</a> with a pretrained <a href="https://pytorch.org/vision/stable/models.html#torchvision.models.resnet18">ResNet-18</a>)</em>
 </p>
 
 
@@ -182,24 +182,24 @@ In the table below, you will find a latency benchmark (forward pass not included
 
 | CAM method | Arch | GPU mean (std) | CPU mean (std) |
 | ------------------------------------------------------------ | ------------------ | ------------------ | -------------------- |
-| [CAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.CAM) | resnet18 | 0.11ms (0.02ms) | 0.14ms (0.03ms) |
-| [GradCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.GradCAM) | resnet18 | 3.71ms (1.11ms) | 40.66ms (1.82ms) |
-| [GradCAMpp](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.GradCAMpp) | resnet18 | 5.21ms (1.22ms) | 41.61ms (3.24ms) |
-| [SmoothGradCAMpp](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.SmoothGradCAMpp) | resnet18 | 33.67ms (2.51ms) | 239.27ms (7.85ms) |
-| [ScoreCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.ScoreCAM) | resnet18 | 304.74ms (11.54ms) | 6796.89ms (415.14ms) |
-| [SSCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.SSCAM) | resnet18 | | |
-| [ISCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.ISCAM) | resnet18 | | |
-| [XGradCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.XGradCAM) | resnet18 | 3.78ms (0.96ms) | 40.63ms (2.03ms) |
-| [LayerCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.LayerCAM) | resnet18 | 3.65ms (1.04ms) | 40.91ms (1.79ms) |
-| [CAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.CAM) | mobilenet_v3_large | N/A* | N/A* |
-| [GradCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.GradCAM) | mobilenet_v3_large | 8.61ms (1.04ms) | 26.64ms (3.46ms) |
-| [GradCAMpp](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.GradCAMpp) | mobilenet_v3_large | 8.83ms (1.29ms) | 25.50ms (3.10ms) |
-| [SmoothGradCAMpp](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.SmoothGradCAMpp) | mobilenet_v3_large | 77.38ms (3.83ms) | 156.25ms (4.89ms) |
-| [ScoreCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.ScoreCAM) | mobilenet_v3_large | 35.19ms (2.11ms) | 679.16ms (55.04ms) |
-| [SSCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.SSCAM) | mobilenet_v3_large | | |
-| [ISCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.ISCAM) | mobilenet_v3_large | | |
-| [XGradCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.XGradCAM) | mobilenet_v3_large | 8.41ms (0.98ms) | 24.21ms (2.94ms) |
-| [LayerCAM](https://frgfm.github.io/torch-cam/latest/cams.html#torchcam.cams.LayerCAM) | mobilenet_v3_large | 8.02ms (0.95ms) | 25.14ms (3.17ms) |
+| [CAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.CAM) | resnet18 | 0.11ms (0.02ms) | 0.14ms (0.03ms) |
+| [GradCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.GradCAM) | resnet18 | 3.71ms (1.11ms) | 40.66ms (1.82ms) |
+| [GradCAMpp](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.GradCAMpp) | resnet18 | 5.21ms (1.22ms) | 41.61ms (3.24ms) |
+| [SmoothGradCAMpp](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.SmoothGradCAMpp) | resnet18 | 33.67ms (2.51ms) | 239.27ms (7.85ms) |
+| [ScoreCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.ScoreCAM) | resnet18 | 304.74ms (11.54ms) | 6796.89ms (415.14ms) |
+| [SSCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.SSCAM) | resnet18 | | |
+| [ISCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.ISCAM) | resnet18 | | |
+| [XGradCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.XGradCAM) | resnet18 | 3.78ms (0.96ms) | 40.63ms (2.03ms) |
+| [LayerCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.LayerCAM) | resnet18 | 3.65ms (1.04ms) | 40.91ms (1.79ms) |
+| [CAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.CAM) | mobilenet_v3_large | N/A* | N/A* |
+| [GradCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.GradCAM) | mobilenet_v3_large | 8.61ms (1.04ms) | 26.64ms (3.46ms) |
+| [GradCAMpp](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.GradCAMpp) | mobilenet_v3_large | 8.83ms (1.29ms) | 25.50ms (3.10ms) |
+| [SmoothGradCAMpp](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.SmoothGradCAMpp) | mobilenet_v3_large | 77.38ms (3.83ms) | 156.25ms (4.89ms) |
+| [ScoreCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.ScoreCAM) | mobilenet_v3_large | 35.19ms (2.11ms) | 679.16ms (55.04ms) |
+| [SSCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.SSCAM) | mobilenet_v3_large | | |
+| [ISCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.ISCAM) | mobilenet_v3_large | | |
+| [XGradCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.XGradCAM) | mobilenet_v3_large | 8.41ms (0.98ms) | 24.21ms (2.94ms) |
+| [LayerCAM](https://frgfm.github.io/torch-cam/latest/methods.html#torchcam.methods.LayerCAM) | mobilenet_v3_large | 8.02ms (0.95ms) | 25.14ms (3.17ms) |
 
 **The base CAM method cannot work with architectures that have multiple fully-connected layers*
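
The README hunks above stop right after the extractor is created. For reference, a hedged sketch of the remaining steps under the new import path (the preprocessing constants and the `overlay_mask` call follow torchcam's documented usage; the image path is a placeholder, and the list-vs-tensor guard is an assumption, since the extractor's return type has differed between releases):

```python
from torchvision.io.image import read_image
from torchvision.transforms.functional import normalize, resize, to_pil_image
from torchvision.models import resnet18
from torchcam.methods import SmoothGradCAMpp
from torchcam.utils import overlay_mask

model = resnet18(pretrained=True).eval()
cam_extractor = SmoothGradCAMpp(model)

# Read and preprocess an image (placeholder path)
img = read_image("path/to/your/image.png")
input_tensor = normalize(resize(img, (224, 224)) / 255.,
                         [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

# Forward the input (no torch.no_grad() here, gradients are needed) and
# retrieve the CAM for the top predicted class
out = model(input_tensor.unsqueeze(0))
cams = cam_extractor(out.squeeze(0).argmax().item(), out)

# Some versions return one map per hooked layer, others a single tensor
cam = cams[0] if isinstance(cams, list) else cams
result = overlay_mask(to_pil_image(img), to_pil_image(cam.squeeze(0), mode='F'), alpha=0.5)
```
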

demo/app.py

Lines changed: 4 additions & 3 deletions
@@ -12,7 +12,8 @@
 from torchvision import models
 from torchvision.transforms.functional import normalize, resize, to_pil_image, to_tensor
 
-from torchcam import cams
+from torchcam import methods
+from torchcam.methods._utils import locate_candidate_layer
 from torchcam.utils import overlay_mask
 
 CAM_METHODS = ["CAM", "GradCAM", "GradCAMpp", "SmoothGradCAMpp", "ScoreCAM", "SSCAM", "ISCAM", "XGradCAM", "LayerCAM"]
@@ -56,12 +57,12 @@ def main():
     if tv_model is not None:
         with st.spinner('Loading model...'):
             model = models.__dict__[tv_model](pretrained=True).eval()
-        default_layer = cams.utils.locate_candidate_layer(model, (3, 224, 224))
+        default_layer = locate_candidate_layer(model, (3, 224, 224))
 
     target_layer = st.sidebar.text_input("Target layer", default_layer)
     cam_method = st.sidebar.selectbox("CAM method", CAM_METHODS)
     if cam_method is not None:
-        cam_extractor = cams.__dict__[cam_method](
+        cam_extractor = methods.__dict__[cam_method](
             model,
             target_layer=target_layer.split("+") if len(target_layer) > 0 else None
         )
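
Note that the demo now imports `locate_candidate_layer` from a private module (`torchcam.methods._utils`) rather than reaching through `cams.utils`. A small standalone sketch of that helper, assuming only the call shown in the diff; the returned layer name depends on the architecture:

```python
from torchvision.models import resnet18
from torchcam.methods._utils import locate_candidate_layer

model = resnet18(pretrained=True).eval()
# Dry-runs the model on a dummy input of the given shape and returns the name of a
# candidate layer to hook (for resnet18 this is expected to be something like "layer4")
default_layer = locate_candidate_layer(model, (3, 224, 224))
print(default_layer)
```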

docs/source/index.rst

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ Gradient-based methods
    :caption: Package Reference
    :hidden:
 
-   cams
+   methods
    utils
 
 
docs/source/cams.rst renamed to docs/source/methods.rst

Lines changed: 3 additions & 3 deletions
@@ -1,8 +1,8 @@
-torchcam.cams
-=============
+torchcam.methods
+================
 
 
-.. currentmodule:: torchcam.cams
+.. currentmodule:: torchcam.methods
 
 
 Class activation map

docs/source/notebooks/quicktour.rst

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ Basic usage
 >>> from torchvision.models import resnet18
 >>> from torchvision.transforms.functional import normalize, resize, to_pil_image
 >>>
->>> from torchcam.cams import SmoothGradCAMpp, LayerCAM
+>>> from torchcam.methods import SmoothGradCAMpp, LayerCAM
 >>> from torchcam.utils import overlay_mask
 
 .. code-block:: python

scripts/cam_example.py

Lines changed: 4 additions & 4 deletions
@@ -18,7 +18,7 @@
 from torchvision import models
 from torchvision.transforms.functional import normalize, resize, to_pil_image, to_tensor
 
-from torchcam import cams
+from torchcam import methods
 from torchcam.utils import overlay_mask
 
 
@@ -44,16 +44,16 @@ def main(args):
         [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).to(device=device)
 
     if isinstance(args.method, str):
-        methods = [args.method]
+        cam_methods = [args.method]
     else:
-        methods = [
+        cam_methods = [
             'CAM',
             'GradCAM', 'GradCAMpp', 'SmoothGradCAMpp',
             'ScoreCAM', 'SSCAM', 'ISCAM',
             'XGradCAM', 'LayerCAM'
         ]
     # Hook the corresponding layer in the model
-    cam_extractors = [cams.__dict__[name](model, enable_hooks=False) for name in methods]
+    cam_extractors = [methods.__dict__[name](model, enable_hooks=False) for name in cam_methods]
 
     # Homogenize number of elements in each row
     num_cols = math.ceil((len(cam_extractors) + 1) / args.rows)
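
The script's local variable is also renamed from `methods` to `cam_methods` so that it no longer shadows the newly imported `torchcam.methods` module. A minimal sketch of the same by-name construction pattern, assuming only what the diff shows (extractor classes exposed as attributes of `torchcam.methods` and an `enable_hooks` constructor flag):

```python
from torchvision.models import resnet18
from torchcam import methods

model = resnet18(pretrained=True).eval()
cam_methods = ["GradCAM", "LayerCAM"]
# enable_hooks=False keeps the hooks inactive at construction time, so several
# extractors can be attached to the same model and enabled one at a time
cam_extractors = [methods.__dict__[name](model, enable_hooks=False) for name in cam_methods]
```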

scripts/eval_latency.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 import torch
 from torchvision import models
 
-from torchcam import cams as methods
+from torchcam import methods
 
 
 def main(args):

test/test_cams_cam.py renamed to test/test_methods_activation.py

Lines changed: 6 additions & 6 deletions
@@ -7,18 +7,18 @@
 import torch
 from torchvision.models import mobilenet_v2
 
-from torchcam.cams import cam
+from torchcam.methods import activation
 
 
 def test_base_cam_constructor(mock_img_model):
     model = mobilenet_v2(pretrained=False).eval()
     # Check that multiple target layers is disabled for base CAM
     with pytest.raises(ValueError):
-        _ = cam.CAM(model, ['classifier.1', 'classifier.2'])
+        _ = activation.CAM(model, ['classifier.1', 'classifier.2'])
 
     # FC layer checks
     with pytest.raises(TypeError):
-        _ = cam.CAM(model, fc_layer=3)
+        _ = activation.CAM(model, fc_layer=3)
 
 
 def _verify_cam(activation_map, output_size):
@@ -52,7 +52,7 @@ def test_img_cams(cam_name, target_layer, fc_layer, num_samples, output_size, mo
 
     target_layer = target_layer(model) if callable(target_layer) else target_layer
     # Hook the corresponding layer in the model
-    extractor = cam.__dict__[cam_name](model, target_layer, **kwargs)
+    extractor = activation.__dict__[cam_name](model, target_layer, **kwargs)
 
     with torch.no_grad():
         scores = model(mock_img_tensor)
@@ -61,7 +61,7 @@
 
 
 def test_cam_conv1x1(mock_fullyconv_model):
-    extractor = cam.CAM(mock_fullyconv_model, fc_layer='1')
+    extractor = activation.CAM(mock_fullyconv_model, fc_layer='1')
     with torch.no_grad():
         scores = mock_fullyconv_model(torch.rand((1, 3, 32, 32)))
     # Use the hooked data to compute activation map
@@ -85,7 +85,7 @@ def test_video_cams(cam_name, target_layer, num_samples, output_size, mock_video
     kwargs['num_samples'] = num_samples
 
     # Hook the corresponding layer in the model
-    extractor = cam.__dict__[cam_name](model, target_layer, **kwargs)
+    extractor = activation.__dict__[cam_name](model, target_layer, **kwargs)
 
     with torch.no_grad():
         scores = model(mock_video_tensor)

test/test_cams_core.py renamed to test/test_methods_core.py

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 import pytest
 import torch
 
-from torchcam.cams import core
+from torchcam.methods import core
 
 
 def test_cam_constructor(mock_img_model):
