Zero value MeanIoU for Absent Classes in Ground Truth #2866

Open
Niccolo-Tallone opened this issue Dec 11, 2024 · 1 comment
Labels: bug / fix, help wanted, v1.5.x

@Niccolo-Tallone

🐛 Bug

In multiclass semantic segmentation, the torchmetrics.segmentation.MeanIoU implementation with per_class=True assigns an Intersection over Union (IoU) of 0 to classes that are absent from the ground truth and correctly not predicted by the model. This skews the mean IoU for those classes toward zero, even when every prediction is perfect in the samples where the class is present.

This can produce misleading results on datasets where some classes are absent from many images. The correct behavior would be to exclude such samples entirely when computing the mean IoU for that class.

To Reproduce

Code Example 1: Absent Class

import torch
from torchmetrics.segmentation import MeanIoU

# Define the number of classes
num_classes = 3

# Initialize the MeanIoU metric
metric = MeanIoU(num_classes=num_classes, per_class=True, input_format='index')

# Example: Ground truth and predictions with class 2 absent
# preds and target are categorical masks (integer type)

target = torch.tensor([
    [0, 1],  # Ground truth: class 0, class 1
    [1, 0],  # Ground truth: class 1, class 0
])

preds = torch.tensor([
    [0, 1],  # Predictions: class 0, class 1
    [1, 0],  # Predictions: class 1, class 0
])

# Reset the metric (optional if not reused)
metric.reset()

# Update the metric with the predictions and target
metric.update(preds, target)

# Compute the mean IoU per class
miou_per_class = metric.compute()

# Print the results
for class_idx, iou in enumerate(miou_per_class):
    print(f"Class {class_idx} IoU: {iou:.4f}")

which outputs:

Class 0 IoU: 1.0000
Class 1 IoU: 1.0000
Class 2 IoU: 0.0000

Issue: the IoU for class 2 is 0.0, which is misleading: the class is absent and correctly not predicted, so it should not affect the IoU calculation at all.

Code Example 2: Perfect Predictions for All Classes

import torch
from torchmetrics.segmentation import MeanIoU

# Define the number of classes
num_classes = 3

# Initialize the MeanIoU metric
metric = MeanIoU(num_classes=num_classes, per_class=True, input_format='index')
# Example: Ground truth and predictions perfectly consistent
# preds and target are categorical masks (integer type)
target = torch.tensor([
    [0, 1],  # Ground truth: class 0, class 1
    [1, 0],  # Ground truth: class 1, class 0
    [2, 2],  # Ground truth: class 2, class 2
])


preds = torch.tensor([
    [0, 1],  # Predictions: class 0, class 1
    [1, 0],  # Predictions: class 1, class 0
    [2, 2],  # Predictions: class 2, class 2
])

# Reset the metric (optional if not reused)
metric.reset()

# Update the metric with the predictions and target
metric.update(preds, target)

# Compute the mean IoU per class
miou_per_class = metric.compute()

# Print the results
for class_idx, iou in enumerate(miou_per_class):
    print(f"Class {class_idx} IoU: {iou:.4f}")

which outputs:

Class 0 IoU: 0.6667
Class 1 IoU: 0.6667
Class 2 IoU: 0.3333

Issue: all classes are perfectly predicted, yet the IoUs are well below 1.0. This happens because the IoU is set to 0 whenever a class is absent from a sample: classes 0 and 1, present in the first two samples but not in the third, score 0.6667 (2/3), and class 2, present only in the last sample, scores 0.3333 (1/3).
Detailed computation for class 0, over the 3 samples:

Class 0 IoU, sample 0: 1  # perfectly predicted
Class 0 IoU, sample 1: 1  # perfectly predicted
Class 0 IoU, sample 2: 0  # class absent
Class 0 mean IoU = (1 + 1 + 0) / 3 ≈ 0.6667
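
For reference, here is a minimal re-implementation of the per-sample averaging that appears to produce these numbers. This is my reading of the observed behavior, not the library's actual code, and the function name is illustrative; counting the 0/0 case for absent classes as 0 reproduces the outputs above.

import torch

def per_class_miou_macro(preds, target, num_classes):
    # Hypothetical sketch: per-sample, per-class IoU,
    # then a plain average over samples.
    per_sample = []
    for p, t in zip(preds, target):
        ious = []
        for c in range(num_classes):
            intersection = ((p == c) & (t == c)).sum()
            union = ((p == c) | (t == c)).sum()
            # 0/0 for an absent, unpredicted class is counted as 0 here,
            # which is exactly what drags the per-class mean down.
            ious.append(intersection / union if union > 0 else torch.tensor(0.0))
        per_sample.append(torch.stack(ious))
    return torch.stack(per_sample).mean(dim=0)

print(per_class_miou_macro(preds, target, num_classes))
# tensor([0.6667, 0.6667, 0.3333]) on Code Example 2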

Expected behavior

The expected behavior is to exclude, from the calculation of a class's mean IoU, the samples that do not contain that class and for which it was correctly not predicted.
A possible solution would be to implement 'micro' aggregation: for each sample, simply update a confusion matrix, and compute the IoU at the end from the global confusion matrix.
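
A rough sketch of what such micro aggregation could look like (illustrative code, not an actual torchmetrics API), accumulating per-class intersection and union counts globally, which is equivalent to updating a global confusion matrix, and dividing only once at the end:

import torch

def per_class_miou_micro(preds, target, num_classes):
    # Global per-class intersection/union counts over the whole dataset.
    intersection = torch.zeros(num_classes)
    union = torch.zeros(num_classes)
    for c in range(num_classes):
        intersection[c] = ((preds == c) & (target == c)).sum()
        union[c] = ((preds == c) | (target == c)).sum()
    # A class absent everywhere is undefined (NaN) rather than 0,
    # so it can be skipped when averaging, e.g. with torch.nanmean.
    return torch.where(union > 0, intersection / union,
                       torch.full_like(union, float('nan')))

With this, Code Example 1 yields tensor([1., 1., nan]) and Code Example 2 yields tensor([1., 1., 1.]).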

Environment

  • TorchMetrics version: 1.5.2
  • PyTorch version: 2.4.1+cu121
  • Python version: 3.10.15

Additional context

Assigning 1 instead of 0 would not be a proper fix either; it would simply invert the problem, pulling the mean close to 1. If the dataset is heavily unbalanced and many images do not contain a given class, the mean IoU of that class ends up very close to 0 or to 1, depending on the value assigned to the images that do not contain it. Either way, it hides the model's true performance on the images where the class is present, which are usually the ones we care about most.
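
To make this concrete with illustrative numbers (not taken from any real dataset): if a class appears in 10 of 1000 images with an average IoU of 0.5 where present, filling the 990 absent images with 0 gives a mean of (10 × 0.5 + 990 × 0) / 1000 ≈ 0.005, while filling with 1 gives (10 × 0.5 + 990 × 1) / 1000 ≈ 0.995. Neither number reflects the 0.5 that actually describes the model.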

Niccolo-Tallone added the bug / fix and help wanted labels on Dec 11, 2024

Hi! Thanks for your contribution, great first issue!

Borda added the v1.5.x label on Dec 12, 2024