Output batch size changes when input image size is modified #1

Open
@venki-lfc

Description

Hi,
First of all, thanks for a great paper! It's quite interesting to see how you try to separate ID and OOD classes.

When I try to apply this concept to my data, I am running into an issue during WRN model creation.
My input transform looks like the following:

import torchvision.transforms as T

transform = T.Compose([
    T.Resize(320, interpolation=T.InterpolationMode.BICUBIC),
    T.CenterCrop(300),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

As you can see, my images are of size 300×300 (in contrast to the 32×32 you use for CIFAR-10).

Now when I feed these images to the model, the output batch size differs from what I feed in. For example, with the above transform and a batch size of 2, the output shape is (162, num_classes).
But if I crop to 32 instead, I get the expected (2, num_classes). May I know what is happening here? A minimal sketch of how I call the model is below.
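
For reference, here is roughly how I call the model. The import path, constructor arguments, and dummy inputs are placeholders for my setup, not exact code from your repo:

import torch
from models.wrn import WideResNet  # assumed location of the WRN implementation

# Assumed constructor signature: depth, num_classes, widen_factor, dropRate
model = WideResNet(depth=40, num_classes=10, widen_factor=2, dropRate=0.3)

x_small = torch.randn(2, 3, 32, 32)    # CIFAR-sized input
x_large = torch.randn(2, 3, 300, 300)  # my input size after the transform above

print(model(x_small).shape)  # torch.Size([2, 10])   -> batch size preserved
print(model(x_large).shape)  # torch.Size([162, 10]) -> batch size changes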

Another question: can your OECC concept be applied to other models, EfficientNet for example?

Many thanks!
Venki
