
Output batch size changes when input image size is modified #1

Open
venki-lfc opened this issue Mar 2, 2023 · 0 comments
Hi,
First of all, thanks for a great paper! It's quite interesting to see how you separate ID and OOD classes.

When I try to apply this concept to my own data, I run into an issue with the WRN model. My input transform looks like the following:

```python
import torchvision.transforms as T

transform = T.Compose([
    T.Resize(320, interpolation=T.InterpolationMode.BICUBIC),
    T.CenterCrop(300),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```

As you can see, my images are 300×300 (in contrast to the 32×32 you use for CIFAR-10).

When I feed these images to the model, the output batch size differs from the input batch size. For example, with the above transform and a batch size of 2, the output shape is (162, num_classes). But if I resize/crop to 32 instead, I get the expected (2, num_classes). May I know what is happening here?
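For what it's worth, the numbers are consistent with the common CIFAR WideResNet forward pass, which ends in a fixed-size `F.avg_pool2d(out, 8)` followed by `out.view(-1, n_channels)` (an assumption about the model code here, not something confirmed in this thread). With a 300×300 input, the feature map before pooling is 75×75, pooling with kernel/stride 8 leaves a 9×9 grid, and `view(-1, n_channels)` folds those 81 leftover cells into the batch dimension: 2 × 81 = 162. A minimal arithmetic sketch:

```python
# Hypothetical sketch of how a fixed pooling kernel plus view(-1, C)
# inflates the batch dimension in a CIFAR-style WideResNet.
def wrn_output_batch(batch, img_size, pool_kernel=8):
    # WRN-for-CIFAR downsamples twice via stride-2 blocks: 32 -> 16 -> 8.
    feat = img_size // 4                 # spatial size before pooling
    pooled = feat // pool_kernel         # F.avg_pool2d(out, pool_kernel)
    # out.view(-1, n_channels) folds any leftover spatial cells into batch:
    return batch * pooled * pooled

print(wrn_output_batch(2, 32))   # -> 2, correct for 32x32 inputs
print(wrn_output_batch(2, 300))  # -> 162, the shape observed above
```

If that is indeed the cause, replacing the fixed pooling with `F.adaptive_avg_pool2d(out, 1)` (which always reduces to 1×1 regardless of input size) would keep the batch dimension intact.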

Another question: can your OECC concept be applied to other models, EfficientNet for example?

Many thanks!
Venki
