I reproduced your code on Aircraft with your settings, but my reproduced results are Seen: 60.98%, Unseen: 36.83%, All: 48.90%, which does not match the result reported in your paper.
About the learning rate adjustment strategy: the paper states, "We use a cosine annealing based learning rate scheduler accompanied by a linear warmup, where we set the base learning rate to 0.1 and set the warmup length to 10 epochs." In the latest version of the code, this scheduler has been removed, and it does affect performance. Should this schedule be applied to all datasets?
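For reference, here is a minimal sketch of how I understand that schedule from the paper's description (base learning rate 0.1, 10-epoch linear warmup, then cosine annealing). This is my own reconstruction, not the repo's code; the total epoch count, optimizer, momentum, and weight decay are my assumptions:

```python
# Sketch of cosine annealing with linear warmup (my reconstruction, not the repo's code).
# Base lr 0.1 and 10 warmup epochs come from the paper; everything else is an assumption.
import torch
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(512, 100)        # placeholder model
total_epochs = 200                       # assumption: use the repo's actual epoch count
warmup_epochs = 10

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)  # momentum/wd are assumptions
warmup = LinearLR(optimizer, start_factor=0.01, end_factor=1.0,
                  total_iters=warmup_epochs)
cosine = CosineAnnealingLR(optimizer, T_max=total_epochs - warmup_epochs)
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine],
                         milestones=[warmup_epochs])

for epoch in range(total_epochs):
    # ... one training epoch ...
    scheduler.step()
```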
About the fine-grained dataset preprocessing:
In your paper, the preprocessing is described as follows:
The input resolution of CIFAR-10 and CIFAR-100 images is 32×32; Tiny ImageNet images are slightly larger, i.e., 64×64. For the fine-grained datasets the images vary in size and aspect ratio. Therefore, for computational efficiency, we pre-process the images for fine-grained datasets and resize them to 256×256 resolution; this pre-processing operation is performed for both train and test images in all of our experiments.
But in your code, the data transform is as follows:
So is the input size 224×224 or 256×256? Thanks~
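To make the question concrete, here is my understanding of a typical pipeline that would produce 224×224 network inputs from the 256×256 pre-processed images. This is only my own sketch, not your actual transform; the augmentations and normalization statistics are assumptions:

```python
# Illustrative sketch only -- not the repo's actual transform. It shows the pattern
# I am asking about: images pre-resized offline to 256x256 (as in the paper), then
# cropped to 224x224 as the network input. Augmentations and normalization
# statistics are my assumptions.
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats (assumption)
                                 std=[0.229, 0.224, 0.225])

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),      # network sees 224x224 crops (assumption)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

test_transform = transforms.Compose([
    transforms.Resize(256),                 # matches the 256x256 pre-processing in the paper
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])
```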