Hi @implus, thanks for the nice work reproducing the segmentation results of MAE!
I checked the log you provided and noticed that the unexpected keys are norm.weight and norm.bias:
https://github.com/implus/mae_segmentation/blob/main/log/20220131_012835.log#L229
Does this mean that the pre-trained model is first fine-tuned on ImageNet-1K, and then loaded as the backbone for segmentation?
Is this a common practice for self-supervised methods?
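For context, the "unexpected keys" line in the log is what PyTorch's `load_state_dict(strict=False)` reports when the checkpoint contains parameters the target model does not define. Below is a minimal sketch of that mechanism; the module and key names are hypothetical placeholders, not the actual mae_segmentation code.

```python
# Sketch: how "unexpected keys" such as norm.weight / norm.bias show up
# when a checkpoint holds a final norm layer the backbone omits.
# (Toy example with made-up shapes; not the repo's real backbone.)
import torch
import torch.nn as nn


class BackboneWithoutFinalNorm(nn.Module):
    """Toy ViT-style backbone that has no trailing LayerNorm."""

    def __init__(self, dim: int = 8) -> None:
        super().__init__()
        self.patch_embed = nn.Linear(dim, dim)  # stand-in for patch embedding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.patch_embed(x)


# Pretend checkpoint: same keys as the backbone plus a final norm layer,
# as a classification-style ViT (e.g. after ImageNet fine-tuning) would carry.
checkpoint = {
    "patch_embed.weight": torch.zeros(8, 8),
    "patch_embed.bias": torch.zeros(8),
    "norm.weight": torch.ones(8),
    "norm.bias": torch.zeros(8),
}

model = BackboneWithoutFinalNorm()
result = model.load_state_dict(checkpoint, strict=False)
print("unexpected keys =", result.unexpected_keys)  # ['norm.weight', 'norm.bias']
print("missing keys    =", result.missing_keys)     # []
```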