
About the Pre-trained Model #4

Open

@Haochen-Wang409

Description

Hi @implus, thanks for the nice work reproducing the segmentation results of MAE!

I checked the log you provided and noticed that the unexpected keys are `norm.weight` and `norm.bias`:
https://github.com/implus/mae_segmentation/blob/main/log/20220131_012835.log#L229

Does this mean that the pre-trained model is first fine-tuned on ImageNet-1K and then loaded as the backbone for segmentation?
Is this a common practice for self-supervised methods?
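
For reference, a minimal sketch of how the checkpoint keys can be inspected directly (the file name and the assumption that weights are nested under a `"model"` key are illustrative, not taken from this repo):

```python
import torch

# Hypothetical checkpoint path; replace with the actual file being loaded.
ckpt = torch.load("mae_pretrain_vit_base.pth", map_location="cpu")

# Many MAE-style checkpoints nest the weights under a "model" key (an assumption here).
state_dict = ckpt.get("model", ckpt)

# Print the top-level (non-block) norm parameters; comparing these between the
# pre-training and fine-tuned checkpoints shows which norm layers each one contains.
for name, param in state_dict.items():
    if "norm" in name and not name.startswith("blocks"):
        print(name, tuple(param.shape))
```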
