Freezing of batch normalization layers during fine-tuning #346

@himanshunaidu

Description

Greetings,
First of all, I really appreciate the work you have done; it has been very useful for my experiments.

Now, to my question: I have seen that, for fine-tuning, the batch normalization layers are generally frozen.
Did the authors of this repository try something like that?

I am just curious about it. I do plan to implement it myself (any tips on that would be appreciated, by the way), but I was wondering whether the authors of this repo or the paper have any insights on it.
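For reference, here is a minimal sketch of one common way to do this, assuming the model is a PyTorch `nn.Module` (this is a generic illustration, not this repository's actual training code): put the BatchNorm layers in `eval()` mode so their running statistics stop updating, and disable gradients on their affine parameters.

```python
import torch.nn as nn


def freeze_batchnorm(model: nn.Module) -> None:
    """Freeze all BatchNorm layers: running statistics and affine weights."""
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.eval()  # stop updating running_mean / running_var
            for param in module.parameters():
                param.requires_grad = False  # freeze gamma / beta


if __name__ == "__main__":
    # Hypothetical tiny model for illustration only; in practice this would
    # be the backbone being fine-tuned.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
    model.train()            # normal fine-tuning mode for the rest of the network
    freeze_batchnorm(model)  # BN layers stay frozen
```

One caveat with this approach: any later call to `model.train()` (e.g. at the start of each epoch) will flip the BatchNorm layers back into training mode, so `freeze_batchnorm()` would need to be re-applied after each such call.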
