Handle inconsistent encoder & processor inputs #838

@mo-mburnand

Description

Is your feature request related to a problem? Please describe.

In a recent use case, I wanted to take a fine-tuned model and adapt it for use with a different analysis/dataset, which contains only a subset of the variables the original model was trained on. In my case it is also considered important to use the full level set.

My plan would be to freeze the processor and train a new encoder/decoder that can hopefully learn to map from a dataset with more levels and fewer variables to the latent representation that the processor expects.
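For illustration, freezing the processor while leaving a new encoder/decoder trainable might look like the sketch below. The module names (encoder, processor, decoder) and the toy Linear layers are assumptions standing in for the real graph modules, not the actual anemoi API:

```python
import torch
from torch import nn

# Toy stand-ins for the real model components; attribute names are
# illustrative, not the actual anemoi model attributes.
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 16)     # new, trained for the reduced-variable inputs
        self.processor = nn.Linear(16, 16)  # frozen, loaded from the fine-tuned checkpoint
        self.decoder = nn.Linear(16, 8)     # new, trained for the new outputs

model = Model()

# Freeze the processor so only the new encoder/decoder learn the mapping
# into (and out of) the latent space the processor expects.
for p in model.processor.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```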

To my understanding, however, if transfer_learning = True, the compare_variables function catches the mismatch between the requested and model variables.

But if transfer_learning = False, it loads the model from the checkpoint and ultimately fails with an index error when instantiating a normaliser for the processor, because the mapper is longer than the loaded data statistics.

I believe both would also fail at trainer.fit() because of a tensor size mismatch.
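The normaliser failure can be reproduced in miniature: the statistics saved with the checkpoint cover only the original variables, while the mapper built from the new dataset indexes more entries (the array shapes and names here are illustrative, not the real checkpoint layout):

```python
import numpy as np

# Statistics saved with the checkpoint cover only the original variables.
statistics = {"mean": np.zeros(3), "stdev": np.ones(3)}  # 3 original variables

# The new dataset requests more variables, so the mapper has more indices
# than the loaded statistics can satisfy.
mapper = np.array([0, 1, 2, 3, 4])

try:
    scale = statistics["stdev"][mapper]  # indices 3 and 4 are out of range
except IndexError as exc:
    print(f"IndexError: {exc}")
```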

Describe the solution you'd like

I assume that in most use cases it is important to enforce checks that the requested variables are consistent with the loaded model. Would a config flag to turn off these checks therefore make sense?
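A minimal sketch of what such an opt-out might look like in the training config. The flag name and its location are hypothetical, purely to illustrate the idea:

```yaml
training:
  transfer_learning: True
  # Hypothetical flag: skip the variable/statistics consistency checks
  # so a frozen processor can be reused with a different variable set.
  strict_variable_checks: False
```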

Any input from others appreciated.

Describe alternatives you've considered

Another option would be to train an entirely new model on the reduced variables and additional levels, but that feels like wasted compute.

Additional context

No response

Organisation

Met Office

Metadata

Assignees: No one assigned
Labels: enhancement (New feature or request)
Status: To be triaged
Milestone: No milestone