When attempting to load model weights from a checkpoint in VGT, the model's state-dictionary keys do not match the checkpoint keys, apparently because of a key-prefix mismatch. As a result, parameters are not loaded correctly, which leads to a significant increase in "total_loss".
In my opinion, no prefixes are necessary. However, if prefixes are needed, we can implement a simple check to address this issue.
In MyDetectionCheckpointer.py, check whether the checkpoint keys need a prefix before loading them into the model's state dictionary.
Here’s the function to determine if prefixes are needed:
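The original snippet is missing here, but the check can be sketched roughly as follows. This is a minimal, self-contained sketch, not the actual VGT code: the function name `align_checkpoint_keys` and the example prefix `"backbone."` are assumptions chosen for illustration. It compares the model's state-dictionary keys with the checkpoint keys and adds or strips the prefix so the two sets line up before `load_state_dict` is called.

```python
# Hypothetical helper (name and prefix are assumptions, not VGT source code):
# make checkpoint keys line up with the model's state_dict keys by adding
# or stripping a common prefix.

def align_checkpoint_keys(model_keys, ckpt_state_dict, prefix="backbone."):
    """Return a copy of ckpt_state_dict whose keys match model_keys.

    If every model key starts with `prefix` but the checkpoint keys do not,
    the prefix is added; in the opposite case it is stripped; otherwise the
    checkpoint is returned unchanged.
    """
    model_prefixed = all(k.startswith(prefix) for k in model_keys)
    ckpt_prefixed = all(k.startswith(prefix) for k in ckpt_state_dict)

    if model_prefixed and not ckpt_prefixed:
        # Checkpoint was saved without the prefix: add it.
        return {prefix + k: v for k, v in ckpt_state_dict.items()}
    if ckpt_prefixed and not model_prefixed:
        # Checkpoint was saved with the prefix: strip it.
        return {k[len(prefix):]: v for k, v in ckpt_state_dict.items()}
    # Keys already agree (or disagree in a way a prefix cannot fix).
    return dict(ckpt_state_dict)
```

In a `DetectionCheckpointer` subclass, a call like this would run just before the state dictionary is handed to the model, so mismatched parameters are remapped instead of being silently skipped.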
After implementing these changes, resuming training from a checkpoint works as expected.