In distributed settings where the number of validation epochs is smaller than the default patience (10), the ReduceLROnPlateau scheduler does not appear to reduce the learning rate, even though the globally averaged total_loss suggests it should. This may be because the global total_loss is only computed for logging, while the scheduler is stepped with the rank-local total_loss, which does not meet the ReduceLROnPlateau criteria.
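A minimal sketch of one possible fix, assuming PyTorch's `torch.distributed` is in use: all-reduce the validation loss across ranks before calling `scheduler.step()`, so the patience counter is driven by the same globally averaged metric that is logged. The function and variable names below (`step_scheduler_on_global_loss`, `local_total_loss`) are hypothetical and not taken from the repository's code.

```python
import torch
import torch.distributed as dist
from torch.optim.lr_scheduler import ReduceLROnPlateau


def step_scheduler_on_global_loss(scheduler: ReduceLROnPlateau,
                                  local_total_loss: float) -> float:
    """Average the validation loss across ranks, then step the scheduler."""
    # With the NCCL backend this tensor would need to live on the current
    # CUDA device; a CPU tensor works for gloo.
    loss_tensor = torch.tensor([local_total_loss], dtype=torch.float64)
    if dist.is_available() and dist.is_initialized():
        # Sum the per-rank losses and divide by the world size so every
        # rank sees the same globally averaged value.
        dist.all_reduce(loss_tensor, op=dist.ReduceOp.SUM)
        loss_tensor /= dist.get_world_size()
    global_loss = loss_tensor.item()
    # Step with the global loss so the plateau criterion matches the
    # metric that is actually logged, not the rank-local one.
    scheduler.step(global_loss)
    return global_loss
```

With this approach every rank steps the scheduler with an identical value, so the learning rate stays synchronized across processes without any extra broadcast.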