on_validation_epoch_end() during sanity checking will not run if val-accuracy-interval is greater than max epochs. #1025

Open · wants to merge 1 commit into main

Conversation

@reddykkeerthi reddykkeerthi commented Apr 14, 2025

Fixes #859

As part of the issue investigation, I tested the initial behavior by creating a dataset with annotations and passing it to a Jupyter notebook. I monkey-patched on_validation_epoch_end() to track and print how often full validation runs and at which epochs. For the test, I set val_accuracy_interval = 30 and max_epochs = 5 (i.e., val_accuracy_interval > max_epochs). Per the documentation, full validation should not run in this case, but it actually does:
[screenshot]
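The tracking described above can be sketched with a minimal monkey-patch. This is a self-contained stand-in; a real session would patch the hook on the actual Lightning module class (the `Module` class and its attributes here are illustrative, not the project's real names):

```python
# Minimal stand-in for the Lightning module whose hook we want to trace.
class Module:
    def __init__(self):
        self.current_epoch = 0

    def on_validation_epoch_end(self):
        pass  # placeholder for the real validation-end logic


calls = []
original = Module.on_validation_epoch_end


def traced(self):
    calls.append(self.current_epoch)  # record the epoch at each call
    return original(self)


# Monkey-patch the class so every instance reports when the hook fires.
Module.on_validation_epoch_end = traced

m = Module()
for epoch in range(3):
    m.current_epoch = epoch
    m.on_validation_epoch_end()

print(calls)  # [0, 1, 2]
```

The same pattern applied to the real module prints one line per hook invocation, which is how the epoch numbers in the screenshots below were collected.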

Initially, sanity checking happens:
[screenshot]

Right after the sanity check, we see on_validation_epoch_end() being called and the complete validation check running:
[screenshot]

After training on epoch 0, we see on_validation_epoch_end() being called again and the complete validation check running, since current_epoch is 0 (0 % 30 = 0):
[screenshot]
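The epoch-0 behavior follows directly from a pure modulo guard: 0 % n == 0 for every positive n, so the check passes at epoch 0 no matter how large the interval is. A short sketch (the exact variable names in the codebase are assumptions):

```python
# With a pure modulo check, epoch 0 always passes, even when the
# interval can never be reached within the training schedule.
val_accuracy_interval = 30
max_epochs = 5

triggered = [
    epoch
    for epoch in range(max_epochs)
    if epoch % val_accuracy_interval == 0  # 0 % 30 == 0, so epoch 0 passes
]
print(triggered)  # [0]
```

This is why, in addition to the sanity-check call, one unwanted full validation still runs after epoch 0.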

In the final phase, all epochs trigger on_validation_epoch_end(), but the complete validation check runs only twice (once during sanity checking, once after training when current_epoch = 0):
[screenshot]


In a comparison run with a smaller interval (val_accuracy_interval = 2), all epochs have completed and the final result is shown below. As expected, complete validation runs four times: during sanity checking, and when current_epoch is 0, 2, and 4:
[screenshot]

Now, I have modified the code to include a condition that checks whether val_accuracy_interval is less than or equal to max_epochs. In this test, val_accuracy_interval = 30 and max_epochs = 5 (i.e., val_accuracy_interval > max_epochs). For this case, we can see that on_validation_epoch_end() is still called, but complete validation is not performed:
[screenshot]
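The added condition can be sketched as a small guard function. This is a paraphrase of the fix as described above, not the actual diff; the function and parameter names are illustrative:

```python
def should_run_full_validation(current_epoch: int,
                               val_accuracy_interval: int,
                               max_epochs: int) -> bool:
    """Return True only when full validation should run this epoch."""
    if val_accuracy_interval > max_epochs:
        # The interval can never be reached within training,
        # so full validation (including sanity-check calls) is skipped.
        return False
    return current_epoch % val_accuracy_interval == 0


print(should_run_full_validation(0, 30, 5))  # False: interval exceeds max_epochs
print(should_run_full_validation(0, 2, 5))   # True: 0 % 2 == 0
```

The key point is that the new check short-circuits before the modulo test, which is what previously let epoch 0 (and the sanity check) through.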

In addition to the Jupyter notebook testing, I've added test_validation_interval_greater_than_epochs in test_main.py to verify that no full-validation metrics are logged when val_accuracy_interval (set to 3) is greater than max_epochs (set to 2). The test checks that box_precision, box_recall, and empty_frame_accuracy are absent from the logged metrics, since full validation is skipped in this case.
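The core assertion of that test can be sketched as follows. The model/trainer setup is elided and the helper function is hypothetical; only the metric names come from the actual test:

```python
# Metric keys that are only produced by a full validation pass
# (names taken from the test described above).
FULL_VALIDATION_KEYS = {"box_precision", "box_recall", "empty_frame_accuracy"}


def no_full_validation_metrics(logged_metrics) -> bool:
    """True if none of the full-validation metric keys were logged."""
    return FULL_VALIDATION_KEYS.isdisjoint(logged_metrics)


# With val_accuracy_interval=3 > max_epochs=2, only lightweight
# per-epoch metrics (e.g. a loss) should appear in the logs.
print(no_full_validation_metrics({"val_loss": 0.4}))        # True
print(no_full_validation_metrics({"box_precision": 0.9}))   # False
```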

Note: The case where val_accuracy_interval is less than or equal to max_epochs is already covered by test_evaluate_on_epoch_interval in the same file.

Please review and let me know if anything else is needed. Thanks!

Successfully merging this pull request may close these issues.

Don't run on_validation_epoch_end() during sanity checking if val-accuracy-interval is greater than max epochs.