problems about hyperparameters configuration #1
Hi, thanks for your interest in this implementation. Unfortunately I'm not maintaining this code regularly and it is lacking documentation (sorry for that). About the two parameters you mentioned:
> I found that when you increase the train size, the train loss will decrease steadily.

> Does the so-called train size mean the batch size?

> Sorry, I re-tried and found when you set
Hi! I think your LDMNet-pytorch project is really excellent, but I found that if I run it with 'python main.py with mnist', the train loss eventually increases sharply. I think this may be because two different concepts, "epoch" and "iteration", are being conflated: the paper uses the term 'epoch' rather than 'iteration'.
When I change the hyperparameters with 'python main.py with mnist epochs_update=200 max_epochs=50000' (which means I use 100 iterations for every epoch), I find that the train loss decreases steadily rather than increasing sharply.
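To make the epoch/iteration distinction concrete, here is a minimal sketch of how the two counters relate. The function name and the sample numbers are illustrative, not the project's actual code or defaults:

```python
import math

def train_counters(num_samples: int, batch_size: int, num_epochs: int):
    """One *iteration* = one minibatch step; one *epoch* = a full pass
    over the training set. (Hypothetical helper for illustration only.)"""
    iters_per_epoch = math.ceil(num_samples / batch_size)
    total_iterations = iters_per_epoch * num_epochs
    return iters_per_epoch, total_iterations

# e.g. MNIST-sized training set, batch size 100, 50 epochs:
ipe, total = train_counters(60000, 100, 50)
print(ipe, total)  # 600 iterations per epoch, 30000 iterations total
```

If a schedule written in terms of epochs is instead applied per iteration, it fires `iters_per_epoch` times too often, which could explain the divergence reported above.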
Looking forward to your reply, thanks a lot!